Algebraic Geometry is an algebraic geometry textbook written by Robin Hartshorne and published by Springer-Verlag in 1977.

== Importance ==

It was the first extended treatment of scheme theory written as a text intended to be accessible to graduate students, and is considered the standard reference. This book was cited when Hartshorne was awarded the Leroy P. Steele Prize for mathematical exposition in 1979.

== Contents ==

The first chapter, titled "Varieties", deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert's Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as the usual references. The second and third chapters, "Schemes" and "Cohomology", form the technical heart of the book. The last two chapters, "Curves" and "Surfaces", respectively explore the geometry of 1- and 2-dimensional objects, using the tools developed in Chapters 2 and 3.

== References ==

Hartshorne, Robin (1977). Algebraic Geometry. Berlin, New York: Springer-Verlag. doi:10.1007/978-1-4757-3849-0. ISBN 978-0-387-90244-9. MR 0463157. Zbl 0367.14001.
Shatz, Stephen S. (1979), "Review: Robin Hartshorne, Algebraic geometry", Bull. Amer. Math. Soc. (N.S.), 1 (3): 553–560, doi:10.1090/S0273-0979-1979-14618-4
Wikipedia/Algebraic_Geometry_(book)
In mathematics, real algebraic geometry is the sub-branch of algebraic geometry studying real algebraic sets, i.e. real-number solutions to algebraic equations with real-number coefficients, and mappings between them (in particular real polynomial mappings). Semialgebraic geometry is the study of semialgebraic sets, i.e. real-number solutions to algebraic inequalities with real-number coefficients, and mappings between them. The most natural mappings between semialgebraic sets are semialgebraic mappings, i.e., mappings whose graphs are semialgebraic sets.

== Terminology ==

Nowadays the words 'semialgebraic geometry' and 'real algebraic geometry' are used as synonyms, because real algebraic sets cannot be studied seriously without the use of semialgebraic sets. For example, a projection of a real algebraic set along a coordinate axis need not be a real algebraic set, but it is always a semialgebraic set: this is the Tarski–Seidenberg theorem. Related fields are o-minimal theory and real analytic geometry.

Examples: Real plane curves are examples of real algebraic sets and polyhedra are examples of semialgebraic sets. Real algebraic functions and Nash functions are examples of semialgebraic mappings. Piecewise polynomial mappings (see the Pierce–Birkhoff conjecture) are also semialgebraic mappings.

Computational real algebraic geometry is concerned with the algorithmic aspects of real algebraic (and semialgebraic) geometry. The main algorithm is cylindrical algebraic decomposition. It is used to cut semialgebraic sets into nice pieces and to compute their projections.

Real algebra is the part of algebra which is relevant to real algebraic (and semialgebraic) geometry. It is mostly concerned with the study of ordered fields and ordered rings (in particular real closed fields) and their applications to the study of positive polynomials and sums of squares of polynomials. (See Hilbert's 17th problem and Krivine's Positivstellensatz.)
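The algorithmic flavor of the subject can be seen in real root counting, the one-variable primitive on which procedures such as cylindrical algebraic decomposition build. The following is a minimal pure-Python sketch of Sturm's method; all function names are illustrative, not from any library:

```python
from fractions import Fraction

def polyval(p, x):
    """Evaluate a polynomial given as a coefficient list, highest degree first."""
    r = Fraction(0)
    for c in p:
        r = r * x + c
    return r

def derivative(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def polyrem(a, b):
    """Remainder of the polynomial division a mod b (b nonzero)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        if a[0] != 0:
            f = a[0] / b[0]
            for i in range(len(b)):
                a[i] -= f * b[i]
        a.pop(0)          # leading coefficient is now zero
    while a and a[0] == 0:
        a.pop(0)          # strip leading zeros of the remainder
    return a

def sturm_sequence(p):
    """p, p', then successive negated remainders, until the remainder vanishes."""
    seq = [[Fraction(c) for c in p], [Fraction(c) for c in derivative(p)]]
    while True:
        r = polyrem(seq[-2], seq[-1])
        if not r:
            return seq
        seq.append([-c for c in r])

def sign_changes(seq, x):
    vals = [v for v in (polyval(s, x) for s in seq) if v != 0]
    return sum(1 for u, v in zip(vals, vals[1:]) if (u > 0) != (v > 0))

def count_real_roots(p, a, b):
    """Number of distinct real roots of p in the interval (a, b] (Sturm's theorem)."""
    seq = sturm_sequence(p)
    return sign_changes(seq, a) - sign_changes(seq, b)

count_real_roots([1, 0, -1, 0], -2, 2)   # 3: x^3 - x has roots -1, 0, 1
count_real_roots([1, 0, 1], -10, 10)     # 0: x^2 + 1 has no real roots
```

Exact rational arithmetic (`Fraction`) is used deliberately: sign counting is what the theorem is about, and floating-point rounding could flip a sign.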
The relation of real algebra to real algebraic geometry is similar to the relation of commutative algebra to complex algebraic geometry. Related fields are the theory of moment problems, convex optimization, the theory of quadratic forms, valuation theory and model theory.

== Timeline of real algebra and real algebraic geometry ==

1826 Fourier's algorithm for systems of linear inequalities. Rediscovered by Lloyd Dines in 1919 and Theodore Motzkin in 1936.
1835 Sturm's theorem on real root counting.
1856 Hermite's theorem on real root counting.
1876 Harnack's curve theorem. (This bound on the number of components was later extended to all Betti numbers of all real algebraic sets and all semialgebraic sets.)
1888 Hilbert's theorem on ternary quartics.
1900 Hilbert's problems (especially the 16th and the 17th problems).
1902 Farkas's lemma. (Can be reformulated as a linear positivstellensatz.)
1914 Annibale Comessatti showed that not every real algebraic surface is birational to RP^2.
1916 Fejér's conjecture about nonnegative trigonometric polynomials. (Solved by Frigyes Riesz.)
1927 Emil Artin's solution of Hilbert's 17th problem.
1927 Krull–Baer theorem (connection between orderings and valuations).
1928 Pólya's theorem on positive polynomials on a simplex.
1929 B. L. van der Waerden sketched a proof that real algebraic and semialgebraic sets are triangulable, but the necessary tools had not yet been developed to make the argument rigorous.
1931 Alfred Tarski's real quantifier elimination. Improved and popularized by Abraham Seidenberg in 1954. (Both use Sturm's theorem.)
1936 Herbert Seifert proved that every closed smooth submanifold of R^n with trivial normal bundle can be isotoped to a component of a nonsingular real algebraic subset of R^n which is a complete intersection. (From the conclusion of this theorem the word "component" cannot be removed.)
1940 Marshall Stone's representation theorem for partially ordered rings. Improved by Richard Kadison in 1951 and Donald Dubois in 1967 (Kadison–Dubois representation theorem). Further improved by Mihai Putinar in 1993 and Jacobi in 2001 (Putinar–Jacobi representation theorem).
1952 John Nash proved that every closed smooth manifold is diffeomorphic to a nonsingular component of a real algebraic set.
1956 Pierce–Birkhoff conjecture formulated. (Solved in dimensions ≤ 2.)
1964 Krivine's Nullstellensatz and Positivstellensatz. Rediscovered and popularized by Stengle in 1974. (Krivine uses real quantifier elimination while Stengle uses Lang's homomorphism theorem.)
1964 Łojasiewicz triangulated semi-analytic sets.
1964 Heisuke Hironaka proved the resolution of singularities theorem.
1964 Hassler Whitney proved that every analytic variety admits a stratification satisfying the Whitney conditions.
1967 Theodore Motzkin found a positive polynomial which is not a sum of squares of polynomials.
1972 Vladimir Rokhlin proved Gudkov's conjecture.
1973 Alberto Tognoli proved that every closed smooth manifold is diffeomorphic to a nonsingular real algebraic set.
1973 Jean-Louis Verdier proved that every subanalytic set admits a stratification with condition (w).
1975 George E. Collins discovered the cylindrical algebraic decomposition algorithm, which improves Tarski's real quantifier elimination and makes it possible to implement it on a computer.
1979 Michel Coste and Marie-Françoise Roy discovered the real spectrum of a commutative ring.
1980 Oleg Viro introduced the "patchworking" technique and used it to classify real algebraic curves of low degree. Later Ilya Itenberg and Viro used it to produce counterexamples to the Ragsdale conjecture, and Grigory Mikhalkin applied it to tropical geometry for curve counting.
1980 Selman Akbulut and Henry C. King gave a topological characterization of real algebraic sets with isolated singularities, and topologically characterized nonsingular real algebraic sets (not necessarily compact).
1980 Akbulut and King proved that every knot in S^n is the link of a real algebraic set with isolated singularity in R^(n+1).
1981 Akbulut and King proved that every compact PL manifold is PL homeomorphic to a real algebraic set.
1983 Akbulut and King introduced "topological resolution towers" as topological models of real algebraic sets; from these they obtained new topological invariants of real algebraic sets, and topologically characterized all 3-dimensional algebraic sets. These invariants were later generalized by Michel Coste and Krzysztof Kurdyka as well as by Clint McCrory and Adam Parusiński.
1984 Ludwig Bröcker's theorem on minimal generation of basic open semialgebraic sets. (Improved and extended to basic closed semialgebraic sets by Scheiderer.)
1984 Benedetti and Dedò proved that not every closed smooth manifold is diffeomorphic to a totally algebraic nonsingular real algebraic set (totally algebraic means all its Z/2Z-homology cycles are represented by real algebraic subsets).
1991 Akbulut and King proved that every closed smooth manifold is homeomorphic to a totally algebraic real algebraic set.
1991 Schmüdgen's solution of the multidimensional moment problem for compact semialgebraic sets and the related strict positivstellensatz. Algebraic proof found by Wörmann. Implies Reznick's version of Artin's theorem with uniform denominators.
1992 Akbulut and King proved ambient versions of the Nash–Tognoli theorem: every closed smooth submanifold of R^n is isotopic to the nonsingular points (component) of a real algebraic subset of R^n, and they extended this result to immersed submanifolds of R^n.
1992 Benedetti and Marin proved that every compact closed smooth 3-manifold M can be obtained from S^3 by a sequence of blow-ups and blow-downs along smooth centers, and that M is homeomorphic to a possibly singular affine real algebraic rational threefold.
1997 Bierstone and Milman proved a canonical resolution of singularities theorem.
1997 Mikhalkin proved that every closed smooth n-manifold can be obtained from S^n by a sequence of topological blow-ups and blow-downs.
1998 János Kollár showed that not every closed 3-manifold is a projective real 3-fold which is birational to RP^3.
2000 Scheiderer's local-global principle and the related non-strict extension of Schmüdgen's positivstellensatz in dimensions ≤ 2.
2000 János Kollár proved that every closed smooth 3-manifold is the real part of a compact complex manifold which can be obtained from CP^3 by a sequence of real blow-ups and blow-downs.
2003 Welschinger introduced an invariant for counting real rational curves.
2005 Akbulut and King showed that not every nonsingular real algebraic subset of RP^n is smoothly isotopic to the real part of a nonsingular complex algebraic subset of CP^n.

== References ==

S. Akbulut and H. C. King, Topology of Real Algebraic Sets, MSRI Publications 25, Springer-Verlag, New York (1992). ISBN 0-387-97744-9
Bochnak, Jacek; Coste, Michel; Roy, Marie-Françoise, Real Algebraic Geometry. Translated from the 1987 French original; revised by the authors. Ergebnisse der Mathematik und ihrer Grenzgebiete (3), 36. Springer-Verlag, Berlin, 1998. x+430 pp. ISBN 3-540-64663-9
Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise, Algorithms in Real Algebraic Geometry. Second edition. Algorithms and Computation in Mathematics, 10. Springer-Verlag, Berlin, 2006. x+662 pp. ISBN 978-3-540-33098-1
Marshall, Murray, Positive Polynomials and Sums of Squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ISBN 978-0-8218-4402-1

== External links ==

The Role of Hilbert Problems in Real Algebraic Geometry (PostScript)
Real Algebraic and Analytic Geometry Preprint Server
Wikipedia/Real_algebraic_geometry
In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that this is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation.

Historically, homographies (and projective spaces) were introduced to study perspective and projections in Euclidean geometry, and the term homography, which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity. The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not explicitly involve any field (incidence geometry; see also synthetic geometry); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations".

For the sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field. Equivalently, Pappus's hexagon theorem and Desargues's theorem are supposed to be true.
A large part of the results remain true, or may be generalized to projective geometries for which these theorems do not hold. == Geometric motivation == Historically, the concept of homography had been introduced to understand, explain and study visual perspective, and, specifically, the difference in appearance of two plane objects viewed from different points of view. In three-dimensional Euclidean space, a central projection from a point O (the center) onto a plane P that does not contain O is the mapping that sends a point A to the intersection (if it exists) of the line OA and the plane P. The projection is not defined if the point A belongs to the plane passing through O and parallel to P. The notion of projective space was originally introduced by extending the Euclidean space, that is, by adding points at infinity to it, in order to define the projection for every point except O. Given another plane Q, which does not contain O, the restriction to Q of the above projection is called a perspectivity. With these definitions, a perspectivity is only a partial function, but it becomes a bijection if extended to projective spaces. Therefore, this notion is normally defined for projective spaces. The notion is also easily generalized to projective spaces of any dimension, over any field, in the following way: Given two projective spaces P and Q of dimension n, a perspectivity is a bijection from P to Q that may be obtained by embedding P and Q in a projective space R of dimension n + 1 and restricting to P a central projection onto Q. If f is a perspectivity from P to Q, and g a perspectivity from Q to P, with a different center, then g ⋅ f is a homography from P to itself, which is called a central collineation, when the dimension of P is at least two. (See § Central collineations below and Perspectivity § Perspective collineations.) Originally, a homography was defined as the composition of a finite number of perspectivities. 
It is a part of the fundamental theorem of projective geometry (see below) that this definition coincides with the more algebraic definition sketched in the introduction and detailed below.

== Definition and expression in homogeneous coordinates ==

A projective space P(V) of dimension n over a field K may be defined as the set of the lines through the origin in a K-vector space V of dimension n + 1. If a basis of V has been fixed, a point of V may be represented by a point (x0, ..., xn) of Kn+1. A point of P(V), being a line in V, may thus be represented by the coordinates of any nonzero point of this line, which are thus called homogeneous coordinates of the projective point.

Given two projective spaces P(V) and P(W) of the same dimension, a homography is a mapping from P(V) to P(W), which is induced by an isomorphism of vector spaces f : V → W. Such an isomorphism induces a bijection from P(V) to P(W), because of the linearity of f. Two such isomorphisms, f and g, define the same homography if and only if there is a nonzero element a of K such that g = af.

This may be written in terms of homogeneous coordinates in the following way: a homography φ may be defined by a nonsingular (n+1) × (n+1) matrix [ai,j], called the matrix of the homography. This matrix is defined up to multiplication by a nonzero element of K. The homogeneous coordinates [x0 : ... : xn] of a point and the coordinates [y0 : ... : yn] of its image by φ are related by

\[
\begin{aligned}
y_0 &= a_{0,0}x_0 + \dots + a_{0,n}x_n\\
&\ \,\vdots\\
y_n &= a_{n,0}x_0 + \dots + a_{n,n}x_n.
\end{aligned}
\]

When the projective spaces are defined by adding points at infinity to affine spaces (projective completion) the preceding formulas become, in affine coordinates,

\[
y_i = \frac{a_{i,0} + a_{i,1}x_1 + \dots + a_{i,n}x_n}{a_{0,0} + a_{0,1}x_1 + \dots + a_{0,n}x_n}, \qquad i = 1, \dots, n,
\]

which generalizes the expression of the homographic function of the next section. This defines only a partial function between affine spaces, which is defined only outside the hyperplane where the denominator is zero.

== Homographies of a projective line ==

The projective line over a field K may be identified with the union of K and a point, called the "point at infinity" and denoted by ∞ (see Projective line). With this representation of the projective line, the homographies are the mappings

\[
z \mapsto \frac{az + b}{cz + d}, \quad \text{where } ad - bc \neq 0,
\]

which are called homographic functions or linear fractional transformations.

In the case of the complex projective line, which can be identified with the Riemann sphere, the homographies are called Möbius transformations. These correspond precisely with those bijections of the Riemann sphere that preserve orientation and are conformal.

In the study of collineations, the case of projective lines is special due to the small dimension. When the line is viewed as a projective space in isolation, any permutation of the points of a projective line is a collineation, since every set of points is collinear.
However, if the projective line is embedded in a higher-dimensional projective space, the geometric structure of that space can be used to impose a geometric structure on the line. Thus, in synthetic geometry, the homographies and the collineations of the projective line that are considered are those obtained by restrictions to the line of collineations and homographies of spaces of higher dimension. This means that the fundamental theorem of projective geometry (see below) remains valid in the one-dimensional setting. A homography of a projective line may also be properly defined by insisting that the mapping preserves cross-ratios.

== Projective frame and coordinates ==

A projective frame or projective basis of a projective space of dimension n is an ordered set of n + 2 points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension n has at most n + 1 vertices. Projective spaces over a commutative field K are considered in this section, although most results may be generalized to projective spaces over a division ring.

Let P(V) be a projective space of dimension n, where V is a K-vector space of dimension n + 1, and let p : V ∖ {0} → P(V) be the canonical projection that maps a nonzero vector to the vector line that contains it. For every frame of P(V), there exists a basis e0, ..., en of V such that the frame is (p(e0), ..., p(en), p(e0 + ... + en)), and this basis is unique up to the multiplication of all its elements by the same nonzero element of K. Conversely, if e0, ..., en is a basis of V, then (p(e0), ..., p(en), p(e0 + ... + en)) is a frame of P(V). It follows that, given two frames, there is exactly one homography mapping the first one onto the second one. In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms).
It is sometimes called the first fundamental theorem of projective geometry.

Every frame (p(e0), ..., p(en), p(e0 + ... + en)) allows one to define projective coordinates, also known as homogeneous coordinates: every point may be written as p(v); the projective coordinates of p(v) on this frame are the coordinates of v on the basis (e0, ..., en). It is not difficult to verify that changing the ei and v, without changing the frame nor p(v), results in multiplying the projective coordinates by the same nonzero element of K.

The projective space Pn(K) = P(Kn+1) has a canonical frame consisting of the image by p of the canonical basis of Kn+1 (consisting of the elements having only one nonzero entry, which is equal to 1), and (1, 1, ..., 1). On this basis, the homogeneous coordinates of p(v) are simply the entries (coefficients) of the tuple v. Given another projective space P(V) of the same dimension, and a frame F of it, there is one and only one homography h mapping F onto the canonical frame of Pn(K). The projective coordinates of a point a on the frame F are the homogeneous coordinates of h(a) on the canonical frame of Pn(K).

== Central collineations ==

In the above sections, homographies have been defined through linear algebra. In synthetic geometry, they are traditionally defined as the composition of one or several special homographies called central collineations. It is a part of the fundamental theorem of projective geometry that the two definitions are equivalent.

In a projective space, P, of dimension n ≥ 2, a collineation of P is a bijection from P onto P that maps lines onto lines.
A central collineation (traditionally these were called perspectivities, but this term may be confusing, having another meaning; see Perspectivity) is a bijection α from P to P, such that there exists a hyperplane H (called the axis of α), which is fixed pointwise by α (that is, α(X) = X for all points X in H) and a point O (called the center of α), which is fixed linewise by α (any line through O is mapped to itself by α, but not necessarily pointwise). There are two types of central collineations: elations are the central collineations in which the center is incident with the axis, and homologies are those in which the center is not incident with the axis. A central collineation is uniquely defined by its center, its axis, and the image α(P) of any given point P that differs from the center O and does not belong to the axis. (The image α(Q) of any other point Q is the intersection of the line defined by O and Q and the line passing through α(P) and the intersection with the axis of the line defined by P and Q.)

A central collineation is a homography defined by an (n+1) × (n+1) matrix that has an eigenspace of dimension n. It is a homology if the matrix has another eigenvalue and is therefore diagonalizable. It is an elation if all the eigenvalues are equal and the matrix is not diagonalizable.

The geometric view of a central collineation is easiest to see in a projective plane. Given a central collineation α, consider a line ℓ that does not pass through the center O, and its image under α, ℓ′ = α(ℓ). Setting R = ℓ ∩ ℓ′, the axis of α is some line M through R. The image A′ of any point A of ℓ under α is the intersection of OA with ℓ′. The image B′ of a point B that does not belong to ℓ may be constructed in the following way: let S = AB ∩ M; then B′ = SA′ ∩ OB.

The composition of two central collineations, while still a homography in general, is not a central collineation. In fact, every homography is the composition of a finite number of central collineations.
In synthetic geometry, this property, which is a part of the fundamental theorem of projective geometry, is taken as the definition of homographies.

== Fundamental theorem of projective geometry ==

There are collineations besides the homographies. In particular, any field automorphism σ of a field F induces a collineation of every projective space over F by applying σ to all homogeneous coordinates (over a projective frame) of a point. These collineations are called automorphic collineations.

The fundamental theorem of projective geometry consists of the three following theorems.

Given two projective frames of a projective space P, there is exactly one homography of P that maps the first frame onto the second one.
If the dimension of a projective space P is at least two, every collineation of P is the composition of an automorphic collineation and a homography. In particular, over the reals, every collineation of a projective space of dimension at least two is a homography.
Every homography is the composition of a finite number of perspectivities. In particular, if the dimension of the implied projective space is at least two, every homography is the composition of a finite number of central collineations.

If projective spaces are defined by means of axioms (synthetic geometry), the third part is simply a definition. On the other hand, if projective spaces are defined by means of linear algebra, the first part is an easy corollary of the definitions. Therefore, the proof of the first part in synthetic geometry, and the proof of the third part in terms of linear algebra, are both fundamental steps of the proof of the equivalence of the two ways of defining projective spaces.

== Homography groups ==

As every homography has an inverse mapping and the composition of two homographies is another, the homographies of a given projective space form a group. For example, the Möbius group is the homography group of any complex projective line.
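The group structure is easy to experiment with on the rational projective line, where a 2 × 2 matrix acts as a homographic function and the point at infinity is handled separately. The following sketch (illustrative code, not a library API) checks that composition of homographies corresponds to matrix multiplication, and that nonzero scalar multiples of a matrix give the same homography, which is exactly why the homography group is a quotient of GL by the scalar matrices:

```python
from fractions import Fraction

INF = "inf"  # the point at infinity of the projective line K ∪ {∞}

def mobius(m):
    """Homography z ↦ (az + b) / (cz + d) of Q ∪ {∞} given by m = [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    assert a * d - b * c != 0, "the matrix must be invertible"
    def f(z):
        if z == INF:
            return INF if c == 0 else Fraction(a, c)
        z = Fraction(z)
        num, den = a * z + b, c * z + d
        return INF if den == 0 else num / den
    return f

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

m, n = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
f, g = mobius(m), mobius(n)
fg = mobius(matmul(m, n))

# Composition of homographies corresponds to multiplication of their matrices.
all(f(g(z)) == fg(z) for z in [0, 1, Fraction(5, 7), INF])   # True

# A nonzero scalar multiple of the matrix defines the same homography,
# which is why PGL(2, Q) is GL(2, Q) modulo the scalar matrices.
h = mobius([[5, 10], [15, 20]])                               # 5 times m
all(f(z) == h(z) for z in [0, 1, 2, INF])                     # True
```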
As all the projective spaces of the same dimension over the same field are isomorphic, the same is true for their homography groups. They are therefore considered as a single group acting on several spaces, and only the dimension and the field appear in the notation, not the specific projective space. Homography groups, also called projective linear groups, are denoted PGL(n + 1, F) when acting on a projective space of dimension n over a field F. The above definition of homographies shows that PGL(n + 1, F) may be identified with the quotient group GL(n + 1, F) / F×I, where GL(n + 1, F) is the general linear group of the invertible matrices, and F×I is the group of the products of the identity matrix of size (n + 1) × (n + 1) by the nonzero elements of F. When F is a Galois field GF(q), the homography group is written PGL(n, q). For example, PGL(2, 7) acts on the eight points of the projective line over the finite field GF(7), while PGL(2, 4), which is isomorphic to the alternating group A5, is the homography group of the projective line with five points.

The homography group PGL(n + 1, F) is a subgroup of the collineation group PΓL(n + 1, F) of the collineations of a projective space of dimension n. When the points and lines of the projective space are viewed as a block design, whose blocks are the sets of points contained in a line, it is common to call the collineation group the automorphism group of the design.

== Cross-ratio ==

The cross-ratio of four collinear points is an invariant under homographies that is fundamental for the study of the homographies of the lines. Three distinct points a, b and c on a projective line over a field F form a projective frame of this line. There is therefore a unique homography h of this line onto F ∪ {∞} that maps a to ∞, b to 0, and c to 1. Given a fourth point on the same line, the cross-ratio of the four points a, b, c and d, denoted [a, b; c, d], is the element h(d) of F ∪ {∞}.
In other words, if d has homogeneous coordinates [k : 1] over the projective frame (a, b, c), then [a, b; c, d] = k.

== Over a ring ==

Suppose A is a ring and U is its group of units. Homographies act on a projective line over A, written P(A), consisting of points U[a, b] with projective coordinates. The homographies on P(A) are described by matrix mappings

\[
U[z, 1] \begin{pmatrix} a & c \\ b & d \end{pmatrix} = U[za + b, \ zc + d].
\]

When A is a commutative ring, the homography may be written

\[
z \mapsto \frac{za + b}{zc + d},
\]

but otherwise the linear fractional transformation is seen as an equivalence:

\[
U[za + b, \ zc + d] \sim U[(zc + d)^{-1}(za + b), \ 1].
\]

The homography group of the ring of integers Z is the modular group PSL(2, Z). Ring homographies have been used in quaternion analysis, and with dual quaternions to facilitate screw theory. The conformal group of spacetime can be represented with homographies where A is the composition algebra of biquaternions.

== Periodic homographies ==

The homography

\[
h = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\]

is periodic when the ring is Z/nZ (the integers modulo n), since then

\[
h^n = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]

Arthur Cayley was interested in periodicity when he calculated iterates in 1879. In his review of a brute force approach to periodicity of homographies, H. S. M. Coxeter gave this analysis: a real homography is involutory (of period 2) if and only if a + d = 0. If it is periodic with period n > 2, then it is elliptic, and no loss of generality occurs by assuming that ad − bc = 1. Since the characteristic roots are exp(±hπi/m), where (h, m) = 1, the trace is a + d = 2 cos(hπ/m).
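Both periodicity claims are easy to check with integer matrix arithmetic; a small sketch (the helper names are ours, chosen for illustration):

```python
def matmul(m, n, mod=None):
    """2 x 2 matrix product, optionally reduced modulo `mod` (i.e. over Z/nZ)."""
    r = [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return r if mod is None else [[x % mod for x in row] for row in r]

def matpow(m, e, mod=None):
    r = [[1, 0], [0, 1]]
    for _ in range(e):
        r = matmul(r, m, mod)
    return r

# Over Z/12Z the shear h = [[1, 1], [0, 1]] has period 12: h^12 is the identity.
h = [[1, 1], [0, 1]]
matpow(h, 12, mod=12)   # [[1, 0], [0, 1]]

# Coxeter's criterion: a real homography with a + d = 0 is involutory.  Its matrix
# squares to a scalar matrix, which induces the identity map on the projective line.
m = [[1, 2], [3, -1]]   # trace a + d = 0, determinant -7, nonzero
matpow(m, 2)            # [[7, 0], [0, 7]], a scalar multiple of the identity
```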
== See also == W-curve == Notes == == References == Artin, E. (1957), Geometric Algebra, Interscience Publishers Baer, Reinhold (2005) [First published 1952], Linear Algebra and Projective Geometry, Dover, ISBN 9780486445656 Berger, Marcel (2009), Geometry I, Springer-Verlag, ISBN 978-3-540-11658-5, translated from the 1977 French original by M. Cole and S. Levy, fourth printing of the 1987 English translation Beutelspacher, Albrecht; Rosenbaum, Ute (1998), Projective Geometry: From Foundations to Applications, Cambridge University Press, ISBN 0-521-48364-6 Hartshorne, Robin (1967), Foundations of Projective Geometry, New York: W.A. Benjamin, Inc Hirschfeld, J. W. P. (1979), Projective Geometries Over Finite Fields, Oxford University Press, ISBN 978-0-19-850295-1 Meserve, Bruce E. (1983), Fundamental Concepts of Geometry, Dover, ISBN 0-486-63415-9 Yale, Paul B. (1968), Geometry and Symmetry, Holden-Day == Further reading == Patrick du Val (1964) Homographies, quaternions and rotations, Oxford Mathematical Monographs, Clarendon Press, Oxford, MR0169108 . Gunter Ewald (1971) Geometry: An Introduction, page 263, Belmont:Wadsworth Publishing ISBN 0-534-00034-7. == External links == Media related to Homography at Wikimedia Commons
Wikipedia/Projective_linear_transformation
In the mathematical field of algebraic geometry, a singular point of an algebraic variety V is a point P that is 'special' (so, singular), in the geometric sense that at this point the tangent space to the variety may not be regularly defined. In the case of varieties defined over the reals, this notion generalizes the notion of local non-flatness. A point of an algebraic variety that is not singular is said to be regular. An algebraic variety that has no singular point is said to be non-singular or smooth. The concept is generalized to smooth schemes in the modern language of scheme theory.

== Definition ==

A plane curve defined by an implicit equation F(x, y) = 0, where F is a smooth function, is said to be singular at a point if the Taylor series of F has order at least 2 at this point. The reason for this is that, in differential calculus, the tangent at the point (x0, y0) of such a curve is defined by the equation

\[
(x - x_0)\,F'_x(x_0, y_0) + (y - y_0)\,F'_y(x_0, y_0) = 0,
\]

whose left-hand side is the term of degree one of the Taylor expansion. Thus, if this term is zero, the tangent may not be defined in the standard way, either because it does not exist or because a special definition must be provided.

In general, for a hypersurface F(x, y, z, ...) = 0, the singular points are those at which all the partial derivatives simultaneously vanish. A general algebraic variety V being defined as the common zeros of several polynomials, the condition for a point P of V to be a singular point is that the Jacobian matrix of the first-order partial derivatives of the polynomials has a rank at P that is lower than the rank at other points of the variety. Points of V that are not singular are called non-singular or regular.
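For a hypersurface this criterion is directly computable: a point of the zero set is singular exactly when every partial derivative vanishes there. A small sketch with hand-computed partials (the function and point choices are ours, for illustration):

```python
def is_singular(F, partials, p):
    """A point p on the hypersurface F = 0 is singular iff all partials vanish at p."""
    assert F(*p) == 0, "the point must lie on the variety"
    return all(d(*p) == 0 for d in partials)

# Cuspidal cubic F = y^2 - x^3, with dF/dx = -3x^2 and dF/dy = 2y.
F = lambda x, y: y**2 - x**3
dF = [lambda x, y: -3 * x**2, lambda x, y: 2 * y]

is_singular(F, dF, (0, 0))   # True: the cusp at the origin is singular
is_singular(F, dF, (1, 1))   # False: (1, 1) is a regular point of the curve

# The curve y^3 + 2x^2 y - x^4 = 0 discussed below is likewise singular at the origin.
G = lambda x, y: y**3 + 2 * x**2 * y - x**4
dG = [lambda x, y: 4 * x * y - 4 * x**3, lambda x, y: 3 * y**2 + 2 * x**2]
is_singular(G, dG, (0, 0))   # True
```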
It is always true that almost all points are non-singular, in the sense that the non-singular points form a set that is both open and dense in the variety (for the Zariski topology, as well as for the usual topology, in the case of varieties defined over the complex numbers). In the case of a real variety (that is, the set of the points with real coordinates of a variety defined by polynomials with real coefficients), the variety is a manifold near every regular point. But it is important to note that a real variety may be a manifold and yet have singular points. For example, the equation y³ + 2x²y − x⁴ = 0 defines a real analytic manifold but has a singular point at the origin. This may be explained by saying that the curve has two complex conjugate branches that cut the real branch at the origin. == Singular points of smooth mappings == As the notion of singular points is a purely local property, the above definition can be extended to cover the wider class of smooth mappings (functions from M to Rn where all derivatives exist). Analysis of these singular points can be reduced to the algebraic variety case by considering the jets of the mapping. The kth jet is the Taylor series of the mapping truncated at degree k with the constant term deleted. == Nodes == In classical algebraic geometry, certain special singular points were also called nodes. A node is a singular point where the Hessian matrix is non-singular; this implies that the singular point has multiplicity two and the tangent cone is not singular outside its vertex. == See also == Milnor map Resolution of singularities Singular point of a curve Singularity theory Smooth scheme Zariski tangent space == References ==
Wikipedia/Regular_point_of_an_algebraic_variety
In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem. == Definitions == Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share common vertices. A vertex is matched (or saturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex is unmatched (or unsaturated). A maximal matching is a matching M of a graph G that is not a subset of any other matching. A matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings (red) in three graphs. A maximum matching (also known as maximum-cardinality matching) is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number ν(G) of a graph G is the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs. A perfect matching is a matching that matches all vertices of the graph. That is, a matching is perfect if every vertex of the graph is incident to an edge of the matching. A matching is perfect if |M| = |V|/2. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-size edge cover.
Thus, the size of a maximum matching is no larger than the size of a minimum edge cover: ν(G) ≤ ρ(G). A graph can only contain a perfect matching when the graph has an even number of vertices. A near-perfect matching is one in which exactly one vertex is unmatched. Clearly, a graph can only contain a near-perfect matching when the graph has an odd number of vertices, and near-perfect matchings are maximum matchings. In the above figure, part (c) shows a near-perfect matching. If every vertex is unmatched by some near-perfect matching, then the graph is called factor-critical. Given a matching M, an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices. Berge's lemma states that a matching M is maximum if and only if there is no augmenting path with respect to M. An induced matching is a matching that is the edge set of an induced subgraph. == Properties == In any graph without isolated vertices, the sum of the matching number and the edge covering number equals the number of vertices. If there is a perfect matching, then both the matching number and the edge cover number are |V|/2. If A and B are two maximal matchings, then |A| ≤ 2|B| and |B| ≤ 2|A|. To see this, observe that each edge in B \ A can be adjacent to at most two edges in A \ B because A is a matching; moreover each edge in A \ B is adjacent to an edge in B \ A by maximality of B, hence |A ∖ B| ≤ 2|B ∖ A|. Further we deduce that |A| = |A ∩ B| + |A ∖ B| ≤ 2|B ∩ A| + 2|B ∖ A| = 2|B|.
In particular, this shows that any maximal matching is a 2-approximation of a maximum matching and also a 2-approximation of a minimum maximal matching. This inequality is tight: for example, if G is a path with 3 edges and 4 vertices, the size of a minimum maximal matching is 1 and the size of a maximum matching is 2. A spectral characterization of the matching number of a graph is given by Hassani Monfared and Mallik as follows: Let G be a graph on n vertices, and λ1, λ2, …, λk be k distinct nonzero purely imaginary numbers where 2k ≤ n. Then the matching number of G is k if and only if (a) there is a real skew-symmetric matrix A with graph G and eigenvalues ±λ1, ±λ2, …, ±λk and n − 2k zeros, and (b) all real skew-symmetric matrices with graph G have at most 2k nonzero eigenvalues. Note that the (simple) graph of a real symmetric or skew-symmetric matrix A of order n has n vertices and edges given by the nonzero off-diagonal entries of A. == Matching polynomials == A generating function of the number of k-edge matchings in a graph is called a matching polynomial. Let G be a graph and m_k be the number of k-edge matchings. One matching polynomial of G is ∑_{k≥0} m_k x^k. Another definition gives the matching polynomial as ∑_{k≥0} (−1)^k m_k x^{n−2k}, where n is the number of vertices in the graph.
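Both the tight 2-approximation example and the polynomial coefficients can be checked by brute-force enumeration on the 4-vertex path. This sketch is exponential in the number of edges and is for illustration only; the function names are my own:

```python
from collections import Counter
from itertools import combinations

def matchings(edges):
    """All matchings (sets of pairwise vertex-disjoint edges) of a graph."""
    result = []
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            vertices = [v for e in subset for v in e]
            if len(vertices) == len(set(vertices)):  # no shared endpoints
                result.append(subset)
    return result

# Path with 4 vertices and 3 edges, the tight example from the text.
path = [(1, 2), (2, 3), (3, 4)]
all_m = matchings(path)

# m_k = number of k-edge matchings: the coefficients of sum m_k x^k.
m = Counter(len(s) for s in all_m)
print(dict(m))  # {0: 1, 1: 3, 2: 1}, i.e. the polynomial 1 + 3x + x^2

# Maximum matching size vs. minimum maximal matching size, as in the text.
maximum = max(len(s) for s in all_m)

def is_maximal(s):
    used = {v for e in s for v in e}
    return all(u in used or v in used for (u, v) in path)

min_maximal = min(len(s) for s in all_m if is_maximal(s))
print(maximum, min_maximal)  # 2 1
```

The single middle edge {2, 3} touches both outer edges, so it alone is a maximal matching of size 1, while the two outer edges form the maximum matching of size 2.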
Each type has its uses; for more information see the article on matching polynomials. == Algorithms and computational complexity == === Maximum-cardinality matching === A fundamental problem in combinatorial optimization is finding a maximum matching. This problem has various algorithms for different classes of graphs. In an unweighted bipartite graph, the optimization problem is to find a maximum cardinality matching. The problem is solved in O(√V E) time by the Hopcroft–Karp algorithm, and there are more efficient randomized algorithms, approximation algorithms, and algorithms for special classes of graphs such as bipartite planar graphs, as described in the main article. === Maximum-weight matching === In a weighted bipartite graph, the optimization problem is to find a maximum-weight matching; a dual problem is to find a minimum-weight matching. This problem is often called maximum weighted bipartite matching, or the assignment problem. The Hungarian algorithm solves the assignment problem and it was one of the beginnings of combinatorial optimization algorithms. It uses a modified shortest path search in the augmenting path algorithm. If the Bellman–Ford algorithm is used for this step, the running time of the Hungarian algorithm becomes O(V²E), or the edge costs can be shifted with a potential to achieve O(V² log V + VE) running time with the Dijkstra algorithm and a Fibonacci heap. In a non-bipartite weighted graph, the problem of maximum weight matching can be solved in time O(V²E) using Edmonds' blossom algorithm. === Maximal matchings === A maximal matching can be found with a simple greedy algorithm. A maximum matching is also a maximal matching, and hence it is possible to find a largest maximal matching in polynomial time.
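The simple greedy algorithm for a maximal matching can be sketched as follows (a minimal illustration with my own function name, not a production implementation):

```python
def greedy_maximal_matching(edges):
    """Scan the edges once, keeping each edge whose endpoints are still free.

    The result is maximal (no further edge can be added), though generally
    not maximum; by the 2-approximation argument earlier in the article it
    has at least half the size of a maximum matching.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the 4-vertex path, scanning the middle edge first yields the size-1
# maximal matching; scanning in path order yields the size-2 maximum.
path = [(1, 2), (2, 3), (3, 4)]
print(greedy_maximal_matching([(2, 3), (1, 2), (3, 4)]))  # [(2, 3)]
print(greedy_maximal_matching(path))                      # [(1, 2), (3, 4)]
```

The two runs show that the output depends on the scan order, which is exactly why the greedy result is only guaranteed to be maximal, not maximum.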
However, no polynomial-time algorithm is known for finding a minimum maximal matching, that is, a maximal matching that contains the smallest possible number of edges. A maximal matching with k edges is an edge dominating set with k edges. Conversely, if we are given a minimum edge dominating set with k edges, we can construct a maximal matching with k edges in polynomial time. Therefore, the problem of finding a minimum maximal matching is essentially equivalent to the problem of finding a minimum edge dominating set. Both of these optimization problems are known to be NP-hard; the decision versions of these problems are classical examples of NP-complete problems. Both problems can be approximated within a factor of 2 in polynomial time: simply find an arbitrary maximal matching M. === Counting problems === The number of matchings in a graph is known as the Hosoya index of the graph. It is #P-complete to compute this quantity, even for bipartite graphs. It is also #P-complete to count perfect matchings, even in bipartite graphs, because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix. However, there exists a fully polynomial time randomized approximation scheme for counting the number of bipartite matchings. A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm. The number of perfect matchings in a complete graph Kn (with n even) is given by the double factorial (n − 1)!!. The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are given by the telephone numbers. The number of perfect matchings in a graph is also known as the hafnian of its adjacency matrix.
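The permanent connection and the (n − 1)!! formula can both be verified by brute force on tiny instances. This sketch uses the defining sum of the permanent (exponential time, illustration only; the function names are mine):

```python
from itertools import combinations, permutations
from math import prod

def permanent(matrix):
    """Permanent via the definition: sum over all column permutations."""
    n = len(matrix)
    return sum(
        prod(matrix[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# Perfect matchings of the complete bipartite graph K_{3,3}: its biadjacency
# matrix is all ones, and the permanent gives 3! = 6.
ones = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(permanent(ones))  # 6

def perfect_matchings_complete(n):
    """Count perfect matchings of the complete graph K_n by brute force."""
    edges = list(combinations(range(n), 2))
    count = 0
    for subset in combinations(edges, n // 2):
        vertices = [v for e in subset for v in e]
        if len(set(vertices)) == n:  # every vertex covered exactly once
            count += 1
    return count

# K_4 has (4 - 1)!! = 3 * 1 = 3 perfect matchings.
print(perfect_matchings_complete(4))  # 3
```

For larger inputs the permanent is #P-complete to compute exactly, as stated above, which is why only approximation schemes and special cases such as the planar FKT algorithm are tractable.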
=== Finding all maximally matchable edges === One of the basic problems in matching theory is to find in a given graph all edges that may be extended to a maximum matching in the graph (such edges are called maximally matchable edges, or allowed edges). Algorithms for this problem include: For general graphs, a deterministic algorithm in time O(VE) and a randomized algorithm in time Õ(V^2.376). For bipartite graphs, if a single maximum matching is found, a deterministic algorithm runs in time O(V + E). === Online bipartite matching === The problem of developing an online algorithm for matching was first considered by Richard M. Karp, Umesh Vazirani, and Vijay Vazirani in 1990. In the online setting, nodes on one side of the bipartite graph arrive one at a time and must either be immediately matched to the other side of the graph or discarded. This is a natural generalization of the secretary problem and has applications to online ad auctions. The best online algorithm, for the unweighted maximization case with a random arrival model, attains a competitive ratio of 0.696. == Characterizations == Kőnig's theorem states that, in bipartite graphs, the maximum matching is equal in size to the minimum vertex cover. Via this result, the minimum vertex cover, maximum independent set, and maximum vertex biclique problems may be solved in polynomial time for bipartite graphs. Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching and the Tutte theorem provides a characterization for arbitrary graphs. == Applications == === Matching in general graphs === A Kekulé structure of an aromatic compound consists of a perfect matching of its carbon skeleton, showing the locations of double bonds in the chemical structure.
These structures are named after Friedrich August Kekulé von Stradonitz, who showed that benzene (in graph theoretical terms, a 6-vertex cycle) can be given such a structure. The Hosoya index is the number of non-empty matchings plus one; it is used in computational chemistry and mathematical chemistry investigations of organic compounds. The Chinese postman problem involves finding a minimum-weight perfect matching as a subproblem. === Matching in bipartite graphs === The graduation problem is about choosing a minimum set of classes that meets given requirements for graduation. The Hitchcock transportation problem involves bipartite matching as a subproblem. The subtree isomorphism problem involves bipartite matching as a subproblem. == See also == Matching in hypergraphs – a generalization of matching in graphs. Fractional matching. Dulmage–Mendelsohn decomposition, a partition of the vertices of a bipartite graph into subsets such that each edge belongs to a perfect matching if and only if its endpoints belong to the same subset Edge coloring, a partition of the edges of a graph into matchings Matching preclusion, the minimum number of edges to delete to prevent a perfect matching from existing Rainbow matching, a matching in an edge-colored bipartite graph with no repeated colors Skew-symmetric graph, a type of graph that can be used to model alternating path searches for matchings Stable matching, a matching in which no two elements prefer each other to their matched partners Independent vertex set, a set of vertices (rather than edges) no two of which are adjacent to each other Stable marriage problem (also known as stable matching problem) == References == == Further reading == Lovász, László; Plummer, M. D. (1986), Matching Theory, Annals of Discrete Mathematics, vol. 29, North-Holland, ISBN 0-444-87916-1, MR 0859549 Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein (2001), Introduction to Algorithms (second ed.), MIT Press and McGraw–Hill, Chapter 26, pp.
643–700, ISBN 0-262-53196-8. András Frank (2004). On Kuhn's Hungarian Method – A tribute from Hungary (PDF) (Technical report). Egerváry Research Group. Michael L. Fredman and Robert E. Tarjan (1987), "Fibonacci heaps and their uses in improved network optimization algorithms", Journal of the ACM, 34 (3): 595–615, doi:10.1145/28869.28874, S2CID 7904683. S. J. Cyvin & Ivan Gutman (1988), Kekule Structures in Benzenoid Hydrocarbons, Springer-Verlag. Marek Karpinski and Wojciech Rytter (1998), Fast Parallel Algorithms for Graph Matching Problems, Oxford University Press, ISBN 978-0-19-850162-6 == External links == A graph library with Hopcroft–Karp and Push–Relabel-based maximum cardinality matching implementation
Wikipedia/Matching_(graph_theory)
In mathematics, cylindrical algebraic decomposition (CAD) is a notion, along with an algorithm to compute it, that is fundamental for computer algebra and real algebraic geometry. Given a set S of polynomials in Rn, a cylindrical algebraic decomposition is a decomposition of Rn into connected semialgebraic sets called cells, on which each polynomial has constant sign, either +, − or 0. To be cylindrical, this decomposition must satisfy the following condition: if 1 ≤ k < n and π is the projection from Rn onto Rn−k obtained by removing the last k coordinates, then for every pair of cells c and d, one has either π(c) = π(d) or π(c) ∩ π(d) = ∅. This implies that the images under π of the cells define a cylindrical decomposition of Rn−k. The notion was introduced by George E. Collins in 1975, together with an algorithm for computing it. Collins' algorithm has a computational complexity that is double exponential in n. This is an upper bound, and it is attained on most inputs. There are also examples for which the minimal number of cells is doubly exponential, showing that every general algorithm for cylindrical algebraic decomposition has a double exponential complexity. CAD provides an effective version of quantifier elimination over the reals that has a much better computational complexity than the one resulting from the original proof of the Tarski–Seidenberg theorem. It is efficient enough to be implemented on a computer. It is one of the most important algorithms of computational real algebraic geometry. The search for improvements to Collins' algorithm, or for algorithms with better complexity on subproblems of general interest, is an active field of research. == Implementations == Mathematica: CylindricalDecomposition QEPCAD – Quantifier Elimination by Partial Cylindrical Algebraic Decomposition redlog Maple: The RegularChains Library and ProjectionCAD == References == Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise Algorithms in real algebraic geometry.
Second edition. Algorithms and Computation in Mathematics, 10. Springer-Verlag, Berlin, 2006. x+662 pp. ISBN 978-3-540-33098-1; 3-540-33098-4. Strzebonski, Adam. Cylindrical Algebraic Decomposition from MathWorld. Cylindrical Algebraic Decomposition in Chapter 6 ("Combinatorial Motion Planning") of Planning Algorithms by Steven M. LaValle. Accessed 8 February 2023. Caviness, Bob; Johnson, Jeremy: Quantifier Elimination and Cylindrical Algebraic Decomposition. Texts and Monographs in Symbolic Computation. Springer-Verlag, Berlin, 1998. Collins, George E.: Quantifier elimination for the elementary theory of real closed fields by cylindrical algebraic decomposition, Second GI Conf. Automata Theory and Formal Languages, Springer LNCS 33, 1975. Davenport, James H.; Heintz, Joos: Real quantifier elimination is doubly exponential, Journal of Symbolic Computation, 1988. Volume 5, Issues 1–2, ISSN 0747-7171.
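The univariate base case of the decomposition described above (n = 1) is easy to sketch: the real line is cut at the real roots of the polynomials into sign-invariant cells, alternating open intervals and points. The polynomial x² − 1 with roots ±1 is an illustrative choice; a real CAD implementation isolates roots symbolically rather than taking them as given.

```python
# One-variable cylindrical algebraic decomposition: given the sorted real
# roots of a polynomial, R splits into cells (points and open intervals)
# on which the polynomial has constant sign.

def decompose_line(roots):
    """Cells of R determined by a sorted list of root values."""
    cells, prev = [], None
    for r in roots:
        cells.append(("interval", prev, r))  # open interval before the root
        cells.append(("point", r, r))        # the root itself
        prev = r
    cells.append(("interval", prev, None))   # final unbounded interval
    return cells

def sample_point(cell):
    """One representative point inside the cell."""
    kind, lo, hi = cell
    if kind == "point":
        return lo
    if lo is None:
        return hi - 1
    if hi is None:
        return lo + 1
    return (lo + hi) / 2

p = lambda x: x * x - 1  # sign-invariant on each cell below
for cell in decompose_line([-1, 1]):
    s = p(sample_point(cell))
    sign = "0" if s == 0 else ("+" if s > 0 else "-")
    print(cell, sign)    # signs read +, 0, -, 0, + across the five cells
```

Higher dimensions stack such decompositions: cells over a cell of Rn−1 are graphs of root functions and the bands between them, which is where the cylindricity condition on projections comes in.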
Wikipedia/Cylindrical_algebraic_decomposition
A lattice is an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every pair of elements has a unique supremum (also called a least upper bound or join) and a unique infimum (also called a greatest lower bound or meet). An example is given by the power set of a set, partially ordered by inclusion, for which the supremum is the union and the infimum is the intersection. Another example is given by the natural numbers, partially ordered by divisibility, for which the supremum is the least common multiple and the infimum is the greatest common divisor. Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities. Since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras. These lattice-like structures all admit order-theoretic as well as algebraic descriptions. The sub-field of abstract algebra that studies lattices is called lattice theory. == Definition == A lattice can be defined either order-theoretically as a partially ordered set, or as an algebraic structure. === As partially ordered set === A partially ordered set (poset) (L, ≤) is called a lattice if it is both a join- and a meet-semilattice, i.e. each two-element subset {a, b} ⊆ L has a join (i.e. least upper bound, denoted by a ∨ b) and dually a meet (i.e. greatest lower bound, denoted by a ∧ b). This definition makes ∧ and ∨ binary operations.
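The divisibility example from the introduction can be made concrete: join is the least common multiple and meet is the greatest common divisor. A minimal sketch (the function names are mine):

```python
from math import gcd

# In the divisibility lattice on the positive integers (a <= b meaning
# "a divides b"), the join is the least common multiple and the meet is
# the greatest common divisor.

def join(a, b):  # least upper bound under divisibility
    return a * b // gcd(a, b)

def meet(a, b):  # greatest lower bound under divisibility
    return gcd(a, b)

print(join(4, 6))  # 12: the smallest number divisible by both
print(meet(4, 6))  # 2:  the largest number dividing both
```

Any common multiple of 4 and 6 is a multiple of 12, and any common divisor divides 2, which is exactly the supremum/infimum property.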
Both operations are monotone with respect to the given order: a1 ≤ a2 and b1 ≤ b2 implies that a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2. It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound. With additional assumptions, further conclusions may be possible; see Completeness (order theory) for more discussion of this subject. That article also discusses how one may rephrase the above definition in terms of the existence of suitable Galois connections between related partially ordered sets – an approach of special interest for the category theoretic approach to lattices, and for formal concept analysis. Given a subset of a lattice, H ⊆ L, meet and join restrict to partial functions – they are undefined if their value is not in the subset H. The resulting structure on H is called a partial lattice. In addition to this extrinsic definition as a subset of some other algebraic structure (a lattice), a partial lattice can also be intrinsically defined as a set with two partial binary operations satisfying certain axioms.
=== As algebraic structure === A lattice is an algebraic structure (L, ∨, ∧), consisting of a set L and two binary, commutative and associative operations ∨ and ∧ on L satisfying the following axiomatic identities for all elements a, b ∈ L (sometimes called absorption laws): a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a. The following two identities are also usually regarded as axioms, even though they follow from the two absorption laws taken together. These are called idempotent laws: a ∨ a = a and a ∧ a = a. These axioms assert that both (L, ∨) and (L, ∧) are semilattices. The absorption laws, the only axioms above in which both meet and join appear, distinguish a lattice from an arbitrary pair of semilattice structures and assure that the two semilattices interact appropriately. In particular, each semilattice is the dual of the other. The absorption laws can be viewed as a requirement that the meet and join semilattices define the same partial order. === Connection between the two definitions === An order-theoretic lattice gives rise to the two binary operations ∨ and ∧. Since the commutative, associative and absorption laws can easily be verified for these operations, they make (L, ∨, ∧) into a lattice in the algebraic sense. The converse is also true.
Given an algebraically defined lattice (L, ∨, ∧), one can define a partial order ≤ on L by setting a ≤ b if a = a ∧ b, or a ≤ b if b = a ∨ b, for all elements a, b ∈ L. The laws of absorption ensure that both definitions are equivalent: a = a ∧ b implies b = b ∨ (b ∧ a) = (a ∧ b) ∨ b = a ∨ b, and dually for the other direction. One can now check that the relation ≤ introduced in this way defines a partial ordering within which binary meets and joins are given through the original operations ∨ and ∧. Since the two definitions of a lattice are equivalent, one may freely invoke aspects of either definition in any way that suits the purpose at hand. == Bounded lattice == A bounded lattice is a lattice that additionally has a greatest element (also called maximum, or top element, and denoted by 1, or by ⊤) and a least element (also called minimum, or bottom, denoted by 0 or by ⊥), which satisfy 0 ≤ x ≤ 1 for every x ∈ L. A bounded lattice may also be defined as an algebraic structure of the form (L, ∨, ∧, 0, 1) such that (L, ∨, ∧) is a lattice, 0 (the lattice's bottom) is the identity element for the join operation ∨, and 1 (the lattice's top) is the identity element for the meet operation ∧.
That is, a ∨ 0 = a and a ∧ 1 = a. It can be shown that a partially ordered set is a bounded lattice if and only if every finite set of elements (including the empty set) has a join and a meet. Every lattice can be embedded into a bounded lattice by adding a greatest and a least element. Furthermore, every non-empty finite lattice is bounded, by taking the join (respectively, meet) of all elements, denoted by 1 = ⋁L = a1 ∨ ⋯ ∨ an (respectively 0 = ⋀L = a1 ∧ ⋯ ∧ an) where L = {a1, …, an} is the set of all elements. == Connection to other algebraic structures == Lattices have some connections to the family of group-like algebraic structures. Because meet and join both commute and associate, a lattice can be viewed as consisting of two commutative semigroups having the same domain. For a bounded lattice, these semigroups are in fact commutative monoids. The absorption law is the only defining identity that is peculiar to lattice theory. A bounded lattice can also be thought of as a commutative rig without the distributive axiom. By commutativity, associativity and idempotence one can think of join and meet as operations on non-empty finite sets, rather than on pairs of elements. In a bounded lattice the join and meet of the empty set can also be defined (as 0 and 1, respectively). This makes bounded lattices somewhat more natural than general lattices, and many authors require all lattices to be bounded. The algebraic interpretation of lattices plays an essential role in universal algebra.
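The absorption laws and the equivalence of the two order definitions can be verified exhaustively on a small power-set lattice, with union as join and intersection as meet. A brute-force sketch; the ground set {0, 1, 2} is an arbitrary illustrative choice:

```python
from itertools import combinations

# Check the lattice axioms and the order/algebra correspondence on the
# power set of a 3-element set: join = union (|), meet = intersection (&).

ground = {0, 1, 2}
subsets = [frozenset(c) for r in range(4) for c in combinations(ground, r)]

for a in subsets:
    for b in subsets:
        assert a | (a & b) == a  # absorption: a v (a ^ b) = a
        assert a & (a | b) == a  # absorption: a ^ (a v b) = a
        # The two candidate definitions of <= agree, and both coincide
        # with subset inclusion (frozenset's built-in <=):
        assert (a & b == a) == (a | b == b) == (a <= b)

print("all axioms hold on the power set of", set(ground))
```

The final assertion is exactly the "connection between the two definitions" above: a ≤ b, a = a ∧ b, and b = a ∨ b are interchangeable.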
== Examples == For any set A, the collection of all subsets of A (called the power set of A) can be ordered via subset inclusion to obtain a lattice bounded by A itself and the empty set. In this lattice, the supremum is provided by set union and the infimum is provided by set intersection (see Pic. 1). For any set A, the collection of all finite subsets of A, ordered by inclusion, is also a lattice, and will be bounded if and only if A is finite. For any set A, the collection of all partitions of A, ordered by refinement, is a lattice (see Pic. 3). The positive integers in their usual order form an unbounded lattice, under the operations of "min" and "max". 1 is bottom; there is no top (see Pic. 4). The Cartesian square of the natural numbers, ordered so that (a, b) ≤ (c, d) if a ≤ c and b ≤ d. The pair (0, 0) is the bottom element; there is no top (see Pic. 5). The natural numbers also form a lattice under the operations of taking the greatest common divisor and least common multiple, with divisibility as the order relation: a ≤ b if a divides b. 1 is bottom; 0 is top. Pic. 2 shows a finite sublattice. Every complete lattice (also see below) is a (rather specific) bounded lattice. This class gives rise to a broad range of practical examples. The set of compact elements of an arithmetic complete lattice is a lattice with a least element, where the lattice operations are given by restricting the respective operations of the arithmetic lattice. This is the specific property that distinguishes arithmetic lattices from algebraic lattices, for which the compacts only form a join-semilattice.
Both of these classes of complete lattices are studied in domain theory. Further examples of lattices are given for each of the additional properties discussed below. == Examples of non-lattices == Most partially ordered sets are not lattices, including the following. A discrete poset, meaning a poset such that x ≤ y implies x = y, is a lattice if and only if it has at most one element. In particular the two-element discrete poset is not a lattice. Although the set {1, 2, 3, 6} partially ordered by divisibility is a lattice, the set {1, 2, 3} so ordered is not a lattice because the pair 2, 3 lacks a join; similarly, 2, 3 lacks a meet in {2, 3, 6}. The set {1, 2, 3, 12, 18, 36} partially ordered by divisibility is not a lattice. Every pair of elements has an upper bound and a lower bound, but the pair 2, 3 has three upper bounds, namely 12, 18, and 36, none of which is the least of those three under divisibility (12 and 18 do not divide each other). Likewise the pair 12, 18 has three lower bounds, namely 1, 2, and 3, none of which is the greatest of those three under divisibility (2 and 3 do not divide each other). == Morphisms of lattices == The appropriate notion of a morphism between two lattices flows easily from the above algebraic definition. Given two lattices (L, ∨L, ∧L) and (M, ∨M, ∧M), a lattice homomorphism from L to M is a function f : L → M such that for all a, b ∈ L: f(a ∨L b) = f(a) ∨M f(b), and f(a ∧L b) = f(a) ∧M f(b).
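The non-lattice example with {1, 2, 3, 12, 18, 36} can be checked mechanically: the pair (2, 3) has upper bounds in the set, but none of them is least. A brute-force sketch (the helper names are mine):

```python
# In {1, 2, 3, 12, 18, 36} ordered by divisibility, the pair (2, 3) has
# upper bounds but no *least* upper bound, so the poset is not a lattice.

S = [1, 2, 3, 12, 18, 36]
divides = lambda a, b: b % a == 0

def least_upper_bound(a, b):
    uppers = [u for u in S if divides(a, u) and divides(b, u)]
    for u in uppers:  # a least upper bound must divide every other one
        if all(divides(u, v) for v in uppers):
            return u
    return None       # upper bounds may exist, but no least one

print([u for u in S if divides(2, u) and divides(3, u)])  # [12, 18, 36]
print(least_upper_bound(2, 3))                            # None
```

The missing element is 6 = lcm(2, 3): inside all of the natural numbers the pair has a join, but restricting to this six-element subset destroys it.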
Thus f is a homomorphism of the two underlying semilattices. When lattices with more structure are considered, the morphisms should "respect" the extra structure, too. In particular, a bounded-lattice homomorphism (usually called just "lattice homomorphism") f between two bounded lattices L and M should also have the following property: f(0L) = 0M, and f(1L) = 1M. In the order-theoretic formulation, these conditions just state that a homomorphism of lattices is a function preserving binary meets and joins. For bounded lattices, preservation of least and greatest elements is just preservation of join and meet of the empty set. Any homomorphism of lattices is necessarily monotone with respect to the associated ordering relation; see Limit preserving function. The converse is not true: monotonicity by no means implies the required preservation of meets and joins (see Pic. 9), although an order-preserving bijection is a homomorphism if its inverse is also order-preserving. Given the standard definition of isomorphisms as invertible morphisms, a lattice isomorphism is just a bijective lattice homomorphism. Similarly, a lattice endomorphism is a lattice homomorphism from a lattice to itself, and a lattice automorphism is a bijective lattice endomorphism. Lattices and their homomorphisms form a category. Let L and L′ be two lattices with 0 and 1. A homomorphism f from L to L′ is called 0,1-separating if and only if f⁻¹{f(0)} = {0} (f separates 0) and f⁻¹{f(1)} = {1} (f separates 1).
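A concrete homomorphism can be checked exhaustively on power-set lattices: the map X ↦ X ∩ T from P(S) to P(T) preserves both union and intersection, by distributivity of ∩ over ∪. A sketch with arbitrarily chosen small ground sets:

```python
from itertools import combinations

# On power-set lattices (join = union, meet = intersection), the map
# X -> X & T is a lattice homomorphism from P(S) to P(T):
#   (a | b) & T == (a & T) | (b & T)   and   (a & b) & T == (a & T) & (b & T).

S = {0, 1, 2}
T = frozenset({0, 1})
subsets = [frozenset(c) for r in range(4) for c in combinations(S, r)]

f = lambda X: X & T
for a in subsets:
    for b in subsets:
        assert f(a | b) == f(a) | f(b)  # preserves joins
        assert f(a & b) == f(a) & f(b)  # preserves meets

print("X -> X & T is a lattice homomorphism from P(S) to P(T)")
```

Note that this f also preserves bottom (∅ ↦ ∅) and top (S ↦ T), so it is a bounded-lattice homomorphism in the sense above; it is not injective, so it is not an isomorphism.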
== Sublattices == A sublattice of a lattice L {\displaystyle L} is a subset of L {\displaystyle L} that is a lattice with the same meet and join operations as L . {\displaystyle L.} That is, if L {\displaystyle L} is a lattice and M {\displaystyle M} is a subset of L {\displaystyle L} such that for every pair of elements a , b ∈ M {\displaystyle a,b\in M} both a ∧ b {\displaystyle a\wedge b} and a ∨ b {\displaystyle a\vee b} are in M , {\displaystyle M,} then M {\displaystyle M} is a sublattice of L . {\displaystyle L.} A sublattice M {\displaystyle M} of a lattice L {\displaystyle L} is a convex sublattice of L , {\displaystyle L,} if x ≤ z ≤ y {\displaystyle x\leq z\leq y} and x , y ∈ M {\displaystyle x,y\in M} implies that z {\displaystyle z} belongs to M , {\displaystyle M,} for all elements x , y , z ∈ L . {\displaystyle x,y,z\in L.} == Properties of lattices == We now introduce a number of important properties that lead to interesting special classes of lattices. One, boundedness, has already been discussed. === Completeness === A poset is called a complete lattice if all its subsets have both a join and a meet. In particular, every complete lattice is a bounded lattice. While bounded lattice homomorphisms in general preserve only finite joins and meets, complete lattice homomorphisms are required to preserve arbitrary joins and meets. Every poset that is a complete semilattice is also a complete lattice. Related to this result is the interesting phenomenon that there are various competing notions of homomorphism for this class of posets, depending on whether they are seen as complete lattices, complete join-semilattices, complete meet-semilattices, or as join-complete or meet-complete lattices. "Partial lattice" is not the opposite of "complete lattice" – rather, "partial lattice", "lattice", and "complete lattice" are increasingly restrictive definitions. 
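That every subset of a finite powerset lattice has a join and a meet, including the empty subset (whose join is the bottom and whose meet is the top), can be checked exhaustively (a sketch; the three-element base set is an arbitrary choice):

```python
from functools import reduce
from itertools import combinations
from operator import and_, or_

X = frozenset({1, 2, 3})
elems = [frozenset(c) for r in range(4) for c in combinations(X, r)]

def join_of(S):
    # join of a family of subsets is its union; the empty join is the bottom
    return reduce(or_, S, frozenset())

def meet_of(S):
    # meet is intersection; the empty meet is the top element X
    return reduce(and_, S, X)

# Every subset S of the lattice, including the empty one, has a join and a meet.
complete = all(join_of(S) in elems and meet_of(S) in elems
               for r in range(len(elems) + 1)
               for S in combinations(elems, r))
print(complete)  # True
```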
=== Conditional completeness === A conditionally complete lattice is a lattice in which every nonempty subset that has an upper bound has a join (that is, a least upper bound). Such lattices provide the most direct generalization of the completeness axiom of the real numbers. A conditionally complete lattice is either a complete lattice, or a complete lattice without its maximum element 1 , {\displaystyle 1,} its minimum element 0 , {\displaystyle 0,} or both. === Distributivity === Since lattices come with two binary operations, it is natural to ask whether one of them distributes over the other, that is, whether one or the other of the following dual laws holds for every three elements a , b , c ∈ L , {\displaystyle a,b,c\in L,} : Distributivity of ∨ {\displaystyle \vee } over ∧ {\displaystyle \wedge } a ∨ ( b ∧ c ) = ( a ∨ b ) ∧ ( a ∨ c ) . {\displaystyle a\vee (b\wedge c)=(a\vee b)\wedge (a\vee c).} Distributivity of ∧ {\displaystyle \wedge } over ∨ {\displaystyle \vee } a ∧ ( b ∨ c ) = ( a ∧ b ) ∨ ( a ∧ c ) . {\displaystyle a\wedge (b\vee c)=(a\wedge b)\vee (a\wedge c).} A lattice that satisfies the first or, equivalently (as it turns out), the second axiom, is called a distributive lattice. The only non-distributive lattices with fewer than 6 elements are called M3 and N5; they are shown in Pictures 10 and 11, respectively. A lattice is distributive if and only if it does not have a sublattice isomorphic to M3 or N5. Each distributive lattice is isomorphic to a lattice of sets (with union and intersection as join and meet, respectively). For an overview of stronger notions of distributivity that are appropriate for complete lattices and that are used to define more special classes of lattices such as frames and completely distributive lattices, see distributivity in order theory. === Modularity === For some applications the distributivity condition is too strong, and the following weaker property is often useful. 
A lattice ( L , ∨ , ∧ ) {\displaystyle (L,\vee ,\wedge )} is modular if, for all elements a , b , c ∈ L , {\displaystyle a,b,c\in L,} the following identity holds: ( a ∧ c ) ∨ ( b ∧ c ) = ( ( a ∧ c ) ∨ b ) ∧ c . {\displaystyle (a\wedge c)\vee (b\wedge c)=((a\wedge c)\vee b)\wedge c.} (Modular identity) This condition is equivalent to the following axiom: a ≤ c {\displaystyle a\leq c} implies a ∨ ( b ∧ c ) = ( a ∨ b ) ∧ c . {\displaystyle a\vee (b\wedge c)=(a\vee b)\wedge c.} (Modular law) A lattice is modular if and only if it does not have a sublattice isomorphic to N5 (shown in Pic. 11). Besides distributive lattices, examples of modular lattices are the lattice of submodules of a module (hence modular), the lattice of two-sided ideals of a ring, and the lattice of normal subgroups of a group. The set of first-order terms with the ordering "is more specific than" is a non-modular lattice used in automated reasoning. === Semimodularity === A finite lattice is modular if and only if it is both upper and lower semimodular. For a lattice of finite length, the (upper) semimodularity is equivalent to the condition that the lattice is graded and its rank function r {\displaystyle r} satisfies the following condition: r ( x ) + r ( y ) ≥ r ( x ∧ y ) + r ( x ∨ y ) . {\displaystyle r(x)+r(y)\geq r(x\wedge y)+r(x\vee y).} Another equivalent (for graded lattices) condition is Birkhoff's condition: for each x {\displaystyle x} and y {\displaystyle y} in L , {\displaystyle L,} if x {\displaystyle x} and y {\displaystyle y} both cover x ∧ y , {\displaystyle x\wedge y,} then x ∨ y {\displaystyle x\vee y} covers both x {\displaystyle x} and y . {\displaystyle y.} A lattice is called lower semimodular if its dual is semimodular. For finite lattices this means that the previous conditions hold with ∨ {\displaystyle \vee } and ∧ {\displaystyle \wedge } exchanged, "covers" exchanged with "is covered by", and inequalities reversed. 
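The positions of M3 and N5 relative to the distributive and modular laws can be verified by brute force over their order relations (a sketch; the labeling 0 < a < c < 1 with b incomparable to both is assumed for N5, and 0 < a, b, c < 1 pairwise incomparable for M3):

```python
def meet_join(elems, le):
    """Meet and join computed from an order relation le (a set of (x, y) pairs)."""
    def below(z):
        return sum((w, z) in le for w in elems)
    def meet(x, y):
        # greatest lower bound: the lower bound with the largest down-set
        return max((z for z in elems if (z, x) in le and (z, y) in le), key=below)
    def join(x, y):
        # least upper bound: the upper bound with the smallest down-set
        return min((z for z in elems if (x, z) in le and (y, z) in le), key=below)
    return meet, join

def distributive(elems, le):
    meet, join = meet_join(elems, le)
    return all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
               for x in elems for y in elems for z in elems)

def modular(elems, le):
    # modular law: x <= z implies x ∨ (y ∧ z) = (x ∨ y) ∧ z
    meet, join = meet_join(elems, le)
    return all(join(x, meet(y, z)) == meet(join(x, y), z)
               for x in elems for y in elems for z in elems if (x, z) in le)

E = ["0", "a", "b", "c", "1"]
refl = {(x, x) for x in E}
# N5: 0 < a < c < 1, with b between 0 and 1 but incomparable to a and c.
n5 = refl | {("0", y) for y in E} | {(x, "1") for x in E} | {("a", "c")}
# M3: 0 < a, b, c < 1, pairwise incomparable.
m3 = refl | {("0", y) for y in E} | {(x, "1") for x in E}

print(distributive(E, n5), modular(E, n5))  # False False
print(distributive(E, m3), modular(E, m3))  # False True
```

N5 fails both laws, while M3 is modular but not distributive, matching the characterizations quoted above.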
=== Continuity and algebraicity === In domain theory, it is natural to seek to approximate the elements in a partial order by "much simpler" elements. This leads to the class of continuous posets, consisting of posets where every element can be obtained as the supremum of a directed set of elements that are way-below the element. If one can additionally restrict these to the compact elements of a poset for obtaining these directed sets, then the poset is even algebraic. Both concepts can be applied to lattices as follows: A continuous lattice is a complete lattice that is continuous as a poset. An algebraic lattice is a complete lattice that is algebraic as a poset. Both of these classes have interesting properties. For example, continuous lattices can be characterized as algebraic structures (with infinitary operations) satisfying certain identities. While such a characterization is not known for algebraic lattices, they can be described "syntactically" via Scott information systems. === Complements and pseudo-complements === Let L {\displaystyle L} be a bounded lattice with greatest element 1 and least element 0. Two elements x {\displaystyle x} and y {\displaystyle y} of L {\displaystyle L} are complements of each other if and only if: x ∨ y = 1 and x ∧ y = 0. {\displaystyle x\vee y=1\quad {\text{ and }}\quad x\wedge y=0.} In general, some elements of a bounded lattice might not have a complement, and others might have more than one complement. For example, the set { 0 , 1 / 2 , 1 } {\displaystyle \{0,1/2,1\}} with its usual ordering is a bounded lattice, and 1 2 {\displaystyle {\tfrac {1}{2}}} does not have a complement. In the bounded lattice N5, the element a {\displaystyle a} has two complements, viz. b {\displaystyle b} and c {\displaystyle c} (see Pic. 11). A bounded lattice for which every element has a complement is called a complemented lattice. A complemented lattice that is also distributive is a Boolean algebra. 
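The three-element chain mentioned above can be checked directly (a sketch; in a chain, meet and join are min and max):

```python
from fractions import Fraction

# The bounded chain 0 < 1/2 < 1.
chain = [Fraction(0), Fraction(1, 2), Fraction(1)]

def complements(x):
    # all y with x ∨ y = 1 and x ∧ y = 0
    return [y for y in chain if max(x, y) == 1 and min(x, y) == 0]

no_complement = complements(Fraction(1, 2)) == []
print(no_complement)  # True: 1/2 has no complement in this lattice
```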
For a distributive lattice, the complement of x , {\displaystyle x,} when it exists, is unique. In the case that the complement is unique, we write ¬ x = y {\textstyle \lnot x=y} and equivalently, ¬ y = x . {\textstyle \lnot y=x.} The corresponding unary operation over L , {\displaystyle L,} called complementation, introduces an analogue of logical negation into lattice theory. Heyting algebras are an example of distributive lattices where some members might be lacking complements. Every element x {\displaystyle x} of a Heyting algebra has, on the other hand, a pseudo-complement, also denoted ¬ x . {\textstyle \lnot x.} The pseudo-complement is the greatest element y {\displaystyle y} such that x ∧ y = 0. {\displaystyle x\wedge y=0.} If the pseudo-complement of every element of a Heyting algebra is in fact a complement, then the Heyting algebra is in fact a Boolean algebra. === Jordan–Dedekind chain condition === A chain from x 0 {\displaystyle x_{0}} to x n {\displaystyle x_{n}} is a set { x 0 , x 1 , … , x n } , {\displaystyle \left\{x_{0},x_{1},\ldots ,x_{n}\right\},} where x 0 < x 1 < x 2 < … < x n . {\displaystyle x_{0}<x_{1}<x_{2}<\ldots <x_{n}.} The length of this chain is n, or one less than its number of elements. A chain is maximal if x i {\displaystyle x_{i}} covers x i − 1 {\displaystyle x_{i-1}} for all 1 ≤ i ≤ n . {\displaystyle 1\leq i\leq n.} If for any pair, x {\displaystyle x} and y , {\displaystyle y,} where x < y , {\displaystyle x<y,} all maximal chains from x {\displaystyle x} to y {\displaystyle y} have the same length, then the lattice is said to satisfy the Jordan–Dedekind chain condition. 
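The Jordan–Dedekind chain condition can be tested exhaustively on a small example (a sketch; the divisor lattice of 36 is an illustrative choice, and `covers`/`maximal_chains` are hypothetical helper names):

```python
n = 36
divs = [d for d in range(1, n + 1) if n % d == 0]

def covers(y, x):
    # y covers x in the divisibility order: x < y with nothing strictly between
    return y != x and y % x == 0 and not any(
        z != x and z != y and y % z == 0 and z % x == 0 for z in divs)

def maximal_chains(x):
    # all maximal chains from x up to the top element n
    if x == n:
        return [[x]]
    return [[x] + rest for y in divs if covers(y, x) for rest in maximal_chains(y)]

lengths = {len(c) - 1 for c in maximal_chains(1)}
print(lengths)  # {4}: every maximal chain from 1 to 36 has length 4
```

Since every covering step multiplies a divisor by a single prime, and 36 = 2²·3² has four prime factors counted with multiplicity, all maximal chains have the same length and the condition holds.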
=== Graded/ranked === A lattice ( L , ≤ ) {\displaystyle (L,\leq )} is called graded, sometimes ranked (but see Ranked poset for an alternative meaning), if it can be equipped with a rank function r : L → N {\displaystyle r:L\to \mathbb {N} } sometimes to Z {\displaystyle \mathbb {Z} } , compatible with the ordering (so r ( x ) < r ( y ) {\displaystyle r(x)<r(y)} whenever x < y {\displaystyle x<y} ) such that whenever y {\displaystyle y} covers x , {\displaystyle x,} then r ( y ) = r ( x ) + 1. {\displaystyle r(y)=r(x)+1.} The value of the rank function for a lattice element is called its rank. A lattice element y {\displaystyle y} is said to cover another element x , {\displaystyle x,} if y > x , {\displaystyle y>x,} but there does not exist a z {\displaystyle z} such that y > z > x . {\displaystyle y>z>x.} Here, y > x {\displaystyle y>x} means x ≤ y {\displaystyle x\leq y} and x ≠ y . {\displaystyle x\neq y.} == Free lattices == Any set X {\displaystyle X} may be used to generate the free semilattice F X . {\displaystyle FX.} The free semilattice is defined to consist of all of the finite subsets of X , {\displaystyle X,} with the semilattice operation given by ordinary set union. The free semilattice has the universal property. For the free lattice over a set X , {\displaystyle X,} Whitman gave a construction based on polynomials over X {\displaystyle X} 's members. == Important lattice-theoretic notions == We now define some order-theoretic notions of importance to lattice theory. In the following, let x {\displaystyle x} be an element of some lattice L . {\displaystyle L.} x {\displaystyle x} is called: Join irreducible if x = a ∨ b {\displaystyle x=a\vee b} implies x = a or x = b . {\displaystyle x=a{\text{ or }}x=b.} for all a , b ∈ L . {\displaystyle a,b\in L.} If L {\displaystyle L} has a bottom element 0 , {\displaystyle 0,} some authors require x ≠ 0 {\displaystyle x\neq 0} . 
When the first condition is generalized to arbitrary joins ⋁ i ∈ I a i , {\displaystyle \bigvee _{i\in I}a_{i},} x {\displaystyle x} is called completely join irreducible (or ∨ {\displaystyle \vee } -irreducible). The dual notion is meet irreducibility ( ∧ {\displaystyle \wedge } -irreducible). For example, in Pic. 2, the elements 2, 3, 4, and 5 are join irreducible, while 12, 15, 20, and 30 are meet irreducible. Depending on definition, the bottom element 1 and top element 60 may or may not be considered join irreducible and meet irreducible, respectively. In the lattice of real numbers with the usual order, each element is join irreducible, but none is completely join irreducible. Join prime if x ≤ a ∨ b {\displaystyle x\leq a\vee b} implies x ≤ a or x ≤ b . {\displaystyle x\leq a{\text{ or }}x\leq b.} Again some authors require x ≠ 0 {\displaystyle x\neq 0} , although this is unusual. This too can be generalized to obtain the notion completely join prime. The dual notion is meet prime. Every join-prime element is also join irreducible, and every meet-prime element is also meet irreducible. The converse holds if L {\displaystyle L} is distributive. Let L {\displaystyle L} have a bottom element 0. An element x {\displaystyle x} of L {\displaystyle L} is an atom if 0 < x {\displaystyle 0<x} and there exists no element y ∈ L {\displaystyle y\in L} such that 0 < y < x . {\displaystyle 0<y<x.} Then L {\displaystyle L} is called: Atomic if for every nonzero element x {\displaystyle x} of L , {\displaystyle L,} there exists an atom a {\displaystyle a} of L {\displaystyle L} such that a ≤ x ; {\displaystyle a\leq x;} Atomistic if every element of L {\displaystyle L} is a supremum of atoms. However, many sources and mathematical communities use the term "atomic" to mean "atomistic" as defined above. The notions of ideals and the dual notion of filters refer to particular kinds of subsets of a partially ordered set, and are therefore important for lattice theory. 
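The join-irreducible and meet-irreducible elements cited for the lattice of Pic. 2 can be recovered by brute force (a sketch assuming that lattice is the divisors of 60 with join = lcm and meet = gcd):

```python
from math import gcd

n = 60
divs = [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

def join_irreducible(x):
    # x (excluding the bottom element 1) is join irreducible when
    # x = a ∨ b forces x ∈ {a, b}
    return x != 1 and all(x in (a, b)
                          for a in divs for b in divs if lcm(a, b) == x)

def meet_irreducible(x):
    # dual notion, excluding the top element 60
    return x != n and all(x in (a, b)
                          for a in divs for b in divs if gcd(a, b) == x)

print([x for x in divs if join_irreducible(x)])   # [2, 3, 4, 5]
print([x for x in divs if meet_irreducible(x)])   # [12, 15, 20, 30]
```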
Details can be found in the respective entries. == See also == Join and meet – Concept in order theory Map of lattices – Concept in mathematics Orthocomplemented lattice – Bounded lattice in which every element has a complement Total order – Order whose elements are all comparable Ideal – Nonempty, upper-bounded, downward-closed subset and filter (dual notions) Skew lattice – Algebraic structure (generalization to non-commutative join and meet) Eulerian lattice Post's lattice – lattice of all clones (sets of logical connectives closed under composition and containing all projections) on a two-element set {0, 1}, ordered by inclusion Tamari lattice – mathematical object formed by an order on the ways of parenthesizing an expression Young–Fibonacci lattice 0,1-simple lattice === Applications that use lattice theory === Note that in many applications the sets are only partial lattices: not every pair of elements has a meet or join. Pointless topology Lattice of subgroups Spectral space Invariant subspace Closure operator Abstract interpretation Subsumption lattice Fuzzy set theory Algebraizations of first-order logic Semantics of programming languages Domain theory Ontology (computer science) Multiple inheritance Formal concept analysis and Lattice Miner (theory and tool) Bloom filter Information flow Ordinal optimization Quantum logic Median graph Knowledge space Regular language learning Analogical modeling == Notes == == References == == External links ==
Wikipedia/Lattice_theory
In algebra (in particular in algebraic geometry or algebraic number theory), a valuation is a function on a field that provides a measure of the size or multiplicity of elements of the field. It generalizes to commutative algebra the notion of size inherent in consideration of the degree of a pole or multiplicity of a zero in complex analysis, the degree of divisibility of a number by a prime number in number theory, and the geometrical concept of contact between two algebraic or analytic varieties in algebraic geometry. A field with a valuation on it is called a valued field. == Definition == One starts with the following objects: a field K and its multiplicative group K×, an abelian totally ordered group (Γ, +, ≥). The ordering and group law on Γ are extended to the set Γ ∪ {∞} by the rules ∞ ≥ α for all α ∈ Γ, ∞ + α = α + ∞ = ∞ + ∞ = ∞ for all α ∈ Γ. Then a valuation of K is any map v : K → Γ ∪ {∞} that satisfies the following properties for all a, b in K: v(a) = ∞ if and only if a = 0, v(ab) = v(a) + v(b), v(a + b) ≥ min(v(a), v(b)), with equality if v(a) ≠ v(b). A valuation v is trivial if v(a) = 0 for all a in K×, otherwise it is non-trivial. The second property asserts that any valuation is a group homomorphism on K×. The third property is a version of the triangle inequality on metric spaces adapted to an arbitrary Γ (see Multiplicative notation below). For valuations used in geometric applications, the first property implies that any non-empty germ of an analytic variety near a point contains that point. The valuation can be interpreted as the order of the leading-order term. The third property then corresponds to the order of a sum being the order of the larger term, unless the two terms have the same order, in which case they may cancel and the sum may have larger order. 
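The three defining properties can be illustrated with the 2-adic valuation on the rationals (a sketch; `v2` is a hypothetical helper, and the p-adic case is treated formally later in the article):

```python
import math
from fractions import Fraction

def v2(x):
    # 2-adic valuation on Q: v2(0) = ∞, otherwise the exponent of 2 in x
    if x == 0:
        return math.inf
    x = Fraction(x)
    v, n, d = 0, x.numerator, x.denominator
    while n % 2 == 0:
        n, v = n // 2, v + 1
    while d % 2 == 0:
        d, v = d // 2, v - 1
    return v

a, b = Fraction(12), Fraction(5, 8)
homomorphism = v2(a * b) == v2(a) + v2(b)       # v(ab) = v(a) + v(b)
ultrametric = v2(a + b) >= min(v2(a), v2(b))    # v(a + b) >= min(v(a), v(b))
equality_case = v2(a + b) == min(v2(a), v2(b))  # equality, since v2(a) != v2(b)
print(v2(a), v2(b), v2(a + b))  # 2 -3 -3
```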
For many applications, Γ is an additive subgroup of the real numbers R {\displaystyle \mathbb {R} } in which case ∞ can be interpreted as +∞ in the extended real numbers; note that min ( a , + ∞ ) = min ( + ∞ , a ) = a {\displaystyle \min(a,+\infty )=\min(+\infty ,a)=a} for any real number a, and thus +∞ is the unit under the binary operation of minimum. The real numbers (extended by +∞) with the operations of minimum and addition form a semiring, called the min tropical semiring, and a valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together. === Multiplicative notation and absolute values === The concept was developed by Emil Artin in his book Geometric Algebra writing the group in multiplicative notation as (Γ, ·, ≥): Instead of ∞, we adjoin a formal symbol O to Γ, with the ordering and group law extended by the rules O ≤ α for all α ∈ Γ, O · α = α · O = O for all α ∈ Γ. Then a valuation of K is any map | ⋅ |v : K → Γ ∪ {O} satisfying the following properties for all a, b ∈ K: |a|v = O if and only if a = 0, |ab|v = |a|v · |b|v, |a+b|v ≤ max(|a|v, |b|v), with equality if |a|v ≠ |b|v. (Note that the directions of the inequalities are reversed from those in the additive notation.) If Γ is a subgroup of the positive real numbers under multiplication, the last condition is the ultrametric inequality, a stronger form of the triangle inequality |a+b|v ≤ |a|v + |b|v, and | ⋅ |v is an absolute value. In this case, we may pass to the additive notation with value group Γ + ⊆ ( R , + ) {\displaystyle \Gamma _{+}\subseteq (\mathbb {R} ,+)} by taking v+(a) = −log |a|v. Each valuation on K defines a corresponding linear preorder: a ≼ b ⇔ |a|v ≤ |b|v. Conversely, given a "≼" satisfying the required properties, we can define valuation |a|v = {b: b ≼ a ∧ a ≼ b}, with multiplication and ordering based on K and ≼. 
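In multiplicative notation the same data yields an ultrametric absolute value (a sketch with base 1/2; `v2` and `abs2` are hypothetical helper names):

```python
from fractions import Fraction

def v2(x):
    # additive 2-adic valuation (nonzero rational inputs assumed)
    x = Fraction(x)
    v, n, d = 0, x.numerator, x.denominator
    while n % 2 == 0:
        n, v = n // 2, v + 1
    while d % 2 == 0:
        d, v = d // 2, v - 1
    return v

def abs2(x):
    # multiplicative form: |x| = (1/2)**v2(x), with |0| = 0
    return Fraction(0) if x == 0 else Fraction(1, 2) ** v2(x)

a, b = Fraction(3, 4), Fraction(1, 4)
multiplicative = abs2(a * b) == abs2(a) * abs2(b)
ultrametric = abs2(a + b) <= max(abs2(a), abs2(b))  # note the reversed direction
print(abs2(a), abs2(b), abs2(a + b))  # 4 4 1
```

Here |a + b| = 1 is strictly smaller than max(|a|, |b|) = 4, illustrating that strict inequality can occur exactly when the two terms have equal absolute value.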
=== Terminology === In this article, we use the terms defined above, in the additive notation. However, some authors use alternative terms: our "valuation" (satisfying the ultrametric inequality) is called an "exponential valuation" or "non-Archimedean absolute value" or "ultrametric absolute value"; our "absolute value" (satisfying the triangle inequality) is called a "valuation" or an "Archimedean absolute value". === Associated objects === There are several objects defined from a given valuation v : K → Γ ∪ {∞} ; the value group or valuation group Γv = v(K×), a subgroup of Γ (though v is usually surjective so that Γv = Γ); the valuation ring Rv is the set of a ∈ K with v(a) ≥ 0, the prime ideal mv is the set of a ∈ K with v(a) > 0 (it is in fact a maximal ideal of Rv), the residue field kv = Rv/mv, the place of K associated to v, the class of v under the equivalence defined below. == Basic properties == === Equivalence of valuations === Two valuations v1 and v2 of K with valuation group Γ1 and Γ2, respectively, are said to be equivalent if there is an order-preserving group isomorphism φ : Γ1 → Γ2 such that v2(a) = φ(v1(a)) for all a in K×. This is an equivalence relation. Two valuations of K are equivalent if and only if they have the same valuation ring. An equivalence class of valuations of a field is called a place. Ostrowski's theorem gives a complete classification of places of the field of rational numbers Q : {\displaystyle \mathbb {Q} :} these are precisely the equivalence classes of valuations for the p-adic completions of Q . {\displaystyle \mathbb {Q} .} === Extension of valuations === Let v be a valuation of K and let L be a field extension of K. An extension of v (to L) is a valuation w of L such that the restriction of w to K is v. The set of all such extensions is studied in the ramification theory of valuations. Let L/K be a finite extension and let w be an extension of v to L. 
The index of Γv in Γw, e(w/v) = [Γw : Γv], is called the reduced ramification index of w over v. It satisfies e(w/v) ≤ [L : K] (the degree of the extension L/K). The relative degree of w over v is defined to be f(w/v) = [Rw/mw : Rv/mv] (the degree of the extension of residue fields). It is also less than or equal to the degree of L/K. When L/K is separable, the ramification index of w over v is defined to be e(w/v)pi, where pi is the inseparable degree of the extension Rw/mw over Rv/mv. === Complete valued fields === When the ordered abelian group Γ is the additive group of the integers, the associated valuation is equivalent to an absolute value, and hence induces a metric on the field K. If K is complete with respect to this metric, then it is called a complete valued field. If K is not complete, one can use the valuation to construct its completion, as in the examples below, and different valuations can define different completion fields. In general, a valuation induces a uniform structure on K, and K is called a complete valued field if it is complete as a uniform space. There is a related property known as spherical completeness: it is equivalent to completeness if Γ = Z , {\displaystyle \Gamma =\mathbb {Z} ,} but stronger in general. == Examples == === p-adic valuation === The most basic example is the p-adic valuation νp associated to a prime integer p, on the rational numbers K = Q , {\displaystyle K=\mathbb {Q} ,} with valuation ring R = Z ( p ) , {\displaystyle R=\mathbb {Z} _{(p)},} where Z ( p ) {\displaystyle \mathbb {Z} _{(p)}} is the localization of Z {\displaystyle \mathbb {Z} } at the prime ideal ( p ) {\displaystyle (p)} . The valuation group is the additive integers Γ = Z . 
{\displaystyle \Gamma =\mathbb {Z} .} For an integer a ∈ R = Z , {\displaystyle a\in R=\mathbb {Z} ,} the valuation νp(a) measures the divisibility of a by powers of p: ν p ( a ) = max { e ∈ Z ∣ p e divides a } ; {\displaystyle \nu _{p}(a)=\max\{e\in \mathbb {Z} \mid p^{e}{\text{ divides }}a\};} and for a fraction, νp(a/b) = νp(a) − νp(b). Writing this multiplicatively yields the p-adic absolute value, which conventionally has as base 1 / p = p − 1 {\displaystyle 1/p=p^{-1}} , so | a | p := p − ν p ( a ) {\displaystyle |a|_{p}:=p^{-\nu _{p}(a)}} . The completion of Q {\displaystyle \mathbb {Q} } with respect to νp is the field Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers. === Order of vanishing === Let K = F(x), the rational functions on the affine line X = F1, and take a point a ∈ X. For a polynomial f ( x ) = a k ( x − a ) k + a k + 1 ( x − a ) k + 1 + ⋯ + a n ( x − a ) n {\displaystyle f(x)=a_{k}(x{-}a)^{k}+a_{k+1}(x{-}a)^{k+1}+\cdots +a_{n}(x{-}a)^{n}} with a k ≠ 0 {\displaystyle a_{k}\neq 0} , define va(f) = k, the order of vanishing at x = a; and va(f /g) = va(f) − va(g). Then the valuation ring R consists of rational functions with no pole at x = a, and the completion is the formal Laurent series ring F((x−a)). This can be generalized to the field of Puiseux series K{{t}} (fractional powers), the Levi-Civita field (its Cauchy completion), and the field of Hahn series, with valuation in all cases returning the smallest exponent of t appearing in the series. === π-adic valuation === Generalizing the previous examples, let R be a principal ideal domain, K be its field of fractions, and π be an irreducible element of R. 
Since every principal ideal domain is a unique factorization domain, every non-zero element a of R can be written (essentially) uniquely as a = π e a p 1 e 1 p 2 e 2 ⋯ p n e n {\displaystyle a=\pi ^{e_{a}}p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{n}^{e_{n}}} where the e's are non-negative integers and the pi are irreducible elements of R that are not associates of π. In particular, the integer ea is uniquely determined by a. The π-adic valuation of K is then given by v π ( 0 ) = ∞ {\displaystyle v_{\pi }(0)=\infty } v π ( a / b ) = e a − e b , for a , b ∈ R , a , b ≠ 0. {\displaystyle v_{\pi }(a/b)=e_{a}-e_{b},{\text{ for }}a,b\in R,a,b\neq 0.} If π' is another irreducible element of R such that (π') = (π) (that is, they generate the same ideal in R), then the π-adic valuation and the π'-adic valuation are equal. Thus, the π-adic valuation can be called the P-adic valuation, where P = (π). === P-adic valuation on a Dedekind domain === The previous example can be generalized to Dedekind domains. Let R be a Dedekind domain, K its field of fractions, and let P be a non-zero prime ideal of R. Then, the localization of R at P, denoted RP, is a principal ideal domain whose field of fractions is K. The construction of the previous section applied to the prime ideal PRP of RP yields the P-adic valuation of K. == Vector spaces over valuation fields == Suppose that Γ ∪ {0} is the set of non-negative real numbers under multiplication. Then we say that the valuation is non-discrete if its range (the valuation group) is infinite (and hence has an accumulation point at 0). Suppose that X is a vector space over K and that A and B are subsets of X. Then we say that A absorbs B if there exists an α ∈ K such that λ ∈ K and |λ| ≥ |α| implies that B ⊆ λ A. A is called radial or absorbing if A absorbs every finite subset of X. Radial subsets of X are invariant under finite intersection. Also, A is called circled if λ ∈ K and |λ| ≤ 1 implies λ A ⊆ A. 
The set of circled subsets of X is invariant under arbitrary intersections. The circled hull of A is the intersection of all circled subsets of X containing A. Suppose that X and Y are vector spaces over a non-discrete valuation field K, let A ⊆ X, B ⊆ Y, and let f : X → Y be a linear map. If B is circled or radial then so is f − 1 ( B ) {\displaystyle f^{-1}(B)} . If A is circled then so is f(A) but if A is radial then f(A) will be radial under the additional condition that f is surjective. == See also == Discrete valuation Euclidean valuation Field norm Absolute value (algebra) == Notes == == References == == External links == Danilov, V.I. (2001) [1994], "Valuation", Encyclopedia of Mathematics, EMS Press Discrete valuation at PlanetMath. Valuation at PlanetMath. Weisstein, Eric W. "Valuation". MathWorld.
Wikipedia/Valuation_theory
Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (1965) by Michael Spivak is a brief, rigorous, and modern textbook of multivariable calculus, differential forms, and integration on manifolds for advanced undergraduates. == Description == Calculus on Manifolds is a brief monograph on the theory of vector-valued functions of several real variables (f : Rn→Rm) and differentiable manifolds in Euclidean space. In addition to extending the concepts of differentiation (including the inverse and implicit function theorems) and Riemann integration (including Fubini's theorem) to functions of several variables, the book treats the classical theorems of vector calculus, including those of Cauchy–Green, Ostrogradsky–Gauss (divergence theorem), and Kelvin–Stokes, in the language of differential forms on differentiable manifolds embedded in Euclidean space, and as corollaries of the generalized Stokes theorem on manifolds-with-boundary. The book culminates with the statement and proof of this vast and abstract modern generalization of several classical results: ∫ M d ω = ∫ ∂ M ω {\displaystyle \int _{M}d\omega =\int _{\partial M}\omega } The cover of Calculus on Manifolds features snippets of a July 2, 1850 letter from Lord Kelvin to Sir George Stokes containing the first disclosure of the classical Stokes' theorem (i.e., the Kelvin–Stokes theorem). == Reception == Calculus on Manifolds aims to present the topics of multivariable and vector calculus in the manner in which they are seen by a modern working mathematician, yet simply and selectively enough to be understood by undergraduate students whose previous coursework in mathematics comprises only one-variable calculus and introductory linear algebra. 
While Spivak's elementary treatment of modern mathematical tools is broadly successful—and this approach has made Calculus on Manifolds a standard introduction to the rigorous theory of multivariable calculus—the text is also well known for its laconic style, lack of motivating examples, and frequent omission of non-obvious steps and arguments. For example, in order to state and prove the generalized Stokes' theorem on chains, a profusion of unfamiliar concepts and constructions (e.g., tensor products, differential forms, tangent spaces, pullbacks, exterior derivatives, cubes and chains) are introduced in quick succession within the span of 25 pages. Moreover, careful readers have noted a number of nontrivial oversights throughout the text, including missing hypotheses in theorems, inaccurately stated theorems, and proofs that fail to handle all cases. == Other textbooks == A more recent textbook which also covers these topics at an undergraduate level is the text Analysis on Manifolds by James Munkres (366 pp.). At more than twice the length of Calculus on Manifolds, Munkres's work presents a more careful and detailed treatment of the subject matter at a leisurely pace. Nevertheless, Munkres acknowledges the influence of Spivak's earlier text in the preface of Analysis on Manifolds. Spivak's five-volume textbook A Comprehensive Introduction to Differential Geometry states in its preface that Calculus on Manifolds serves as a prerequisite for a course based on this text. In fact, several of the concepts introduced in Calculus on Manifolds reappear in the first volume of this classic work in more sophisticated settings. 
== See also == == Footnotes == === Notes === === Citations === == References == Auslander, Louis (1967), "Review of Calculus on manifolds—a modern approach to classical theorems of advanced calculus", Quarterly of Applied Mathematics, 24 (4): 388–389 Botts, Truman (1966), "Reviewed Work: Calculus on Manifolds by Michael Spivak", Science, 153 (3732): 164–165, doi:10.1126/science.153.3732.164-a Hubbard, John H.; Hubbard, Barbara Burke (2009) [1998], Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach (4th ed.), Upper Saddle River, N.J.: Prentice Hall (4th edition by Matrix Editions (Ithaca, N.Y.)), ISBN 978-0-9715766-5-0 [An elementary approach to differential forms with an emphasis on concrete examples and computations] Katz, Victor J. (1979), "The History of Stokes' Theorem", Mathematics Magazine, 52 (3), Mathematical Association of America: 146–156, doi:10.2307/2690275 Loomis, Lynn Harold; Sternberg, Shlomo (2014) [1968], Advanced Calculus (Revised ed.), Reading, Mass.: Addison-Wesley (revised edition by Jones and Bartlett (Boston); reprinted by World Scientific (Hackensack, N.J.)), pp. 305–567, ISBN 978-981-4583-93-0 [A general treatment of differential forms, differentiable manifolds, and selected applications to mathematical physics for advanced undergraduates] Munkres, James (1968), "Review of Calculus on Manifolds", The American Mathematical Monthly, 75 (5): 567–568, doi:10.2307/2314769, JSTOR 2314769 Munkres, James (1991), Analysis on Manifolds, Redwood City, Calif.: Addison-Wesley (reprinted by Westview Press (Boulder, Colo.)), ISBN 978-0-201-31596-7 [An undergraduate treatment of multivariable and vector calculus with coverage similar to Calculus on Manifolds, with mathematical ideas and proofs presented in greater detail] Nickerson, Helen K.; Spencer, Donald C.; Steenrod, Norman E. 
(1959), Advanced Calculus, Princeton, N.J.: Van Nostrand, ISBN 978-0-486-48090-9 [A unified treatment of linear and multilinear algebra, multivariable calculus, differential forms, and introductory algebraic topology for advanced undergraduates] Rudin, Walter (1976) [1953], Principles of Mathematical Analysis (3rd ed.), New York: McGraw Hill, pp. 204–299, ISBN 978-0-07-054235-8 [An unorthodox though rigorous approach to differential forms that avoids many of the usual algebraic constructions] Spivak, Michael (2018) [1965], Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (Mathematics Monograph Series), New York: W. A. Benjamin, Inc. (reprinted by Addison-Wesley (Reading, Mass.) and Westview Press (Boulder, Colo.)), ISBN 978-0-8053-9021-6 [A brief, rigorous, and modern treatment of multivariable calculus, differential forms, and integration on manifolds for advanced undergraduates] Spivak, Michael (1999) [1970], A Comprehensive Introduction to Differential Geometry, Vol. 1 (3rd ed.), Houston, Tex.: Publish or Perish, Inc., ISBN 978-0-9140-9870-6 [A thorough account of differentiable manifolds at the graduate level; contains a more sophisticated reframing and extensions of Chapters 4 and 5 of Calculus on Manifolds] Tu, Loring W. (2011) [2008], An Introduction to Manifolds (2nd ed.), New York: Springer, ISBN 978-1-4419-7399-3 [A standard treatment of the theory of smooth manifolds at the 1st year graduate level]
Wikipedia/Calculus_on_Manifolds_(book)
In mathematics, algebraic spaces form a generalization of the schemes of algebraic geometry, introduced by Michael Artin for use in deformation theory. Intuitively, schemes are given by gluing together affine schemes using the Zariski topology, while algebraic spaces are given by gluing together affine schemes using the finer étale topology. Alternatively one can think of schemes as being locally isomorphic to affine schemes in the Zariski topology, while algebraic spaces are locally isomorphic to affine schemes in the étale topology. The resulting category of algebraic spaces extends the category of schemes and allows one to carry out several natural constructions that are used in the construction of moduli spaces but are not always possible in the smaller category of schemes, such as taking the quotient of a scheme by a free action of a finite group (cf. the Keel–Mori theorem). == Definition == There are two common ways to define algebraic spaces: they can be defined as either quotients of schemes by étale equivalence relations, or as sheaves on a big étale site that are locally isomorphic to schemes. These two definitions are essentially equivalent. === Algebraic spaces as quotients of schemes === An algebraic space X comprises a scheme U and a closed subscheme R ⊆ U × U satisfying the following two conditions: 1. R is an equivalence relation as a subset of U × U 2. The projections pi: R → U onto each factor are étale maps. Some authors, such as Knutson, add an extra condition that an algebraic space has to be quasi-separated, meaning that the diagonal map is quasi-compact. One can always assume that R and U are affine schemes. Doing so means that the theory of algebraic spaces is not dependent on the full theory of schemes, and can indeed be used as a (more general) replacement of that theory. If R is the trivial equivalence relation over each connected component of U (i.e.
for all x, y belonging to the same connected component of U, we have xRy if and only if x=y), then the algebraic space will be a scheme in the usual sense. Since a general algebraic space X does not satisfy this requirement, it allows a single connected component of U to cover X with many "sheets". The point set underlying the algebraic space X is then given by |U| / |R| as a set of equivalence classes. Let Y be an algebraic space defined by an equivalence relation S ⊂ V × V. The set Hom(Y, X) of morphisms of algebraic spaces is then defined by the condition that it makes the descent sequence Hom(Y, X) → Hom(V, X) ⇉ Hom(S, X) {\displaystyle \mathrm {Hom} (Y,X)\rightarrow \mathrm {Hom} (V,X)\rightrightarrows \mathrm {Hom} (S,X)} exact (this definition is motivated by a descent theorem of Grothendieck for surjective étale maps of affine schemes). With these definitions, the algebraic spaces form a category. Let U be an affine scheme over a field k defined by a system of polynomials g(x), x = (x1, ..., xn), let k { x 1 , … , x n } {\displaystyle k\{x_{1},\ldots ,x_{n}\}\ } denote the ring of algebraic functions in x over k, and let X = {R ⊂ U × U} be an algebraic space. The appropriate stalks ÕX, x on X are then defined to be the local rings of algebraic functions defined by ÕU, u, where u ∈ U is a point lying over x and ÕU, u is the local ring corresponding to u of the ring k{x1, ..., xn} / (g) of algebraic functions on U. A point on an algebraic space is said to be smooth if ÕX, x ≅ k{z1, ..., zd} for some indeterminates z1, ..., zd. The dimension of X at x is then just defined to be d. A morphism f: Y → X of algebraic spaces is said to be étale at y ∈ Y (where x = f(y)) if the induced map on stalks ÕX, x → ÕY, y is an isomorphism.
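Set-theoretically, the passage from the pair (U, R) to the algebraic space X is just the quotient |U| / |R| described above. The following toy model (plain Python, with finite sets standing in for the point sets |U| and |R|; the point names are purely illustrative) computes that quotient for two "sheets" glued pointwise:

```python
# Toy set-theoretic model of the quotient |U| / |R|: U is a finite
# stand-in for the point set of the scheme U, and R a subset of U x U
# assumed to be an equivalence relation. The underlying point set of
# the algebraic space is the set of R-equivalence classes.

def quotient(U, R):
    """Return the set of R-equivalence classes of U (as frozensets)."""
    classes = []
    for u in U:
        cls = frozenset(v for v in U if (u, v) in R)
        if cls not in classes:
            classes.append(cls)
    return classes

# Two disjoint "sheets" {a1, a2} and {b1, b2}, glued pointwise:
U = {"a1", "a2", "b1", "b2"}
pairs = {("a1", "b1"), ("a2", "b2")}
# Close up to a genuine equivalence relation (reflexive + symmetric;
# transitivity already holds for this small example).
R = {(u, u) for u in U} | pairs | {(v, u) for (u, v) in pairs}

print(sorted(sorted(c) for c in quotient(U, R)))
# [['a1', 'b1'], ['a2', 'b2']]
```

The four points of U descend to two points of the quotient, one for each glued pair, mirroring how a single connected component of U can cover X with several sheets.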
The structure sheaf OX on the algebraic space X is defined by associating the ring of functions O(V) on V (defined by étale maps from V to the affine line A1 in the sense just defined) to any algebraic space V which is étale over X. === Algebraic spaces as sheaves === An algebraic space X {\displaystyle {\mathfrak {X}}} can be defined as a sheaf of sets X : ( Sch / S ) et o p → Sets {\displaystyle {\mathfrak {X}}:({\text{Sch}}/S)_{\text{et}}^{op}\to {\text{Sets}}} such that There is a surjective étale morphism h X → X {\displaystyle h_{X}\to {\mathfrak {X}}} the diagonal morphism Δ X / S : X → X × X {\displaystyle \Delta _{{\mathfrak {X}}/S}:{\mathfrak {X}}\to {\mathfrak {X}}\times {\mathfrak {X}}} is representable. The second condition is equivalent to the property that given any schemes Y , Z {\displaystyle Y,Z} and morphisms h Y , h Z → X {\displaystyle h_{Y},h_{Z}\to {\mathfrak {X}}} , their fiber-product of sheaves h Y × X h Z {\displaystyle h_{Y}\times _{\mathfrak {X}}h_{Z}} is representable by a scheme over S {\displaystyle S} . Note that some authors, such as Knutson, add an extra condition that an algebraic space has to be quasi-separated, meaning that the diagonal map is quasi-compact. == Algebraic spaces and schemes == Algebraic spaces are similar to schemes, and much of the theory of schemes extends to algebraic spaces. For example, most properties of morphisms of schemes also apply to algebraic spaces, one can define cohomology of quasicoherent sheaves, this has the usual finiteness properties for proper morphisms, and so on. Proper algebraic spaces over a field of dimension one (curves) are schemes. Non-singular proper algebraic spaces of dimension two over a field (smooth surfaces) are schemes. Quasi-separated group objects in the category of algebraic spaces over a field are schemes, though there are non quasi-separated group objects that are not schemes. 
Commutative-group objects in the category of algebraic spaces over an arbitrary scheme which are proper, locally finite presentation, flat, and cohomologically flat in dimension 0 are schemes. Not every singular algebraic surface is a scheme. Hironaka's example can be used to give a non-singular 3-dimensional proper algebraic space that is not a scheme, given by the quotient of a scheme by a group of order 2 acting freely. This illustrates one difference between schemes and algebraic spaces: the quotient of an algebraic space by a discrete group acting freely is an algebraic space, but the quotient of a scheme by a discrete group acting freely need not be a scheme (even if the group is finite). Every quasi-separated algebraic space contains a dense open affine subscheme, and the complement of such a subscheme always has codimension ≥ 1. Thus algebraic spaces are in a sense "close" to affine schemes. The quotient of the complex numbers by a lattice is an algebraic space, but is not an elliptic curve, even though the corresponding analytic space is an elliptic curve (or more precisely is the image of an elliptic curve under the functor from complex algebraic spaces to analytic spaces). In fact this algebraic space quotient is not a scheme, is not complete, and is not even quasi-separated. This shows that although the quotient of an algebraic space by an infinite discrete group is an algebraic space, it can have strange properties and might not be the algebraic space one was "expecting". Similar examples are given by the quotient of the complex affine line by the integers, or the quotient of the complex affine line minus the origin by the powers of some number: again the corresponding analytic space is a variety, but the algebraic space is not. == Algebraic spaces and analytic spaces == Algebraic spaces over the complex numbers are closely related to analytic spaces and Moishezon manifolds. 
Roughly speaking, the difference between complex algebraic spaces and analytic spaces is that complex algebraic spaces are formed by gluing affine pieces together using the étale topology, while analytic spaces are formed by gluing with the classical topology. In particular there is a functor from complex algebraic spaces of finite type to analytic spaces. Hopf manifolds give examples of analytic surfaces that do not come from a proper algebraic space (though one can construct non-proper and non-separated algebraic spaces whose analytic space is the Hopf surface). It is also possible for different algebraic spaces to correspond to the same analytic space: for example, an elliptic curve and the quotient of C by the corresponding lattice are not isomorphic as algebraic spaces, but the corresponding analytic spaces are isomorphic. Artin showed that proper algebraic spaces over the complex numbers are more or less the same as Moishezon spaces. == Generalization == A far-reaching generalization of algebraic spaces is given by algebraic stacks. In the category of stacks we can form even more quotients by group actions than in the category of algebraic spaces (the resulting quotient is called a quotient stack). == Citations == == References == Artin, Michael (1969), "The implicit function theorem in algebraic geometry", in Abhyankar, Shreeram Shankar (ed.), Algebraic geometry: papers presented at the Bombay Colloquium, 1968, of Tata Institute of Fundamental Research studies in mathematics, vol. 4, Oxford University Press, pp. 13–34, ISBN 978-0-19-617607-9, MR 0262237 Artin, Michael (1971), Algebraic spaces, Yale Mathematical Monographs, vol. 3, Yale University Press, ISBN 978-0-300-01396-2, MR 0407012 Knutson, Donald (1971), Algebraic Spaces, Lecture Notes in Mathematics, vol. 203, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0059750, ISBN 978-3-540-05496-2, MR 0302647 == External links == Danilov, V.I. 
(2001) [1994], "Algebraic space", Encyclopedia of Mathematics, EMS Press Algebraic space in the stacks project
Wikipedia/Algebraic_space
The concept of a Projective space plays a central role in algebraic geometry. This article aims to define the notion in terms of abstract algebraic geometry and to describe some basic uses of projective spaces. == Homogeneous polynomial ideals == Let k be an algebraically closed field, and V be a finite-dimensional vector space over k. The symmetric algebra of the dual vector space V* is called the polynomial ring on V and denoted by k[V]. It is a naturally graded algebra by the degree of polynomials. The projective Nullstellensatz states that, for any homogeneous ideal I that does not contain all polynomials of a certain degree (referred to as an irrelevant ideal), the common zero locus of all polynomials in I (or Nullstelle) is non-trivial (i.e. the common zero locus contains more than the single element {0}), and, more precisely, the ideal of polynomials that vanish on that locus coincides with the radical of the ideal I. This last assertion is best summarized by the formula: for any relevant ideal I, I ( V ( I ) ) = I . {\displaystyle {\mathcal {I}}({\mathcal {V}}(I))={\sqrt {I}}.} In particular, maximal homogeneous relevant ideals of k[V] are one-to-one with lines through the origin of V. == Construction of projectivized schemes == Let V be a finite-dimensional vector space over a field k. The scheme over k defined by Proj(k[V]) is called projectivization of V. The projective n-space on k is the projectivization of the vector space A k n + 1 {\displaystyle \mathbb {A} _{k}^{n+1}} . The definition of the sheaf is done on the base of open sets of principal open sets D(P), where P varies over the set of homogeneous polynomials, by setting the sections Γ ( D ( P ) , O P ( V ) ) {\displaystyle \Gamma (D(P),{\mathcal {O}}_{\mathbb {P} (V)})} to be the ring ( k [ V ] P ) 0 {\displaystyle (k[V]_{P})_{0}} , the zero degree component of the ring obtained by localization at P. 
Its elements are therefore the rational functions with homogeneous numerator and some power of P as the denominator, with the same degree as the numerator. The situation is most clear at a non-vanishing linear form φ. The restriction of the structure sheaf to the open set D(φ) is then canonically identified with the affine scheme Spec(k[ker φ]). Since the D(φ) form an open cover of P ( V ) {\displaystyle \mathbb {P} (V)} , projective schemes can be thought of as being obtained by gluing together isomorphic affine schemes along these identifications. It can be noted that the ring of global sections of this scheme is a field, which implies that the scheme is not affine. Any two open sets intersect non-trivially: i.e., the scheme is irreducible. When the field k is algebraically closed, P ( V ) {\displaystyle \mathbb {P} (V)} is in fact an abstract variety that is furthermore complete. cf. Glossary of scheme theory == Divisors and twisting sheaves == The Proj construction in fact gives more than a mere scheme: a sheaf of graded modules over the structure sheaf is defined in the process. The homogeneous components of this graded sheaf are denoted O ( i ) {\displaystyle {\mathcal {O}}(i)} , the Serre twisting sheaves. All of these sheaves are in fact line bundles. By the correspondence between Cartier divisors and line bundles, the first twisting sheaf O ( 1 ) {\displaystyle {\mathcal {O}}(1)} is equivalent to hyperplane divisors. Since the ring of polynomials is a unique factorization domain, any prime ideal of height 1 is principal, which shows that any Weil divisor is linearly equivalent to some integer multiple of a hyperplane divisor. This consideration proves that the Picard group of a projective space is free of rank 1. That is Pic ⁡ P k n = Z {\displaystyle \operatorname {Pic} \mathbf {P} _{\mathbf {k} }^{n}=\mathbb {Z} } , and the isomorphism is given by the degree of divisors.
=== Classification of vector bundles === The invertible sheaves, or line bundles, on the projective space P k n , {\displaystyle \mathbb {P} _{k}^{n},\,} for k a field, are exactly the twisting sheaves O ( m ) , m ∈ Z , {\displaystyle {\mathcal {O}}(m),\ m\in \mathbb {Z} ,} so the Picard group of P k n {\displaystyle \mathbb {P} _{k}^{n}} is isomorphic to Z {\displaystyle \mathbb {Z} } . The isomorphism is given by the first Chern class. The space of local sections on an open set U ⊆ P ( V ) {\displaystyle U\subseteq \mathbb {P} (V)} of the line bundle O ( k ) {\displaystyle {\mathcal {O}}(k)} is the space of homogeneous degree k regular functions on the cone in V associated to U. In particular, the space of global sections Γ ( P , O ( m ) ) {\displaystyle \Gamma (\mathbb {P} ,{\mathcal {O}}(m))} vanishes if m < 0, and consists of constants in k for m=0 and of homogeneous polynomials of degree m for m > 0. (Hence has dimension ( m + n m ) = ( m + n n ) {\textstyle {\binom {m+n}{m}}={\binom {m+n}{n}}} ). The Birkhoff-Grothendieck theorem states that on the projective line, any vector bundle splits in a unique way as a direct sum of the line bundles. === Important line bundles === The tautological bundle, which appears for instance as the exceptional divisor of the blowing up of a smooth point is the sheaf O ( − 1 ) {\displaystyle {\mathcal {O}}(-1)} . The canonical bundle K ( P k n ) , {\displaystyle {\mathcal {K}}(\mathbb {P} _{k}^{n}),\,} is O ( − ( n + 1 ) ) {\displaystyle {\mathcal {O}}(-(n+1))} . This fact derives from a fundamental geometric statement on projective spaces: the Euler sequence. The negativity of the canonical line bundle makes projective spaces prime examples of Fano varieties, equivalently, their anticanonical line bundle is ample (in fact very ample). Their index (cf. 
Fano varieties) is given by Ind ⁡ ( P n ) = n + 1 {\displaystyle \operatorname {Ind} (\mathbb {P} ^{n})=n+1} , and, by a theorem of Kobayashi-Ochiai, projective spaces are characterized amongst Fano varieties by the property Ind ⁡ ( X ) = dim ⁡ X + 1. {\displaystyle \operatorname {Ind} (X)=\dim X+1.} == Morphisms to projective schemes == As affine spaces can be embedded in projective spaces, all affine varieties can be embedded in projective spaces too. Any choice of a finite system of nonsimultaneously vanishing global sections of a globally generated line bundle defines a morphism to a projective space. A line bundle whose base can be embedded in a projective space by such a morphism is called very ample. The group of symmetries of the projective space P k n {\displaystyle \mathbb {P} _{\mathbf {k} }^{n}} is the group of projectivized linear automorphisms P G L n + 1 ( k ) {\displaystyle \mathrm {PGL} _{n+1}(\mathbf {k} )} . The choice of a morphism to a projective space j : X → P n {\displaystyle j:X\to \mathbf {P} ^{n}} modulo the action of this group is in fact equivalent to the choice of a globally generating n-dimensional linear system of divisors on a line bundle on X. The choice of a projective embedding of X, modulo projective transformations is likewise equivalent to the choice of a very ample line bundle on X. A morphism to a projective space j : X → P n {\displaystyle j:X\to \mathbf {P} ^{n}} defines a globally generated line bundle by j ∗ O ( 1 ) {\displaystyle j^{*}{\mathcal {O}}(1)} and a linear system j ∗ ( Γ ( P n , O ( 1 ) ) ) ⊂ Γ ( X , j ∗ O ( 1 ) ) . {\displaystyle j^{*}(\Gamma (\mathbf {P} ^{n},{\mathcal {O}}(1)))\subset \Gamma (X,j^{*}{\mathcal {O}}(1)).} If the range of the morphism j {\displaystyle j} is not contained in a hyperplane divisor, then the pull-back is an injection and the linear system of divisors j ∗ ( Γ ( P n , O ( 1 ) ) ) {\displaystyle j^{*}(\Gamma (\mathbf {P} ^{n},{\mathcal {O}}(1)))} is a linear system of dimension n. 
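The binomial dimension count for the global sections Γ(P^n, O(m)) given earlier, and the ambient dimension N = C(n+d, d) − 1 of the Veronese embeddings in the next section, can both be checked by enumerating monomials. A sketch in plain Python (no computer-algebra system assumed):

```python
from itertools import combinations_with_replacement
from math import comb

def dim_sections(n, m):
    """Number of monomials of degree m in the n+1 homogeneous
    coordinates of P^n, i.e. dim Gamma(P^n, O(m)) for m >= 0."""
    if m < 0:
        return 0  # O(m) has no nonzero global sections when m < 0
    variables = range(n + 1)
    # each multiset of m variables is one degree-m monomial
    return sum(1 for _ in combinations_with_replacement(variables, m))

# Agreement with the closed formula C(m+n, n):
for n in range(1, 4):
    for m in range(0, 5):
        assert dim_sections(n, m) == comb(m + n, n)

# Veronese embedding P^n -> P^N with N = C(n+d, d) - 1:
# the quadratic (d = 2) Veronese of P^2 lands in P^5.
print(comb(2 + 2, 2) - 1)  # 5
```

The enumeration and the closed formula agree because a degree-m monomial in n+1 variables is exactly a multiset of size m drawn from the variables, of which there are C(m+n, n).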
=== An example: the Veronese embeddings === The Veronese embeddings are embeddings P n → P N {\displaystyle \mathbb {P} ^{n}\to \mathbb {P} ^{N}} for N = ( n + d d ) − 1. {\textstyle N={\binom {n+d}{d}}-1.} See the answer on MathOverflow for an application of the Veronese embedding to the calculation of cohomology groups of smooth projective hypersurfaces (smooth divisors). == Curves in projective spaces == As Fano varieties, the projective spaces are ruled varieties. The intersection theory of curves in the projective plane yields the Bézout theorem. == See also == === General algebraic geometry === Scheme (mathematics) Projective variety Proj construction === General projective geometry === Projective space Projective geometry Homogeneous polynomial == Notes == == References == Robin Hartshorne (1977). Algebraic Geometry. Springer-Verlag. ISBN 0-387-90244-9.
Wikipedia/Algebraic_geometry_of_projective_spaces
In algebraic geometry, a morphism between algebraic varieties is a function between the varieties that is given locally by polynomials. It is also called a regular map. A morphism from an algebraic variety to the affine line is also called a regular function. A regular map whose inverse is also regular is called biregular, and the biregular maps are the isomorphisms of algebraic varieties. Because regular and biregular are very restrictive conditions – there are no non-constant regular functions on projective varieties – the concepts of rational and birational maps are widely used as well; they are partial functions that are defined locally by rational fractions instead of polynomials. An algebraic variety has naturally the structure of a locally ringed space; a morphism between algebraic varieties is precisely a morphism of the underlying locally ringed spaces. == Definition == If X and Y are closed subvarieties of A n {\displaystyle \mathbb {A} ^{n}} and A m {\displaystyle \mathbb {A} ^{m}} (so they are affine varieties), then a regular map f : X → Y {\displaystyle f\colon X\to Y} is the restriction of a polynomial map A n → A m {\displaystyle \mathbb {A} ^{n}\to \mathbb {A} ^{m}} . Explicitly, it has the form: f = ( f 1 , … , f m ) {\displaystyle f=(f_{1},\dots ,f_{m})} where the f i {\displaystyle f_{i}} s are in the coordinate ring of X: k [ X ] = k [ x 1 , … , x n ] / I , {\displaystyle k[X]=k[x_{1},\dots ,x_{n}]/I,} where I is the ideal defining X (note: two polynomials f and g define the same function on X if and only if f − g is in I). The image f(X) lies in Y, and hence satisfies the defining equations of Y. That is, a regular map f : X → Y {\displaystyle f:X\to Y} is the same as the restriction of a polynomial map whose components satisfy the defining equations of Y {\displaystyle Y} . 
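The remark that two polynomials define the same regular function on X exactly when their difference lies in the defining ideal I can be seen concretely on the parabola X = V(y − x²): the polynomials y and x² differ by y − x², so they agree on X but nowhere else in general. A numerical sketch in plain Python (the sample points are arbitrary):

```python
# On X = V(y - x^2) in A^2, f(x, y) = y and g(x, y) = x*x differ by
# y - x^2, an element of the ideal I = (y - x^2), so f and g define
# the same regular function on X.

f = lambda x, y: y
g = lambda x, y: x * x

# Points of X are exactly those of the form (t, t^2); on them f = g...
points_on_X = [(t, t * t) for t in (-2, -1, 0, 1, 3)]
assert all(f(x, y) == g(x, y) for (x, y) in points_on_X)

# ...while off X the two polynomials are genuinely different functions.
assert f(1, 5) != g(1, 5)
```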
More generally, a map f : X→Y between two varieties is regular at a point x if there is a neighbourhood U of x and a neighbourhood V of f(x) such that f(U) ⊂ V and the restricted function f : U→V is regular as a function on some affine charts of U and V. Then f is called regular, if it is regular at all points of X. Note: It is not immediately obvious that the two definitions coincide: if X and Y are affine varieties, then a map f : X→Y is regular in the first sense if and only if it is so in the second sense. Also, it is not immediately clear whether regularity depends on a choice of affine charts (it does not.) This kind of a consistency issue, however, disappears if one adopts the formal definition. Formally, an (abstract) algebraic variety is defined to be a particular kind of a locally ringed space. When this definition is used, a morphism of varieties is just a morphism of locally ringed spaces. The composition of regular maps is again regular; thus, algebraic varieties form the category of algebraic varieties where the morphisms are the regular maps. Regular maps between affine varieties correspond contravariantly in one-to-one to algebra homomorphisms between the coordinate rings: if f : X→Y is a morphism of affine varieties, then it defines the algebra homomorphism f # : k [ Y ] → k [ X ] , g ↦ g ∘ f {\displaystyle f^{\#}:k[Y]\to k[X],\,g\mapsto g\circ f} where k [ X ] , k [ Y ] {\displaystyle k[X],k[Y]} are the coordinate rings of X and Y; it is well-defined since g ∘ f = g ( f 1 , … , f m ) {\displaystyle g\circ f=g(f_{1},\dots ,f_{m})} is a polynomial in elements of k [ X ] {\displaystyle k[X]} . 
Conversely, if ϕ : k [ Y ] → k [ X ] {\displaystyle \phi :k[Y]\to k[X]} is an algebra homomorphism, then it induces the morphism ϕ a : X → Y {\displaystyle \phi ^{a}:X\to Y} given by: writing k [ Y ] = k [ y 1 , … , y m ] / J , {\displaystyle k[Y]=k[y_{1},\dots ,y_{m}]/J,} ϕ a = ( ϕ ( y 1 ¯ ) , … , ϕ ( y m ¯ ) ) {\displaystyle \phi ^{a}=(\phi ({\overline {y_{1}}}),\dots ,\phi ({\overline {y_{m}}}))} where y ¯ i {\displaystyle {\overline {y}}_{i}} are the images of y i {\displaystyle y_{i}} 's. Note ϕ a # = ϕ {\displaystyle {\phi ^{a}}^{\#}=\phi } as well as f # a = f . {\displaystyle {f^{\#}}^{a}=f.} In particular, f is an isomorphism of affine varieties if and only if f# is an isomorphism of the coordinate rings. For example, if X is a closed subvariety of an affine variety Y and f is the inclusion, then f# is the restriction of regular functions on Y to X. See #Examples below for more examples. == Regular functions == In the particular case that Y {\displaystyle Y} equals A 1 {\displaystyle \mathbb {A} ^{1}} the regular maps f : X → A 1 {\displaystyle f:X\rightarrow \mathbb {A} ^{1}} are called regular functions, and are algebraic analogs of smooth functions studied in differential geometry. The ring of regular functions (that is the coordinate ring or more abstractly the ring of global sections of the structure sheaf) is a fundamental object in affine algebraic geometry. The only regular function on a projective variety is constant (this can be viewed as an algebraic analogue of Liouville's theorem in complex analysis). 
A scalar function f : X → A 1 {\displaystyle f:X\rightarrow \mathbb {A} ^{1}} is regular at a point x {\displaystyle x} if, in some open affine neighborhood of x {\displaystyle x} , it is a rational function that is regular at x {\displaystyle x} ; i.e., there are regular functions g {\displaystyle g} , h {\displaystyle h} near x {\displaystyle x} such that f = g / h {\displaystyle f=g/h} and h {\displaystyle h} does not vanish at x {\displaystyle x} . Caution: the condition is for some pair (g, h) not for all pairs (g, h); see Examples. If X is a quasi-projective variety; i.e., an open subvariety of a projective variety, then the function field k(X) is the same as that of the closure X ¯ {\displaystyle {\overline {X}}} of X and thus a rational function on X is of the form g/h for some homogeneous elements g, h of the same degree in the homogeneous coordinate ring k [ X ¯ ] {\displaystyle k[{\overline {X}}]} of X ¯ {\displaystyle {\overline {X}}} (cf. Projective variety#Variety structure). Then a rational function f on X is regular at a point x if and only if there are some homogeneous elements g, h of the same degree in k [ X ¯ ] {\displaystyle k[{\overline {X}}]} such that f = g/h and h does not vanish at x. This characterization is sometimes taken as the definition of a regular function. == Comparison with a morphism of schemes == If X = Spec ⁡ A {\displaystyle X=\operatorname {Spec} A} and Y = Spec ⁡ B {\displaystyle Y=\operatorname {Spec} B} are affine schemes, then each ring homomorphism ϕ : B → A {\displaystyle \phi :B\rightarrow A} determines a morphism ϕ a : X → Y , p ↦ ϕ − 1 ( p ) {\displaystyle \phi ^{a}:X\to Y,\,{\mathfrak {p}}\mapsto \phi ^{-1}({\mathfrak {p}})} by taking the pre-images of prime ideals. All morphisms between affine schemes are of this type and gluing such morphisms gives a morphism of schemes in general. 
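The "some pair (g, h), not all pairs" caution above is illustrated by the circle example treated in the Examples section: on X: x² + y² = 1, the pair (1 − y, x) breaks down at the point (0, 1), while the pair (x, 1 + y) presents the same rational function and is regular there, because (1 − y)(1 + y) = x² on X. A numerical sketch in plain Python:

```python
from math import cos, sin, isclose

# On the circle x^2 + y^2 = 1, the identity (1 - y)(1 + y) = x^2
# shows (1 - y)/x and x/(1 + y) agree as rational functions on X.
first = lambda x, y: (1 - y) / x    # undefined where x = 0
second = lambda x, y: x / (1 + y)   # undefined where y = -1

# Agreement at circle points where both denominators are nonzero:
for t in (0.3, 1.1, 2.0, 4.0):
    x, y = cos(t), sin(t)
    assert isclose(first(x, y), second(x, y))

# At (0, 1) the first pair divides by zero, but the second pair
# exhibits the function as regular there, with value 0:
print(second(0.0, 1.0))  # 0.0
```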
Now, if X, Y are affine varieties; i.e., A, B are integral domains that are finitely generated algebras over an algebraically closed field k, then, working with only the closed points, the above coincides with the definition given at #Definition. (Proof: If f : X → Y is a morphism, then writing ϕ = f # {\displaystyle \phi =f^{\#}} , we need to show m f ( x ) = ϕ − 1 ( m x ) {\displaystyle {\mathfrak {m}}_{f(x)}=\phi ^{-1}({\mathfrak {m}}_{x})} where m x , m f ( x ) {\displaystyle {\mathfrak {m}}_{x},{\mathfrak {m}}_{f(x)}} are the maximal ideals corresponding to the points x and f(x); i.e., m x = { g ∈ k [ X ] ∣ g ( x ) = 0 } {\displaystyle {\mathfrak {m}}_{x}=\{g\in k[X]\mid g(x)=0\}} . This is immediate.) This fact means that the category of affine varieties can be identified with a full subcategory of affine schemes over k. Since morphisms of varieties are obtained by gluing morphisms of affine varieties in the same way morphisms of schemes are obtained by gluing morphisms of affine schemes, it follows that the category of varieties is a full subcategory of the category of schemes over k. For more details, see [1]. == Examples == The regular functions on A n {\displaystyle \mathbb {A} ^{n}} are exactly the polynomials in n {\displaystyle n} variables and the regular functions on P n {\displaystyle \mathbb {P} ^{n}} are exactly the constants. Let X {\displaystyle X} be the affine curve y = x 2 {\displaystyle y=x^{2}} . Then f : X → A 1 , ( x , y ) ↦ x {\displaystyle f:X\to \mathbf {A} ^{1},\,(x,y)\mapsto x} is a morphism; it is bijective with the inverse g ( x ) = ( x , x 2 ) {\displaystyle g(x)=(x,x^{2})} . Since g {\displaystyle g} is also a morphism, f {\displaystyle f} is an isomorphism of varieties. Let X {\displaystyle X} be the affine curve y 2 = x 3 + x 2 {\displaystyle y^{2}=x^{3}+x^{2}} . Then f : A 1 → X , t ↦ ( t 2 − 1 , t 3 − t ) {\displaystyle f:\mathbf {A} ^{1}\to X,\,t\mapsto (t^{2}-1,t^{3}-t)} is a morphism. 
It corresponds to the ring homomorphism f # : k [ X ] → k [ t ] , g ↦ g ( t 2 − 1 , t 3 − t ) , {\displaystyle f^{\#}:k[X]\to k[t],\,g\mapsto g(t^{2}-1,t^{3}-t),} which is seen to be injective (since f is surjective). Continuing the preceding example, let U = A1 − {1}. Since U is the complement of the hyperplane t = 1, U is affine. The restriction f : U → X {\displaystyle f:U\to X} is bijective. But the corresponding ring homomorphism is the inclusion k [ X ] = k [ t 2 − 1 , t 3 − t ] ↪ k [ t , ( t − 1 ) − 1 ] {\displaystyle k[X]=k[t^{2}-1,t^{3}-t]\hookrightarrow k[t,(t-1)^{-1}]} , which is not an isomorphism and so the restriction f |U is not an isomorphism. Let X be the affine curve x2 + y2 = 1 and let f ( x , y ) = 1 − y x . {\displaystyle f(x,y)={1-y \over x}.} Then f is a rational function on X. It is regular at (0, 1) despite the expression since, as a rational function on X, f can also be written as f ( x , y ) = x 1 + y {\displaystyle f(x,y)={x \over 1+y}} . Let X = A2 − (0, 0). Then X is an algebraic variety since it is an open subset of a variety. If f is a regular function on X, then f is regular on D A 2 ( x ) = A 2 − { x = 0 } {\displaystyle D_{\mathbf {A} ^{2}}(x)=\mathbf {A} ^{2}-\{x=0\}} and so is in k [ D A 2 ( x ) ] = k [ A 2 ] [ x − 1 ] = k [ x , x − 1 , y ] {\displaystyle k[D_{\mathbf {A} ^{2}}(x)]=k[\mathbf {A} ^{2}][x^{-1}]=k[x,x^{-1},y]} . Similarly, it is in k [ x , y , y − 1 ] {\displaystyle k[x,y,y^{-1}]} . Thus, we can write: f = g x n = h y m {\displaystyle f={g \over x^{n}}={h \over y^{m}}} where g, h are polynomials in k[x, y]. But this implies g is divisible by xn and so f is in fact a polynomial. Hence, the ring of regular functions on X is just k[x, y]. (This also shows that X cannot be affine since if it were, X is determined by its coordinate ring and thus X = A2.) Suppose P 1 = A 1 ∪ { ∞ } {\displaystyle \mathbf {P} ^{1}=\mathbf {A} ^{1}\cup \{\infty \}} by identifying the points (x : 1) with the points x on A1 and ∞ = (1 : 0). 
There is an automorphism σ of P1 given by σ(x : y) = (y : x); in particular, σ exchanges 0 and ∞. If f is a rational function on P1, then σ # ( f ) = f ( 1 / z ) {\displaystyle \sigma ^{\#}(f)=f(1/z)} and f is regular at ∞ if and only if f(1/z) is regular at zero. Taking the function field k(V) of an irreducible algebraic curve V, the functions F in the function field may all be realised as morphisms from V to the projective line over k. (cf. #Properties) The image will either be a single point, or the whole projective line (this is a consequence of the completeness of projective varieties). That is, unless F is actually constant, we have to attribute to F the value ∞ at some points of V. For any algebraic varieties X, Y, the projection p : X × Y → X , ( x , y ) ↦ x {\displaystyle p:X\times Y\to X,\,(x,y)\mapsto x} is a morphism of varieties. If X and Y are affine, then the corresponding ring homomorphism is p # : k [ X ] → k [ X × Y ] = k [ X ] ⊗ k k [ Y ] , f ↦ f ⊗ 1 {\displaystyle p^{\#}:k[X]\to k[X\times Y]=k[X]\otimes _{k}k[Y],\,f\mapsto f\otimes 1} where ( f ⊗ 1 ) ( x , y ) = f ( p ( x , y ) ) = f ( x ) {\displaystyle (f\otimes 1)(x,y)=f(p(x,y))=f(x)} . == Properties == A morphism between varieties is continuous with respect to Zariski topologies on the source and the target. The image of a morphism of varieties need not be open nor closed (for example, the image of A 2 → A 2 , ( x , y ) ↦ ( x , x y ) {\displaystyle \mathbf {A} ^{2}\to \mathbf {A} ^{2},\,(x,y)\mapsto (x,xy)} is neither open nor closed). However, one can still say: if f is a morphism between varieties, then the image of f contains an open dense subset of its closure (cf. constructible set). A morphism f:X→Y of algebraic varieties is said to be dominant if it has dense image. For such an f, if V is a nonempty open affine subset of Y, then there is a nonempty open affine subset U of X such that f(U) ⊂ V and then f # : k [ V ] → k [ U ] {\displaystyle f^{\#}:k[V]\to k[U]} is injective. 
Thus, the dominant map f induces an injection on the level of function fields: k ( Y ) = lim → ⁡ k [ V ] ↪ k ( X ) , g ↦ g ∘ f {\displaystyle k(Y)=\varinjlim k[V]\hookrightarrow k(X),\,g\mapsto g\circ f} where the direct limit runs over all nonempty open affine subsets of Y. (More abstractly, this is the induced map from the residue field of the generic point of Y to that of X.) Conversely, every inclusion of fields k ( Y ) ↪ k ( X ) {\displaystyle k(Y)\hookrightarrow k(X)} is induced by a dominant rational map from X to Y. Hence, the above construction determines a contravariant equivalence between the category of algebraic varieties over a field k with dominant rational maps as morphisms and the category of finitely generated field extensions of k. If X is a smooth complete curve (for example, P1) and if f is a rational map from X to a projective space Pm, then f is a regular map X → Pm. In particular, when X is a smooth complete curve, any rational function on X may be viewed as a morphism X → P1 and, conversely, such a morphism as a rational function on X. On a normal variety (in particular, a smooth variety), a rational function is regular if and only if it has no poles of codimension one. This is an algebraic analog of Hartogs' extension theorem. There is also a relative version of this fact; see [2]. A morphism between algebraic varieties that is a homeomorphism between the underlying topological spaces need not be an isomorphism (a counterexample is given by a Frobenius morphism t ↦ t p {\displaystyle t\mapsto t^{p}} .) On the other hand, if f is bijective and birational and the target space of f is a normal variety, then f is biregular. (cf. Zariski's main theorem.) A regular map between complex algebraic varieties is a holomorphic map. (There is actually a slight technical difference: a regular map is a meromorphic map whose singular points are removable, but the distinction is usually ignored in practice.)
In particular, a regular map into the complex numbers is just a usual holomorphic function (complex-analytic function). == Morphisms to a projective space == Let f : X → P m {\displaystyle f:X\to \mathbf {P} ^{m}} be a morphism from a projective variety to a projective space. Let x be a point of X. Then some i-th homogeneous coordinate of f(x) is nonzero; say, i = 0 for simplicity. Then, by continuity, there is an open affine neighborhood U of x such that f : U → P m − { y 0 = 0 } {\displaystyle f:U\to \mathbf {P} ^{m}-\{y_{0}=0\}} is a morphism, where yi are the homogeneous coordinates. Note the target space is the affine space Am through the identification ( a 0 : ⋯ : a m ) = ( 1 : a 1 / a 0 : ⋯ : a m / a 0 ) ∼ ( a 1 / a 0 , … , a m / a 0 ) {\displaystyle (a_{0}:\dots :a_{m})=(1:a_{1}/a_{0}:\dots :a_{m}/a_{0})\sim (a_{1}/a_{0},\dots ,a_{m}/a_{0})} . Thus, by definition, the restriction f |U is given by f | U ( x ) = ( g 1 ( x ) , … , g m ( x ) ) {\displaystyle f|_{U}(x)=(g_{1}(x),\dots ,g_{m}(x))} where gi's are regular functions on U. Since X is projective, each gi is a fraction of homogeneous elements of the same degree in the homogeneous coordinate ring k[X] of X. We can arrange the fractions so that they all have the same homogeneous denominator say f0. Then we can write gi = fi/f0 for some homogeneous elements fi's in k[X]. Hence, going back to the homogeneous coordinates, f ( x ) = ( f 0 ( x ) : f 1 ( x ) : ⋯ : f m ( x ) ) {\displaystyle f(x)=(f_{0}(x):f_{1}(x):\dots :f_{m}(x))} for all x in U and by continuity for all x in X as long as the fi's do not vanish at x simultaneously. If they vanish simultaneously at a point x of X, then, by the above procedure, one can pick a different set of fi's that do not vanish at x simultaneously (see Note at the end of the section.) 
In fact, the above description is valid for any quasi-projective variety X, an open subvariety of a projective variety X ¯ {\displaystyle {\overline {X}}} ; the difference being that fi's are in the homogeneous coordinate ring of X ¯ {\displaystyle {\overline {X}}} . Note: The above does not say a morphism from a projective variety to a projective space is given by a single set of polynomials (unlike the affine case). For example, let X be the conic y 2 = x z {\displaystyle y^{2}=xz} in P2. Then two maps ( x : y : z ) ↦ ( x : y ) {\displaystyle (x:y:z)\mapsto (x:y)} and ( x : y : z ) ↦ ( y : z ) {\displaystyle (x:y:z)\mapsto (y:z)} agree on the open subset { ( x : y : z ) ∈ X ∣ x ≠ 0 , z ≠ 0 } {\displaystyle \{(x:y:z)\in X\mid x\neq 0,z\neq 0\}} of X (since ( x : y ) = ( x y : y 2 ) = ( x y : x z ) = ( y : z ) {\displaystyle (x:y)=(xy:y^{2})=(xy:xz)=(y:z)} ) and so defines a morphism f : X → P 1 {\displaystyle f:X\to \mathbf {P} ^{1}} . == Fibers of a morphism == The important fact is the following theorem: Let f: X → Y be a dominant morphism of algebraic varieties, and set r = dim X − dim Y. Then: 1. For every irreducible closed subset W of Y and every irreducible component Z of f−1(W) dominating W, dim Z ≥ dim W + r. 2. There exists a nonempty open subset U of Y contained in f(X) such that, for every irreducible closed subset W of Y meeting U and every irreducible component Z of f−1(W) meeting f−1(U), dim Z = dim W + r. In Mumford's red book, the theorem is proved by means of Noether's normalization lemma. For an algebraic approach where generic freeness plays a main role and the notion of "universally catenary ring" is key to the proof, see Eisenbud, Ch. 14 of "Commutative algebra with a view toward algebraic geometry." In fact, the proof there shows that if f is flat, then the dimension equality in 2. of the theorem holds in general (not just generically). == Degree of a finite morphism == Let f: X → Y be a finite surjective morphism between algebraic varieties over a field k. Then, by definition, the degree of f is the degree of the finite field extension of the function field k(X) over f*k(Y). By generic freeness, there is some nonempty open subset U in Y such that the restriction of the structure sheaf OX to f−1(U) is free as an OY|U-module. The degree of f is then also the rank of this free module.
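The rank statement above can be made concrete for the double cover f(t) = t2, where k[t] is free of rank 2 over k[t2] with basis {1, t}: any polynomial p(t) splits uniquely as a(t2) + t·b(t2). A small sketch in plain Python, with polynomials represented as coefficient lists (names illustrative):

```python
# Sketch: for f(t) = t^2, k[t] is a free module of rank 2 over k[t^2]
# with basis {1, t}; deg(f) = 2 is the rank of this module.
def split_even_odd(coeffs):
    """coeffs[i] is the coefficient of t^i; return (a, b) with
    p(t) = a(t^2) + t * b(t^2)."""
    a = coeffs[0::2]   # even-degree part
    b = coeffs[1::2]   # odd-degree part
    return a, b

def recombine(a, b):
    """Inverse of split_even_odd: interleave the two coordinate lists."""
    coeffs = []
    for i in range(max(len(a), len(b))):
        coeffs.append(a[i] if i < len(a) else 0)
        coeffs.append(b[i] if i < len(b) else 0)
    while coeffs and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

p = [7, 0, -3, 5, 1]          # p(t) = 7 - 3t^2 + 5t^3 + t^4
a, b = split_even_odd(p)      # a = [7, -3, 1], b = [0, 5]
assert recombine(a, b) == p   # the decomposition is unique and lossless
```

The two coefficient lists a and b are exactly the coordinates of p in the basis {1, t} over k[t2].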
If f is étale and if X, Y are complete, then for any coherent sheaf F on Y, writing χ for the Euler characteristic, χ ( f ∗ F ) = deg ⁡ ( f ) χ ( F ) . {\displaystyle \chi (f^{*}F)=\deg(f)\chi (F).} (The Riemann–Hurwitz formula for a ramified covering shows the "étale" here cannot be omitted.) In general, if f is a finite surjective morphism, if X, Y are complete and F a coherent sheaf on Y, then from the Leray spectral sequence H p ⁡ ( Y , R q f ∗ f ∗ F ) ⇒ H p + q ⁡ ( X , f ∗ F ) {\displaystyle \operatorname {H} ^{p}(Y,R^{q}f_{*}f^{*}F)\Rightarrow \operatorname {H} ^{p+q}(X,f^{*}F)} , one gets: χ ( f ∗ F ) = ∑ q = 0 ∞ ( − 1 ) q χ ( R q f ∗ f ∗ F ) . {\displaystyle \chi (f^{*}F)=\sum _{q=0}^{\infty }(-1)^{q}\chi (R^{q}f_{*}f^{*}F).} In particular, if F is a tensor power L ⊗ n {\displaystyle L^{\otimes n}} of a line bundle, then R q f ∗ ( f ∗ F ) = R q f ∗ O X ⊗ L ⊗ n {\displaystyle R^{q}f_{*}(f^{*}F)=R^{q}f_{*}{\mathcal {O}}_{X}\otimes L^{\otimes n}} and since the support of R q f ∗ O X {\displaystyle R^{q}f_{*}{\mathcal {O}}_{X}} has positive codimension if q is positive, comparing the leading terms, one has: deg ⁡ ( f ∗ L ) = deg ⁡ ( f ) deg ⁡ ( L ) {\displaystyle \operatorname {deg} (f^{*}L)=\operatorname {deg} (f)\operatorname {deg} (L)} (since the generic rank of f ∗ O X {\displaystyle f_{*}{\mathcal {O}}_{X}} is the degree of f.) If f is étale and k is algebraically closed, then each geometric fiber f−1(y) consists exactly of deg(f) points. == See also == Algebraic function Smooth morphism Étale morphisms – The algebraic analogue of local diffeomorphisms. Resolution of singularities contraction morphism == Notes == == Citations == == References ==
Algebraic statistics is the use of algebra to advance statistics. Algebra has been useful for experimental design, parameter estimation, and hypothesis testing. Traditionally, algebraic statistics has been associated with the design of experiments and multivariate analysis (especially time series). In recent years, the term "algebraic statistics" has sometimes been used more restrictively, to label the use of algebraic geometry and commutative algebra in statistics. == The tradition of algebraic statistics == Statisticians have long used algebra to advance research in statistics. Some of this algebraic work led to the development of new topics in algebra and combinatorics, such as association schemes. === Design of experiments === For example, Ronald A. Fisher, Henry B. Mann, and Rosemary A. Bailey applied Abelian groups to the design of experiments. Experimental designs were also studied with affine geometry over finite fields and then with the introduction of association schemes by R. C. Bose. C. R. Rao also introduced orthogonal arrays for experimental designs. === Algebraic analysis and abstract statistical inference === Invariant measures on locally compact groups have long been used in statistical theory, particularly in multivariate analysis. Beurling's factorization theorem and much of the work on (abstract) harmonic analysis sought better understanding of the Wold decomposition of stationary stochastic processes, which is important in time series statistics. Encompassing previous results on probability theory on algebraic structures, Ulf Grenander developed a theory of "abstract inference". Grenander's abstract inference and his theory of patterns are useful for spatial statistics and image analysis; these theories rely on lattice theory. === Partially ordered sets and lattices === Partially ordered vector spaces and vector lattices are used throughout statistical theory.
Garrett Birkhoff metrized the positive cone using Hilbert's projective metric and proved Jentzsch's theorem using the contraction mapping theorem. Birkhoff's results have been used for maximum entropy estimation (which can be viewed as linear programming in infinite dimensions) by Jonathan Borwein and colleagues. Vector lattices and conical measures were introduced into statistical decision theory by Lucien Le Cam. == Recent work using commutative algebra and algebraic geometry == In recent years, the term "algebraic statistics" has been used more restrictively, to label the use of algebraic geometry and commutative algebra to study problems related to discrete random variables with finite state spaces. Commutative algebra and algebraic geometry have applications in statistics because many commonly used classes of discrete random variables can be viewed as algebraic varieties. === Introductory example === Consider a random variable X which can take on the values 0, 1, 2. Such a variable is completely characterized by the three probabilities p i = P r ( X = i ) , i = 0 , 1 , 2 {\displaystyle p_{i}=\mathrm {Pr} (X=i),\quad i=0,1,2} and these numbers satisfy ∑ i = 0 2 p i = 1 and 0 ≤ p i ≤ 1. {\displaystyle \sum _{i=0}^{2}p_{i}=1\quad {\mbox{and}}\quad 0\leq p_{i}\leq 1.} Conversely, any three such numbers unambiguously specify a random variable, so we can identify the random variable X with the tuple ( p 0 , p 1 , p 2 ) ∈ R 3 {\displaystyle (p_{0},p_{1},p_{2})\in \mathbb {R} ^{3}} . Now suppose X is a binomial random variable with parameter q and n = 2, i.e. X represents the number of successes when repeating a certain experiment two times, where each experiment has an individual success probability of q.
Then p i = P r ( X = i ) = ( 2 i ) q i ( 1 − q ) 2 − i {\displaystyle p_{i}=\mathrm {Pr} (X=i)={2 \choose i}q^{i}(1-q)^{2-i}} and it is not hard to show that the tuples ( p 0 , p 1 , p 2 ) {\displaystyle (p_{0},p_{1},p_{2})} which arise in this way are precisely the ones satisfying 4 p 0 p 2 − p 1 2 = 0. {\displaystyle 4p_{0}p_{2}-p_{1}^{2}=0.\ } The latter is a polynomial equation defining an algebraic variety (or surface) in R 3 {\displaystyle \mathbb {R} ^{3}} , and this variety, when intersected with the simplex given by ∑ i = 0 2 p i = 1 and 0 ≤ p i ≤ 1 , {\displaystyle \sum _{i=0}^{2}p_{i}=1\quad {\mbox{and}}\quad 0\leq p_{i}\leq 1,} yields a piece of an algebraic curve which may be identified with the set of all 3-state Bernoulli variables. Determining the parameter q amounts to locating one point on this curve; testing the hypothesis that a given variable X is Bernoulli amounts to testing whether a certain point lies on that curve or not. == Application of algebraic geometry to statistical learning theory == Algebraic geometry has also recently found applications to statistical learning theory, including a generalization of the Akaike information criterion to singular statistical models. == References == R. A. Bailey. Association Schemes: Designed Experiments, Algebra and Combinatorics, Cambridge University Press, Cambridge, 2004. 387pp. ISBN 0-521-82446-X. (Chapters from preliminary draft are available on-line) Caliński, Tadeusz; Kageyama, Sanpei (2003). Block designs: A Randomization approach, Volume II: Design. Lecture Notes in Statistics. Vol. 170. New York: Springer-Verlag. ISBN 0-387-95470-8. Hinkelmann, Klaus; Kempthorne, Oscar (2005). Design and Analysis of Experiments, Volume 2: Advanced Experimental Design (First ed.). Wiley. ISBN 978-0-471-55177-5. H. B. Mann. 1949. Analysis and Design of Experiments: Analysis of Variance and Analysis-of-Variance Designs. Dover. Raghavarao, Damaraju (1988). 
Constructions and Combinatorial Problems in Design of Experiments (corrected reprint of the 1971 Wiley ed.). New York: Dover. Raghavarao, Damaraju; Padgett, L.V. (2005). Block Designs: Analysis, Combinatorics and Applications. World Scientific. Street, Anne Penfold; Street, Deborah J. (1987). Combinatorics of Experimental Design. Oxford U. P. [Clarendon]. ISBN 0-19-853256-3. L. Pachter and B. Sturmfels. Algebraic Statistics for Computational Biology. Cambridge University Press 2005. G. Pistone, E. Riccomango, H. P. Wynn. Algebraic Statistics. CRC Press, 2001. Drton, Mathias, Sturmfels, Bernd, Sullivant, Seth. Lectures on Algebraic Statistics, Springer 2009. Watanabe, Sumio. Algebraic Geometry and Statistical Learning Theory, Cambridge University Press 2009. Paolo Gibilisco, Eva Riccomagno, Maria-Piera Rogantin, Henry P. Wynn. Algebraic and Geometric Methods in Statistics, Cambridge 2009. == External links == Algebraic Statistics Journal of Algebraic Statistics Archives of Journal of Algebraic Statistics
In mathematics, singularity theory studies spaces that are almost manifolds, but not quite. A string can serve as an example of a one-dimensional manifold, if one neglects its thickness. A singularity can be made by balling it up, dropping it on the floor, and flattening it. In some places the flat string will cross itself in an approximate "X" shape. The points on the floor where it does this are one kind of singularity, the double point: one bit of the floor corresponds to more than one bit of string. Perhaps the string will also touch itself without crossing, like an underlined "U". This is another kind of singularity. Unlike the double point, it is not stable, in the sense that a small push will lift the bottom of the "U" away from the "underline". Vladimir Arnold defines the main goal of singularity theory as describing how objects depend on parameters, particularly in cases where the properties undergo sudden change under a small variation of the parameters. These situations are called perestroika (Russian: перестройка), bifurcations or catastrophes. Classifying the types of changes and characterizing sets of parameters which give rise to these changes are some of the main mathematical goals. Singularities can occur in a wide range of mathematical objects, from matrices depending on parameters to wavefronts. == How singularities may arise == In singularity theory the general phenomenon of points and sets of singularities is studied, as part of the concept that manifolds (spaces without singularities) may acquire special, singular points by a number of routes. Projection is one way, very obvious in visual terms when three-dimensional objects are projected into two dimensions (for example in one of our eyes); in looking at classical statuary the folds of drapery are amongst the most obvious features. Singularities of this kind include caustics, very familiar as the light patterns at the bottom of a swimming pool. 
Another way in which singularities occur is by degeneration of manifold structure. The presence of symmetry can be a good reason to consider orbifolds, which are manifolds that have acquired "corners" in a process of folding up, resembling the creasing of a table napkin. == Singularities in algebraic geometry == === Algebraic curve singularities === Historically, singularities were first noticed in the study of algebraic curves. The double point at (0, 0) of the curve y 2 = x 2 + x 3 {\displaystyle y^{2}=x^{2}+x^{3}} and the cusp there of y 2 = x 3 {\displaystyle y^{2}=x^{3}\ } are qualitatively different, as is seen just by sketching. Isaac Newton carried out a detailed study of all cubic curves, the general family to which these examples belong. It was noticed in the formulation of Bézout's theorem that such singular points must be counted with multiplicity (2 for a double point, 3 for a cusp), in accounting for intersections of curves. It was then a short step to define the general notion of a singular point of an algebraic variety; that is, to allow higher dimensions. === The general position of singularities in algebraic geometry === Such singularities in algebraic geometry are the easiest in principle to study, since they are defined by polynomial equations and therefore in terms of a coordinate system. One can say that the extrinsic meaning of a singular point isn't in question; it is just that in intrinsic terms the coordinates in the ambient space don't straightforwardly translate the geometry of the algebraic variety at the point. Intensive studies of such singularities led in the end to Heisuke Hironaka's fundamental theorem on resolution of singularities (in birational geometry in characteristic 0).
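The node and cusp above can be verified to be singular points directly: a point of a plane curve F = 0 is singular when F and both of its partial derivatives vanish there. A plain-Python sketch with the derivatives computed by hand (names illustrative):

```python
# Node:  F(x, y) = y^2 - x^2 - x^3   (double point at the origin)
# Cusp:  G(x, y) = y^2 - x^3         (cusp at the origin)
def grad_F(x, y):
    # (dF/dx, dF/dy) = (-2x - 3x^2, 2y), computed by hand
    return (-2 * x - 3 * x ** 2, 2 * y)

def grad_G(x, y):
    # (dG/dx, dG/dy) = (-3x^2, 2y)
    return (-3 * x ** 2, 2 * y)

assert grad_F(0, 0) == (0, 0)   # the node's double point is singular
assert grad_G(0, 0) == (0, 0)   # the cusp point is singular
# A nearby smooth point of the node: (-1, 0) lies on F = 0
# (since 0 - 1 + 1 = 0), and the gradient there is nonzero.
assert grad_F(-1, 0) != (0, 0)
```

The gradient test distinguishes singular from smooth points but not the node from the cusp; that finer distinction needs more than first derivatives, as the text explains.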
This means that the simple process of "lifting" a piece of string off itself, by the "obvious" use of the cross-over at a double point, is not essentially misleading: all the singularities of algebraic geometry can be recovered as some sort of very general collapse (through multiple processes). This result is often implicitly used to extend affine geometry to projective geometry: it is entirely typical for an affine variety to acquire singular points on the hyperplane at infinity, when its closure in projective space is taken. Resolution says that such singularities can be handled rather as a (complicated) sort of compactification, ending up with a compact manifold (for the strong topology, rather than the Zariski topology, that is). == The smooth theory and catastrophes == At about the same time as Hironaka's work, the catastrophe theory of René Thom was receiving a great deal of attention. This is another branch of singularity theory, based on earlier work of Hassler Whitney on critical points. Roughly speaking, a critical point of a smooth function is where the level set develops a singular point in the geometric sense. This theory deals with differentiable functions in general, rather than just polynomials. To compensate, only the stable phenomena are considered. One can argue that in nature, anything destroyed by tiny changes is not going to be observed; the visible is the stable. Whitney had shown that in low numbers of variables the stable structure of critical points is very restricted, in local terms. Thom built on this, and his own earlier work, to create a catastrophe theory supposed to account for discontinuous change in nature. === Arnold's view === While Thom was an eminent mathematician, the subsequent fashionable nature of elementary catastrophe theory as propagated by Christopher Zeeman caused a reaction, in particular on the part of Vladimir Arnold. 
He may have been largely responsible for applying the term singularity theory to the area including the input from algebraic geometry, as well as that flowing from the work of Whitney, Thom and other authors. He wrote in terms making clear his distaste for the too-publicised emphasis on a small part of the territory. The foundational work on smooth singularities is formulated as the construction of equivalence relations on singular points, and germs. Technically this involves group actions of Lie groups on spaces of jets; in less abstract terms Taylor series are examined up to change of variable, pinning down singularities with enough derivatives. Applications, according to Arnold, are to be seen in symplectic geometry, as the geometric form of classical mechanics. === Duality === An important reason why singularities cause problems in mathematics is that, with a failure of manifold structure, the invocation of Poincaré duality is also disallowed. A major advance was the introduction of intersection cohomology, which arose initially from attempts to restore duality by use of strata. Numerous connections and applications stemmed from the original idea, for example the concept of perverse sheaf in homological algebra. == Other possible meanings == The theory mentioned above does not directly relate to the concept of mathematical singularity as a value at which a function is not defined. For that, see for example isolated singularity, essential singularity, removable singularity. The monodromy theory of differential equations, in the complex domain, around singularities, does however come into relation with the geometric theory. Roughly speaking, monodromy studies the way a covering map can degenerate, while singularity theory studies the way a manifold can degenerate; and these fields are linked. == See also == == Notes == == References ==
In mathematics, intersection theory is one of the main branches of algebraic geometry, where it gives information about the intersection of two subvarieties of a given variety. The theory for varieties is older, with roots in Bézout's theorem on curves and elimination theory. On the other hand, the topological theory more quickly reached a definitive form. Intersection theory is still in active development; currently the main focus is on virtual fundamental cycles, quantum intersection rings, Gromov–Witten theory, and the extension of intersection theory from schemes to stacks. == Topological intersection form == For a connected oriented manifold M {\displaystyle M} of dimension 2 n {\displaystyle 2n} the intersection form is defined on the n {\displaystyle n} -th cohomology group (what is usually called the 'middle dimension') by the evaluation of the cup product on the fundamental class [ M ] {\displaystyle [M]} in H 2 n ( M , ∂ M ) {\displaystyle H_{2n}(M,\partial M)} . Stated precisely, there is a bilinear form λ M : H n ( M , ∂ M ) × H n ( M , ∂ M ) → Z {\displaystyle \lambda _{M}\colon H^{n}(M,\partial M)\times H^{n}(M,\partial M)\to \mathbf {Z} } given by λ M ( a , b ) = ⟨ a ⌣ b , [ M ] ⟩ ∈ Z {\displaystyle \lambda _{M}(a,b)=\langle a\smile b,[M]\rangle \in \mathbf {Z} } with λ M ( a , b ) = ( − 1 ) n λ M ( b , a ) ∈ Z . {\displaystyle \lambda _{M}(a,b)=(-1)^{n}\lambda _{M}(b,a)\in \mathbf {Z} .} This is a symmetric form for n even (so 2n = 4k doubly even), in which case the signature of M is defined to be the signature of the form, and an alternating form for n odd (so 2n = 4k + 2 is singly even). These can be referred to uniformly as ε-symmetric forms, where ε = (−1)n = ±1 respectively for symmetric and skew-symmetric forms. It is possible in some circumstances to refine this form to an ε-quadratic form, though this requires additional data such as a framing of the tangent bundle.
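The ε-symmetry can be illustrated in the simplest alternating case n = 1: on H1 of the 2-torus, the intersection form has matrix [[0, 1], [−1, 0]] in a suitable basis. A plain-Python sketch (names illustrative):

```python
# Intersection form of the 2-torus (n = 1, so the form is alternating):
# on H^1(T^2) with basis {a, b}, the matrix is Q = [[0, 1], [-1, 0]].
Q = [[0, 1], [-1, 0]]

def lam(u, v):
    """Evaluate the bilinear form lambda(u, v) = u^T Q v."""
    return sum(u[i] * Q[i][j] * v[j] for i in range(2) for j in range(2))

u, v = [3, -2], [1, 4]
# lambda(u, v) = (-1)^n lambda(v, u) with n = 1: skew-symmetry.
assert lam(u, v) == -lam(v, u)
assert lam(u, u) == 0          # alternating: every self-pairing vanishes
print(lam(u, v))               # 14
```

Geometrically, lam counts oriented intersections of loops on the torus; the two basis loops meet in a single point, giving the off-diagonal entries ±1.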
It is possible to drop the orientability condition and work with Z/2Z coefficients instead. These forms are important topological invariants. For example, a theorem of Michael Freedman states that simply connected compact 4-manifolds are (almost) determined by their intersection forms up to homeomorphism. By Poincaré duality, it turns out that there is a way to think of this geometrically. If possible, choose representative n-dimensional submanifolds A, B for the Poincaré duals of a and b. Then λM (a, b) is the oriented intersection number of A and B, which is well-defined because, since the dimensions of A and B sum to the total dimension of M, they generically intersect in isolated points. This explains the terminology intersection form. == Intersection theory in algebraic geometry == William Fulton in Intersection Theory (1984) writes ... if A and B are subvarieties of a non-singular variety X, the intersection product A · B should be an equivalence class of algebraic cycles closely related to the geometry of how A ∩ B, A and B are situated in X. Two extreme cases have been most familiar. If the intersection is proper, i.e. dim(A ∩ B) = dim A + dim B − dim X, then A · B is a linear combination of the irreducible components of A ∩ B, with coefficients the intersection multiplicities. At the other extreme, if A = B is a non-singular subvariety, the self-intersection formula says that A · B is represented by the top Chern class of the normal bundle of A in X. To give a definition, in the general case, of the intersection multiplicity was the major concern of André Weil's 1946 book Foundations of Algebraic Geometry. Work in the 1920s of B. L. van der Waerden had already addressed the question; in the Italian school of algebraic geometry the ideas were well known, but foundational questions were not addressed in the same spirit.
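Fulton's "proper intersection" dimension count dim(A ∩ B) = dim A + dim B − dim X has a linear-algebra shadow: for subspaces, dim(A ∩ B) = dim A + dim B − dim(A + B). A plain-Python sketch over the rationals (illustrative only; rank via hand-rolled Gaussian elimination):

```python
from fractions import Fraction

def rank(rows):
    """Row rank over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Two "generic" 2-planes through the origin in 4-space:
A = [[1, 0, 0, 0], [0, 1, 0, 0]]
B = [[0, 0, 1, 0], [1, 1, 1, 1]]
dim_sum = rank(A + B)                    # dim(A + B) = 4
dim_int = rank(A) + rank(B) - dim_sum    # dim(A ∩ B)
assert dim_int == 2 + 2 - 4              # proper intersection: dimension 0
```

This is only the expected-dimension bookkeeping; the multiplicities that intersection theory attaches to the components are a separate (and harder) matter, taken up below.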
=== Moving cycles === A well-functioning machinery for intersecting algebraic cycles V and W requires more than taking just the set-theoretic intersection V ∩ W of the cycles in question. If the two cycles are in "good position" then the intersection product, denoted V · W, should consist of the set-theoretic intersection of the two subvarieties. However, cycles may be in bad position, e.g. two parallel lines in the plane, or a plane containing a line (intersecting in 3-space). In both cases the intersection should be a point, because, again, if one cycle is moved, this would be the intersection. The intersection of two cycles V and W is called proper if the codimension of the (set-theoretic) intersection V ∩ W is the sum of the codimensions of V and W, i.e. the "expected" value. Therefore, the concept of moving cycles using appropriate equivalence relations on algebraic cycles is used. The equivalence must be broad enough that given any two cycles V and W, there are equivalent cycles V′ and W′ such that the intersection V′ ∩ W′ is proper. Of course, on the other hand, for a second equivalent V′′ and W′′, V′ ∩ W′ needs to be equivalent to V′′ ∩ W′′. For the purposes of intersection theory, rational equivalence is the most important one. Briefly, two r-dimensional cycles on a variety X are rationally equivalent if there is a rational function  f  on an (r + 1)-dimensional subvariety Y, i.e. an element of the function field k(Y) or equivalently a function  f  : Y → P1, such that V − W =  f −1(0) −  f −1(∞), where  f −1(⋅) is counted with multiplicities. Rational equivalence accomplishes the needs sketched above. === Intersection multiplicities === The guiding principle in the definition of intersection multiplicities of cycles is continuity in a certain sense.
Consider the following elementary example: the intersection of a parabola y = x2 and an axis y = 0 should be 2 · (0, 0), because if one of the cycles moves (yet in an undefined sense), there are precisely two intersection points which both converge to (0, 0) when the cycles approach the depicted position. (The picture is misleading insofar as the intersection of the parabola and the line y = −3 appears empty, because only the real solutions of the equations are depicted.) The first fully satisfactory definition of intersection multiplicities was given by Serre: Let the ambient variety X be smooth (or all local rings regular). Further let V and W be two (irreducible reduced closed) subvarieties, such that their intersection is proper. The construction is local, therefore the varieties may be represented by two ideals I and J in the coordinate ring of X. Let Z be an irreducible component of the set-theoretic intersection V ∩ W and z its generic point. The multiplicity of Z in the intersection product V · W is defined by μ ( Z ; V , W ) := ∑ i = 0 ∞ ( − 1 ) i length O X , z Tor i O X , z ( O X , z / I , O X , z / J ) , {\displaystyle \mu (Z;V,W):=\sum _{i=0}^{\infty }(-1)^{i}{\text{length}}_{{\mathcal {O}}_{X,z}}{\text{Tor}}_{i}^{{\mathcal {O}}_{X,z}}({\mathcal {O}}_{X,z}/I,{\mathcal {O}}_{X,z}/J),} the alternating sum of the lengths, over the local ring of X at z, of the Tor groups of the factor rings corresponding to the subvarieties. This expression is sometimes referred to as Serre's Tor-formula. Remarks: The first summand, the length of ( O X , z / I ) ⊗ O X , z ( O X , z / J ) = O Z , z {\displaystyle \left({\mathcal {O}}_{X,z}/I\right)\otimes _{{\mathcal {O}}_{X,z}}\left({\mathcal {O}}_{X,z}/J\right)={\mathcal {O}}_{Z,z}} is the "naive" guess of the multiplicity; however, as Serre shows, it is not sufficient. The sum is finite, because the regular local ring O X , z {\displaystyle {\mathcal {O}}_{X,z}} has finite Tor-dimension.
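The continuity heuristic for the parabola example can be made concrete: intersecting y = x2 with the moved line y = ε gives two simple intersection points at x = ±√ε, which merge into the double point at the origin as ε → 0. A plain-Python sketch:

```python
import math

# Intersect the parabola y = x^2 with the moved line y = eps:
# two points that merge as eps -> 0, matching multiplicity 2 at (0, 0).
def intersection_points(eps):
    r = math.sqrt(eps)
    return [(-r, eps), (r, eps)]

for eps in [1.0, 0.01, 1e-8]:
    pts = intersection_points(eps)
    assert len(pts) == 2                       # two simple intersections
    for x, y in pts:
        assert abs(x * x - y) < 1e-12          # lies on the parabola
        assert y == eps                        # lies on the moved line
```

As ε shrinks, both points approach (0, 0), so the limiting intersection cycle is 2 · (0, 0), the value Serre's formula assigns.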
If the intersection of V and W is not proper, the above multiplicity will be zero. If it is proper, it is strictly positive. (Neither statement is obvious from the definition.) Using a spectral sequence argument, it can be shown that μ(Z; V, W) = μ(Z; W, V). === The Chow ring === The Chow ring is the group of algebraic cycles modulo rational equivalence together with the following commutative intersection product: V ⋅ W := ∑ i μ ( Z i ; V , W ) Z i {\displaystyle V\cdot W:=\sum _{i}\mu (Z_{i};V,W)Z_{i}} whenever V and W meet properly, where V ∩ W = ∪ i Z i {\displaystyle V\cap W=\cup _{i}Z_{i}} is the decomposition of the set-theoretic intersection into irreducible components. === Self-intersection === Given two subvarieties V and W, one can take their intersection V ∩ W, but it is also possible, though more subtle, to define the self-intersection of a single subvariety. Given, for instance, a curve C on a surface S, its intersection with itself (as sets) is just itself: C ∩ C = C. This is clearly correct, but on the other hand unsatisfactory: given any two distinct curves on a surface (with no component in common), they intersect in some set of points, which for instance one can count, obtaining an intersection number, and we may wish to do the same for a given curve: the analogy is that intersecting distinct curves is like multiplying two numbers: xy, while self-intersection is like squaring a single number: x2. Formally, the analogy is stated as a symmetric bilinear form (multiplication) and a quadratic form (squaring). A geometric solution to this is to intersect the curve C not with itself, but with a slightly pushed off version of itself. In the plane, this just means translating the curve C in some direction, but in general one talks about taking a curve C′ that is linearly equivalent to C, and counting the intersection C · C′, thus obtaining an intersection number, denoted C · C.
Note that, unlike for distinct curves C and D, the actual points of intersection are not defined, because they depend on a choice of C′, but the “self-intersection points of C” can be interpreted as k generic points on C, where k = C · C. More properly, the self-intersection point of C is the generic point of C, taken with multiplicity C · C. Alternatively, one can “solve” (or motivate) this problem algebraically by dualizing, and looking at the class of [C] ∪ [C] – this both gives a number, and raises the question of a geometric interpretation. Note that passing to cohomology classes is analogous to replacing a curve by a linear system. Note that the self-intersection number can be negative, as the example below illustrates. ==== Examples ==== Consider a line L in the projective plane P2: it has self-intersection number 1 since all other lines cross it once: one can push L off to L′, and L · L′ = 1 for any choice of L′, hence L · L = 1. In terms of intersection forms, we say the plane has one of type x2 (there is only one class of lines, and they all intersect each other). Note that on the affine plane, one might push off L to a parallel line, so (thinking geometrically) the number of intersection points depends on the choice of push-off. One says that “the affine plane does not have a good intersection theory”, and intersection theory on non-projective varieties is much more difficult. A line on a P1 × P1 (which can also be interpreted as the non-singular quadric Q in P3) has self-intersection 0, since a line can be moved off itself. (It is a ruled surface.) In terms of intersection forms, we say P1 × P1 has one of type xy – there are two basic classes of lines, which intersect each other in one point (xy), but have zero self-intersection (no x2 or y2 terms).
Given an algebraic surface S, blowing up at a point creates a curve C. This curve C is recognisable by its genus, which is 0, and its self-intersection number, which is −1. (This is not obvious.) Note that as a corollary, P2 and P1 × P1 are minimal surfaces (they are not blow-ups), since they do not have any curves with negative self-intersection. In fact, Castelnuovo’s contraction theorem states the converse: every (−1)-curve is the exceptional curve of some blow-up (it can be “blown down”). == See also == Chow group Grothendieck–Riemann–Roch theorem Enumerative geometry == Citations == == References ==
Wikipedia/Intersection_theory
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f − 1 . {\displaystyle f^{-1}.} For a function f : X → Y {\displaystyle f\colon X\to Y} , its inverse f − 1 : Y → X {\displaystyle f^{-1}\colon Y\to X} admits an explicit description: it sends each element y ∈ Y {\displaystyle y\in Y} to the unique element x ∈ X {\displaystyle x\in X} such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f − 1 : R → R {\displaystyle f^{-1}\colon \mathbb {R} \to \mathbb {R} } defined by f − 1 ( y ) = y + 7 5 . {\displaystyle f^{-1}(y)={\frac {y+7}{5}}.} == Definitions == Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} and f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} . If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f −1, a notation introduced by John Frederick William Herschel in 1813. The function f is invertible if and only if it is bijective. This is because the condition g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} implies that f is injective, and the condition f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} implies that f is surjective. 
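For the running example f(x) = 5x − 7, both defining conditions can be checked mechanically; a minimal sketch:

```python
def f(x):
    return 5 * x - 7          # multiply the input by 5, then subtract 7

def g(y):
    return (y + 7) / 5        # undo: add 7, then divide by 5

# the two conditions in the definition: g(f(x)) = x and f(g(y)) = y
for v in [-3.0, 0.0, 2.5, 10.0]:
    assert abs(g(f(v)) - v) < 1e-12
    assert abs(f(g(v)) - v) < 1e-12
```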
The inverse function f −1 to f can be explicitly described as the function f − 1 ( y ) = ( the unique element x ∈ X such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in X{\text{ such that }}f(x)=y)} . === Inverses and composition === Recall that if f is an invertible function with domain X and codomain Y, then f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}\left(f(x)\right)=x} , for every x ∈ X {\displaystyle x\in X} and f ( f − 1 ( y ) ) = y {\displaystyle f\left(f^{-1}(y)\right)=y} for every y ∈ Y {\displaystyle y\in Y} . Using the composition of functions, this statement can be rewritten to the following equations between functions: f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y , {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y},} where idX is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f −1. Repeatedly composing a function f: X→X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as f n(x); so f 2(x) = f (f (x)), etc. Since f −1(f (x)) = x, composing f −1 and f n yields f n−1, "undoing" the effect of one application of f. === Notation === While the notation f −1(x) might be misunderstood, (f(x))−1 certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. The notation f ⟨ − 1 ⟩ {\displaystyle f^{\langle -1\rangle }} might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like sin−1(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). 
Other authors feel that this may be confused with the notation for the multiplicative inverse of sin (x), which can be denoted as (sin (x))−1. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). Expressions such as sin−1(x) can still be useful to distinguish the multivalued inverse from the partial inverse: sin − 1 ⁡ ( x ) = { ( − 1 ) n arcsin ⁡ ( x ) + π n : n ∈ Z } {\displaystyle \sin ^{-1}(x)=\{(-1)^{n}\arcsin(x)+\pi n:n\in \mathbb {Z} \}} . Other inverse special functions are sometimes written with the prefix "inv" if the ambiguity of the f −1 notation is to be avoided. == Examples == === Squaring and square root functions === The function f: R → [0,∞) given by f(x) = x2 is not injective because ( − x ) 2 = x 2 {\displaystyle (-x)^{2}=x^{2}} for all x ∈ R {\displaystyle x\in \mathbb {R} } . Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function f : [ 0 , ∞ ) → [ 0 , ∞ ) ; x ↦ x 2 {\displaystyle f\colon [0,\infty )\to [0,\infty );\ x\mapsto x^{2}} with the same rule as before, then the function is bijective and so invertible. The inverse function here is called the (positive) square root function and is denoted by x ↦ x {\displaystyle x\mapsto {\sqrt {x}}} . === Standard inverse functions === The following table shows several standard functions and their inverses: === Formula for the inverse === Many functions given by algebraic formulas possess a formula for their inverse.
This is because the inverse f − 1 {\displaystyle f^{-1}} of an invertible function f : R → R {\displaystyle f\colon \mathbb {R} \to \mathbb {R} } has an explicit description as f − 1 ( y ) = ( the unique element x ∈ R such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in \mathbb {R} {\text{ such that }}f(x)=y)} . This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function f ( x ) = ( 2 x + 8 ) 3 {\displaystyle f(x)=(2x+8)^{3}} then to determine f − 1 ( y ) {\displaystyle f^{-1}(y)} for a real number y, one must find the unique real number x such that (2x + 8)3 = y. This equation can be solved: y = ( 2 x + 8 ) 3 y 3 = 2 x + 8 y 3 − 8 = 2 x y 3 − 8 2 = x . {\displaystyle {\begin{aligned}y&=(2x+8)^{3}\\{\sqrt[{3}]{y}}&=2x+8\\{\sqrt[{3}]{y}}-8&=2x\\{\dfrac {{\sqrt[{3}]{y}}-8}{2}}&=x.\end{aligned}}} Thus the inverse function f −1 is given by the formula f − 1 ( y ) = y 3 − 8 2 . {\displaystyle f^{-1}(y)={\frac {{\sqrt[{3}]{y}}-8}{2}}.} Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function f ( x ) = x − sin ⁡ x , {\displaystyle f(x)=x-\sin x,} then f is a bijection, and therefore possesses an inverse function f −1. The formula for this inverse has an expression as an infinite sum: f − 1 ( y ) = ∑ n = 1 ∞ y n / 3 n ! lim θ → 0 ( d n − 1 d θ n − 1 ( θ θ − sin ⁡ ( θ ) 3 ) n ) . {\displaystyle f^{-1}(y)=\sum _{n=1}^{\infty }{\frac {y^{n/3}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right).} == Properties == Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. === Uniqueness === If an inverse function exists for a given function f, then it is unique. 
This follows since the inverse function must be the converse relation, which is completely determined by f. === Symmetry === There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f −1 has domain Y and image X, and the inverse of f −1 is the original function f. In symbols, for functions f:X → Y and f−1:Y → X, f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y . {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y}.} This statement is a consequence of the fact that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by ( f − 1 ) − 1 = f . {\displaystyle \left(f^{-1}\right)^{-1}=f.} The inverse of a composition of functions is given by ( g ∘ f ) − 1 = f − 1 ∘ g − 1 . {\displaystyle (g\circ f)^{-1}=f^{-1}\circ g^{-1}.} Notice that the order of g and f has been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five, ( g ∘ f ) ( x ) = 3 x + 5. {\displaystyle (g\circ f)(x)=3x+5.} To reverse this process, we must first subtract five, and then divide by three, ( g ∘ f ) − 1 ( x ) = 1 3 ( x − 5 ) . {\displaystyle (g\circ f)^{-1}(x)={\tfrac {1}{3}}(x-5).} This is the composition (f −1 ∘ g −1)(x). === Self-inverses === If X is a set, then the identity function on X is its own inverse: id X − 1 = id X . {\displaystyle {\operatorname {id} _{X}}^{-1}=\operatorname {id} _{X}.} More generally, a function f : X → X is equal to its own inverse if and only if the composition f ∘ f is equal to idX. Such a function is called an involution. === Graph of the inverse === If f is invertible, then the graph of the function y = f − 1 ( x ) {\displaystyle y=f^{-1}(x)} is the same as the graph of the equation x = f ( y ) .
{\displaystyle x=f(y).} This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f −1 can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. === Inverses and derivatives === By the inverse function theorem, a continuous function of a single variable f : A → R {\displaystyle f\colon A\to \mathbb {R} } (where A ⊆ R {\displaystyle A\subseteq \mathbb {R} } ) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function f ( x ) = x 3 + x {\displaystyle f(x)=x^{3}+x} is invertible, since the derivative f′(x) = 3x2 + 1 is always positive. If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x ∈ I, then the inverse f −1 is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, ( f − 1 ) ′ ( y ) = 1 f ′ ( x ) . {\displaystyle \left(f^{-1}\right)^{\prime }(y)={\frac {1}{f'\left(x\right)}}.} Using Leibniz's notation the formula above can be written as d x d y = 1 d y / d x . {\displaystyle {\frac {dx}{dy}}={\frac {1}{dy/dx}}.} This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiable multivariable function f : Rn → Rn is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f −1 at f(p) is the matrix inverse of the Jacobian of f at p. 
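The derivative formula can be verified numerically for the example f(x) = x3 + x. The sketch below inverts f by bisection (any root finder would do; bisection is simply the most transparent choice, valid because f is strictly increasing) and compares a central difference of f −1 with 1/f′(x):

```python
def f(x):
    return x ** 3 + x

def f_prime(x):
    return 3 * x ** 2 + 1          # always positive, so f is strictly increasing

def f_inv(y, lo=-100.0, hi=100.0):
    # bisection: valid because f is strictly increasing on [lo, hi]
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 1.5
y = f(x)
h = 1e-6
numeric = (f_inv(y + h) - f_inv(y - h)) / (2 * h)   # (f^-1)'(y), central difference
assert abs(numeric - 1 / f_prime(x)) < 1e-5         # matches 1/f'(x)
```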
== Real-world examples == Let f be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, F = f ( C ) = 9 5 C + 32 ; {\displaystyle F=f(C)={\tfrac {9}{5}}C+32;} then its inverse function converts degrees Fahrenheit to degrees Celsius, C = f − 1 ( F ) = 5 9 ( F − 32 ) , {\displaystyle C=f^{-1}(F)={\tfrac {5}{9}}(F-32),} since f − 1 ( f ( C ) ) = f − 1 ( 9 5 C + 32 ) = 5 9 ( ( 9 5 C + 32 ) − 32 ) = C , for every value of C , and f ( f − 1 ( F ) ) = f ( 5 9 ( F − 32 ) ) = 9 5 ( 5 9 ( F − 32 ) ) + 32 = F , for every value of F . {\displaystyle {\begin{aligned}f^{-1}(f(C))={}&f^{-1}\left({\tfrac {9}{5}}C+32\right)={\tfrac {5}{9}}\left(({\tfrac {9}{5}}C+32)-32\right)=C,\\&{\text{for every value of }}C,{\text{ and }}\\[6pt]f\left(f^{-1}(F)\right)={}&f\left({\tfrac {5}{9}}(F-32)\right)={\tfrac {9}{5}}\left({\tfrac {5}{9}}(F-32)\right)+32=F,\\&{\text{for every value of }}F.\end{aligned}}} Suppose f assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, f ( Allan ) = 2005 , f ( Brad ) = 2007 , f ( Cary ) = 2001 f − 1 ( 2005 ) = Allan , f − 1 ( 2007 ) = Brad , f − 1 ( 2001 ) = Cary {\displaystyle {\begin{aligned}f({\text{Allan}})&=2005,\quad &f({\text{Brad}})&=2007,\quad &f({\text{Cary}})&=2001\\f^{-1}(2005)&={\text{Allan}},\quad &f^{-1}(2007)&={\text{Brad}},\quad &f^{-1}(2001)&={\text{Cary}}\end{aligned}}} Let R be the function that leads to an x percentage rise of some quantity, and F be the function producing an x percentage fall. 
Applied to $100 with x = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is pH = −log10[H+]. In many cases we need to find the concentration of acid from a pH measurement. The inverse function [H+] = 10−pH is used. == Generalizations == === Partial inverses === Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} is not one-to-one, since x2 = (−x)2. However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case f − 1 ( y ) = y . {\displaystyle f^{-1}(y)={\sqrt {y}}.} (If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) === Full inverses === Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: f − 1 ( y ) = ± y . {\displaystyle f^{-1}(y)=\pm {\sqrt {y}}.} Sometimes, this multivalued inverse is called the full inverse of f, and the portions (such as √x and −√x) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at y is called the principal value of f −1(y). For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture). === Trigonometric inverses === The above considerations are particularly important for defining the inverses of trigonometric functions. 
For example, the sine function is not one-to-one, since sin ⁡ ( x + 2 π ) = sin ⁡ ( x ) {\displaystyle \sin(x+2\pi )=\sin(x)} for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval [−⁠π/2⁠, ⁠π/2⁠], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −⁠π/2⁠ and ⁠π/2⁠. The following table describes the principal branch of each inverse trigonometric function: === Left and right inverses === Function composition on the left and on the right need not coincide. In general, the conditions "There exists g such that g(f(x))=x" and "There exists g such that f(g(x))=x" imply different properties of f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x2 for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1. ==== Left inverses ==== If f: X → Y, a left inverse for f (or retraction of f ) is a function g: Y → X such that composing f with g from the left gives the identity function g ∘ f = id X ⁡ . {\displaystyle g\circ f=\operatorname {id} _{X}{\text{.}}} That is, the function g satisfies the rule If f(x)=y, then g(y)=x. The function g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image. A function f with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If g is the left inverse of f, and f(x) = f(y), then g(f(x)) = g(f(y)) = x = y. If nonempty f: X → Y is injective, construct a left inverse g: Y → X as follows: for all y ∈ Y, if y is in the image of f, then there exists x ∈ X such that f(x) = y. Let g(y) = x; this definition is unique because f is injective. 
Otherwise, let g(y) be an arbitrary element of X. For all x ∈ X, f(x) is in the image of f. By construction, g(f(x)) = x, the condition for a left inverse. In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion {0,1} → R of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set {0,1}. ==== Right inverses ==== A right inverse for f (or section of f ) is a function h: Y → X such that f ∘ h = id Y . {\displaystyle f\circ h=\operatorname {id} _{Y}.} That is, the function h satisfies the rule If h ( y ) = x {\displaystyle \displaystyle h(y)=x} , then f ( x ) = y . {\displaystyle \displaystyle f(x)=y.} Thus, h(y) may be any of the elements of X that map to y under f. A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If h is the right inverse of f, then f is surjective. For all y ∈ Y {\displaystyle y\in Y} , there is x = h ( y ) {\displaystyle x=h(y)} such that f ( x ) = f ( h ( y ) ) = y {\displaystyle f(x)=f(h(y))=y} . If f is surjective, f has a right inverse h, which can be constructed as follows: for all y ∈ Y {\displaystyle y\in Y} , there is at least one x ∈ X {\displaystyle x\in X} such that f ( x ) = y {\displaystyle f(x)=y} (because f is surjective), so we choose one to be the value of h(y).
If g {\displaystyle g} is a left inverse and h {\displaystyle h} a right inverse of f {\displaystyle f} , for all y ∈ Y {\displaystyle y\in Y} , g ( y ) = g ( f ( h ( y ) ) ) = h ( y ) {\displaystyle g(y)=g(f(h(y)))=h(y)} . A function has a two-sided inverse if and only if it is bijective. A bijective function f is injective, so it has a left inverse (if f is the empty function, f : ∅ → ∅ {\displaystyle f\colon \varnothing \to \varnothing } is its own left inverse). f is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If f has a two-sided inverse g, then g is a left inverse and right inverse of f, so f is injective and surjective. === Preimages === If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y ∈ Y is defined to be the set of all elements of X that map to y: f − 1 ( y ) = { x ∈ X : f ( x ) = y } . {\displaystyle f^{-1}(y)=\left\{x\in X:f(x)=y\right\}.} The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f. The notion can be generalized to subsets of the range. Specifically, if S is any subset of Y, the preimage of S, denoted by f − 1 ( S ) {\displaystyle f^{-1}(S)} , is the set of all elements of X that map to S: f − 1 ( S ) = { x ∈ X : f ( x ) ∈ S } . {\displaystyle f^{-1}(S)=\left\{x\in X:f(x)\in S\right\}.} For example, take the function f: R → R; x ↦ x2. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. f − 1 ( { 1 , 4 , 9 , 16 } ) = { − 4 , − 3 , − 2 , − 1 , 1 , 2 , 3 , 4 } {\displaystyle f^{-1}(\left\{1,4,9,16\right\})=\left\{-4,-3,-2,-1,1,2,3,4\right\}} . The original notion and its generalization are related by the identity f − 1 ( y ) = f − 1 ( { y } ) . {\displaystyle f^{-1}(y)=f^{-1}(\{y\}).} The preimage of a single element y ∈ Y – a singleton set {y} – is sometimes called the fiber of y.
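Over a finite domain a preimage is simply a filter; a sketch using the squaring example above:

```python
def f(x):
    return x * x

def preimage(S, domain):
    """f^-1(S): every element of the domain that maps into S."""
    return {x for x in domain if f(x) in S}

domain = range(-10, 11)
assert preimage({1, 4, 9, 16}, domain) == {-4, -3, -2, -1, 1, 2, 3, 4}
assert preimage({25}, domain) == {-5, 5}   # the fiber of 25
assert preimage({2}, domain) == set()      # 2 is not a square here: empty fiber
```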
When Y is the set of real numbers, it is common to refer to f −1({y}) as a level set. == See also == Lagrange inversion theorem, gives the Taylor series expansion of the inverse function of an analytic function Integral of inverse functions Inverse Fourier transform Reversible computing == Notes == == References == == Bibliography == Briggs, William; Cochran, Lyle (2011). Calculus / Early Transcendentals Single Variable. Addison-Wesley. ISBN 978-0-321-66414-3. Devlin, Keith J. (2004). Sets, Functions, and Logic / An Introduction to Abstract Mathematics (3 ed.). Chapman & Hall / CRC Mathematics. ISBN 978-1-58488-449-1. Fletcher, Peter; Patty, C. Wayne (1988). Foundations of Higher Mathematics. PWS-Kent. ISBN 0-87150-164-3. Lay, Steven R. (2006). Analysis / With an Introduction to Proof (4 ed.). Pearson / Prentice Hall. ISBN 978-0-13-148101-5. Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2006). A Transition to Advanced Mathematics (6 ed.). Thompson Brooks/Cole. ISBN 978-0-534-39900-9. Thomas Jr., George Brinton (1972). Calculus and Analytic Geometry Part 1: Functions of One Variable and Analytic Geometry (Alternate ed.). Addison-Wesley. Wolf, Robert S. (1998). Proof, Logic, and Conjecture / The Mathematician's Toolbox. W. H. Freeman and Co. ISBN 978-0-7167-3050-7. == Further reading == Amazigo, John C.; Rubenfeld, Lester A. (1980). "Implicit Functions; Jacobians; Inverse Functions". Advanced Calculus and its Applications to the Engineering and Physical Sciences. New York: Wiley. pp. 103–120. ISBN 0-471-04934-4. Binmore, Ken G. (1983). "Inverse Functions". Calculus. New York: Cambridge University Press. pp. 161–197. ISBN 0-521-28952-1. Spivak, Michael (1994). Calculus (3 ed.). Publish or Perish. ISBN 0-914098-89-6. Stewart, James (2002). Calculus (5 ed.). Brooks Cole. ISBN 978-0-534-39339-7. == External links == "Inverse function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Function_inverse
The Riemann–Roch theorem is an important theorem in mathematics, specifically in complex analysis and algebraic geometry, for the computation of the dimension of the space of meromorphic functions with prescribed zeros and allowed poles. It relates the complex analysis of a connected compact Riemann surface with the surface's purely topological genus g, in a way that can be carried over into purely algebraic settings. Initially proved as Riemann's inequality by Riemann (1857), the theorem reached its definitive form for Riemann surfaces after work of Riemann's short-lived student Gustav Roch (1865). It was later generalized to algebraic curves, to higher-dimensional varieties and beyond. == Preliminary notions == A Riemann surface X {\displaystyle X} is a topological space that is locally homeomorphic to an open subset of C {\displaystyle \mathbb {C} } , the set of complex numbers. In addition, the transition maps between these open subsets are required to be holomorphic. The latter condition allows one to transfer the notions and methods of complex analysis dealing with holomorphic and meromorphic functions on C {\displaystyle \mathbb {C} } to the surface X {\displaystyle X} . For the purposes of the Riemann–Roch theorem, the surface X {\displaystyle X} is always assumed to be compact. Colloquially speaking, the genus g {\displaystyle g} of a Riemann surface is its number of handles; for example the genus of the Riemann surface shown at the right is three. More precisely, the genus is defined as half of the first Betti number, i.e., half of the C {\displaystyle \mathbb {C} } -dimension of the first singular homology group H 1 ( X , C ) {\displaystyle H_{1}(X,\mathbb {C} )} with complex coefficients. The genus classifies compact Riemann surfaces up to homeomorphism, i.e., two such surfaces are homeomorphic if and only if their genus is the same. Therefore, the genus is an important topological invariant of a Riemann surface. 
On the other hand, Hodge theory shows that the genus coincides with the C {\displaystyle \mathbb {C} } -dimension of the space of holomorphic one-forms on X {\displaystyle X} , so the genus also encodes complex-analytic information about the Riemann surface. A divisor D {\displaystyle D} is an element of the free abelian group on the points of the surface. Equivalently, a divisor is a finite linear combination of points of the surface with integer coefficients. Any meromorphic function f {\displaystyle f} gives rise to a divisor denoted ( f ) {\displaystyle (f)} defined as ( f ) := ∑ z ν ∈ R ( f ) s ν z ν {\displaystyle (f):=\sum _{z_{\nu }\in R(f)}s_{\nu }z_{\nu }} where R ( f ) {\displaystyle R(f)} is the set of all zeroes and poles of f {\displaystyle f} , and s ν {\displaystyle s_{\nu }} is given by s ν := { a if z ν is a zero of order a − a if z ν is a pole of order a {\displaystyle s_{\nu }:={\begin{cases}a&{\text{if }}z_{\nu }{\text{ is a zero of order }}a\\-a&{\text{if }}z_{\nu }{\text{ is a pole of order }}a\end{cases}}} . The set R ( f ) {\displaystyle R(f)} is known to be finite; this is a consequence of X {\displaystyle X} being compact and the fact that the zeros of a (non-zero) holomorphic function do not have an accumulation point. Therefore, ( f ) {\displaystyle (f)} is well-defined. Any divisor of this form is called a principal divisor. Two divisors that differ by a principal divisor are called linearly equivalent. The divisor of a meromorphic 1-form is defined similarly. A divisor of a global meromorphic 1-form is called the canonical divisor (usually denoted K {\displaystyle K} ). Any two meromorphic 1-forms will yield linearly equivalent divisors, so the canonical divisor is uniquely determined up to linear equivalence (hence "the" canonical divisor). The symbol deg ⁡ ( D ) {\displaystyle \deg(D)} denotes the degree (occasionally also called index) of the divisor D {\displaystyle D} , i.e. 
the sum of the coefficients occurring in D {\displaystyle D} . It can be shown that the divisor of a global meromorphic function always has degree 0, so the degree of a divisor depends only on its linear equivalence class. The number ℓ ( D ) {\displaystyle \ell (D)} is the quantity that is of primary interest: the dimension (over C {\displaystyle \mathbb {C} } ) of the vector space of meromorphic functions h {\displaystyle h} on the surface, such that all the coefficients of ( h ) + D {\displaystyle (h)+D} are non-negative. Intuitively, we can think of this as being all meromorphic functions whose poles at every point are no worse than the corresponding coefficient in D {\displaystyle D} ; if the coefficient in D {\displaystyle D} at z {\displaystyle z} is negative, then we require that h {\displaystyle h} has a zero of at least that multiplicity at z {\displaystyle z} – if the coefficient in D {\displaystyle D} is positive, h {\displaystyle h} can have a pole of at most that order. The vector spaces for linearly equivalent divisors are naturally isomorphic through multiplication with the global meromorphic function (which is well-defined up to a scalar). == Statement of the theorem == The Riemann–Roch theorem for a compact Riemann surface of genus g {\displaystyle g} with canonical divisor K {\displaystyle K} states ℓ ( D ) − ℓ ( K − D ) = deg ⁡ ( D ) − g + 1 {\displaystyle \ell (D)-\ell (K-D)=\deg(D)-g+1} . Typically, the number ℓ ( D ) {\displaystyle \ell (D)} is the one of interest, while ℓ ( K − D ) {\displaystyle \ell (K-D)} is thought of as a correction term (also called index of speciality) so the theorem may be roughly paraphrased by saying dimension − correction = degree − genus + 1. Because it is the dimension of a vector space, the correction term ℓ ( K − D ) {\displaystyle \ell (K-D)} is always non-negative, so that ℓ ( D ) ≥ deg ⁡ ( D ) − g + 1 {\displaystyle \ell (D)\geq \deg(D)-g+1} . This is called Riemann's inequality. 
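For the Riemann sphere (genus 0, treated in the examples below), ℓ(D) depends only on deg D — it equals deg D + 1 for non-negative degree and 0 otherwise — so both sides of the theorem can be computed outright. A sketch, assuming also that the canonical divisor there has degree −2 (as computed in the genus-zero example):

```python
def l_sphere(d):
    """l(D) for a degree-d divisor on the Riemann sphere: on genus 0
    it depends only on the degree (d + 1 if d >= 0, else 0)."""
    return d + 1 if d >= 0 else 0

g = 0
deg_K = 2 * g - 2                 # the canonical divisor K = -2P has degree -2
for n in range(10):               # divisor D = n·P of degree n
    lhs = l_sphere(n) - l_sphere(deg_K - n)   # l(D) - l(K - D)
    rhs = n - g + 1                           # deg(D) - g + 1
    assert lhs == rhs                         # the theorem holds
    assert l_sphere(n) >= n - g + 1           # Riemann's inequality
```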
Roch's part of the statement is the description of the possible difference between the sides of the inequality. On a general Riemann surface of genus g {\displaystyle g} , K {\displaystyle K} has degree 2 g − 2 {\displaystyle 2g-2} , independently of the meromorphic form chosen to represent the divisor. This follows from putting D = K {\displaystyle D=K} in the theorem. In particular, as long as D {\displaystyle D} has degree at least 2 g − 1 {\displaystyle 2g-1} , the correction term is 0, so that ℓ ( D ) = deg ⁡ ( D ) − g + 1 {\displaystyle \ell (D)=\deg(D)-g+1} . The theorem will now be illustrated for surfaces of low genus. There are also a number of other closely related theorems: an equivalent formulation of this theorem using line bundles and a generalization of the theorem to algebraic curves. === Examples === The theorem will be illustrated by picking a point P {\displaystyle P} on the surface in question and regarding the sequence of numbers ℓ ( n ⋅ P ) , n ≥ 0 {\displaystyle \ell (n\cdot P),n\geq 0} , i.e., the dimension of the space of functions that are holomorphic everywhere except at P {\displaystyle P} where the function is allowed to have a pole of order at most n {\displaystyle n} . For n = 0 {\displaystyle n=0} , the functions are thus required to be entire, i.e., holomorphic on the whole surface X {\displaystyle X} . By Liouville's theorem, such a function is necessarily constant. Therefore, ℓ ( 0 ) = 1 {\displaystyle \ell (0)=1} . In general, the sequence ℓ ( n ⋅ P ) {\displaystyle \ell (n\cdot P)} is an increasing sequence. ==== Genus zero ==== The Riemann sphere (also called complex projective line) is simply connected and hence its first singular homology is zero. In particular its genus is zero. The sphere can be covered by two copies of C {\displaystyle \mathbb {C} } , with transition map being given by C ∖ { 0 } ∋ z ↦ 1 z ∈ C ∖ { 0 } {\displaystyle \mathbb {C} \setminus \{0\}\ni z\mapsto {\frac {1}{z}}\in \mathbb {C} \setminus \{0\}} .
Therefore, the form ω = d z {\displaystyle \omega =dz} on one copy of C {\displaystyle \mathbb {C} } extends to a meromorphic form on the Riemann sphere: it has a double pole at infinity, since d ( 1 z ) = − 1 z 2 d z {\displaystyle d\left({\frac {1}{z}}\right)=-{\frac {1}{z^{2}}}\,dz} . Thus, its canonical divisor is K := div ⁡ ( ω ) = − 2 P {\displaystyle K:=\operatorname {div} (\omega )=-2P} (where P {\displaystyle P} is the point at infinity). Therefore, the theorem says that the sequence ℓ ( n ⋅ P ) {\displaystyle \ell (n\cdot P)} reads 1, 2, 3, ... . This sequence can also be read off from the theory of partial fractions. Conversely, if this sequence starts this way, then g {\displaystyle g} must be zero. ==== Genus one ==== The next case is a Riemann surface of genus g = 1 {\displaystyle g=1} , such as a torus C / Λ {\displaystyle \mathbb {C} /\Lambda } , where Λ {\displaystyle \Lambda } is a two-dimensional lattice (a group isomorphic to Z 2 {\displaystyle \mathbb {Z} ^{2}} ). Its genus is one: its first singular homology group is freely generated by two loops, as shown in the illustration at the right. The standard complex coordinate z {\displaystyle z} on C {\displaystyle \mathbb {C} } yields a one-form ω = d z {\displaystyle \omega =dz} on X {\displaystyle X} that is everywhere holomorphic, i.e., has no poles at all. Therefore, K {\displaystyle K} , the divisor of ω {\displaystyle \omega } , is zero. On this surface, this sequence is 1, 1, 2, 3, 4, 5 ... ; and this characterises the case g = 1 {\displaystyle g=1} . Indeed, for D = 0 {\displaystyle D=0} , ℓ ( K − D ) = ℓ ( 0 ) = 1 {\displaystyle \ell (K-D)=\ell (0)=1} , as was mentioned above. For D = n ⋅ P {\displaystyle D=n\cdot P} with n > 0 {\displaystyle n>0} , the degree of K − D {\displaystyle K-D} is strictly negative, so that the correction term is 0. The sequence of dimensions can also be derived from the theory of elliptic functions.
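The genus 0 and genus 1 sequences can be generated from the theorem alone, using ℓ(0) = 1 (only constants are everywhere holomorphic) and the vanishing of the correction term once n ≥ 2g − 1; a sketch:

```python
def ell(n, g):
    """l(n·P) on a compact surface of genus g, in the range where the
    correction term l(K - nP) is known to vanish (n >= 2g - 1), together
    with the value l(0) = 1 forced by Liouville's theorem."""
    if n == 0:
        return 1
    if n < 2 * g - 1:
        raise ValueError("correction term may be nonzero here")
    return n - g + 1

assert [ell(n, 0) for n in range(6)] == [1, 2, 3, 4, 5, 6]   # sphere: 1, 2, 3, ...
assert [ell(n, 1) for n in range(6)] == [1, 1, 2, 3, 4, 5]   # torus: 1, 1, 2, 3, ...
```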
==== Genus two and beyond ==== For g = 2 {\displaystyle g=2} , the sequence mentioned above is 1, 1, ?, 2, 3, ... . It is shown from this that the ? term of degree 2 is either 1 or 2, depending on the point. It can be proven that in any genus 2 curve there are exactly six points whose sequences are 1, 1, 2, 2, ... and the rest of the points have the generic sequence 1, 1, 1, 2, ... In particular, a genus 2 curve is a hyperelliptic curve. For g > 2 {\displaystyle g>2} it is always true that at most points the sequence starts with g + 1 {\displaystyle g+1} ones and there are finitely many points with other sequences (see Weierstrass points). === Riemann–Roch for line bundles === Using the close correspondence between divisors and holomorphic line bundles on a Riemann surface, the theorem can also be stated in a different, yet equivalent way: let L be a holomorphic line bundle on X. Let H 0 ( X , L ) {\displaystyle H^{0}(X,L)} denote the space of holomorphic sections of L. This space will be finite-dimensional; its dimension is denoted h 0 ( X , L ) {\displaystyle h^{0}(X,L)} . Let K denote the canonical bundle on X. Then, the Riemann–Roch theorem states that h 0 ( X , L ) − h 0 ( X , L − 1 ⊗ K ) = deg ⁡ ( L ) + 1 − g {\displaystyle h^{0}(X,L)-h^{0}(X,L^{-1}\otimes K)=\deg(L)+1-g} . The theorem of the previous section is the special case of when L is a point bundle. The theorem can be applied to show that there are g linearly independent holomorphic sections of K, or one-forms on X, as follows. Taking L to be the trivial bundle, h 0 ( X , L ) = 1 {\displaystyle h^{0}(X,L)=1} since the only holomorphic functions on X are constants. The degree of L is zero, and L − 1 {\displaystyle L^{-1}} is the trivial bundle. Thus, 1 − h 0 ( X , K ) = 1 − g {\displaystyle 1-h^{0}(X,K)=1-g} . Therefore, h 0 ( X , K ) = g {\displaystyle h^{0}(X,K)=g} , proving that there are g holomorphic one-forms. 
=== Degree of canonical bundle === Since the canonical bundle K {\displaystyle K} has h 0 ( X , K ) = g {\displaystyle h^{0}(X,K)=g} , applying Riemann–Roch to L = K {\displaystyle L=K} gives h 0 ( X , K ) − h 0 ( X , K − 1 ⊗ K ) = deg ⁡ ( K ) + 1 − g {\displaystyle h^{0}(X,K)-h^{0}(X,K^{-1}\otimes K)=\deg(K)+1-g} which can be rewritten as g − 1 = deg ⁡ ( K ) + 1 − g {\displaystyle g-1=\deg(K)+1-g} hence the degree of the canonical bundle is deg ⁡ ( K ) = 2 g − 2 {\displaystyle \deg(K)=2g-2} . === Riemann–Roch theorem for algebraic curves === Every item in the above formulation of the Riemann–Roch theorem for divisors on Riemann surfaces has an analogue in algebraic geometry. The analogue of a Riemann surface is a non-singular algebraic curve C over a field k. The difference in terminology (curve vs. surface) is because the dimension of a Riemann surface as a real manifold is two, but one as a complex manifold. The compactness of a Riemann surface is paralleled by the condition that the algebraic curve be complete, which is equivalent to being projective. Over a general field k, there is no good notion of singular (co)homology. The so-called geometric genus is defined as g ( C ) := dim k ⁡ Γ ( C , Ω C 1 ) {\displaystyle g(C):=\dim _{k}\Gamma (C,\Omega _{C}^{1})} i.e., as the dimension of the space of globally defined (algebraic) one-forms (see Kähler differential). Finally, meromorphic functions on a Riemann surface are locally represented as fractions of holomorphic functions. Hence they are replaced by rational functions which are locally fractions of regular functions. Thus, writing ℓ ( D ) {\displaystyle \ell (D)} for the dimension (over k) of the space of rational functions on the curve whose poles at every point are not worse than the corresponding coefficient in D, the very same formula as above holds: ℓ ( D ) − ℓ ( K − D ) = deg ⁡ ( D ) − g + 1 {\displaystyle \ell (D)-\ell (K-D)=\deg(D)-g+1} . 
where C is a projective non-singular algebraic curve over an algebraically closed field k. In fact, the same formula holds for projective curves over any field, except that the degree of a divisor needs to take into account multiplicities coming from the possible extensions of the base field and the residue fields of the points supporting the divisor. Finally, for a proper curve over an Artinian ring, the Euler characteristic of the line bundle associated to a divisor is given by the degree of the divisor (appropriately defined) plus the Euler characteristic of the structural sheaf O {\displaystyle {\mathcal {O}}} . The smoothness assumption in the theorem can be relaxed, as well: for a (projective) curve over an algebraically closed field, all of whose local rings are Gorenstein rings, the same statement as above holds, provided that the geometric genus as defined above is replaced by the arithmetic genus ga, defined as g a := dim k ⁡ H 1 ( C , O C ) {\displaystyle g_{a}:=\dim _{k}H^{1}(C,{\mathcal {O}}_{C})} . (For smooth curves, the geometric genus agrees with the arithmetic one.) The theorem has also been extended to general singular curves (and higher-dimensional varieties). == Applications == === Hilbert polynomial === One of the important consequences of Riemann–Roch is it gives a formula for computing the Hilbert polynomial of line bundles on a curve. If a line bundle L {\displaystyle {\mathcal {L}}} is ample, then the Hilbert polynomial will give the first degree L ⊗ n {\displaystyle {\mathcal {L}}^{\otimes n}} giving an embedding into projective space. For example, the canonical sheaf ω C {\displaystyle \omega _{C}} has degree 2 g − 2 {\displaystyle 2g-2} , which gives an ample line bundle for genus g ≥ 2 {\displaystyle g\geq 2} . 
If we set ω C ( n ) = ω C ⊗ n {\displaystyle \omega _{C}(n)=\omega _{C}^{\otimes n}} then the Riemann–Roch formula reads χ ( ω C ( n ) ) = deg ⁡ ( ω C ⊗ n ) − g + 1 = n ( 2 g − 2 ) − g + 1 = 2 n g − 2 n − g + 1 = ( 2 n − 1 ) ( g − 1 ) {\displaystyle {\begin{aligned}\chi (\omega _{C}(n))&=\deg(\omega _{C}^{\otimes n})-g+1\\&=n(2g-2)-g+1\\&=2ng-2n-g+1\\&=(2n-1)(g-1)\end{aligned}}} Giving the degree 1 {\displaystyle 1} Hilbert polynomial of ω C {\displaystyle \omega _{C}} H ω C ( t ) = 2 ( g − 1 ) t − g + 1 {\displaystyle H_{\omega _{C}}(t)=2(g-1)t-g+1} . Because the tri-canonical sheaf ω C ⊗ 3 {\displaystyle \omega _{C}^{\otimes 3}} is used to embed the curve, the Hilbert polynomial H C ( t ) = H ω C ⊗ 3 ( t ) {\displaystyle H_{C}(t)=H_{\omega _{C}^{\otimes 3}}(t)} is generally considered while constructing the Hilbert scheme of curves (and the moduli space of algebraic curves). This polynomial is H C ( t ) = ( 6 t − 1 ) ( g − 1 ) = 6 ( g − 1 ) t + ( 1 − g ) {\displaystyle {\begin{aligned}H_{C}(t)&=(6t-1)(g-1)\\&=6(g-1)t+(1-g)\end{aligned}}} and is called the Hilbert polynomial of a genus g curve. === Pluricanonical embedding === Analyzing this equation further, the Euler characteristic reads as χ ( ω C ⊗ n ) = h 0 ( C , ω C ⊗ n ) − h 0 ( C , ω C ⊗ ( ω C ⊗ n ) ∨ ) = h 0 ( C , ω C ⊗ n ) − h 0 ( C , ( ω C ⊗ ( n − 1 ) ) ∨ ) {\displaystyle {\begin{aligned}\chi (\omega _{C}^{\otimes n})&=h^{0}\left(C,\omega _{C}^{\otimes n}\right)-h^{0}\left(C,\omega _{C}\otimes \left(\omega _{C}^{\otimes n}\right)^{\vee }\right)\\&=h^{0}\left(C,\omega _{C}^{\otimes n}\right)-h^{0}\left(C,\left(\omega _{C}^{\otimes (n-1)}\right)^{\vee }\right)\end{aligned}}} Since deg ⁡ ( ω C ⊗ n ) = n ( 2 g − 2 ) {\displaystyle \deg(\omega _{C}^{\otimes n})=n(2g-2)} h 0 ( C , ( ω C ⊗ ( n − 1 ) ) ∨ ) = 0 {\displaystyle h^{0}\left(C,\left(\omega _{C}^{\otimes (n-1)}\right)^{\vee }\right)=0} . 
for n ≥ 3 {\displaystyle n\geq 3} , since its degree is negative for all g ≥ 2 {\displaystyle g\geq 2} , implying it has no global sections, there is an embedding into some projective space from the global sections of ω C ⊗ n {\displaystyle \omega _{C}^{\otimes n}} . In particular, ω C ⊗ 3 {\displaystyle \omega _{C}^{\otimes 3}} gives an embedding into P N ≅ P ( H 0 ( C , ω C ⊗ 3 ) ) {\displaystyle \mathbb {P} ^{N}\cong \mathbb {P} (H^{0}(C,\omega _{C}^{\otimes 3}))} where N = 5 g − 5 − 1 = 5 g − 6 {\displaystyle N=5g-5-1=5g-6} since h 0 ( ω C ⊗ 3 ) = 6 g − 6 − g + 1 {\displaystyle h^{0}(\omega _{C}^{\otimes 3})=6g-6-g+1} . This is useful in the construction of the moduli space of algebraic curves because it can be used as the projective space to construct the Hilbert scheme with Hilbert polynomial H C ( t ) {\displaystyle H_{C}(t)} . === Genus of plane curves with singularities === An irreducible plane algebraic curve of degree d has (d − 1)(d − 2)/2 − g singularities, when properly counted. It follows that, if a curve has (d − 1)(d − 2)/2 different singularities, it is a rational curve and, thus, admits a rational parameterization. === Riemann–Hurwitz formula === The Riemann–Hurwitz formula concerning (ramified) maps between Riemann surfaces or algebraic curves is a consequence of the Riemann–Roch theorem. === Clifford's theorem on special divisors === Clifford's theorem on special divisors is also a consequence of the Riemann–Roch theorem. It states that for a special divisor (i.e., such that ℓ ( K − D ) > 0 {\displaystyle \ell (K-D)>0} ) satisfying ℓ ( D ) > 0 {\displaystyle \ell (D)>0} , the following inequality holds: ℓ ( D ) ≤ deg ⁡ D 2 + 1 {\displaystyle \ell (D)\leq {\frac {\deg D}{2}}+1} . == Proof == === Proof for algebraic curves === The statement for algebraic curves can be proved using Serre duality. 
The integer ℓ ( D ) {\displaystyle \ell (D)} is the dimension of the space of global sections of the line bundle L ( D ) {\displaystyle {\mathcal {L}}(D)} associated to D (cf. Cartier divisor). In terms of sheaf cohomology, we therefore have ℓ ( D ) = d i m H 0 ( X , L ( D ) ) {\displaystyle \ell (D)=\mathrm {dim} H^{0}(X,{\mathcal {L}}(D))} , and likewise ℓ ( K X − D ) = dim ⁡ H 0 ( X , ω X ⊗ L ( D ) ∨ ) {\displaystyle \ell ({\mathcal {K}}_{X}-D)=\dim H^{0}(X,\omega _{X}\otimes {\mathcal {L}}(D)^{\vee })} . But Serre duality for non-singular projective varieties in the particular case of a curve states that H 0 ( X , ω X ⊗ L ( D ) ∨ ) {\displaystyle H^{0}(X,\omega _{X}\otimes {\mathcal {L}}(D)^{\vee })} is isomorphic to the dual H 1 ( X , L ( D ) ) ∨ {\displaystyle H^{1}(X,{\mathcal {L}}(D))^{\vee }} . The left hand side thus equals the Euler characteristic of the divisor D. When D = 0, we find the Euler characteristic for the structure sheaf is 1 − g {\displaystyle 1-g} by definition. To prove the theorem for general divisor, one can then proceed by adding points one by one to the divisor and ensure that the Euler characteristic transforms accordingly to the right hand side. === Proof for compact Riemann surfaces === The theorem for compact Riemann surfaces can be deduced from the algebraic version using Chow's Theorem and the GAGA principle: in fact, every compact Riemann surface is defined by algebraic equations in some complex projective space. (Chow's Theorem says that any closed analytic subvariety of projective space is defined by algebraic equations, and the GAGA principle says that sheaf cohomology of an algebraic variety is the same as the sheaf cohomology of the analytic variety defined by the same equations). 
One may avoid the use of Chow's theorem by arguing identically to the proof in the case of algebraic curves, but replacing L ( D ) {\displaystyle {\mathcal {L}}(D)} with the sheaf O D {\displaystyle {\mathcal {O}}_{D}} of meromorphic functions h such that all coefficients of the divisor ( h ) + D {\displaystyle (h)+D} are nonnegative. Here the fact that the Euler characteristic transforms as desired when one adds a point to the divisor can be read off from the long exact sequence induced by the short exact sequence 0 → O D → O D + P → C P → 0 {\displaystyle 0\to {\mathcal {O}}_{D}\to {\mathcal {O}}_{D+P}\to \mathbb {C} _{P}\to 0} where C P {\displaystyle \mathbb {C} _{P}} is the skyscraper sheaf at P, and the map O D + P → C P {\displaystyle {\mathcal {O}}_{D+P}\to \mathbb {C} _{P}} returns the − k − 1 {\displaystyle -k-1} th Laurent coefficient, where k = D ( P ) {\displaystyle k=D(P)} . == Arithmetic Riemann–Roch theorem == A version of the arithmetic Riemann–Roch theorem states that if k is a global field, and f is a suitably admissible function of the adeles of k, then for every idele a, one has a Poisson summation formula: 1 | a | ∑ x ∈ k f ^ ( x / a ) = ∑ x ∈ k f ( a x ) {\displaystyle {\frac {1}{|a|}}\sum _{x\in k}{\hat {f}}(x/a)=\sum _{x\in k}f(ax)} . In the special case when k is the function field of an algebraic curve over a finite field and f is any character that is trivial on k, this recovers the geometric Riemann–Roch theorem. Other versions of the arithmetic Riemann–Roch theorem make use of Arakelov theory to resemble the traditional Riemann–Roch theorem more exactly. == Generalizations of the Riemann–Roch theorem == The Riemann–Roch theorem for curves was proved for Riemann surfaces by Riemann and Roch in the 1850s and for algebraic curves by Friedrich Karl Schmidt in 1931 as he was working on perfect fields of finite characteristic. As stated by Peter Roquette, The first main achievement of F. K. 
Schmidt is the discovery that the classical theorem of Riemann–Roch on compact Riemann surfaces can be transferred to function fields with finite base field. Actually, his proof of the Riemann–Roch theorem works for arbitrary perfect base fields, not necessarily finite. It is foundational in the sense that the subsequent theory for curves tries to refine the information it yields (for example in the Brill–Noether theory). There are versions in higher dimensions (for the appropriate notion of divisor, or line bundle). Their general formulation depends on splitting the theorem into two parts. One, which would now be called Serre duality, interprets the ℓ ( K − D ) {\displaystyle \ell (K-D)} term as a dimension of a first sheaf cohomology group; with ℓ ( D ) {\displaystyle \ell (D)} the dimension of a zeroth cohomology group, or space of sections, the left-hand side of the theorem becomes an Euler characteristic, and the right-hand side a computation of it as a degree corrected according to the topology of the Riemann surface. In algebraic geometry of dimension two such a formula was found by the geometers of the Italian school; a Riemann–Roch theorem for surfaces was proved (there are several versions, with the first possibly being due to Max Noether). An n-dimensional generalisation, the Hirzebruch–Riemann–Roch theorem, was found and proved by Friedrich Hirzebruch, as an application of characteristic classes in algebraic topology; he was much influenced by the work of Kunihiko Kodaira. At about the same time Jean-Pierre Serre was giving the general form of Serre duality, as we now know it. Alexander Grothendieck proved a far-reaching generalization in 1957, now known as the Grothendieck–Riemann–Roch theorem. His work reinterprets Riemann–Roch not as a theorem about a variety, but about a morphism between two varieties. The details of the proofs were published by Armand Borel and Jean-Pierre Serre in 1958. 
Later, Grothendieck and his collaborators simplified and generalized the proof. Finally a general version was found in algebraic topology, too. These developments were essentially all carried out between 1950 and 1960. After that the Atiyah–Singer index theorem opened another route to generalization. Consequently, the Euler characteristic of a coherent sheaf is reasonably computable. For just one summand within the alternating sum, further arguments such as vanishing theorems must be used. == See also == Arakelov theory Grothendieck–Riemann–Roch theorem Hirzebruch–Riemann–Roch theorem Kawasaki's Riemann–Roch formula Hilbert polynomial Moduli of algebraic curves == Notes == == References == Serre, Jean-Pierre; Borel, Armand (1958). "Le théorème de Riemann-Roch". Bulletin de la Société Mathématique de France. 79: 97–136. doi:10.24033/bsmf.1500. Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, doi:10.1002/9781118032527, ISBN 978-0-471-05059-9, MR 1288523 Grothendieck, Alexander, et al. (1966/67), Théorie des Intersections et Théorème de Riemann–Roch (SGA 6), LNM 225, Springer-Verlag, 1971. Fulton, William (1974). Algebraic Curves (PDF). Mathematics Lecture Note Series. W.A. Benjamin. ISBN 0-8053-3080-1. Jost, Jürgen (2006). Compact Riemann Surfaces. Berlin, New York: Springer-Verlag. ISBN 978-3-540-33065-3. See pages 208–219 for the proof in the complex situation. Note that Jost uses slightly different notation. Hartshorne, Robin (1977). Algebraic Geometry. Berlin, New York: Springer-Verlag. ISBN 978-0-387-90244-9. MR 0463157. OCLC 13348052., contains the statement for curves over an algebraically closed field. See section IV.1. "Riemann–Roch theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Hirzebruch, Friedrich (1995). Topological methods in algebraic geometry. Classics in Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-3-540-58663-0. MR 1335917.. 
Miranda, Rick (1995). Algebraic Curves and Riemann Surfaces. Graduate Studies in Mathematics. Vol. 5. doi:10.1090/gsm/005. ISBN 9780821802687. Shigeru Mukai (2003). An Introduction to Invariants and Moduli. Cambridge studies in advanced mathematics. Vol. 81. William Oxbury (trans.). New York: Cambridge University Press. ISBN 0-521-80906-1. Vector bundles on Compact Riemann Surfaces, M. S. Narasimhan, pp. 5–6. Riemann, Bernhard (1857). "Theorie der Abel'schen Functionen". Journal für die reine und angewandte Mathematik. 1857 (54): 115–155. doi:10.1515/crll.1857.54.115. hdl:2027/coo.31924060183864. S2CID 16593204. Roch, Gustav (1865). "Ueber die Anzahl der willkurlichen Constanten in algebraischen Functionen". Journal für die reine und angewandte Mathematik. 1865 (64): 372–376. doi:10.1515/crll.1865.64.372. S2CID 120178388. Schmidt, Friedrich Karl (1931), "Analytische Zahlentheorie in Körpern der Charakteristik p", Mathematische Zeitschrift, 33: 1–32, doi:10.1007/BF01174341, S2CID 186228993, Zbl 0001.05401, archived from the original on 2017-12-22, retrieved 2020-05-16 Stichtenoth, Henning (1993). Algebraic Function Fields and Codes. Springer-Verlag. ISBN 3-540-56489-6. Misha Kapovich, The Riemann–Roch Theorem (lecture note) an elementary introduction J. Gray, The Riemann–Roch theorem and Geometry, 1854–1914. Is there a Riemann–Roch for smooth projective curves over an arbitrary field? on MathOverflow
Wikipedia/Riemann-Roch_theorem_for_algebraic_curves
In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel. The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. Then it uses this basis to reduce the size of the initial matrices of generators for the next larger basis: If Gprev is an already computed Gröbner basis (f2, …, fm) and we want to compute a Gröbner basis of (f1) + Gprev then we will construct matrices whose rows are m f1 such that m is a monomial not divisible by the leading term of an element of Gprev. This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences. == Implementations == The Faugère F4 algorithm is implemented in FGb, Faugère's own implementation, which includes interfaces for using it from C/C++ or Maple, in Maple computer algebra system, as the option method=fgb of function Groebner[gbasis] in the Magma computer algebra system, in the SageMath computer algebra system, Study versions of the Faugère F5 algorithm is implemented in the SINGULAR computer algebra system; the SageMath computer algebra system. in SymPy Python package. == Applications == The previously intractable "cyclic 10" problem was solved by F5, as were a number of systems related to cryptography; for example HFE and C*. == References == Faugère, J.-C. (June 1999). 
"A new efficient algorithm for computing Gröbner bases (F4)" (PDF). Journal of Pure and Applied Algebra. 139 (1): 61–88. doi:10.1016/S0022-4049(99)00005-5. ISSN 0022-4049. Faugère, J.-C. (July 2002). "A new efficient algorithm for computing Gröbner bases without reduction to zero ( F 5 )". Proceedings of the 2002 international symposium on Symbolic and algebraic computation (PDF). ACM Press. pp. 75–83. CiteSeerX 10.1.1.188.651. doi:10.1145/780506.780516. ISBN 978-1-58113-484-1. S2CID 15833106. Till Stegers Faugère's F5 Algorithm Revisited (alternative link). Diplom-Mathematiker Thesis, advisor Johannes Buchmann, Technische Universität Darmstadt, September 2005 (revised April 27, 2007). Many references, including links to available implementations. == External links == Faugère's home page (includes pdf reprints of additional papers) An introduction to the F4 algorithm.
Wikipedia/Faugère_F5_algorithm
Geometric group theory is an area in mathematics devoted to the study of finitely generated groups via exploring the connections between algebraic properties of such groups and topological and geometric properties of spaces on which these groups can act non-trivially (that is, when the groups in question are realized as geometric symmetries or continuous transformations of some spaces). Another important idea in geometric group theory is to consider finitely generated groups themselves as geometric objects. This is usually done by studying the Cayley graphs of groups, which, in addition to the graph structure, are endowed with the structure of a metric space, given by the so-called word metric. Geometric group theory, as a distinct area, is relatively new, and became a clearly identifiable branch of mathematics in the late 1980s and early 1990s. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry, algebraic topology, computational group theory and differential geometry. There are also substantial connections with complexity theory, mathematical logic, the study of Lie groups and their discrete subgroups, dynamical systems, probability theory, K-theory, and other areas of mathematics. In the introduction to his book Topics in Geometric Group Theory, Pierre de la Harpe wrote: "One of my personal beliefs is that fascination with symmetries and groups is one way of coping with frustrations of life's limitations: we like to recognize symmetries which allow us to recognize more than what we can see. 
In this sense the study of geometric group theory is a part of culture, and reminds me of several things that Georges de Rham practiced on many occasions, such as teaching mathematics, reciting Mallarmé, or greeting a friend".: 3  == History == Geometric group theory grew out of combinatorial group theory that largely studied properties of discrete groups via analyzing group presentations, which describe groups as quotients of free groups; this field was first systematically studied by Walther von Dyck, student of Felix Klein, in the early 1880s, while an early form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. Currently combinatorial group theory as an area is largely subsumed by geometric group theory. Moreover, the term "geometric group theory" came to often include studying discrete groups using probabilistic, measure-theoretic, arithmetic, analytic and other approaches that lie outside of the traditional combinatorial group theory arsenal. In the first half of the 20th century, pioneering work of Max Dehn, Jakob Nielsen, Kurt Reidemeister and Otto Schreier, J. H. C. Whitehead, Egbert van Kampen, amongst others, introduced some topological and geometric ideas into the study of discrete groups. Other precursors of geometric group theory include small cancellation theory and Bass–Serre theory. Small cancellation theory was introduced by Martin Grindlinger in the 1960s and further developed by Roger Lyndon and Paul Schupp. It studies van Kampen diagrams, corresponding to finite group presentations, via combinatorial curvature conditions and derives algebraic and algorithmic properties of groups from such analysis. Bass–Serre theory, introduced in the 1977 book of Serre, derives structural algebraic information about groups by studying group actions on simplicial trees. 
External precursors of geometric group theory include the study of lattices in Lie groups, especially Mostow's rigidity theorem, the study of Kleinian groups, and the progress achieved in low-dimensional topology and hyperbolic geometry in the 1970s and early 1980s, spurred, in particular, by William Thurston's Geometrization program. The emergence of geometric group theory as a distinct area of mathematics is usually traced to the late 1980s and early 1990s. It was spurred by the 1987 monograph of Mikhail Gromov "Hyperbolic groups" that introduced the notion of a hyperbolic group (also known as word-hyperbolic or Gromov-hyperbolic or negatively curved group), which captures the idea of a finitely generated group having large-scale negative curvature, and by his subsequent monograph Asymptotic Invariants of Infinite Groups, that outlined Gromov's program of understanding discrete groups up to quasi-isometry. The work of Gromov had a transformative effect on the study of discrete groups and the phrase "geometric group theory" started appearing soon afterwards. (see e.g.). == Modern themes and developments == Notable themes and developments in geometric group theory in 1990s and 2000s include: Gromov's program to study quasi-isometric properties of groups. A particularly influential broad theme in the area is Gromov's program of classifying finitely generated groups according to their large scale geometry. Formally, this means classifying finitely generated groups with their word metric up to quasi-isometry. This program involves: The study of properties that are invariant under quasi-isometry. 
Examples of such properties of finitely generated groups include: the growth rate of a finitely generated group; the isoperimetric function or Dehn function of a finitely presented group; the number of ends of a group; hyperbolicity of a group; the homeomorphism type of the Gromov boundary of a hyperbolic group; asymptotic cones of finitely generated groups (see e.g.); amenability of a finitely generated group; being virtually abelian (that is, having an abelian subgroup of finite index); being virtually nilpotent; being virtually free; being finitely presentable; being a finitely presentable group with solvable Word Problem; and others. Theorems which use quasi-isometry invariants to prove algebraic results about groups, for example: Gromov's polynomial growth theorem; Stallings' ends theorem; Mostow rigidity theorem. Quasi-isometric rigidity theorems, in which one classifies algebraically all groups that are quasi-isometric to some given group or metric space. This direction was initiated by the work of Schwartz on quasi-isometric rigidity of rank-one lattices and the work of Benson Farb and Lee Mosher on quasi-isometric rigidity of Baumslag–Solitar groups. The theory of word-hyperbolic and relatively hyperbolic groups. A particularly important development here is the work of Zlil Sela in 1990s resulting in the solution of the isomorphism problem for word-hyperbolic groups. The notion of a relatively hyperbolic groups was originally introduced by Gromov in 1987 and refined by Farb and Brian Bowditch, in the 1990s. The study of relatively hyperbolic groups gained prominence in the 2000s. Interactions with mathematical logic and the study of the first-order theory of free groups. Particularly important progress occurred on the famous Tarski conjectures, due to the work of Sela as well as of Olga Kharlampovich and Alexei Myasnikov. The study of limit groups and introduction of the language and machinery of non-commutative algebraic geometry gained prominence. 
Interactions with computer science, complexity theory and the theory of formal languages. This theme is exemplified by the development of the theory of automatic groups, a notion that imposes certain geometric and language theoretic conditions on the multiplication operation in a finitely generated group. The study of isoperimetric inequalities, Dehn functions and their generalizations for finitely presented group. This includes, in particular, the work of Jean-Camille Birget, Aleksandr Olʹshanskiĭ, Eliyahu Rips and Mark Sapir essentially characterizing the possible Dehn functions of finitely presented groups, as well as results providing explicit constructions of groups with fractional Dehn functions. The theory of toral or JSJ-decompositions for 3-manifolds was originally brought into a group theoretic setting by Peter Kropholler. This notion has been developed by many authors for both finitely presented and finitely generated groups. Connections with geometric analysis, the study of C*-algebras associated with discrete groups and of the theory of free probability. This theme is represented, in particular, by considerable progress on the Novikov conjecture and the Baum–Connes conjecture and the development and study of related group-theoretic notions such as topological amenability, asymptotic dimension, uniform embeddability into Hilbert spaces, rapid decay property, and so on (see e.g.). Interactions with the theory of quasiconformal analysis on metric spaces, particularly in relation to Cannon's conjecture about characterization of hyperbolic groups with Gromov boundary homeomorphic to the 2-sphere. Finite subdivision rules, also in relation to Cannon's conjecture. 
Interactions with topological dynamics in the contexts of studying actions of discrete groups on various compact spaces and group compactifications, particularly convergence group methods Development of the theory of group actions on R {\displaystyle \mathbb {R} } -trees (particularly the Rips machine), and its applications. The study of group actions on CAT(0) spaces and CAT(0) cubical complexes, motivated by ideas from Alexandrov geometry. Interactions with low-dimensional topology and hyperbolic geometry, particularly the study of 3-manifold groups (see, e.g.,), mapping class groups of surfaces, braid groups and Kleinian groups. Introduction of probabilistic methods to study algebraic properties of "random" group theoretic objects (groups, group elements, subgroups, etc.). A particularly important development here is the work of Gromov who used probabilistic methods to prove the existence of a finitely generated group that is not uniformly embeddable into a Hilbert space. Other notable developments include introduction and study of the notion of generic-case complexity for group-theoretic and other mathematical algorithms and algebraic rigidity results for generic groups. The study of automata groups and iterated monodromy groups as groups of automorphisms of infinite rooted trees. In particular, Grigorchuk's groups of intermediate growth, and their generalizations, appear in this context. The study of measure-theoretic properties of group actions on measure spaces, particularly introduction and development of the notions of measure equivalence and orbit equivalence, as well as measure-theoretic generalizations of Mostow rigidity. The study of unitary representations of discrete groups and Kazhdan's property (T) The study of Out(Fn) (the outer automorphism group of a free group of rank n) and of individual automorphisms of free groups. 
The introduction and study of Culler–Vogtmann's outer space and of the theory of train tracks for free group automorphisms played a particularly prominent role here.
- Development of Bass–Serre theory, particularly various accessibility results and the theory of tree lattices; generalizations of Bass–Serre theory such as the theory of complexes of groups.
- The study of random walks on groups and related boundary theory, particularly the notion of the Poisson boundary (see e.g.).
- The study of amenability and of groups whose amenability status is still unknown.
- Interactions with finite group theory, particularly progress in the study of subgroup growth.
- Studying subgroups and lattices in linear groups, such as S L ( n , R ) {\displaystyle SL(n,\mathbb {R} )} , and of other Lie groups, via geometric methods (e.g. buildings), algebro-geometric tools (e.g. algebraic groups and representation varieties), analytic methods (e.g. unitary representations on Hilbert spaces) and arithmetic methods.
- Group cohomology, using algebraic and topological methods, particularly involving interaction with algebraic topology and the use of Morse-theoretic ideas in the combinatorial context; large-scale, or coarse (see e.g.), homological and cohomological methods.
- Progress on traditional combinatorial group theory topics, such as the Burnside problem, the study of Coxeter groups and Artin groups, and so on (the methods used to study these questions currently are often geometric and topological).

== Examples ==
The following examples are often studied in geometric group theory:

== See also ==
- The ping-pong lemma, a useful way to exhibit a group as a free product
- Amenable group
- Nielsen transformation
- Tietze transformation

== References ==

=== Books and monographs ===
These texts cover geometric group theory and related topics.
Bowditch, Brian H. (2006). A course on geometric group theory. MSJ Memoirs. Vol. 16. Tokyo: Mathematical Society of Japan. ISBN 4-931469-35-3.
Bridson, Martin R.; Haefliger, André (1999). Metric spaces of non-positive curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Vol. 319. Berlin: Springer-Verlag. ISBN 3-540-64324-9.
Coornaert, Michel; Delzant, Thomas; Papadopoulos, Athanase (1990). Géométrie et théorie des groupes : les groupes hyperboliques de Gromov. Lecture Notes in Mathematics. Vol. 1441. Springer-Verlag. ISBN 3-540-52977-2. MR 1075994.
Clay, Matt; Margalit, Dan (2017). Office Hours with a Geometric Group Theorist. Princeton University Press. ISBN 978-0-691-15866-2.
Coornaert, Michel; Papadopoulos, Athanase (1993). Symbolic dynamics and hyperbolic groups. Lecture Notes in Mathematics. Vol. 1539. Springer-Verlag. ISBN 3-540-56499-3.
de la Harpe, P. (2000). Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press. ISBN 0-226-31719-6.
Druţu, Cornelia; Kapovich, Michael (2018). Geometric Group Theory (PDF). American Mathematical Society Colloquium Publications. Vol. 63. American Mathematical Society. ISBN 978-1-4704-1104-6. MR 3753580.
Epstein, D.B.A.; Cannon, J.W.; Holt, D.; Levy, S.; Paterson, M.; Thurston, W. (1992). Word Processing in Groups. Jones and Bartlett. ISBN 0-86720-244-0.
Gromov, M. (1987). "Hyperbolic Groups". In Gersten, G.M. (ed.). Essays in Group Theory. Vol. 8. MSRI. pp. 75–263. ISBN 0-387-96618-8.
Gromov, Mikhael (1993). "Asymptotic invariants of infinite groups". In Niblo, G.A.; Roller, M.A. (eds.). Geometric Group Theory: Proceedings of the Symposium held in Sussex 1991. London Mathematical Society Lecture Note Series. Vol. 2. Cambridge University Press. pp. 1–295. ISBN 978-0-521-44680-8.
Kapovich, M. (2001). Hyperbolic Manifolds and Discrete Groups. Progress in Mathematics. Vol. 183. Birkhäuser. ISBN 978-0-8176-3904-4.
Lyndon, Roger C.; Schupp, Paul E. (2015) [1977]. Combinatorial Group Theory. Classics in mathematics. Springer. ISBN 978-3-642-61896-3.
Ol'shanskii, A.Yu.
(2012) [1991]. Geometry of Defining Relations in Groups. Springer. ISBN 978-94-011-3618-1.
Roe, John (2003). Lectures on Coarse Geometry. University Lecture Series. Vol. 31. American Mathematical Society. ISBN 978-0-8218-3332-2.

== External links ==
Jon McCammond's Geometric Group Theory Page
What is Geometric Group Theory? By Daniel Wise
Open Problems in combinatorial and geometric group theory
Geometric group theory Theme on arxiv.org
Wikipedia/Geometric_Group_Theory
Numerical algebraic geometry is a field of computational mathematics, particularly computational algebraic geometry, which uses methods from numerical analysis to study and manipulate the solutions of systems of polynomial equations. == Homotopy continuation == The primary computational method used in numerical algebraic geometry is homotopy continuation, in which a homotopy is formed between two polynomial systems, and the isolated solutions (points) of one are continued to the other. This is a specialization of the more general method of numerical continuation. Let z {\displaystyle z} represent the variables of the system. By abuse of notation, and to facilitate the spectrum of ambient spaces over which one can solve the system, we do not use vector notation for z {\displaystyle z} . Similarly for the polynomial systems f {\displaystyle f} and g {\displaystyle g} . Current canonical notation calls the start system g {\displaystyle g} , and the target system, i.e., the system to solve, f {\displaystyle f} . A very common homotopy, the straight-line homotopy, between f {\displaystyle f} and g {\displaystyle g} is H ( z , t ) = ( 1 − t ) f ( z ) + t g ( z ) . {\displaystyle H(z,t)=(1-t)f(z)+tg(z).} In the above homotopy, one starts the path variable at t start = 1 {\displaystyle t_{\text{start}}=1} and continues toward t end = 0 {\displaystyle t_{\text{end}}=0} . Another common choice is to run from 0 {\displaystyle 0} to 1 {\displaystyle 1} . In principle, the choice is completely arbitrary. In practice, regarding endgame methods for computing singular solutions using homotopy continuation, the target time being 0 {\displaystyle 0} can significantly ease analysis, so this perspective is here taken. Regardless of the choice of start and target times, the H {\displaystyle H} ought to be formulated such that H ( z , t start ) = g ( z ) {\displaystyle H(z,t_{\text{start}})=g(z)} , and H ( z , t end ) = f ( z ) {\displaystyle H(z,t_{\text{end}})=f(z)} . 
One has a choice in g ( z ) {\displaystyle g(z)} , including roots of unity, total degree, polyhedral, and multi-homogeneous start systems; beyond these, specific start systems that closely mirror the structure of f {\displaystyle f} may be formed for particular systems. The choice of start system impacts the computational time it takes to solve f {\displaystyle f} , in that those that are easy to formulate (such as total degree) tend to have higher numbers of paths to track, while those that take significant effort (such as the polyhedral method) are much sharper. There is currently no good way to predict which will lead to the quickest time to solve. Actual continuation is typically done using predictor–corrector methods, with additional features as implemented. Prediction is done using a standard ODE predictor method, such as Runge–Kutta, and correction often uses Newton–Raphson iteration. Because f {\displaystyle f} and g {\displaystyle g} are polynomial, homotopy continuation in this context is theoretically guaranteed to compute all solutions of f {\displaystyle f} , due to Bertini's theorem. However, this guarantee is not always achieved in practice, because of issues arising from limitations of the modern computer, most notably finite precision. That is, despite the strength of the probability-1 argument underlying this theory, without using a priori certified tracking methods, some paths may fail to track perfectly for various reasons. == Witness set == A witness set W {\displaystyle W} is a data structure used to describe algebraic varieties. The witness set for an affine variety that is equidimensional consists of three pieces of information. The first piece of information is a system of equations F {\displaystyle F} . These equations define the algebraic variety V ( F ) {\displaystyle {\mathbf {V} }(F)} that is being studied. The second piece of information is a linear space L {\displaystyle {\mathcal {L}}} .
The dimension of L {\displaystyle {\mathcal {L}}} is the codimension of V ( F ) {\displaystyle {\mathbf {V} }(F)} , and chosen to intersect V ( F ) {\displaystyle {\mathbf {V} }(F)} transversely. The third piece of information is the list of points in the intersection L ∩ V ( F ) {\displaystyle {\mathcal {L}}\cap {\mathbf {V} }(F)} . This intersection has finitely many points and the number of points is the degree of the algebraic variety V ( F ) {\displaystyle {\mathbf {V} }(F)} . Thus, witness sets encode the answer to the first two questions one asks about an algebraic variety: What is the dimension, and what is the degree? Witness sets also allow one to perform a numerical irreducible decomposition, component membership tests, and component sampling. This makes witness sets a good description of an algebraic variety. == Certification == Solutions to polynomial systems computed using numerical algebraic geometric methods can be certified, meaning that the approximate solution is "correct". This can be achieved in several ways, either a priori using a certified tracker, or a posteriori by showing that the point is, say, in the basin of convergence for Newton's method. == Software == Several software packages implement portions of the theoretical body of numerical algebraic geometry. These include, in alphabetic order: alphaCertified Bertini Hom4PS HomotopyContinuation.jl Macaulay2 (core implementation of homotopy tracking and NumericalAlgebraicGeometry package) MiNuS: Optimized C++ framework for fast homotopy continuation. Fastest solver for certain 100-320 degree square problems to date. PHCPack == References == == External links == Bertini home page Hom4PS-3 HomotopyContinuation.jl MiNuS fast C++ framework
Wikipedia/Numerical_algebraic_geometry
In mathematics, an affine algebraic plane curve is the zero set of a polynomial in two variables. A projective algebraic plane curve is the zero set in a projective plane of a homogeneous polynomial in three variables. An affine algebraic plane curve can be completed in a projective algebraic plane curve by homogenizing its defining polynomial. Conversely, a projective algebraic plane curve of homogeneous equation h(x, y, t) = 0 can be restricted to the affine algebraic plane curve of equation h(x, y, 1) = 0. These two operations are each inverse to the other; therefore, the phrase algebraic plane curve is often used without specifying explicitly whether it is the affine or the projective case that is considered. If the defining polynomial of a plane algebraic curve is irreducible, then one has an irreducible plane algebraic curve. Otherwise, the algebraic curve is the union of one or several irreducible curves, called its components, that are defined by the irreducible factors. More generally, an algebraic curve is an algebraic variety of dimension one. In some contexts, an algebraic set of dimension one is also called an algebraic curve, but this will not be the case in this article. Equivalently, an algebraic curve is an algebraic variety that is birationally equivalent to an irreducible algebraic plane curve. If the curve is contained in an affine space or a projective space, one can take a projection for such a birational equivalence. These birational equivalences reduce most of the study of algebraic curves to the study of algebraic plane curves. However, some properties are not kept under birational equivalence and must be studied on non-plane curves. This is, in particular, the case for the degree and smoothness. For example, there exist smooth curves of genus 0 and degree greater than two, but any plane projection of such curves has singular points (see Genus–degree formula). A non-plane curve is often called a space curve or a skew curve. 
== In Euclidean geometry == An algebraic curve in the Euclidean plane is the set of the points whose coordinates are the solutions of a bivariate polynomial equation p(x, y) = 0. This equation is often called the implicit equation of the curve, in contrast to the curves that are the graph of a function defining explicitly y as a function of x. With a curve given by such an implicit equation, the first problems are to determine the shape of the curve and to draw it. These problems are not as easy to solve as in the case of the graph of a function, for which y may easily be computed for various values of x. The fact that the defining equation is a polynomial implies that the curve has some structural properties that may help in solving these problems. Every algebraic curve may be uniquely decomposed into a finite number of smooth monotone arcs (also called branches), possibly connected by some points sometimes called "remarkable points", and possibly a finite number of isolated points called acnodes. A smooth monotone arc is the graph of a smooth function which is defined and monotone on an open interval of the x-axis. In each direction, an arc is either unbounded (usually called an infinite arc) or has an endpoint which is either a singular point (this will be defined below) or a point with a tangent parallel to one of the coordinate axes. For example, for the Tschirnhausen cubic, there are two infinite arcs having the origin (0,0) as an endpoint. This point is the only singular point of the curve. There are also two arcs having this singular point as one endpoint and having a second endpoint with a horizontal tangent. Finally, there are two other arcs each having one of these points with horizontal tangent as the first endpoint and having the unique point with vertical tangent as the second endpoint. In contrast, the sinusoid is certainly not an algebraic curve, having an infinite number of monotone arcs.
To draw an algebraic curve, it is important to know the remarkable points and their tangents, the infinite branches and their asymptotes (if any) and the way in which the arcs connect them. It is also useful to consider the inflection points as remarkable points. When all this information is drawn on a sheet of paper, the shape of the curve usually appears rather clearly. If not, it suffices to add a few other points and their tangents to get a good description of the curve. The methods for computing the remarkable points and their tangents are described below in the section Remarkable points of a plane curve. == Plane projective curves == It is often desirable to consider curves in the projective space. An algebraic curve in the projective plane or plane projective curve is the set of the points in a projective plane whose projective coordinates are zeros of a homogeneous polynomial in three variables P(x, y, z). Every affine algebraic curve of equation p(x, y) = 0 may be completed into the projective curve of equation h p ( x , y , z ) = 0 , {\displaystyle ^{h}p(x,y,z)=0,} where h p ( x , y , z ) = z deg ⁡ ( p ) p ( x z , y z ) {\displaystyle ^{h}p(x,y,z)=z^{\deg(p)}p\left({\frac {x}{z}},{\frac {y}{z}}\right)} is the result of the homogenization of p. Conversely, if P(x, y, z) = 0 is the homogeneous equation of a projective curve, then P(x, y, 1) = 0 is the equation of an affine curve, which consists of the points of the projective curve whose third projective coordinate is not zero. These two operations are inverse to each other, as h p ( x , y , 1 ) = p ( x , y ) {\displaystyle ^{h}p(x,y,1)=p(x,y)} and, if p is defined by p ( x , y ) = P ( x , y , 1 ) {\displaystyle p(x,y)=P(x,y,1)} , then h p ( x , y , z ) = P ( x , y , z ) , {\displaystyle ^{h}p(x,y,z)=P(x,y,z),} provided that the homogeneous polynomial P is not divisible by z.
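The homogenization operation described above can be sketched with SymPy (an illustrative example; the cubic chosen here is arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

p = y**2 - x**3 + x                  # an affine curve p(x, y) = 0
d = sp.Poly(p, x, y).total_degree()  # deg(p) = 3

# hp(x, y, z) = z^deg(p) * p(x/z, y/z): the projective completion of p
hp = sp.expand(z**d * p.subs({x: x/z, y: y/z}))
# hp is the homogeneous polynomial y^2*z - x^3 + x*z^2

# setting z = 1 recovers the affine equation, as stated in the text
p_back = hp.subs(z, 1)
```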
For example, the projective curve of equation x2 + y2 − z2 = 0 is the projective completion of the unit circle of equation x2 + y2 − 1 = 0. This implies that an affine curve and its projective completion are essentially the same curve, or, more precisely, that the affine curve is a part of the projective curve that is large enough to well define the "complete" curve. This point of view is commonly expressed by calling "points at infinity" of the affine curve the points (in finite number) of the projective completion that do not belong to the affine part. Projective curves are frequently studied for themselves. They are also useful for the study of affine curves. For example, if p(x, y) is the polynomial defining an affine curve, besides the partial derivatives p x ′ {\displaystyle p'_{x}} and p y ′ {\displaystyle p'_{y}} , it is useful to consider the derivative at infinity p ∞ ′ ( x , y ) = h p z ′ ( x , y , 1 ) . {\displaystyle p'_{\infty }(x,y)={^{h}p'_{z}(x,y,1)}.} For example, the equation of the tangent of the affine curve of equation p(x, y) = 0 at a point (a, b) is x p x ′ ( a , b ) + y p y ′ ( a , b ) + p ∞ ′ ( a , b ) = 0. {\displaystyle xp'_{x}(a,b)+yp'_{y}(a,b)+p'_{\infty }(a,b)=0.} == Remarkable points of a plane curve == In this section, we consider a plane algebraic curve defined by a bivariate polynomial p(x, y) and its projective completion, defined by the homogenization P ( x , y , z ) = h p ( x , y , z ) {\displaystyle P(x,y,z)={}^{h}p(x,y,z)} of p. === Intersection with a line === Knowing the points of intersection of a curve with a given line is frequently useful. The intersections with the coordinate axes and with the asymptotes are useful for drawing the curve. Intersecting with a line parallel to the axes allows one to find at least a point in each branch of the curve.
If an efficient root-finding algorithm is available, this allows one to draw the curve by plotting the intersection points with all the lines parallel to the y-axis and passing through each pixel on the x-axis. If the polynomial defining the curve has a degree d, any line cuts the curve in at most d points. Bézout's theorem asserts that this number is exactly d, if the points are searched in the projective plane over an algebraically closed field (for example the complex numbers), and counted with their multiplicity. The method of computation that follows proves this theorem again, in this simple case. To compute the intersection of the curve defined by the polynomial p with the line of equation ax+by+c = 0, one solves the equation of the line for x (or for y if a = 0). Substituting the result in p, one gets a univariate equation q(y) = 0 (or q(x) = 0, if the equation of the line has been solved for y), each of whose roots is one coordinate of an intersection point. The other coordinate is deduced from the equation of the line. The multiplicity of an intersection point is the multiplicity of the corresponding root. There is an intersection point at infinity if the degree of q is lower than the degree of p; the multiplicity of such an intersection point at infinity is the difference of the degrees of p and q. === Tangent at a point === The tangent at a point (a, b) of the curve is the line of equation ( x − a ) p x ′ ( a , b ) + ( y − b ) p y ′ ( a , b ) = 0 {\displaystyle (x-a)p'_{x}(a,b)+(y-b)p'_{y}(a,b)=0} , as for every differentiable curve defined by an implicit equation. In the case of polynomials, another formula for the tangent has a simpler constant term and is more symmetric: x p x ′ ( a , b ) + y p y ′ ( a , b ) + p ∞ ′ ( a , b ) = 0 , {\displaystyle xp'_{x}(a,b)+yp'_{y}(a,b)+p'_{\infty }(a,b)=0,} where p ∞ ′ ( x , y ) = P z ′ ( x , y , 1 ) {\displaystyle p'_{\infty }(x,y)=P'_{z}(x,y,1)} is the derivative at infinity.
The equivalence of the two equations results from Euler's homogeneous function theorem applied to P. If p x ′ ( a , b ) = p y ′ ( a , b ) = 0 , {\displaystyle p'_{x}(a,b)=p'_{y}(a,b)=0,} the tangent is not defined and the point is a singular point. This extends immediately to the projective case: The equation of the tangent at the point of projective coordinates (a:b:c) of the projective curve of equation P(x, y, z) = 0 is x P x ′ ( a , b , c ) + y P y ′ ( a , b , c ) + z P z ′ ( a , b , c ) = 0 , {\displaystyle xP'_{x}(a,b,c)+yP'_{y}(a,b,c)+zP'_{z}(a,b,c)=0,} and the points of the curve that are singular are the points such that P x ′ ( a , b , c ) = P y ′ ( a , b , c ) = P z ′ ( a , b , c ) = 0. {\displaystyle P'_{x}(a,b,c)=P'_{y}(a,b,c)=P'_{z}(a,b,c)=0.} (The condition P(a, b, c) = 0 is implied by these conditions, by Euler's homogeneous function theorem.) === Asymptotes === Every infinite branch of an algebraic curve corresponds to a point at infinity on the curve, that is a point of the projective completion of the curve that does not belong to its affine part. The corresponding asymptote is the tangent of the curve at that point. The general formula for a tangent to a projective curve may apply, but it is worth making it explicit in this case. Let p = p d + ⋯ + p 0 {\displaystyle p=p_{d}+\cdots +p_{0}} be the decomposition of the polynomial defining the curve into its homogeneous parts, where pi is the sum of the monomials of p of degree i. It follows that P = h p = p d + z p d − 1 + ⋯ + z d p 0 {\displaystyle P={^{h}p}=p_{d}+zp_{d-1}+\cdots +z^{d}p_{0}} and P z ′ ( a , b , 0 ) = p d − 1 ( a , b ) . {\displaystyle P'_{z}(a,b,0)=p_{d-1}(a,b).} A point at infinity of the curve is a zero of P of the form (a, b, 0). Equivalently, (a, b) is a zero of pd. The fundamental theorem of algebra implies that, over an algebraically closed field (typically, the field of complex numbers), pd factors into a product of linear factors.
Each factor defines a point at infinity on the curve: if bx − ay is such a factor, then it defines the point at infinity (a, b, 0). Over the reals, pd factors into linear and quadratic factors. The irreducible quadratic factors define non-real points at infinity, and the real points are given by the linear factors. If (a, b, 0) is a point at infinity of the curve, one says that (a, b) is an asymptotic direction. Setting q = pd the equation of the corresponding asymptote is x q x ′ ( a , b ) + y q y ′ ( a , b ) + p d − 1 ( a , b ) = 0. {\displaystyle xq'_{x}(a,b)+yq'_{y}(a,b)+p_{d-1}(a,b)=0.} If q x ′ ( a , b ) = q y ′ ( a , b ) = 0 {\displaystyle q'_{x}(a,b)=q'_{y}(a,b)=0} and p d − 1 ( a , b ) ≠ 0 , {\displaystyle p_{d-1}(a,b)\neq 0,} the asymptote is the line at infinity, and, in the real case, the curve has a branch that looks like a parabola. In this case one says that the curve has a parabolic branch. If q x ′ ( a , b ) = q y ′ ( a , b ) = p d − 1 ( a , b ) = 0 , {\displaystyle q'_{x}(a,b)=q'_{y}(a,b)=p_{d-1}(a,b)=0,} the curve has a singular point at infinity and may have several asymptotes. They may be computed by the method of computing the tangent cone of a singular point. === Singular points === The singular points of a curve of degree d defined by a polynomial p(x,y) of degree d are the solutions of the system of equations: p x ′ ( x , y ) = p y ′ ( x , y ) = p ( x , y ) = 0. {\displaystyle p'_{x}(x,y)=p'_{y}(x,y)=p(x,y)=0.} In characteristic zero, this system is equivalent to p x ′ ( x , y ) = p y ′ ( x , y ) = p ∞ ′ ( x , y ) = 0 , {\displaystyle p'_{x}(x,y)=p'_{y}(x,y)=p'_{\infty }(x,y)=0,} where, with the notation of the preceding section, p ∞ ′ ( x , y ) = P z ′ ( x , y , 1 ) . {\displaystyle p'_{\infty }(x,y)=P'_{z}(x,y,1).} The systems are equivalent because of Euler's homogeneous function theorem. The latter system has the advantage of having its third polynomial of degree d-1 instead of d. 
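The singular-point computation just described is straightforward to carry out with a computer algebra system. As a minimal sketch in SymPy, for the nodal cubic y² = x²(x + 1) (a hypothetical example curve, chosen for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = y**2 - x**2*(x + 1)   # nodal cubic: one ordinary double point

# singular points: common zeros of p, p'_x and p'_y, as in the text
sols = sp.solve([p, sp.diff(p, x), sp.diff(p, y)], [x, y], dict=True)
# only the node at the origin survives: p'_x = 0 also holds at
# x = -2/3, but p does not vanish there
```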
Similarly, for a projective curve defined by a homogeneous polynomial P(x,y,z) of degree d, the singular points are the points whose homogeneous coordinates are the solutions of the system P x ′ ( x , y , z ) = P y ′ ( x , y , z ) = P z ′ ( x , y , z ) = 0. {\displaystyle P'_{x}(x,y,z)=P'_{y}(x,y,z)=P'_{z}(x,y,z)=0} (In positive characteristic, the equation P ( x , y , z ) = 0 {\displaystyle P(x,y,z)=0} has to be added to the system.) This implies that the number of singular points is finite as long as p(x,y) or P(x,y,z) is square-free. Bézout's theorem thus implies that the number of singular points is at most (d − 1)2, but this bound is not sharp because the system of equations is overdetermined. If reducible polynomials are allowed, the sharp bound is d(d − 1)/2; this value is reached when the polynomial factors into linear factors, that is, if the curve is the union of d lines. For irreducible curves and polynomials, the number of singular points is at most (d − 1)(d − 2)/2, because of the formula expressing the genus in terms of the singularities (see below). The maximum is reached by the curves of genus zero all of whose singularities have multiplicity two and distinct tangents (see below). The equation of the tangents at a singular point is given by the nonzero homogeneous part of the lowest degree in the Taylor series of the polynomial at the singular point. When one changes the coordinates to put the singular point at the origin, the equation of the tangents at the singular point is thus the nonzero homogeneous part of the lowest degree of the polynomial, and the multiplicity of the singular point is the degree of this homogeneous part. == Analytic structure == The study of the analytic structure of an algebraic curve in the neighborhood of a singular point provides accurate information about the topology of singularities.
In fact, near a singular point, a real algebraic curve is the union of a finite number of branches that intersect only at the singular point and look like either a cusp or a smooth curve. Near a regular point, one of the coordinates of the curve may be expressed as an analytic function of the other coordinate. This is a corollary of the analytic implicit function theorem, and implies that the curve is smooth near the point. Near a singular point, the situation is more complicated and involves Puiseux series, which provide analytic parametric equations of the branches. For describing a singularity, it is worth translating the curve so that the singularity is at the origin. This consists of a change of variable of the form X = x − a , Y = y − b , {\displaystyle X=x-a,Y=y-b,} where a , b {\displaystyle a,b} are the coordinates of the singular point. In the following, the singular point under consideration is always supposed to be at the origin. The equation of an algebraic curve is f ( x , y ) = 0 , {\displaystyle f(x,y)=0,} where f is a polynomial in x and y. This polynomial may be considered as a polynomial in y, with coefficients in the algebraically closed field of the Puiseux series in x. Thus f may be factored into factors of the form y − P ( x ) , {\displaystyle y-P(x),} where P is a Puiseux series. These factors are all different if f is an irreducible polynomial, because this implies that f is square-free, a property which is independent of the field of coefficients. The Puiseux series that occur here have the form P ( x ) = ∑ n = n 0 ∞ a n x n / d , {\displaystyle P(x)=\sum _{n=n_{0}}^{\infty }a_{n}x^{n/d},} where d is a positive integer, and n 0 {\displaystyle n_{0}} is an integer that may also be supposed to be positive, because we consider only the branches of the curve that pass through the origin.
Without loss of generality, we may suppose that d is coprime with the greatest common divisor of the n such that a n ≠ 0 {\displaystyle a_{n}\neq 0} (otherwise, one could choose a smaller common denominator for the exponents). Let ω d {\displaystyle \omega _{d}} be a primitive dth root of unity. If the above Puiseux series occurs in the factorization of f ( x , y ) = 0 {\displaystyle f(x,y)=0} , then the d series P i ( x ) = ∑ n = n 0 ∞ a n ω d i n x n / d {\displaystyle P_{i}(x)=\sum _{n=n_{0}}^{\infty }a_{n}\omega _{d}^{in}x^{n/d}} occur also in the factorization (a consequence of Galois theory). These d series are said to be conjugate, and are considered as a single branch of the curve, of ramification index d. In the case of a real curve, that is a curve defined by a polynomial with real coefficients, three cases may occur. If no P i ( x ) {\displaystyle P_{i}(x)} has real coefficients, then one has a non-real branch. If some P i ( x ) {\displaystyle P_{i}(x)} has real coefficients, then one may choose it as P 0 ( x ) {\displaystyle P_{0}(x)} . If d is odd, then every real value of x provides a real value of P 0 ( x ) {\displaystyle P_{0}(x)} , and one has a real branch that looks regular, although it is singular if d > 1. If d is even, then P 0 ( x ) {\displaystyle P_{0}(x)} and P d / 2 ( x ) {\displaystyle P_{d/2}(x)} have real values, but only for x ≥ 0. In this case, the real branch looks like a cusp (or is a cusp, depending on the definition of a cusp that is used). For example, the ordinary cusp has only one branch. If it is defined by the equation y 2 − x 3 = 0 , {\displaystyle y^{2}-x^{3}=0,} then the factorization is ( y − x 3 / 2 ) ( y + x 3 / 2 ) ; {\displaystyle (y-x^{3/2})(y+x^{3/2});} the ramification index is 2, and the two factors are real and each define a half branch.
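The factorization of the ordinary cusp just given can be checked symbolically; a minimal SymPy verification (x is declared positive so that x^{3/2} denotes the real branch):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# expand (y - x^(3/2)) * (y + x^(3/2)): it should recover y^2 - x^3
product = sp.expand((y - x**sp.Rational(3, 2)) * (y + x**sp.Rational(3, 2)))
```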
If the cusp is rotated, its equation becomes y 3 − x 2 = 0 , {\displaystyle y^{3}-x^{2}=0,} and the factorization is ( y − x 2 / 3 ) ( y − j 2 x 2 / 3 ) ( y − ( j 2 ) 2 x 2 / 3 ) , {\displaystyle (y-x^{2/3})(y-j^{2}x^{2/3})(y-(j^{2})^{2}x^{2/3}),} with j = ( − 1 + − 3 ) / 2 {\displaystyle j=(-1+{\sqrt {-3}})/2} (the coefficient ( j 2 ) 2 {\displaystyle (j^{2})^{2}} has not been simplified to j for showing how the above definition of P i ( x ) {\displaystyle P_{i}(x)} is specialized). Here the ramification index is 3, and only one factor is real; this shows that, in the first case, the two factors must be considered as defining the same branch. == Non-plane algebraic curves == An algebraic curve is an algebraic variety of dimension one. This implies that an affine curve in an affine space of dimension n is defined by, at least, n − 1 polynomials in n variables. To define a curve, these polynomials must generate a prime ideal of Krull dimension 1. This condition is not easy to test in practice. Therefore, the following way to represent non-plane curves may be preferred. Let f , g 0 , g 3 , … , g n {\displaystyle f,g_{0},g_{3},\ldots ,g_{n}} be n polynomials in two variables x1 and x2 such that f is irreducible. The points in the affine space of dimension n whose coordinates satisfy the equations and inequations f ( x 1 , x 2 ) = 0 g 0 ( x 1 , x 2 ) ≠ 0 x 3 = g 3 ( x 1 , x 2 ) g 0 ( x 1 , x 2 ) ⋮ x n = g n ( x 1 , x 2 ) g 0 ( x 1 , x 2 ) {\displaystyle {\begin{aligned}&f(x_{1},x_{2})=0\\&g_{0}(x_{1},x_{2})\neq 0\\x_{3}&={\frac {g_{3}(x_{1},x_{2})}{g_{0}(x_{1},x_{2})}}\\&{}\ \vdots \\x_{n}&={\frac {g_{n}(x_{1},x_{2})}{g_{0}(x_{1},x_{2})}}\end{aligned}}} are all the points of an algebraic curve from which a finite number of points have been removed.
This curve is defined by a system of generators of the ideal of the polynomials h such that there exists an integer k such that g 0 k h {\displaystyle g_{0}^{k}h} belongs to the ideal generated by f , x 3 g 0 − g 3 , … , x n g 0 − g n {\displaystyle f,x_{3}g_{0}-g_{3},\ldots ,x_{n}g_{0}-g_{n}} . This representation is a birational equivalence between the curve and the plane curve defined by f. Every algebraic curve may be represented in this way. However, a linear change of variables may be needed in order to make the projection on the first two variables almost always injective. When a change of variables is needed, almost every change is convenient, provided that it is defined over an infinite field. This representation allows us to deduce easily any property of a non-plane algebraic curve, including its graphical representation, from the corresponding property of its plane projection. For a curve defined by its implicit equations, the above representation of the curve may easily be deduced from a Gröbner basis for a block ordering such that the block of the smaller variables is (x1, x2). The polynomial f is the unique polynomial in the basis that depends only on x1 and x2. The fractions gi/g0 are obtained by choosing, for i = 3, ..., n, a polynomial in the basis that is linear in xi and depends only on x1, x2 and xi. If these choices are not possible, this means either that the equations define an algebraic set that is not a variety, or that the variety is not of dimension one, or that one must change coordinates. The latter case occurs when f exists and is unique, and, for i = 3, …, n, there exist polynomials whose leading monomial depends only on x1, x2 and xi. == Algebraic function fields == The study of algebraic curves can be reduced to the study of irreducible algebraic curves: those curves that cannot be written as the union of two smaller curves.
Up to birational equivalence, the irreducible curves over a field F are categorically equivalent to algebraic function fields in one variable over F. Such an algebraic function field is a field extension K of F that contains an element x which is transcendental over F, and such that K is a finite algebraic extension of F(x), which is the field of rational functions in the indeterminate x over F. For example, consider the field C of complex numbers, over which we may define the field C(x) of rational functions over C. If y2 = x3 − x − 1, then the field C(x, y) is an elliptic function field. The element x is not uniquely determined; the field can also be regarded, for instance, as an extension of C(y). The algebraic curve corresponding to the function field is simply the set of points (x, y) in C2 satisfying y2 = x3 − x − 1. If the field F is not algebraically closed, the point of view of function fields is a little more general than that of considering the locus of points, since we include, for instance, "curves" with no points on them. For example, if the base field F is the field R of real numbers, then x2 + y2 = −1 defines an algebraic extension field of R(x), but the corresponding curve considered as a subset of R2 has no points. The equation x2 + y2 = −1 does define an irreducible algebraic curve over R in the scheme sense (an integral, separated, one-dimensional scheme of finite type over R). In this sense, the one-to-one correspondence between irreducible algebraic curves over F (up to birational equivalence) and algebraic function fields in one variable over F holds in general. Two curves can be birationally equivalent (i.e. have isomorphic function fields) without being isomorphic as curves. The situation becomes easier when dealing with nonsingular curves, i.e. those that lack any singularities. Two nonsingular projective curves over a field are isomorphic if and only if their function fields are isomorphic.
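The relation y2 = x3 − x − 1 above makes C(x, y) a degree-2 extension of C(x): every element reduces to the form a(x) + b(x)·y. A minimal pure-Python sketch of this reduction (the coefficient-list representation and helper names are illustrative, not from any library):

```python
# Arithmetic in the elliptic function field C(x)[y] / (y^2 - (x^3 - x - 1)).
# Polynomials in x are coefficient lists [c0, c1, ...]; field elements are
# pairs (a, b) representing a(x) + b(x)*y.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# y^2 reduces to x^3 - x - 1, whose coefficient list is [-1, -1, 0, 1].
Y2 = [-1, -1, 0, 1]

def mul(u, v):
    """(a1 + b1*y)(a2 + b2*y), with y^2 replaced by x^3 - x - 1."""
    a1, b1 = u
    a2, b2 = v
    a = poly_add(poly_mul(a1, a2), poly_mul(poly_mul(b1, b2), Y2))
    b = poly_add(poly_mul(a1, b2), poly_mul(b1, a2))
    return (a, b)

y = ([0], [1])     # the element y itself
y2 = mul(y, y)     # reduces to x^3 - x - 1
y3 = mul(y2, y)    # reduces to (x^3 - x - 1) * y
```

Every power of y thus collapses back to the form a(x) + b(x)·y, reflecting that K = C(x, y) has degree 2 over C(x).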
Tsen's theorem is about the function field of an algebraic curve over an algebraically closed field. == Complex curves and real surfaces == A complex projective algebraic curve resides in n-dimensional complex projective space CPn. This has complex dimension n, but topological dimension, as a real manifold, 2n, and is compact, connected, and orientable. An algebraic curve over C likewise has topological dimension two; in other words, it is a surface. The topological genus of this surface, that is, the number of handles or donut holes, is equal to the geometric genus of the algebraic curve, which may be computed by algebraic means. In short, if one considers a plane projection of a nonsingular curve that has degree d and only ordinary singularities (singularities of multiplicity two with distinct tangents), then the genus is (d − 1)(d − 2)/2 − k, where k is the number of these singularities. === Compact Riemann surfaces === A Riemann surface is a connected complex analytic manifold of one complex dimension, which makes it a connected real manifold of two dimensions. It is compact if it is compact as a topological space. There is a triple equivalence of categories between the category of smooth irreducible projective algebraic curves over C (with non-constant regular maps as morphisms), the category of compact Riemann surfaces (with non-constant holomorphic maps as morphisms), and the opposite of the category of algebraic function fields in one variable over C (with field homomorphisms that fix C as morphisms). This means that in studying these three subjects we are in a sense studying one and the same thing. It allows complex analytic methods to be used in algebraic geometry, algebraic-geometric methods in complex analysis, and field-theoretic methods in both. This is characteristic of a much wider class of problems in algebraic geometry. See also algebraic geometry and analytic geometry for a more general theory.
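The genus computation just described is a one-line formula; a small sketch (pure Python, assuming a degree-d plane projection with only k ordinary double points, as above):

```python
def genus_from_projection(d, k=0):
    """Genus of a curve whose plane projection has degree d and
    only k ordinary singularities: g = (d - 1)(d - 2)/2 - k."""
    return (d - 1) * (d - 2) // 2 - k

# Smooth plane curves: lines and conics are rational (g = 0),
# smooth cubics have g = 1, smooth quartics have g = 3.
assert genus_from_projection(2) == 0
assert genus_from_projection(3) == 1
assert genus_from_projection(4) == 3
# A nodal cubic (one ordinary double point) is rational:
assert genus_from_projection(3, k=1) == 0
```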
== Singularities == Using the intrinsic concept of tangent space, points P on an algebraic curve C are classified as smooth (synonymous: non-singular), or else singular. Given n − 1 homogeneous polynomials in n + 1 variables, we may find the Jacobian matrix as the (n − 1)×(n + 1) matrix of the partial derivatives. If the rank of this matrix is n − 1, then the polynomials define an algebraic curve (otherwise they define an algebraic variety of higher dimension). If the rank remains n − 1 when the Jacobian matrix is evaluated at a point P on the curve, then the point is a smooth or regular point; otherwise it is a singular point. In particular, if the curve is a plane projective algebraic curve, defined by a single homogeneous polynomial equation f(x,y,z) = 0, then the singular points are precisely the points P where the rank of the 1×3 matrix of partial derivatives is zero, that is, where ∂ f ∂ x ( P ) = ∂ f ∂ y ( P ) = ∂ f ∂ z ( P ) = 0. {\displaystyle {\frac {\partial f}{\partial x}}(P)={\frac {\partial f}{\partial y}}(P)={\frac {\partial f}{\partial z}}(P)=0.} Since f is a polynomial, this definition is purely algebraic and makes no assumption about the nature of the field F, which in particular need not be the real or complex numbers. It should, of course, be recalled that (0,0,0) is not a point of the curve and hence not a singular point. Similarly, for an affine algebraic curve defined by a single polynomial equation f(x,y) = 0, the singular points are precisely the points P of the curve where the rank of the 1×2 Jacobian matrix is zero, that is, where f ( P ) = ∂ f ∂ x ( P ) = ∂ f ∂ y ( P ) = 0. {\displaystyle f(P)={\frac {\partial f}{\partial x}}(P)={\frac {\partial f}{\partial y}}(P)=0.} The singularities of a curve are not birational invariants. However, locating and classifying the singularities of a curve is one way of computing the genus, which is a birational invariant.
For this to work, we should consider the curve projectively and require F to be algebraically closed, so that all the singularities which belong to the curve are considered. === Classification of singularities === Singular points include multiple points where the curve crosses over itself, and also various types of cusp, for example that shown by the curve with equation x3 = y2 at (0,0). A curve C has at most a finite number of singular points. If it has none, it can be called smooth or non-singular. Commonly, this definition is understood over an algebraically closed field and for a curve C in a projective space (i.e., complete in the sense of algebraic geometry). For example, the plane curve of equation y − x 3 = 0 {\displaystyle y-x^{3}=0} is singular, as it has a singular point (a cusp) at infinity. In the remainder of this section, one considers a plane curve C defined as the zero set of a bivariate polynomial f(x, y). Some of the results, but not all, may be generalized to non-plane curves. The singular points are classified by means of several invariants. The multiplicity m is defined as the maximum integer such that the derivatives of f of all orders up to m − 1 vanish at P (m is also the minimal intersection number between the curve and a straight line at P). Intuitively, a singular point has delta invariant δ if it concentrates δ ordinary double points at P. To make this precise, the blow up process produces so-called infinitely near points, and summing m(m − 1)/2 over the infinitely near points, where m is their multiplicity, produces δ. For an irreducible and reduced curve and a point P we can define δ algebraically as the length of O ~ P / O P {\displaystyle {\widetilde {\mathcal {O}}}_{P}/{\mathcal {O}}_{P}} where O P {\displaystyle {\mathcal {O}}_{P}} is the local ring at P and O ~ P {\displaystyle {\widetilde {\mathcal {O}}}_{P}} is its integral closure.
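As an illustration of the Jacobian criterion from the previous section, the cusp of the curve x3 = y2 can be detected by checking where f and both partial derivatives vanish; a minimal sketch in pure Python:

```python
# Jacobian criterion for the affine cuspidal cubic f(x, y) = y^2 - x^3:
# a point P of the curve is singular iff f and both partials vanish at P.

def f(x, y):
    return y**2 - x**3

def fx(x, y):   # partial derivative of f with respect to x
    return -3 * x**2

def fy(x, y):   # partial derivative of f with respect to y
    return 2 * y

def is_singular(x, y):
    return f(x, y) == 0 and fx(x, y) == 0 and fy(x, y) == 0

assert is_singular(0, 0)                          # the cusp at the origin
assert f(1, 1) == 0 and not is_singular(1, 1)     # (1, 1) is a smooth point
```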
The Milnor number μ of a singularity is the degree of the mapping ⁠grad f(x,y)/|grad f(x,y)|⁠ on the small sphere of radius ε, in the sense of the topological degree of a continuous mapping, where grad f is the (complex) gradient vector field of f. It is related to δ and r by the Milnor–Jung formula μ = 2 δ − r + 1. {\displaystyle \mu =2\delta -r+1.} Here, the branching number r of P is the number of locally irreducible branches at P. For example, r = 1 at an ordinary cusp, and r = 2 at an ordinary double point. The multiplicity m is at least r, and P is singular if and only if m is at least 2. Moreover, δ is at least m(m − 1)/2. Computing the delta invariants of all of the singularities allows the genus g of the curve to be determined; if d is the degree, then g = 1 2 ( d − 1 ) ( d − 2 ) − ∑ P δ P , {\displaystyle g={\frac {1}{2}}(d-1)(d-2)-\sum _{P}\delta _{P},} where the sum is taken over all singular points P of the complex projective plane curve. It is called the genus formula. Assign the invariants [m, δ, r] to a singularity, where m is the multiplicity, δ is the delta-invariant, and r is the branching number. Then an ordinary cusp is a point with invariants [2,1,1], an ordinary double point is a point with invariants [2,1,2], and an ordinary m-multiple point is a point with invariants [m, m(m − 1)/2, m]. == Examples of curves == === Rational curves === A rational curve, also called a unicursal curve, is any curve which is birationally equivalent to a line, which we may take to be a projective line; accordingly, we may identify the function field of the curve with the field of rational functions in one indeterminate F(x). If F is algebraically closed, this is equivalent to a curve of genus zero; however, the field of all real algebraic functions defined on the real algebraic variety x2 + y2 = −1 is a field of genus zero which is not a rational function field.
Concretely, a rational curve embedded in an affine space of dimension n over F can be parameterized (except for isolated exceptional points) by means of n rational functions of a single parameter t; by reducing these rational functions to the same denominator, the n+1 resulting polynomials define a polynomial parametrization of the projective completion of the curve in the projective space. An example is the rational normal curve, where all these polynomials are monomials. Any conic section defined over F with a rational point in F is a rational curve. It can be parameterized by drawing a line with slope t through the rational point and taking the second intersection point of the line with the conic: this gives a quadratic polynomial with F-rational coefficients having one F-rational root, hence the other root is F-rational (i.e., belongs to F) as well. For example, consider the ellipse x2 + xy + y2 = 1, where (−1, 0) is a rational point. Drawing a line with slope t from (−1,0), y = t(x + 1), substituting it in the equation of the ellipse, factoring, and solving for x, we obtain x = 1 − t 2 1 + t + t 2 . {\displaystyle x={\frac {1-t^{2}}{1+t+t^{2}}}.} Then the equation for y is y = t ( x + 1 ) = t ( t + 2 ) 1 + t + t 2 , {\displaystyle y=t(x+1)={\frac {t(t+2)}{1+t+t^{2}}}\,,} which defines a rational parameterization of the ellipse and hence shows the ellipse is a rational curve. All points of the ellipse are given, except for (−1,1), which corresponds to t = ∞; the entire curve is therefore parameterized by the real projective line. Such a rational parameterization may be considered in the projective space by equating the first projective coordinates to the numerators of the parameterization and the last one to the common denominator. As the parameter is defined in a projective line, the polynomials in the parameter should be homogenized. For example, the projective parameterization of the above ellipse is X = U 2 − T 2 , Y = T ( T + 2 U ) , Z = T 2 + T U + U 2 .
{\displaystyle X=U^{2}-T^{2},\quad Y=T\,(T+2\,U),\quad Z=T^{2}+TU+U^{2}.} Eliminating T and U between these equations we get again the projective equation of the ellipse X 2 + X Y + Y 2 = Z 2 , {\displaystyle X^{2}+X\,Y+Y^{2}=Z^{2},} which may be easily obtained directly by homogenizing the above equation. Many of the curves on Wikipedia's list of curves are rational and hence have similar rational parameterizations. === Rational plane curves === Rational plane curves are rational curves embedded into P 2 {\displaystyle \mathbb {P} ^{2}} . Given generic sections s 1 , s 2 , s 3 ∈ Γ ( P 1 , O ( d ) ) {\displaystyle s_{1},s_{2},s_{3}\in \Gamma (\mathbb {P} ^{1},{\mathcal {O}}(d))} , i.e. degree- d {\displaystyle d} homogeneous polynomials in the two coordinates x , y {\displaystyle x,y} , there is a map s : P 1 → P 2 {\displaystyle s:\mathbb {P} ^{1}\to \mathbb {P} ^{2}} given by s ( [ x : y ] ) = [ s 1 ( [ x : y ] ) : s 2 ( [ x : y ] ) : s 3 ( [ x : y ] ) ] {\displaystyle s([x:y])=[s_{1}([x:y]):s_{2}([x:y]):s_{3}([x:y])]} defining a rational plane curve of degree d {\displaystyle d} . There is an associated moduli space M = M ¯ 0 , 0 ( P 2 , d ⋅ [ H ] ) {\displaystyle {\mathcal {M}}={\overline {\mathcal {M}}}_{0,0}(\mathbb {P} ^{2},d\cdot [H])} (where [ H ] {\displaystyle [H]} is the hyperplane class) parametrizing all such stable curves. A dimension count determines the moduli space's dimension: there are d + 1 {\displaystyle d+1} parameters for each section in Γ ( P 1 , O ( d ) ) {\displaystyle \Gamma (\mathbb {P} ^{1},{\mathcal {O}}(d))} , giving 3 d + 3 {\displaystyle 3d+3} parameters in total for the three sections. Then, since the sections are considered up to a projective quotient in P 2 {\displaystyle \mathbb {P} ^{2}} , there is 1 {\displaystyle 1} fewer parameter in M {\displaystyle {\mathcal {M}}} .
Furthermore, there is a three-dimensional group of automorphisms of P 1 {\displaystyle \mathbb {P} ^{1}} , hence M {\displaystyle {\mathcal {M}}} has dimension 3 d + 3 − 1 − 3 = 3 d − 1 {\displaystyle 3d+3-1-3=3d-1} . This moduli space can be used to count the number N d {\displaystyle N_{d}} of degree d {\displaystyle d} rational plane curves passing through 3 d − 1 {\displaystyle 3d-1} general points using Gromov–Witten theory. It is given by the recursive relation N d = ∑ d A + d B = d N d A N d B d A 2 d B ( d B ( 3 d − 4 3 d A − 2 ) − d A ( 3 d − 4 3 d A − 1 ) ) {\displaystyle N_{d}=\sum _{d_{A}+d_{B}=d}N_{d_{A}}N_{d_{B}}d_{A}^{2}d_{B}\left(d_{B}{\binom {3d-4}{3d_{A}-2}}-d_{A}{\binom {3d-4}{3d_{A}-1}}\right)} where N 1 = N 2 = 1 {\displaystyle N_{1}=N_{2}=1} . === Elliptic curves === An elliptic curve may be defined as any curve of genus one with a rational point: a common model is a nonsingular cubic curve, which suffices to model any genus one curve. In this model the distinguished point is commonly taken to be an inflection point at infinity; this amounts to requiring that the curve can be written in Tate–Weierstrass form, which in its projective version is y 2 z + a 1 x y z + a 3 y z 2 = x 3 + a 2 x 2 z + a 4 x z 2 + a 6 z 3 . {\displaystyle y^{2}z+a_{1}xyz+a_{3}yz^{2}=x^{3}+a_{2}x^{2}z+a_{4}xz^{2}+a_{6}z^{3}.} If the characteristic of the field is different from 2 and 3, then a linear change of coordinates allows putting a 1 = a 2 = a 3 = 0 , {\displaystyle a_{1}=a_{2}=a_{3}=0,} which gives the classical Weierstrass form y 2 = x 3 + p x + q . {\displaystyle y^{2}=x^{3}+px+q.} Elliptic curves carry the structure of an abelian group with the distinguished point as the identity of the group law. In a plane cubic model three points sum to zero in the group if and only if they are collinear.
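Returning to the count of rational plane curves: the recursion for Nd can be evaluated directly from the seed values N1 = N2 = 1. A short sketch in pure Python (the function name is illustrative):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    """Number of degree-d rational plane curves through 3d - 1 general
    points, via the recursion above with N(1) = N(2) = 1."""
    if d <= 2:
        return 1
    total = 0
    for dA in range(1, d):       # split d = dA + dB with dA, dB >= 1
        dB = d - dA
        total += (N(dA) * N(dB) * dA**2 * dB
                  * (dB * comb(3*d - 4, 3*dA - 2)
                     - dA * comb(3*d - 4, 3*dA - 1)))
    return total

# N(3) = 12: twelve rational (nodal) cubics pass through 8 general points.
assert N(3) == 12
assert N(4) == 620
```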
For an elliptic curve defined over the complex numbers the group is isomorphic to the additive group of the complex plane modulo the period lattice of the corresponding elliptic functions. The intersection of two quadric surfaces is, in general, a nonsingular curve of genus one and degree four, and thus an elliptic curve, if it has a rational point. In special cases, the intersection may either be a rational singular quartic or decompose into curves of smaller degrees which are not always distinct (either a cubic curve and a line, or two conics, or a conic and two lines, or four lines). === Curves of genus greater than one === Curves of genus greater than one differ markedly from both rational and elliptic curves. Such curves defined over the rational numbers, by Faltings's theorem, can have only a finite number of rational points, and they may be viewed as having a hyperbolic geometry structure. Examples are the hyperelliptic curves, the Klein quartic curve, and the Fermat curve xn + yn = zn when n is greater than three. Also projective plane curves in P 2 {\displaystyle \mathbb {P} ^{2}} and curves in P 1 × P 1 {\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}} provide many useful examples. ==== Projective plane curves ==== Plane curves C ⊂ P 2 {\displaystyle C\subset \mathbb {P} ^{2}} of degree k {\displaystyle k} , which can be constructed as the vanishing locus of a generic section s ∈ Γ ( P 2 , O ( k ) ) {\displaystyle s\in \Gamma (\mathbb {P} ^{2},{\mathcal {O}}(k))} , have genus ( k − 1 ) ( k − 2 ) 2 {\displaystyle {\frac {(k-1)(k-2)}{2}}} which can be computed using coherent sheaf cohomology. For k = 1, 2, 3, 4, 5 this formula gives genera 0, 0, 1, 3, 6, respectively. For example, the curve x 4 + y 4 + z 4 = 0 {\displaystyle x^{4}+y^{4}+z^{4}=0} defines a curve of genus 3 {\displaystyle 3} which is smooth since the partial derivatives 4 x 3 , 4 y 3 , 4 z 3 {\displaystyle 4x^{3},4y^{3},4z^{3}} have no common zeros on the curve.
A non-example of a generic section is the curve x ( x 2 + y 2 + z 2 ) = 0 {\displaystyle x(x^{2}+y^{2}+z^{2})=0} , which is the union of two rational curves C 1 ∪ C 2 {\displaystyle C_{1}\cup C_{2}} ; by Bézout's theorem, the two components intersect in at most 2 points, and here they intersect at exactly two points. Note C 1 {\displaystyle C_{1}} is given by the vanishing locus of x {\displaystyle x} and C 2 {\displaystyle C_{2}} is given by the vanishing locus of x 2 + y 2 + z 2 {\displaystyle x^{2}+y^{2}+z^{2}} . These intersection points can be found explicitly: a point lies on both components exactly when x = 0 {\displaystyle x=0} and y 2 + z 2 = 0 {\displaystyle y^{2}+z^{2}=0} . So the two solutions are the points [ 0 : y : z ] {\displaystyle [0:y:z]} such that y 2 + z 2 = 0 {\displaystyle y^{2}+z^{2}=0} , which are [ 0 : 1 : − − 1 ] {\displaystyle [0:1:-{\sqrt {-1}}]} and [ 0 : 1 : − 1 ] {\displaystyle [0:1:{\sqrt {-1}}]} . ==== Curves in product of projective lines ==== A curve C ⊂ P 1 × P 1 {\displaystyle C\subset \mathbb {P} ^{1}\times \mathbb {P} ^{1}} given by the vanishing locus of s ∈ Γ ( P 1 × P 1 , O ( a , b ) ) {\displaystyle s\in \Gamma (\mathbb {P} ^{1}\times \mathbb {P} ^{1},{\mathcal {O}}(a,b))} , for a , b ≥ 2 {\displaystyle a,b\geq 2} , has genus a b − a − b + 1 {\displaystyle ab-a-b+1} , which can be checked using coherent sheaf cohomology. If a = 2 {\displaystyle a=2} , then they define curves of genus 2 b − 2 − b + 1 = b − 1 {\displaystyle 2b-2-b+1=b-1} , hence a curve of any genus can be constructed as a curve in P 1 × P 1 {\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}} . For a = 3 {\displaystyle a=3} , the genus is 3 b − 3 − b + 1 = 2 b − 2 {\displaystyle 3b-3-b+1=2b-2} . == See also == === Classical algebraic geometry === === Modern algebraic geometry === === Geometry of Riemann surfaces === == Notes == == References == Brieskorn, Egbert; Knörrer, Horst (2013). Plane Algebraic Curves. Translated by Stillwell, John. Birkhäuser. ISBN 978-3-0348-5097-1. Chevalley, Claude (1951). Introduction to the Theory of Algebraic Functions of One Variable.
Mathematical surveys. Vol. 6. American Mathematical Society. ISBN 978-0-8218-1506-9. Coolidge, Julian L. (2004) [1931]. A Treatise on Algebraic Plane Curves. Dover. ISBN 978-0-486-49576-7. Farkas, H. M.; Kra, I. (2012) [1980]. Riemann Surfaces. Graduate Texts in Mathematics. Vol. 71. Springer. ISBN 978-1-4684-9930-8. Fulton, William (1989). Algebraic Curves: An Introduction to Algebraic Geometry. Mathematics lecture note series. Vol. 30 (3rd ed.). Addison-Wesley. ISBN 978-0-201-51010-2. Gibson, C.G. (1998). Elementary Geometry of Algebraic Curves: An Undergraduate Introduction. Cambridge University Press. ISBN 978-0-521-64641-3. Griffiths, Phillip A. (1985). Introduction to Algebraic Curves. Translations of Mathematical Monographs. Vol. 70 (3rd ed.). American Mathematical Society. ISBN 9780821845370. Hartshorne, Robin (2013) [1977]. Algebraic Geometry. Graduate Texts in Mathematics. Vol. 52. Springer. ISBN 978-1-4757-3849-0. Iitaka, Shigeru (2011) [1982]. Algebraic Geometry: An Introduction to Birational Geometry of Algebraic Varieties. Graduate Texts in Mathematics. Vol. 76. Springer New York. ISBN 978-1-4613-8121-1. Milnor, John (1968). Singular Points of Complex Hypersurfaces. Princeton University Press. ISBN 0-691-08065-8. Serre, Jean-Pierre (2012) [1988]. Algebraic Groups and Class Fields. Graduate Texts in Mathematics. Vol. 117. Springer. ISBN 978-1-4612-1035-1. Kötter, Ernst (1887). "Grundzüge einer rein geometrischen Theorie der algebraischen ebenen Curven" [Fundamentals of a purely geometrical theory of algebraic plane curves]. Transactions of the Royal Academy of Berlin. — gained the 1886 Academy prize
Wikipedia/Plane_algebraic_curve
In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel. The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. Then it uses this basis to reduce the size of the initial matrices of generators for the next larger basis: If Gprev is an already computed Gröbner basis (f2, …, fm) and we want to compute a Gröbner basis of (f1) + Gprev then we will construct matrices whose rows are m f1 such that m is a monomial not divisible by the leading term of an element of Gprev. This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero (the most time-consuming operation in algorithms that compute Gröbner bases). It is also very effective for a large number of non-regular sequences. == Implementations == The Faugère F4 algorithm is implemented in FGb, Faugère's own implementation, which includes interfaces for using it from C/C++ or Maple; in the Maple computer algebra system, as the option method=fgb of the function Groebner[gbasis]; in the Magma computer algebra system; and in the SageMath computer algebra system. Study versions of the Faugère F5 algorithm are implemented in the SINGULAR computer algebra system, in the SageMath computer algebra system, and in the SymPy Python package. == Applications == The previously intractable "cyclic 10" problem was solved by F5, as were a number of systems related to cryptography; for example HFE and C*. == References == Faugère, J.-C. (June 1999).
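As noted above, SymPy provides Gröbner-basis computation (a Buchberger implementation by default, with a signature-based variant, F5B, selectable by an option). A minimal sketch, assuming SymPy is installed:

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Ideal generated by x*y - 1 and x**2 + y**2 - 4; a lexicographic basis
# with x > y eliminates x, exposing the univariate relation satisfied by y.
gb = groebner([x*y - 1, x**2 + y**2 - 4], x, y, order='lex')

# Ideal membership can be tested by reduction against the basis:
assert gb.contains(x*y - 1)
assert gb.contains(y**4 - 4*y**2 + 1)

# The signature-based F5B variant yields the same reduced basis:
gb_f5b = groebner([x*y - 1, x**2 + y**2 - 4], x, y,
                  order='lex', method='f5b')
assert list(gb.exprs) == list(gb_f5b.exprs)
```

Reduced Gröbner bases are unique for a fixed monomial order, which is why the two methods must agree on the final basis even though they process S-polynomials very differently.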
"A new efficient algorithm for computing Gröbner bases (F4)" (PDF). Journal of Pure and Applied Algebra. 139 (1): 61–88. doi:10.1016/S0022-4049(99)00005-5. ISSN 0022-4049. Faugère, J.-C. (July 2002). "A new efficient algorithm for computing Gröbner bases without reduction to zero ( F 5 )". Proceedings of the 2002 international symposium on Symbolic and algebraic computation (PDF). ACM Press. pp. 75–83. CiteSeerX 10.1.1.188.651. doi:10.1145/780506.780516. ISBN 978-1-58113-484-1. S2CID 15833106. Till Stegers Faugère's F5 Algorithm Revisited (alternative link). Diplom-Mathematiker Thesis, advisor Johannes Buchmann, Technische Universität Darmstadt, September 2005 (revised April 27, 2007). Many references, including links to available implementations. == External links == Faugère's home page (includes pdf reprints of additional papers) An introduction to the F4 algorithm.
Wikipedia/Faugère's_F4_and_F5_algorithms
In mathematics, birational geometry is a field of algebraic geometry in which the goal is to determine when two algebraic varieties are isomorphic outside lower-dimensional subsets. This amounts to studying mappings that are given by rational functions rather than polynomials; the map may fail to be defined where the rational functions have poles. == Birational maps == === Rational maps === A rational map from one variety (understood to be irreducible) X {\displaystyle X} to another variety Y {\displaystyle Y} , written as a dashed arrow X ⇢Y, is defined as a morphism from a nonempty open subset U ⊂ X {\displaystyle U\subset X} to Y {\displaystyle Y} . By definition of the Zariski topology used in algebraic geometry, a nonempty open subset U {\displaystyle U} is always dense in X {\displaystyle X} , in fact the complement of a lower-dimensional subset. Concretely, a rational map can be written in coordinates using rational functions. === Birational maps === A birational map from X to Y is a rational map f : X ⇢ Y such that there is a rational map Y ⇢ X inverse to f. A birational map induces an isomorphism from a nonempty open subset of X to a nonempty open subset of Y, and vice versa: an isomorphism between nonempty open subsets of X, Y by definition gives a birational map f : X ⇢ Y. In this case, X and Y are said to be birational, or birationally equivalent. In algebraic terms, two varieties over a field k are birational if and only if their function fields are isomorphic as extension fields of k. A special case is a birational morphism f : X → Y, meaning a morphism which is birational. That is, f is defined everywhere, but its inverse may not be. Typically, this happens because a birational morphism contracts some subvarieties of X to points in Y. === Birational equivalence and rationality === A variety X is said to be rational if it is birational to affine space (or equivalently, to projective space) of some dimension. 
Rationality is a very natural property: it means that X minus some lower-dimensional subset can be identified with affine space minus some lower-dimensional subset. ==== Birational equivalence of a plane conic ==== For example, the circle X {\displaystyle X} with equation x 2 + y 2 − 1 = 0 {\displaystyle x^{2}+y^{2}-1=0} in the affine plane is a rational curve, because there is a rational map f : A 1 {\displaystyle \mathbb {A} ^{1}} ⇢ X given by f ( t ) = ( 2 t 1 + t 2 , 1 − t 2 1 + t 2 ) , {\displaystyle f(t)=\left({\frac {2t}{1+t^{2}}},{\frac {1-t^{2}}{1+t^{2}}}\right),} which has a rational inverse g: X ⇢ A 1 {\displaystyle \mathbb {A} ^{1}} given by g ( x , y ) = 1 − y x . {\displaystyle g(x,y)={\frac {1-y}{x}}.} Applying the map f with t a rational number gives a systematic construction of Pythagorean triples. The rational map f {\displaystyle f} is not defined on the locus where 1 + t 2 = 0 {\displaystyle 1+t^{2}=0} . So, on the complex affine line A C 1 {\displaystyle \mathbb {A} _{\mathbb {C} }^{1}} , f {\displaystyle f} is a morphism on the open subset U = A C 1 − { i , − i } {\displaystyle U=\mathbb {A} _{\mathbb {C} }^{1}-\{i,-i\}} , f : U → X {\displaystyle f:U\to X} . Likewise, the rational map g : X ⇢ A 1 {\displaystyle \mathbb {A} ^{1}} is not defined at the point (0,−1) in X {\displaystyle X} . ==== Birational equivalence of smooth quadrics and Pn ==== More generally, a smooth quadric (degree 2) hypersurface X of any dimension n is rational, by stereographic projection. (For X a quadric over a field k, X must be assumed to have a k-rational point; this is automatic if k is algebraically closed.) To define stereographic projection, let p be a point in X. Then a birational map from X to the projective space P n {\displaystyle \mathbb {P} ^{n}} of lines through p is given by sending a point q in X to the line through p and q. 
This is a birational equivalence but not an isomorphism of varieties, because it fails to be defined where q = p (and the inverse map fails to be defined at those lines through p which are contained in X). ===== Birational equivalence of quadric surface ===== The Segre embedding gives an embedding P 1 × P 1 → P 3 {\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}\to \mathbb {P} ^{3}} given by ( [ x , y ] , [ z , w ] ) ↦ [ x z , x w , y z , y w ] . {\displaystyle ([x,y],[z,w])\mapsto [xz,xw,yz,yw].} The image is the quadric surface x 0 x 3 = x 1 x 2 {\displaystyle x_{0}x_{3}=x_{1}x_{2}} in P 3 {\displaystyle \mathbb {P} ^{3}} . That gives another proof that this quadric surface is rational, since P 1 × P 1 {\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}} is obviously rational, having an open subset isomorphic to A 2 {\displaystyle \mathbb {A} ^{2}} . == Minimal models and resolution of singularities == Every algebraic variety is birational to a projective variety (Chow's lemma). So, for the purposes of birational classification, it is enough to work only with projective varieties, and this is usually the most convenient setting. Much deeper is Hironaka's 1964 theorem on resolution of singularities: over a field of characteristic 0 (such as the complex numbers), every variety is birational to a smooth projective variety. Given that, it is enough to classify smooth projective varieties up to birational equivalence. In dimension 1, if two smooth projective curves are birational, then they are isomorphic. But that fails in dimension at least 2, by the blowing up construction. By blowing up, every smooth projective variety of dimension at least 2 is birational to infinitely many "bigger" varieties, for example with bigger Betti numbers. This leads to the idea of minimal models: is there a unique simplest variety in each birational equivalence class? 
The modern definition is that a projective variety X is minimal if the canonical line bundle KX has nonnegative degree on every curve in X; in other words, KX is nef. It is easy to check that blown-up varieties are never minimal. This notion works perfectly for algebraic surfaces (varieties of dimension 2). In modern terms, one central result of the Italian school of algebraic geometry from 1890–1910, part of the classification of surfaces, is that every surface X is birational either to a product P 1 × C {\displaystyle \mathbb {P} ^{1}\times C} for some curve C or to a minimal surface Y. The two cases are mutually exclusive, and Y is unique if it exists. When Y exists, it is called the minimal model of X. == Birational invariants == At first, it is not clear how to show that there are any algebraic varieties which are not rational. In order to prove this, some birational invariants of algebraic varieties are needed. A birational invariant is any kind of number, ring, etc., which is the same, or isomorphic, for all varieties that are birationally equivalent. === Plurigenera === One useful set of birational invariants is the plurigenera. The canonical bundle of a smooth variety X of dimension n means the line bundle of n-forms KX = Ωn, which is the nth exterior power of the cotangent bundle of X. For an integer d, the dth tensor power of KX is again a line bundle. For d ≥ 0, the vector space of global sections H0(X, KXd) has the remarkable property that a birational map f : X ⇢ Y between smooth projective varieties induces an isomorphism H0(X, KXd) ≅ H0(Y, KYd). For d ≥ 0, define the dth plurigenus Pd as the dimension of the vector space H0(X, KXd); then the plurigenera are birational invariants for smooth projective varieties. In particular, if any plurigenus Pd with d > 0 is not zero, then X is not rational. === Kodaira dimension === A fundamental birational invariant is the Kodaira dimension, which measures the growth of the plurigenera Pd as d goes to infinity.
The Kodaira dimension divides all varieties of dimension n into n + 2 types, with Kodaira dimension −∞, 0, 1, ..., or n. This is a measure of the complexity of a variety, with projective space having Kodaira dimension −∞. The most complicated varieties are those with Kodaira dimension equal to their dimension n, called varieties of general type. === Summands of ⊗kΩ1 and some Hodge numbers === More generally, for any natural summand E ( Ω 1 ) {\displaystyle E(\Omega ^{1})} of the k-th tensor power ⨂ k Ω 1 {\displaystyle \bigotimes ^{k}\Omega ^{1}} of the cotangent bundle Ω1 with k ≥ 0, the vector space of global sections H0(X, E(Ω1)) is a birational invariant for smooth projective varieties. In particular, the Hodge numbers h p , 0 = dim ⁡ H 0 ( X , Ω p ) {\displaystyle h^{p,0}=\dim H^{0}(X,\Omega ^{p})} are birational invariants of X. (Most other Hodge numbers hp,q are not birational invariants, as shown by blowing up.) === Fundamental group of smooth projective varieties === The fundamental group π1(X) is a birational invariant for smooth complex projective varieties. The "Weak factorization theorem", proved by Abramovich, Karu, Matsuki, and Włodarczyk (2002), says that any birational map between two smooth complex projective varieties can be decomposed into finitely many blow-ups or blow-downs of smooth subvarieties. This is important to know, but it can still be very hard to determine whether two smooth projective varieties are birational. == Minimal models in higher dimensions == A projective variety X is called minimal if the canonical bundle KX is nef. For X of dimension 2, it is enough to consider smooth varieties in this definition. In dimensions at least 3, minimal varieties must be allowed to have certain mild singularities, for which KX is still well-behaved; these are called terminal singularities. With this definition, the minimal model conjecture asserts that every variety X is either covered by rational curves or birational to a minimal variety Y.
When it exists, Y is called a minimal model of X. Minimal models are not unique in dimensions at least 3, but any two minimal varieties which are birational are very close. For example, they are isomorphic outside subsets of codimension at least 2, and more precisely they are related by a sequence of flops. So the minimal model conjecture would give strong information about the birational classification of algebraic varieties. The conjecture was proved in dimension 3 by Mori. There has been great progress in higher dimensions, although the general problem remains open. In particular, Birkar, Cascini, Hacon, and McKernan (2010) proved that every variety of general type over a field of characteristic zero has a minimal model. == Uniruled varieties == A variety is called uniruled if it is covered by rational curves. A uniruled variety does not have a minimal model, but there is a good substitute: Birkar, Cascini, Hacon, and McKernan showed that every uniruled variety over a field of characteristic zero is birational to a Fano fiber space. This leads to the problem of the birational classification of Fano fiber spaces and (as the most interesting special case) Fano varieties. By definition, a projective variety X is Fano if the anticanonical bundle K X ∗ {\displaystyle K_{X}^{*}} is ample. Fano varieties can be considered the algebraic varieties which are most similar to projective space. In dimension 2, every Fano variety (known as a Del Pezzo surface) over an algebraically closed field is rational. A major discovery in the 1970s was that starting in dimension 3, there are many Fano varieties which are not rational. In particular, smooth cubic 3-folds are not rational by Clemens–Griffiths (1972), and smooth quartic 3-folds are not rational by Iskovskikh–Manin (1971). Nonetheless, the problem of determining exactly which Fano varieties are rational is far from solved. 
For example, it is not known whether there is any smooth cubic hypersurface in P n + 1 {\displaystyle \mathbb {P} ^{n+1}} with n ≥ 4 which is not rational. == Birational automorphism groups == Algebraic varieties differ widely in how many birational automorphisms they have. Every variety of general type is extremely rigid, in the sense that its birational automorphism group is finite. At the other extreme, the birational automorphism group of projective space P n {\displaystyle \mathbb {P} ^{n}} over a field k, known as the Cremona group Crn(k), is large (in a sense, infinite-dimensional) for n ≥ 2. For n = 2, the complex Cremona group C r 2 ( C ) {\displaystyle Cr_{2}(\mathbb {C} )} is generated by the "quadratic transformation" [x,y,z] ↦ [1/x, 1/y, 1/z] together with the group P G L ( 3 , C ) {\displaystyle PGL(3,\mathbb {C} )} of automorphisms of P 2 , {\displaystyle \mathbb {P} ^{2},} by Max Noether and Castelnuovo. By contrast, the Cremona group in dimensions n ≥ 3 is very much a mystery: no explicit set of generators is known. Iskovskikh–Manin (1971) showed that the birational automorphism group of a smooth quartic 3-fold is equal to its automorphism group, which is finite. In this sense, quartic 3-folds are far from being rational, since the birational automorphism group of a rational variety is enormous. This phenomenon of "birational rigidity" has since been discovered in many other Fano fiber spaces. == Applications == Birational geometry has found applications in other areas of geometry, but especially in traditional problems in algebraic geometry. Famously the minimal model program was used to construct moduli spaces of varieties of general type by János Kollár and Nicholas Shepherd-Barron, now known as KSB moduli spaces. 
Birational geometry has recently found important applications in the study of K-stability of Fano varieties through general existence results for Kähler–Einstein metrics, in the development of explicit invariants of Fano varieties to test K-stability by computing on birational models, and in the construction of moduli spaces of Fano varieties. Important results in birational geometry such as Birkar's proof of boundedness of Fano varieties have been used to prove existence results for moduli spaces. == See also == Abundance conjecture == Citations == === Notes === == References ==
Wikipedia/Birational_transformation
Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys to provide equivalent security, compared to cryptosystems based on modular exponentiation in Galois fields, such as the RSA cryptosystem and ElGamal cryptosystem. Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization. == History == The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms entered wide use in 2004 to 2005. In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4 has ten recommended finite fields: Five prime fields F p {\displaystyle \mathbb {F} _{p}} for certain primes p of sizes 192, 224, 256, 384, and 521 bits. For each of the prime fields, one elliptic curve is recommended. Five binary fields F 2 m {\displaystyle \mathbb {F} _{2^{m}}} for m equal 163, 233, 283, 409, and 571. For each of the binary fields, one elliptic curve and one Koblitz curve was selected. The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency. At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information. 
National Institute of Standards and Technology (NIST) has endorsed elliptic curve cryptography in its Suite B set of recommended algorithms, specifically elliptic-curve Diffie–Hellman (ECDH) for key exchange and Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signature. The NSA allows their use for protecting information classified up to top secret with 384-bit keys. Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption. Elliptic curve cryptography is used successfully in numerous popular protocols, such as Transport Layer Security and Bitcoin. === Security concerns === In 2013, The New York Times stated that Dual Elliptic Curve Deterministic Random Bit Generation (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups. Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns about quantum computing attacks on ECC. === Patents === While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However, RSA Laboratories and Daniel J. 
Bernstein have argued that the US government elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents. == Elliptic curve theory == For the purposes of this article, an elliptic curve is a plane curve over a finite field (rather than the real numbers) which consists of the points satisfying the equation y 2 = x 3 + a x + b , {\displaystyle y^{2}=x^{3}+ax+b,} along with a distinguished point at infinity, denoted ∞. The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation would be somewhat more complicated. This set of points, together with the group operation of elliptic curves, is an abelian group, with the point at infinity as an identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety: Div 0 ⁡ ( E ) → Pic 0 ⁡ ( E ) ≃ E . {\displaystyle \operatorname {Div} ^{0}(E)\to \operatorname {Pic} ^{0}(E)\simeq E.} === Application to cryptography === Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems, such as RSA's 1983 patent, based their security on the assumption that it is difficult to factor a large integer composed of two or more large prime factors which are far apart. For later elliptic-curve-based protocols, the base assumption is that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible (the computational Diffie–Hellman assumption): this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original point and product point. 
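The group law and the point multiplication underlying the ECDLP can be sketched in a few lines. The parameters below are illustrative toy values chosen for this sketch (far too small for any real security), not a standardized curve:

```python
# Toy Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustrative parameters,
# far too small for real security).
p, a, b = 97, 2, 3
O = None  # the point at infinity, the group identity

def add(P, Q):
    """Group law on the curve; O is the identity element."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                              # P + (-P) = O (vertical line)
    if P == Q:                                # tangent slope: (3x^2 + a) / (2y)
        m = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p
    else:                                     # chord slope: (y2 - y1) / (x2 - x1)
        m = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

P = (3, 6)   # a point on the curve: 6^2 = 3^3 + 2*3 + 3 = 36 (mod 97)
```

Computing mul(k, P) is fast even for enormous k; recovering k from P and mul(k, P) is the elliptic curve discrete logarithm problem.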
The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem. The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smaller key size, reducing storage and transmission requirements. For example, a 256-bit elliptic curve public key should provide comparable security to a 3072-bit RSA public key. === Cryptographic schemes === Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the group ( Z p ) × {\displaystyle (\mathbb {Z} _{p})^{\times }} with an elliptic curve: The Elliptic-curve Diffie–Hellman (ECDH) key agreement scheme is based on the Diffie–Hellman scheme, The Elliptic Curve Integrated Encryption Scheme (ECIES), also known as Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme, The Elliptic Curve Digital Signature Algorithm (ECDSA) is based on the Digital Signature Algorithm, The deformation scheme using Harrison's p-adic Manhattan metric, The Edwards-curve Digital Signature Algorithm (EdDSA) is based on Schnorr signature and uses twisted Edwards curves, The ECMQV key agreement scheme is based on the MQV key agreement scheme, The ECQV implicit certificate scheme. == Implementation == Some common implementation considerations include: === Domain parameters === To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The size of the field used is typically either prime (and denoted as p) or is a power of two ( 2 m {\displaystyle 2^{m}} ); the latter case is called the binary case, and this case also necessitates the choice of an auxiliary irreducible binary polynomial f, the reduction polynomial defining the field. Thus the field is defined by p in the prime case and the pair of m and f in the binary case. The elliptic curve is defined by the constants a and b used in its defining equation.
Finally, the cyclic subgroup is defined by its generator (a.k.a. base point) G. For cryptographic application, the order of G, that is the smallest positive number n such that n G = O {\displaystyle nG={\mathcal {O}}} (the point at infinity of the curve, and the identity element), is normally prime. Since n is the size of a subgroup of E ( F p ) {\displaystyle E(\mathbb {F} _{p})} it follows from Lagrange's theorem that the number h = 1 n | E ( F p ) | {\displaystyle h={\frac {1}{n}}|E(\mathbb {F} _{p})|} is an integer. In cryptographic applications, this number h, called the cofactor, must be small ( h ≤ 4 {\displaystyle h\leq 4} ) and, preferably, h = 1 {\displaystyle h=1} . To summarize: in the prime case, the domain parameters are ( p , a , b , G , n , h ) {\displaystyle (p,a,b,G,n,h)} ; in the binary case, they are ( m , f , a , b , G , n , h ) {\displaystyle (m,f,a,b,G,n,h)} . Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use. The generation of domain parameters is not usually done by each participant because this involves computing the number of points on a curve which is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the unique object identifier defined in the standard documents: NIST, Recommended Elliptic Curves for Government Use SECG, SEC 2: Recommended Elliptic Curve Domain Parameters ECC Brainpool (RFC 5639), ECC Brainpool Standard Curves and Curve Generation SECG test vectors are also available. NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name. 
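Because point counting is the expensive step in parameter generation, it is worth seeing on a toy scale what is being counted. The sketch below brute-forces |E(F_p)| for an illustrative tiny curve (assumed toy parameters, nothing like a standard curve); this O(p) scan is exactly the computation that becomes infeasible at real field sizes, where Schoof-type algorithms are used instead:

```python
# Brute-force point count on the toy curve y^2 = x^3 + 2*x + 3 over F_97.
p, a, b = 97, 2, 3

square_roots = {}                  # t -> list of y with y^2 = t (mod p)
for y in range(p):
    square_roots.setdefault(y * y % p, []).append(y)

count = 1                          # start at 1 for the point at infinity
for x in range(p):
    count += len(square_roots.get((x**3 + a * x + b) % p, []))

# Hasse's bound guarantees |count - (p + 1)| <= 2*sqrt(p).
```

With a prime-order subgroup n chosen, the cofactor is then h = count // n; as noted above, standards require h ≤ 4.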
If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with appropriate (i.e., near prime) number of points using one of the following methods: Select a random curve and use a general point-counting algorithm, for example, Schoof's algorithm or the Schoof–Elkies–Atkin algorithm, Select a random curve from a family which allows easy calculation of the number of points (e.g., Koblitz curves), or Select the number of points and generate a curve with this number of points using the complex multiplication technique. Several classes of curves are weak and should be avoided: Curves over F 2 m {\displaystyle \mathbb {F} _{2^{m}}} with non-prime m are vulnerable to Weil descent attacks. Curves such that n divides p B − 1 {\displaystyle p^{B}-1} (where p is the characteristic of the field: q for a prime field, or 2 {\displaystyle 2} for a binary field) for sufficiently small B are vulnerable to Menezes–Okamoto–Vanstone (MOV) attack which applies usual discrete logarithm problem (DLP) in a small-degree extension field of F p {\displaystyle \mathbb {F} _{p}} to solve ECDLP. The bound B should be chosen so that discrete logarithms in the field F p B {\displaystyle \mathbb {F} _{p^{B}}} are at least as difficult to compute as discrete logs on the elliptic curve E ( F q ) {\displaystyle E(\mathbb {F} _{q})} . Curves such that | E ( F q ) | = q {\displaystyle |E(\mathbb {F} _{q})|=q} are vulnerable to the attack that maps the points on the curve to the additive group of F q {\displaystyle \mathbb {F} _{q}} . === Key sizes === Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.), need O ( n ) {\displaystyle O({\sqrt {n}})} steps, it follows that the size of the underlying field should be roughly twice the security parameter. 
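The MOV condition among the weak classes above is easy to test directly: the relevant quantity is the embedding degree, the smallest B with n dividing p^B − 1. A sketch (the function and the toy inputs in its tests are illustrative assumptions, not real parameters):

```python
# Embedding degree: the smallest B >= 1 with n dividing p^B - 1. The MOV attack
# transfers the ECDLP into the multiplicative group of F_{p^B}, so a small
# embedding degree is a weakness.
def embedding_degree(p, n, limit=1000):
    assert n > 1
    acc = p % n
    for B in range(1, limit + 1):
        if acc == 1:
            return B                # n divides p^B - 1
        acc = acc * p % n
    return None  # exceeds `limit`: the MOV transfer gives no useful attack

# Example of the danger: if n | p + 1 (typical of supersingular curves) then
# p ≡ -1 (mod n), so p^2 ≡ 1 (mod n) and the embedding degree is at most 2.
```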
For example, for 128-bit security one needs a curve over F q {\displaystyle \mathbb {F} _{q}} , where q ≈ 2 256 {\displaystyle q\approx 2^{256}} . This can be contrasted with finite-field cryptography (e.g., DSA) which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA) which requires a 3072-bit value of n, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited. The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously. The binary field case was broken in April 2004 using 2600 computers over 17 months. A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA. === Projective coordinates === A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in F q {\displaystyle \mathbb {F} _{q}} but also an inversion operation. The inversion (for given x ∈ F q {\displaystyle x\in \mathbb {F} _{q}} find y ∈ F q {\displaystyle y\in \mathbb {F} _{q}} such that x y = 1 {\displaystyle xy=1} ) is one to two orders of magnitude slower than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. 
Several such systems were proposed: in the projective system each point is represented by three coordinates ( X , Y , Z ) {\displaystyle (X,Y,Z)} using the following relation: x = X Z {\displaystyle x={\frac {X}{Z}}} , y = Y Z {\displaystyle y={\frac {Y}{Z}}} ; in the Jacobian system a point is also represented with three coordinates ( X , Y , Z ) {\displaystyle (X,Y,Z)} , but a different relation is used: x = X Z 2 {\displaystyle x={\frac {X}{Z^{2}}}} , y = Y Z 3 {\displaystyle y={\frac {Y}{Z^{3}}}} ; in the López–Dahab system the relation is x = X Z {\displaystyle x={\frac {X}{Z}}} , y = Y Z 2 {\displaystyle y={\frac {Y}{Z^{2}}}} ; in the modified Jacobian system the same relations are used but four coordinates are stored and used for calculations ( X , Y , Z , a Z 4 ) {\displaystyle (X,Y,Z,aZ^{4})} ; and in the Chudnovsky Jacobian system five coordinates are used ( X , Y , Z , Z 2 , Z 3 ) {\displaystyle (X,Y,Z,Z^{2},Z^{3})} . Note that there may be different naming conventions, for example, IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used. === Fast reduction (NIST curves) === Reduction modulo p (which is needed for addition and multiplication) can be executed much faster if the prime p is a pseudo-Mersenne prime, that is p ≈ 2 d {\displaystyle p\approx 2^{d}} ; for example, p = 2 521 − 1 {\displaystyle p=2^{521}-1} or p = 2 256 − 2 32 − 2 9 − 2 8 − 2 7 − 2 6 − 2 4 − 1. {\displaystyle p=2^{256}-2^{32}-2^{9}-2^{8}-2^{7}-2^{6}-2^{4}-1.} Compared to Barrett reduction, there can be an order of magnitude speed-up. The speed-up here is a practical rather than theoretical one, and derives from the fact that the moduli of numbers against numbers near powers of two can be performed efficiently by computers operating on binary numbers with bitwise operations. 
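For the simplest case, the Mersenne prime p = 2^521 − 1 used by the largest NIST curve, the folding trick can be sketched directly; this is a minimal illustration of the idea, not production reduction code:

```python
# Folding reduction modulo the Mersenne prime p = 2^521 - 1 (the prime of NIST
# P-521): since 2^521 ≡ 1 (mod p), the high bits of a product can be folded
# onto the low bits with a shift and a mask -- no division needed.
P521 = (1 << 521) - 1

def reduce_p521(x):
    while x >> 521:
        x = (x & P521) + (x >> 521)   # fold: low 521 bits plus high bits
    if x >= P521:                      # canonicalize the representative
        x -= P521
    return x
```

The other NIST and SECG primes are sums of a few powers of two rather than a single one, so their reductions use analogous multi-term foldings.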
The curves over F p {\displaystyle \mathbb {F} _{p}} with pseudo-Mersenne p are recommended by NIST. Yet another advantage of the NIST curves is that they use a = −3, which improves addition in Jacobian coordinates. According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast. == Security == === Side-channel attacks === Unlike most other DLP systems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P = Q) and general addition (P ≠ Q) depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods (note that this does not increase computation time). Alternatively one can use an Edwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation. Another concern for ECC-systems is the danger of fault attacks, especially when running on smart cards. === Backdoors === Cryptographic experts have expressed concerns that the National Security Agency has inserted a kleptographic backdoor into at least one elliptic curve-based pseudo random generator. Internal memos leaked by former NSA contractor Edward Snowden suggest that the NSA put a backdoor in the Dual EC DRBG standard. One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output. The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor. 
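Tying back to the side-channel discussion above: on an Edwards curve one formula covers both doubling (P = Q) and general addition (P ≠ Q), so the operation sequence does not betray which case occurred. A toy sketch with illustrative assumed parameters (d = 2 is a non-square mod 13, which makes the addition law complete):

```python
# Unified addition on a toy Edwards curve x^2 + y^2 = 1 + d*x^2*y^2 over F_13.
p, d = 13, 2
IDENT = (0, 1)   # the identity element of the group

def ed_add(P, Q):
    """One formula for both addition and doubling (unified/complete law)."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, p - 2, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, p - 2, p) % p
    return (x3, y3)
```

For example, (1, 0) is a point of order 4 on this curve, and repeatedly feeding it through the same ed_add call walks back to the identity, with no separate doubling branch for an attacker's power trace to distinguish.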
=== Quantum computing attack === Shor's algorithm can be used to break elliptic curve cryptography by computing discrete logarithms on a hypothetical quantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates. For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security). In comparison, using Shor's algorithm to break the RSA algorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away. Supersingular Isogeny Diffie–Hellman Key Exchange was claimed to provide a post-quantum secure form of elliptic curve cryptography by using isogenies to implement Diffie–Hellman key exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems. However, new classical attacks undermined the security of this protocol. In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." === Invalid curve attack === When ECC is used in virtual machines, an attacker may use an invalid curve to get a complete ECDH private key.
== Alternative representations == Alternative representations of elliptic curves include: Hessian curves Edwards curves Twisted curves Twisted Hessian curves Twisted Edwards curve Doubling-oriented Doche–Icart–Kohel curve Tripling-oriented Doche–Icart–Kohel curve Jacobian curve Montgomery curves == See also == == Notes == == References == Jacques Vélu, Courbes elliptiques (...), Société Mathématique de France, 57, 1-152, Paris, 1978. == External links == Elliptic Curves at Stanford University Interactive introduction to elliptic curves and elliptic curve cryptography with Sage by Maike Massierer and the CrypTool team Media related to Elliptic curve at Wikimedia Commons
Wikipedia/Elliptic-curve_cryptography
In mathematics, particularly in homotopy theory, a model category is a category with distinguished classes of morphisms ('arrows') called 'weak equivalences', 'fibrations' and 'cofibrations' satisfying certain axioms relating them. These abstract from the category of topological spaces or of chain complexes (derived category theory). The concept was introduced by Daniel G. Quillen (1967). In recent decades, the language of model categories has been used in some parts of algebraic K-theory and algebraic geometry, where homotopy-theoretic approaches led to deep results. == Motivation == Model categories can provide a natural setting for homotopy theory: the category of topological spaces is a model category, with the homotopy corresponding to the usual theory. Similarly, objects that are thought of as spaces often admit a model category structure, such as the category of simplicial sets. Another model category is the category of chain complexes of R-modules for a commutative ring R. Homotopy theory in this context is homological algebra. Homology can then be viewed as a type of homotopy, allowing generalizations of homology to other objects, such as groups and R-algebras, one of the first major applications of the theory. Because of the above example regarding homology, the study of closed model categories is sometimes thought of as homotopical algebra. == Formal definition == The definition given initially by Quillen was that of a closed model category, the assumptions of which seemed strong at the time, motivating others to weaken some of the assumptions to define a model category. In practice the distinction has not proven significant and most recent authors (e.g., Mark Hovey and Philip Hirschhorn) work with closed model categories and simply drop the adjective 'closed'. 
The definition has been separated to that of a model structure on a category and then further categorical conditions on that category, the necessity of which may seem unmotivated at first but becomes important later. The following definition follows that given by Hovey. A model structure on a category C consists of three distinguished classes of morphisms (equivalently subcategories): weak equivalences, fibrations, and cofibrations, and two functorial factorizations ( α , β ) {\displaystyle (\alpha ,\beta )} and ( γ , δ ) {\displaystyle (\gamma ,\delta )} subject to the following axioms. A fibration that is also a weak equivalence is called an acyclic (or trivial) fibration and a cofibration that is also a weak equivalence is called an acyclic (or trivial) cofibration (or sometimes called an anodyne morphism). Axioms Retracts: if g is a morphism belonging to one of the distinguished classes, and f is a retract of g (as objects in the arrow category C 2 {\displaystyle C^{2}} , where 2 is the 2-element ordered set), then f belongs to the same distinguished class. Explicitly, the requirement that f is a retract of g means that there exist i, j, r, and s, such that the following diagram commutes: 2 of 3: if f and g are maps in C such that gf is defined and any two of these are weak equivalences then so is the third. Lifting: acyclic cofibrations have the left lifting property with respect to fibrations, and cofibrations have the left lifting property with respect to acyclic fibrations. Explicitly, if the outer square of the following diagram commutes, where i is a cofibration and p is a fibration, and i or p is acyclic, then there exists h completing the diagram. Factorization: every morphism f in C can be written as p ∘ i {\displaystyle p\circ i} for a fibration p and an acyclic cofibration i; every morphism f in C can be written as p ∘ i {\displaystyle p\circ i} for an acyclic fibration p and a cofibration i. 
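The two commuting diagrams referred to in the axioms are lost in this text-only rendering; written out equationally (a sketch following the standard definition, as in Hovey), they say:

```latex
% Retract: f\colon A\to B is a retract of g\colon C\to D when there exist
% i\colon A\to C,\; r\colon C\to A,\; j\colon B\to D,\; s\colon D\to B with
r \circ i = \mathrm{id}_A, \qquad s \circ j = \mathrm{id}_B, \qquad
g \circ i = j \circ f, \qquad s \circ g = f \circ r.

% Lifting: given i\colon A\to B a cofibration, p\colon X\to Y a fibration,
% and maps u\colon A\to X,\; v\colon B\to Y with p \circ u = v \circ i,
% if i or p is acyclic there exists h\colon B\to X such that
h \circ i = u, \qquad p \circ h = v.
```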
A model category is a category that has a model structure and all (small) limits and colimits, i.e., a complete and cocomplete category with a model structure. === Definition via weak factorization systems === The above definition can be succinctly phrased by the following equivalent definition: a model category is a category C and three classes of (so-called) weak equivalences W, fibrations F and cofibrations C so that C has all limits and colimits, ( C ∩ W , F ) {\displaystyle (C\cap W,F)} is a weak factorization system, ( C , F ∩ W ) {\displaystyle (C,F\cap W)} is a weak factorization system W {\displaystyle W} satisfies the 2 of 3 property. === First consequences of the definition === The axioms imply that any two of the three classes of maps determine the third (e.g., cofibrations and weak equivalences determine fibrations). Also, the definition is self-dual: if C is a model category, then its opposite category C o p {\displaystyle {\mathcal {C}}^{op}} also admits a model structure so that weak equivalences correspond to their opposites, fibrations opposites of cofibrations and cofibrations opposites of fibrations. == Examples == === Topological spaces === The category of topological spaces, Top, admits a standard model category structure with the usual (Serre) fibrations and with weak equivalences as weak homotopy equivalences. The cofibrations are not the usual notion found here, but rather the narrower class of maps that have the left lifting property with respect to the acyclic Serre fibrations. Equivalently, they are the retracts of the relative cell complexes, as explained for example in Hovey's Model Categories. This structure is not unique; in general there can be many model category structures on a given category. For the category of topological spaces, another such structure is given by Hurewicz fibrations and standard cofibrations, and the weak equivalences are the (strong) homotopy equivalences. 
=== Chain complexes === The category of (nonnegatively graded) chain complexes of R-modules carries at least two model structures, which both feature prominently in homological algebra: weak equivalences are maps that induce isomorphisms in homology; cofibrations are maps that are monomorphisms in each degree with projective cokernel; and fibrations are maps that are epimorphisms in each nonzero degree or weak equivalences are maps that induce isomorphisms in homology; fibrations are maps that are epimorphisms in each degree with injective kernel; and cofibrations are maps that are monomorphisms in each nonzero degree. This explains why Ext-groups of R-modules can be computed by either resolving the source projectively or the target injectively. These are cofibrant or fibrant replacements in the respective model structures. The category of arbitrary chain-complexes of R-modules has a model structure that is defined by weak equivalences are chain homotopy equivalences of chain-complexes; cofibrations are monomorphisms that are split as morphisms of underlying R-modules; and fibrations are epimorphisms that are split as morphisms of underlying R-modules. === Further examples === Other examples of categories admitting model structures include the category of all small categories, the category of simplicial sets or simplicial presheaves on any small Grothendieck site, the category of topological spectra, and the categories of simplicial spectra or presheaves of simplicial spectra on a small Grothendieck site. Simplicial objects in a category are a frequent source of model categories; for instance, simplicial commutative rings or simplicial R-modules admit natural model structures. This follows because there is an adjunction between simplicial sets and simplicial commutative rings (given by the forgetful and free functors), and in nice cases one can lift model structures under an adjunction. 
A simplicial model category is a simplicial category with a model structure that is compatible with the simplicial structure. Given any category C and a model category M, under certain extra hypotheses the category of functors Fun (C, M) (also called C-diagrams in M) is also a model category. In fact, there are always two candidates for distinct model structures: in one, the so-called projective model structure, fibrations and weak equivalences are those maps of functors which are fibrations and weak equivalences when evaluated at each object of C. Dually, the injective model structure is similar with cofibrations and weak equivalences instead. In both cases the third class of morphisms is given by a lifting condition (see below). In some cases, when the category C is a Reedy category, there is a third model structure lying in between the projective and injective. The process of forcing certain maps to become weak equivalences in a new model category structure on the same underlying category is known as Bousfield localization. For example, the category of simplicial sheaves can be obtained as a Bousfield localization of the model category of simplicial presheaves. Denis-Charles Cisinski has developed a general theory of model structures on presheaf categories (generalizing simplicial sets, which are presheaves on the simplex category). If C is a model category, then so is the category Pro(C) of pro-objects in C. However, a model structure on Pro(C) can also be constructed by imposing a weaker set of axioms on C.
If Z and X are objects of a model category such that Z is cofibrant and there is a weak equivalence from Z to X then Z is said to be a cofibrant replacement for X. Similarly, if Z is fibrant and there is a weak equivalence from X to Z then Z is said to be a fibrant replacement for X. In general, not all objects are fibrant or cofibrant, though this is sometimes the case. For example, all objects are cofibrant in the standard model category of simplicial sets and all objects are fibrant for the standard model category structure given above for topological spaces. Left homotopy is defined with respect to cylinder objects and right homotopy is defined with respect to path space objects. These notions coincide when the domain is cofibrant and the codomain is fibrant. In that case, homotopy defines an equivalence relation on the hom sets in the model category giving rise to homotopy classes. == Characterizations of fibrations and cofibrations by lifting properties == Cofibrations can be characterized as the maps which have the left lifting property with respect to acyclic fibrations, and acyclic cofibrations are characterized as the maps which have the left lifting property with respect to fibrations. Similarly, fibrations can be characterized as the maps which have the right lifting property with respect to acyclic cofibrations, and acyclic fibrations are characterized as the maps which have the right lifting property with respect to cofibrations. == Homotopy and the homotopy category == The homotopy category of a model category C is the localization of C with respect to the class of weak equivalences. This definition of homotopy category does not depend on the choice of fibrations and cofibrations. However, the classes of fibrations and cofibrations are useful in describing the homotopy category in a different way and in particular avoiding set-theoretic issues arising in general localizations of categories. 
More precisely, the "fundamental theorem of model categories" states that the homotopy category of C is equivalent to the category whose objects are the objects of C which are both fibrant and cofibrant, and whose morphisms are left homotopy classes of maps (equivalently, right homotopy classes of maps) as defined above. (See for instance Model Categories by Hovey, Thm 1.2.10.) Applying this to the category of topological spaces with the model structure given above, the resulting homotopy category is equivalent to the category of CW complexes and homotopy classes of continuous maps, whence the name. === Quillen adjunctions === A pair of adjoint functors F : C ⇆ D : G {\displaystyle F:C\leftrightarrows D:G} between two model categories C and D is called a Quillen adjunction if F preserves cofibrations and acyclic cofibrations or, equivalently by the closed model axioms, if G preserves fibrations and acyclic fibrations. In this case F and G induce an adjunction L F : H o ( C ) ⇆ H o ( D ) : R G {\displaystyle LF:Ho(C)\leftrightarrows Ho(D):RG} between the homotopy categories. There is also an explicit criterion for the latter to be an equivalence, in which case F and G are called a Quillen equivalence. A typical example is the standard adjunction between simplicial sets and topological spaces: | − | : s S e t ⇆ T o p : S i n g {\displaystyle |-|:\mathbf {sSet} \leftrightarrows \mathbf {Top} :Sing} involving the geometric realization of a simplicial set and the singular simplicial set of a topological space. The categories sSet and Top are not equivalent, but their homotopy categories are; because of this equivalence of homotopy categories, simplicial sets are often used as models for topological spaces. == See also == (∞,1)-category Cocycle category Stable model category == Notes == == References == Denis-Charles Cisinski: Les préfaisceaux comme modèles des types d'homotopie, Astérisque, (308) 2006, xxiv+392 pp.
Dwyer, William G.; Spaliński, Jan (1995), "Homotopy theories and model categories" (PDF), Handbook of algebraic topology, Amsterdam: North-Holland, pp. 73–126, doi:10.1016/B978-044481779-2/50003-1, ISBN 9780444817792, MR 1361887 Philip S. Hirschhorn: Model Categories and Their Localizations, 2003, ISBN 0-8218-3279-4. Mark Hovey: Model Categories, 1999, ISBN 0-8218-1359-5. Klaus Heiner Kamps and Timothy Porter: Abstract homotopy and simple homotopy theory, 1997, World Scientific, ISBN 981-02-1602-5. Georges Maltsiniotis: La théorie de l'homotopie de Grothendieck. Astérisque, (301) 2005, vi+140 pp. Riehl, Emily (2014), Categorical homotopy theory, Cambridge University Press, doi:10.1017/CBO9781107261457, ISBN 978-1-107-04845-4, MR 3221774 Quillen, Daniel G. (1967), Homotopical algebra, Lecture Notes in Mathematics, No. 43, vol. 43, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0097438, ISBN 978-3-540-03914-3, MR 0223432 Balchin, Scott (2021), A Handbook of Model Categories, Algebra and Applications, vol. 27, Springer, doi:10.1007/978-3-030-75035-0, ISBN 978-3-030-75034-3, MR 4385504, S2CID 240268465 == Further reading == "Do we still need model categories?" "(infinity,1)-categories directly from model categories" Paul Goerss and Kristen Schemmerhorn, Model Categories and Simplicial Methods == External links == Model category at the nLab Model category in Joyal's catlab
Wikipedia/Quillen_model_category
In mathematics, the degree of an affine or projective variety of dimension n is the number of intersection points of the variety with n hyperplanes in general position. For an algebraic set, the intersection points must be counted with their intersection multiplicity, because of the possibility of multiple components. For (irreducible) varieties, if one takes into account the multiplicities and, in the affine case, the points at infinity, the hypothesis of general position may be replaced by the much weaker condition that the intersection of the variety with the hyperplanes has dimension zero (that is, consists of a finite number of points). This is a generalization of Bézout's theorem. (For a proof, see Hilbert series and Hilbert polynomial § Degree of a projective variety and Bézout's theorem.) The degree is not an intrinsic property of the variety, as it depends on a specific embedding of the variety in an affine or projective space. The degree of a hypersurface is equal to the total degree of its defining equation. A generalization of Bézout's theorem asserts that, if an intersection of n projective hypersurfaces has codimension n, then the degree of the intersection is the product of the degrees of the hypersurfaces. The degree of a projective variety is the evaluation at 1 of the numerator of the Hilbert series of its coordinate ring. It follows that, given the equations of the variety, the degree may be computed from a Gröbner basis of the ideal of these equations. == Definition == For V embedded in a projective space Pn and defined over some algebraically closed field K, the degree d of V is the number of points of intersection of V, defined over K, with a linear subspace L in general position, such that dim ⁡ ( V ) + dim ⁡ ( L ) = n . {\displaystyle \dim(V)+\dim(L)=n.} Here dim(V) is the dimension of V, and the codimension of L will be equal to that dimension. The degree d is an extrinsic quantity, not an intrinsic property of V.
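As a concrete illustration of degree as an intersection count, the following SymPy sketch (the two curves are hypothetical examples) verifies Bézout's theorem for two plane conics whose four complex intersection points are all affine and simple:

```python
# Bézout's theorem for two affine plane curves whose intersections are
# all affine and simple; the curves are hypothetical examples.
import sympy as sp

x, y = sp.symbols('x y')
f = y - x**2          # a parabola (total degree 2)
g = x**2 + y**2 - 1   # the unit circle (total degree 2)

# the degree of a hypersurface is the total degree of its defining equation
d = sp.Poly(f, x, y).total_degree()
e = sp.Poly(g, x, y).total_degree()

# all complex intersection points (two real and two complex here)
solutions = sp.solve([f, g], [x, y])

print(d * e, len(solutions))  # 4 4
```

Here all intersection points are simple, so no multiplicity bookkeeping is needed; for tangential intersections the count would have to weight each point by its intersection multiplicity.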
For example, the projective line has an (essentially unique) embedding of degree n in Pn. == Properties == The degree of a hypersurface F = 0 is the same as the total degree of the homogeneous polynomial F defining it (granted, in case F has repeated factors, that intersection theory is used to count intersections with multiplicity, as in Bézout's theorem). If two varieties Y and Z intersect transversally, then the degree of their intersection is the product of their degrees: deg Y ∩ Z = (deg Y)(deg Z). == Other approaches == For a more sophisticated approach, the linear system of divisors defining the embedding of V can be related to the line bundle or invertible sheaf defining the embedding by its space of sections. The tautological line bundle on Pn pulls back to V. The degree determines the first Chern class. The degree can also be computed in the cohomology ring of Pn, or Chow ring, with the class of a hyperplane intersecting the class of V an appropriate number of times. == Extending Bézout's theorem == The degree can be used to generalize Bézout's theorem in an expected way to intersections of n hypersurfaces in Pn. == Notes ==
Wikipedia/Degree_of_an_algebraic_variety
In mathematics, a differentiable manifold (also differential manifold) is a type of manifold that is locally similar enough to a vector space to allow one to apply calculus. Any manifold can be described by a collection of charts (atlas). One may then apply ideas from calculus while working within the individual charts, since each chart lies within a vector space to which the usual rules of calculus apply. If the charts are suitably compatible (namely, the transition from one chart to another is differentiable), then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a vector space. To induce a global differential structure on the local coordinate systems induced by the homeomorphisms, their compositions on chart intersections in the atlas must be differentiable functions on the corresponding vector space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. The ability to define such a local differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A locally differential structure allows one to define the globally differentiable tangent space, differentiable functions, and differentiable tensor and vector fields. Differentiable manifolds are very important in physics. Special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics, general relativity, and Yang–Mills theory. 
It is possible to develop a calculus for differentiable manifolds. This leads to such mathematical machinery as the exterior calculus. The study of calculus on differentiable manifolds is known as differential geometry. "Differentiability" of a manifold has been given several meanings, including: continuously differentiable, k-times differentiable, smooth (which itself has many meanings), and analytic. == History == The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen. He motivated the idea of a manifold by an intuitive process of varying a given object in a new direction, and presciently described the role of coordinate systems and charts in subsequent formal developments: Having constructed the notion of a manifoldness of n dimensions, and found that its true character consists in the property that the determination of position in it may be reduced to n determinations of magnitude, ... – B. Riemann The works of physicists such as James Clerk Maxwell, and mathematicians Gregorio Ricci-Curbastro and Tullio Levi-Civita led to the development of tensor analysis and the notion of covariance, which identifies an intrinsic geometric property as one that is invariant with respect to coordinate transformations. These ideas found a key application in Albert Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces. The widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. == Definition == === Atlases === Let M be a topological space. A chart (U, φ) on M consists of an open subset U of M, and a homeomorphism φ from U to an open subset of some Euclidean space Rn. 
Somewhat informally, one may refer to a chart φ : U → Rn, meaning that the image of φ is an open subset of Rn, and that φ is a homeomorphism onto its image; in the usage of some authors, this may instead mean that φ : U → Rn is itself a homeomorphism. The presence of a chart suggests the possibility of doing differential calculus on M; for instance, if given a function u : M → R and a chart (U, φ) on M, one could consider the composition u ∘ φ−1, which is a real-valued function whose domain is an open subset of a Euclidean space; as such, if it happens to be differentiable, one could consider its partial derivatives. This situation is not fully satisfactory for the following reason. Consider a second chart (V, ψ) on M, and suppose that U and V contain some points in common. The two corresponding functions u ∘ φ−1 and u ∘ ψ−1 are linked in the sense that they can be reparametrized into one another: u ∘ φ − 1 = ( u ∘ ψ − 1 ) ∘ ( ψ ∘ φ − 1 ) , {\displaystyle u\circ \varphi ^{-1}={\big (}u\circ \psi ^{-1}{\big )}\circ {\big (}\psi \circ \varphi ^{-1}{\big )},} the natural domain of the right-hand side being φ(U ∩ V). Since φ and ψ are homeomorphisms, it follows that ψ ∘ φ−1 is a homeomorphism from φ(U ∩ V) to ψ(U ∩ V). Consequently, ψ ∘ φ−1 is merely a bicontinuous function; even if both u ∘ φ−1 and u ∘ ψ−1 are differentiable, their differential properties need not be linked to one another, since ψ ∘ φ−1 is not guaranteed to be sufficiently differentiable to compute the partial derivatives of the left-hand side by applying the chain rule to the right-hand side. The same problem is found if one considers instead functions c : R → M; one is led to the reparametrization formula φ ∘ c = ( φ ∘ ψ − 1 ) ∘ ( ψ ∘ c ) , {\displaystyle \varphi \circ c={\big (}\varphi \circ \psi ^{-1}{\big )}\circ {\big (}\psi \circ c{\big )},} at which point one can make the same observation as before.
This is resolved by the introduction of a "differentiable atlas" of charts, which specifies a collection of charts on M for which the transition maps ψ ∘ φ−1 are all differentiable. This makes the situation quite clean: if u ∘ φ−1 is differentiable, then due to the first reparametrization formula listed above, the map u ∘ ψ−1 is also differentiable on the region ψ(U ∩ V), and vice versa. Moreover, the derivatives of these two maps are linked to one another by the chain rule. Relative to the given atlas, this facilitates a notion of differentiable mappings whose domain or range is M, as well as a notion of the derivative of such maps. Formally, the word "differentiable" is somewhat ambiguous, as it is taken to mean different things by different authors; sometimes it means the existence of first derivatives, sometimes the existence of continuous first derivatives, and sometimes the existence of infinitely many derivatives. The following gives a formal definition of various (nonambiguous) meanings of "differentiable atlas". Generally, "differentiable" will be used as a catch-all term including all of these possibilities, provided k ≥ 1. Since every real-analytic map is smooth, and every smooth map is Ck for any k, one can see that any analytic atlas can also be viewed as a smooth atlas, and every smooth atlas can be viewed as a Ck atlas. This chain can be extended to include holomorphic atlases, with the understanding that any holomorphic map between open subsets of Cn can be viewed as a real-analytic map between open subsets of R2n. Given a differentiable atlas on a topological space, one says that a chart is differentiably compatible with the atlas, or differentiable relative to the given atlas, if the inclusion of the chart into the collection of charts comprising the given differentiable atlas results in a differentiable atlas. 
A differentiable atlas determines a maximal differentiable atlas, consisting of all charts which are differentiably compatible with the given atlas. A maximal atlas is always very large. For instance, given any chart in a maximal atlas, its restriction to an arbitrary open subset of its domain will also be contained in the maximal atlas. A maximal smooth atlas is also known as a smooth structure; a maximal holomorphic atlas is also known as a complex structure. An alternative but equivalent definition, avoiding the direct use of maximal atlases, is to consider equivalence classes of differentiable atlases, in which two differentiable atlases are considered equivalent if every chart of one atlas is differentiably compatible with the other atlas. Informally, what this means is that in dealing with a smooth manifold, one can work with a single differentiable atlas, consisting of only a few charts, with the implicit understanding that many other charts and differentiable atlases are equally legitimate. According to the invariance of domain, each connected component of a topological space which has a differentiable atlas has a well-defined dimension n. This causes a small ambiguity in the case of a holomorphic atlas, since the corresponding dimension will be one-half of the value of its dimension when considered as an analytic, smooth, or Ck atlas. For this reason, one refers separately to the "real" and "complex" dimension of a topological space with a holomorphic atlas. === Manifolds === A differentiable manifold is a Hausdorff and second countable topological space M, together with a maximal differentiable atlas on M. Much of the basic theory can be developed without the need for the Hausdorff and second countability conditions, although they are vital for much of the advanced theory. They are essentially equivalent to the general existence of bump functions and partitions of unity, both of which are used ubiquitously. 
The notion of a C0 manifold is identical to that of a topological manifold. However, there is a notable distinction to be made. Given a topological space, it is meaningful to ask whether or not it is a topological manifold. By contrast, it is not meaningful to ask whether or not a given topological space is (for instance) a smooth manifold, since the notion of a smooth manifold requires the specification of a smooth atlas, which is an additional structure. It could, however, be meaningful to say that a certain topological space cannot be given the structure of a smooth manifold. It is possible to reformulate the definitions so that this sort of imbalance is not present; one can start with a set M (rather than a topological space M), using the natural analogue of a smooth atlas in this setting to define the structure of a topological space on M. === Patching together Euclidean pieces to form a manifold === One can reverse-engineer the above definitions to obtain one perspective on the construction of manifolds. The idea is to start with the images of the charts and the transition maps, and to construct the manifold purely from this data. As in the above discussion, we use the "smooth" context but everything works just as well in other settings. Given an indexing set A , {\displaystyle A,} let V α {\displaystyle V_{\alpha }} be a collection of open subsets of R n {\displaystyle \mathbb {R} ^{n}} and for each α , β ∈ A {\displaystyle \alpha ,\beta \in A} let V α β {\displaystyle V_{\alpha \beta }} be an open (possibly empty) subset of V β {\displaystyle V_{\beta }} and let ϕ α β : V α β → V β α {\displaystyle \phi _{\alpha \beta }:V_{\alpha \beta }\to V_{\beta \alpha }} be a smooth map. 
Suppose that ϕ α α {\displaystyle \phi _{\alpha \alpha }} is the identity map, that ϕ α β ∘ ϕ β α {\displaystyle \phi _{\alpha \beta }\circ \phi _{\beta \alpha }} is the identity map, and that ϕ α β ∘ ϕ β γ ∘ ϕ γ α {\displaystyle \phi _{\alpha \beta }\circ \phi _{\beta \gamma }\circ \phi _{\gamma \alpha }} is the identity map. Then define an equivalence relation on the disjoint union ⨆ α ∈ A V α {\textstyle \bigsqcup _{\alpha \in A}V_{\alpha }} by declaring p ∈ V α β {\displaystyle p\in V_{\alpha \beta }} to be equivalent to ϕ α β ( p ) ∈ V β α . {\displaystyle \phi _{\alpha \beta }(p)\in V_{\beta \alpha }.} With some technical work, one can show that the set of equivalence classes can naturally be given a topological structure, and that the charts used in doing so form a smooth atlas. For patching together analytic structures, see analytic varieties. == Differentiable functions == A real valued function f on an n-dimensional differentiable manifold M is called differentiable at a point p ∈ M if it is differentiable in any coordinate chart defined around p. In more precise terms, if ( U , ϕ ) {\displaystyle (U,\phi )} is a differentiable chart where U {\displaystyle U} is an open set in M {\displaystyle M} containing p and ϕ : U → R n {\displaystyle \phi :U\to {\mathbf {R} }^{n}} is the map defining the chart, then f is differentiable at p if and only if f ∘ ϕ − 1 : ϕ ( U ) ⊂ R n → R {\displaystyle f\circ \phi ^{-1}\colon \phi (U)\subset {\mathbf {R} }^{n}\to {\mathbf {R} }} is differentiable at ϕ ( p ) {\displaystyle \phi (p)} , that is f ∘ ϕ − 1 {\displaystyle f\circ \phi ^{-1}} is a differentiable function from the open set ϕ ( U ) {\displaystyle \phi (U)} , considered as a subset of R n {\displaystyle {\mathbf {R} }^{n}} , to R {\displaystyle \mathbf {R} } . In general, there will be many available charts; however, the definition of differentiability does not depend on the choice of chart at p.
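As an illustration of this definition, the following SymPy sketch equips the unit circle with two stereographic charts (a standard but here hypothetical choice, not fixed by the text) and verifies that the two coordinate representations f ∘ φ−1 and f ∘ ψ−1 of a smooth function agree under the transition map ψ ∘ φ−1:

```python
# The unit circle S^1 with two stereographic charts, projecting from the
# north and south poles; the charts and the function f are hypothetical
# illustrations.
import sympy as sp

t, s = sp.symbols('t s')

# inverse charts: chart coordinates back to points of x^2 + y^2 = 1
phi_inv = (2*t/(t**2 + 1), (t**2 - 1)/(t**2 + 1))  # from the north pole
psi_inv = (2*s/(s**2 + 1), (1 - s**2)/(s**2 + 1))  # from the south pole

f = lambda x, y: x            # a smooth function on the circle

f_phi = f(*phi_inv)           # f o phi^{-1}: smooth in t
f_psi = f(*psi_inv)           # f o psi^{-1}: smooth in s
transition = 1/t              # psi o phi^{-1} on the chart overlap (t != 0)

# (f o psi^{-1}) o (psi o phi^{-1}) agrees with f o phi^{-1}
lhs = sp.simplify(f_phi - f_psi.subs(s, transition))
print(lhs)  # 0
```

Since the transition map t ↦ 1/t is smooth away from t = 0, differentiability of f ∘ φ−1 and of f ∘ ψ−1 on the overlap are equivalent, as the definition requires.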
It follows from the chain rule applied to the transition functions between one chart and another that if f is differentiable in any particular chart at p, then it is differentiable in all charts at p. Analogous considerations apply to defining Ck functions, smooth functions, and analytic functions. === Differentiation of functions === There are various ways to define the derivative of a function on a differentiable manifold, the most fundamental of which is the directional derivative. The definition of the directional derivative is complicated by the fact that a manifold will lack a suitable affine structure with which to define vectors. Therefore, the directional derivative looks at curves in the manifold instead of vectors. ==== Directional differentiation ==== Given a real valued function f on an n dimensional differentiable manifold M, the directional derivative of f at a point p in M is defined as follows. Suppose that γ(t) is a curve in M with γ(0) = p, which is differentiable in the sense that its composition with any chart is a differentiable curve in Rn. Then the directional derivative of f at p along γ is d d t f ( γ ( t ) ) | t = 0 . {\displaystyle \left.{\frac {d}{dt}}f(\gamma (t))\right|_{t=0}.} If γ1 and γ2 are two curves such that γ1(0) = γ2(0) = p, and in any coordinate chart ϕ {\displaystyle \phi } , d d t ϕ ∘ γ 1 ( t ) | t = 0 = d d t ϕ ∘ γ 2 ( t ) | t = 0 {\displaystyle \left.{\frac {d}{dt}}\phi \circ \gamma _{1}(t)\right|_{t=0}=\left.{\frac {d}{dt}}\phi \circ \gamma _{2}(t)\right|_{t=0}} then, by the chain rule, f has the same directional derivative at p along γ1 as along γ2. This means that the directional derivative depends only on the tangent vector of the curve at p. Thus, the more abstract definition of directional differentiation adapted to the case of differentiable manifolds ultimately captures the intuitive features of directional differentiation in an affine space. 
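The following sketch computes directional derivatives per this definition; the function f, the point p = (1, 0), and the curves are hypothetical examples on M = R2, where the identity chart can be used:

```python
# Directional derivatives along curves, per the definition above; the
# function, point, and curves are hypothetical examples on M = R^2.
import sympy as sp

t = sp.symbols('t')
f = lambda x, y: x*y

# two distinct curves through p = (1, 0) with the same velocity (0, 1) at t = 0
gamma1 = (sp.cos(t), sp.sin(t))
gamma2 = (sp.cos(t), sp.sin(t)*sp.cos(t))

deriv1 = sp.diff(f(*gamma1), t).subs(t, 0)   # d/dt f(gamma1(t)) at t = 0
deriv2 = sp.diff(f(*gamma2), t).subs(t, 0)   # d/dt f(gamma2(t)) at t = 0

print(deriv1, deriv2)  # 1 1
```

The two results coincide even though the curves differ, illustrating that the directional derivative depends only on the tangent vector of the curve at p.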
==== Tangent vector and the differential ==== A tangent vector at p ∈ M is an equivalence class of differentiable curves γ with γ(0) = p, modulo the equivalence relation of first-order contact between the curves. Therefore, γ 1 ≡ γ 2 ⟺ d d t ϕ ∘ γ 1 ( t ) | t = 0 = d d t ϕ ∘ γ 2 ( t ) | t = 0 {\displaystyle \gamma _{1}\equiv \gamma _{2}\iff \left.{\frac {d}{dt}}\phi \circ \gamma _{1}(t)\right|_{t=0}=\left.{\frac {d}{dt}}\phi \circ \gamma _{2}(t)\right|_{t=0}} in every coordinate chart ϕ {\displaystyle \phi } . Therefore, the equivalence classes are curves through p with a prescribed velocity vector at p. The collection of all tangent vectors at p forms a vector space: the tangent space to M at p, denoted TpM. If X is a tangent vector at p and f a differentiable function defined near p, then differentiating f along any curve in the equivalence class defining X gives a well-defined directional derivative along X: X f ( p ) := d d t f ( γ ( t ) ) | t = 0 . {\displaystyle Xf(p):=\left.{\frac {d}{dt}}f(\gamma (t))\right|_{t=0}.} Once again, the chain rule establishes that this is independent of the freedom in selecting γ from the equivalence class, since any curve with the same first order contact will yield the same directional derivative. If the function f is fixed, then the mapping X ↦ X f ( p ) {\displaystyle X\mapsto Xf(p)} is a linear functional on the tangent space. This linear functional is often denoted by df(p) and is called the differential of f at p: d f ( p ) : T p M → R . {\displaystyle df(p)\colon T_{p}M\to {\mathbf {R} }.} ==== Definition of tangent space and differentiation in local coordinates ==== Let M {\displaystyle M} be a topological n {\displaystyle n} -manifold with a smooth atlas { ( U α , ϕ α ) } α ∈ A . {\displaystyle \{(U_{\alpha },\phi _{\alpha })\}_{\alpha \in A}.} Given p ∈ M {\displaystyle p\in M} let A p {\displaystyle A_{p}} denote { α ∈ A : p ∈ U α } . 
{\displaystyle \{\alpha \in A:p\in U_{\alpha }\}.} A "tangent vector at p ∈ M {\displaystyle p\in M} " is a mapping v : A p → R n , {\displaystyle v:A_{p}\to \mathbb {R} ^{n},} here denoted α ↦ v α , {\displaystyle \alpha \mapsto v_{\alpha },} such that v α = D | ϕ β ( p ) ( ϕ α ∘ ϕ β − 1 ) ( v β ) {\displaystyle v_{\alpha }=D{\Big |}_{\phi _{\beta }(p)}(\phi _{\alpha }\circ \phi _{\beta }^{-1})(v_{\beta })} for all α , β ∈ A p . {\displaystyle \alpha ,\beta \in A_{p}.} Let the collection of tangent vectors at p {\displaystyle p} be denoted by T p M . {\displaystyle T_{p}M.} Given a smooth function f : M → R {\displaystyle f:M\to \mathbb {R} } , define d f p : T p M → R {\displaystyle df_{p}:T_{p}M\to \mathbb {R} } by sending a tangent vector v : A p → R n {\displaystyle v:A_{p}\to \mathbb {R} ^{n}} to the number given by D | ϕ α ( p ) ( f ∘ ϕ α − 1 ) ( v α ) , {\displaystyle D{\Big |}_{\phi _{\alpha }(p)}(f\circ \phi _{\alpha }^{-1})(v_{\alpha }),} which due to the chain rule and the constraint in the definition of a tangent vector does not depend on the choice of α ∈ A p . {\displaystyle \alpha \in A_{p}.} One can check that T p M {\displaystyle T_{p}M} naturally has the structure of a n {\displaystyle n} -dimensional real vector space, and that with this structure, d f p {\displaystyle df_{p}} is a linear map. The key observation is that, due to the constraint appearing in the definition of a tangent vector, the value of v β {\displaystyle v_{\beta }} for a single element β {\displaystyle \beta } of A p {\displaystyle A_{p}} automatically determines v α {\displaystyle v_{\alpha }} for all α ∈ A . {\displaystyle \alpha \in A.} The above formal definitions correspond precisely to a more informal notation which appears often in textbooks, specifically v i = v ~ j ∂ x i ∂ x ~ j {\displaystyle v^{i}={\widetilde {v}}^{j}{\frac {\partial x^{i}}{\partial {\widetilde {x}}^{j}}}} and d f p ( v ) = ∂ f ∂ x i v i . 
{\displaystyle df_{p}(v)={\frac {\partial f}{\partial x^{i}}}v^{i}.} With the idea of the formal definitions understood, this shorthand notation is, for most purposes, much easier to work with. === Partitions of unity === One of the topological features of the sheaf of differentiable functions on a differentiable manifold is that it admits partitions of unity. This distinguishes the differential structure on a manifold from stronger structures (such as analytic and holomorphic structures) that in general fail to have partitions of unity. Suppose that M is a manifold of class Ck, where 0 ≤ k ≤ ∞. Let {Uα} be an open covering of M. Then a partition of unity subordinate to the cover {Uα} is a collection of real-valued Ck functions φi on M satisfying the following conditions: The supports of the φi are compact and locally finite; The support of φi is completely contained in Uα for some α; The φi sum to one at each point of M: ∑ i ϕ i ( x ) = 1. {\displaystyle \sum _{i}\phi _{i}(x)=1.} (Note that this last condition is actually a finite sum at each point because of the local finiteness of the supports of the φi.) Every open covering of a Ck manifold M has a Ck partition of unity. This allows for certain constructions from the topology of Ck functions on Rn to be carried over to the category of differentiable manifolds. In particular, it is possible to discuss integration by choosing a partition of unity subordinate to a particular coordinate atlas, and carrying out the integration in each chart of Rn. Partitions of unity therefore allow for certain other kinds of function spaces to be considered: for instance Lp spaces, Sobolev spaces, and other kinds of spaces that require integration. === Differentiability of mappings between manifolds === Suppose M and N are two differentiable manifolds with dimensions m and n, respectively, and f is a function from M to N. Since differentiable manifolds are topological spaces we know what it means for f to be continuous. 
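The following numerical sketch constructs a smooth partition of unity on an interval of R subordinate to a cover by two overlapping open intervals; the cover and the exp(−1/x) bump construction are hypothetical illustrations of the conditions listed above:

```python
# A smooth partition of unity on [-1, 1] subordinate to the cover by
# U1 = (-2, 0.5) and U2 = (-0.5, 2); hypothetical example data.
import numpy as np

def h(x):
    # smooth on R: zero for x <= 0, positive for x > 0
    return np.where(x > 0, np.exp(-1.0/np.maximum(x, 1e-300)), 0.0)

def bump(x, a, b):
    # smooth bump, positive on (a, b) and zero outside it
    return h(x - a) * h(b - x)

x = np.linspace(-1.0, 1.0, 201)
r1 = bump(x, -2.0, 0.5)   # supported in U1
r2 = bump(x, -0.5, 2.0)   # supported in U2

total = r1 + r2           # strictly positive on [-1, 1], since U1 and U2 cover it
phi1, phi2 = r1/total, r2/total

print(np.allclose(phi1 + phi2, 1.0))  # True
```

Each φi is supported in one member of the cover, and the two sum to one at every sampled point, as the third condition demands.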
But what does "f is Ck(M, N)" mean for k ≥ 1? We know what that means when f is a function between Euclidean spaces, so if we compose f with a chart of M and a chart of N such that we get a map that goes from Euclidean space to M to N to Euclidean space we know what it means for that map to be Ck(Rm, Rn). We define "f is Ck(M, N)" to mean that all such compositions of f with charts are Ck(Rm, Rn). Once again, the chain rule guarantees that the idea of differentiability does not depend on which charts of the atlases on M and N are selected. However, defining the derivative itself is more subtle. If M or N is itself already a Euclidean space, then we don't need a chart to map it to one. == Bundles == === Tangent bundle === The tangent space of a point consists of the possible directional derivatives at that point, and has the same dimension n as does the manifold. For a set of (non-singular) coordinates xk local to the point, the coordinate derivatives ∂ k = ∂ ∂ x k {\displaystyle \partial _{k}={\frac {\partial }{\partial x_{k}}}} define a holonomic basis of the tangent space. The collection of tangent spaces at all points can in turn be made into a manifold, the tangent bundle, whose dimension is 2n. The tangent bundle is where tangent vectors lie, and is itself a differentiable manifold. The Lagrangian is a function on the tangent bundle. One can also define the tangent bundle as the bundle of 1-jets from R (the real line) to M. One may construct an atlas for the tangent bundle consisting of charts based on Uα × Rn, where Uα denotes one of the charts in the atlas for M. Each of these new charts is the tangent bundle for the charts Uα. The transition maps on this atlas are defined from the transition maps on the original manifold, and retain the original differentiability class. === Cotangent bundle === The dual space of a vector space is the set of real valued linear functions on the vector space. 
The cotangent space at a point is the dual of the tangent space at that point, and its elements are referred to as cotangent vectors; the cotangent bundle is the collection of all cotangent vectors, along with the natural differentiable manifold structure. Like the tangent bundle, the cotangent bundle is again a differentiable manifold. The Hamiltonian is a scalar on the cotangent bundle. The total space of a cotangent bundle has the structure of a symplectic manifold. Cotangent vectors are sometimes called covectors. One can also define the cotangent bundle as the bundle of 1-jets of functions from M to R. Elements of the cotangent space can be thought of as infinitesimal displacements: if f is a differentiable function we can define at each point p a cotangent vector dfp, which sends a tangent vector Xp to the derivative of f associated with Xp. However, not every covector field can be expressed this way. Those that can are referred to as exact differentials. For a given set of local coordinates xk, the differentials dxkp form a basis of the cotangent space at p. === Tensor bundle === The tensor bundle is the direct sum of all tensor products of the tangent bundle and the cotangent bundle. Each element of the bundle is a tensor field, which can act as a multilinear operator on vector fields, or on other tensor fields. The tensor bundle is not a differentiable manifold in the traditional sense, since it is infinite dimensional. It is however an algebra over the ring of scalar functions. Each tensor is characterized by its ranks, which indicate how many tangent and cotangent factors it has. Sometimes these ranks are referred to as covariant and contravariant ranks, signifying tangent and cotangent ranks, respectively. === Frame bundle === A frame (or, in more precise terms, a tangent frame) is an ordered basis of a particular tangent space. Equivalently, a tangent frame is a linear isomorphism of Rn to this tangent space.
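As noted above, not every covector field is an exact differential df. On R2, a covector field a dx + b dy is exact only if ∂a/∂y = ∂b/∂x (a condition that is also sufficient on R2, by the Poincaré lemma); the following SymPy sketch checks this for one hypothetical exact and one non-exact example:

```python
# Checking exactness of covector fields a dx + b dy on R^2 with SymPy;
# the two fields below are hypothetical examples.
import sympy as sp

x, y = sp.symbols('x y')

def is_closed(a, b):
    # a*dx + b*dy can be an exact differential df only if da/dy = db/dx;
    # on R^2 this condition is also sufficient (Poincaré lemma)
    return sp.simplify(sp.diff(a, y) - sp.diff(b, x)) == 0

print(is_closed(y, x))    # True:  y dx + x dy = d(x*y)
print(is_closed(-y, x))   # False: -y dx + x dy is not df for any f
```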
A moving tangent frame is an ordered list of vector fields that give a basis at every point of their domain. One may also regard a moving frame as a section of the frame bundle F(M), a GL(n, R) principal bundle made up of the set of all frames over M. The frame bundle is useful because tensor fields on M can be regarded as equivariant vector-valued functions on F(M). === Jet bundles === On a manifold that is sufficiently smooth, various kinds of jet bundles can also be considered. The (first-order) tangent bundle of a manifold is the collection of curves in the manifold modulo the equivalence relation of first-order contact. By analogy, the k-th order tangent bundle is the collection of curves modulo the relation of k-th order contact. Likewise, the cotangent bundle is the bundle of 1-jets of functions on the manifold: the k-jet bundle is the bundle of their k-jets. These and other examples of the general idea of jet bundles play a significant role in the study of differential operators on manifolds. The notion of a frame also generalizes to the case of higher-order jets. Define a k-th order frame to be the k-jet of a diffeomorphism from Rn to M. The collection of all k-th order frames, Fk(M), is a principal Gk bundle over M, where Gk is the group of k-jets; i.e., the group made up of k-jets of diffeomorphisms of Rn that fix the origin. Note that GL(n, R) is naturally isomorphic to G1, and a subgroup of every Gk, k ≥ 2. In particular, a section of F2(M) gives the frame components of a connection on M. Thus, the quotient bundle F2(M) / GL(n, R) is the bundle of symmetric linear connections over M. == Calculus on manifolds == Many of the techniques from multivariate calculus also apply, mutatis mutandis, to differentiable manifolds. One can define the directional derivative of a differentiable function along a tangent vector to the manifold, for instance, and this leads to a means of generalizing the total derivative of a function: the differential. 
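The directional derivative just mentioned can be computed in local coordinates; a sympy sketch with a function and tangent vector of my own choosing:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(y)     # a sample smooth function (my own choice)

# Directional derivative of f along the tangent vector with components
# v = (3, -1) in the holonomic basis (d/dx, d/dy): sum of v_k * df/dx_k.
v = (3, -1)
Df_v = v[0] * sp.diff(f, x) + v[1] * sp.diff(f, y)

# Evaluate at the point (1, 0): df/dx = 2*x*y -> 0, df/dy = x**2 + cos(y) -> 2.
print(Df_v.subs({x: 1, y: 0}))   # -2
```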
From the perspective of calculus, the derivative of a function on a manifold behaves in much the same way as the ordinary derivative of a function defined on a Euclidean space, at least locally. For example, there are versions of the implicit and inverse function theorems for such functions. There are, however, important differences in the calculus of vector fields (and tensor fields in general). In brief, the directional derivative of a vector field is not well-defined, or at least not defined in a straightforward manner. Several generalizations of the derivative of a vector field (or tensor field) do exist, and capture certain formal features of differentiation in Euclidean spaces. The chief among these are: The Lie derivative, which is uniquely defined by the differential structure, but fails to satisfy some of the usual features of directional differentiation. An affine connection, which is not uniquely defined, but generalizes in a more complete manner the features of ordinary directional differentiation. Because an affine connection is not unique, it is an additional piece of data that must be specified on the manifold. Ideas from integral calculus also carry over to differential manifolds. These are naturally expressed in the language of exterior calculus and differential forms. The fundamental theorems of integral calculus in several variables—namely Green's theorem, the divergence theorem, and Stokes' theorem—generalize to a theorem (also called Stokes' theorem) relating the exterior derivative and integration over submanifolds. === Differential calculus of functions === Differentiable functions between two manifolds are needed in order to formulate suitable notions of submanifolds, and other related concepts. If f : M → N is a differentiable function from a differentiable manifold M of dimension m to another differentiable manifold N of dimension n, then the differential of f is a mapping df : TM → TN. It is also denoted by Tf and called the tangent map. 
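In local coordinates the tangent map is represented by the Jacobian matrix of f. A sympy sketch using a hypothetical map f(u, v) = (u, v, u² + v²) from R² to R³ (my own example, not from the article):

```python
import sympy as sp

u, v = sp.symbols('u v')
# A hypothetical smooth map f : R^2 -> R^3 (the graph of u**2 + v**2).
f = sp.Matrix([u, v, u**2 + v**2])

J = f.jacobian([u, v])   # matrix of the tangent map df in these coordinates

# Push a tangent vector with components (1, 2) at the point (u, v) = (3, 0)
# forward along f by applying the Jacobian evaluated at that point.
pushforward = J.subs({u: 3, v: 0}) * sp.Matrix([1, 2])
print(list(pushforward))  # [1, 2, 6]
```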
At each point of M, this is a linear transformation from one tangent space to another: d f ( p ) : T p M → T f ( p ) N . {\displaystyle df(p)\colon T_{p}M\to T_{f(p)}N.} The rank of f at p is the rank of this linear transformation. In general, the rank of a function is a pointwise property. However, if f has maximal rank at a point, then the rank remains constant (and maximal) in a neighborhood of that point. A differentiable function "usually" has maximal rank, in a precise sense given by Sard's theorem. Functions of maximal rank at a point are called immersions and submersions: If m ≤ n, and f : M → N has rank m at p ∈ M, then f is called an immersion at p. If f is an immersion at all points of M and is a homeomorphism onto its image, then f is an embedding. Embeddings formalize the notion of M being a submanifold of N. In general, an embedding is an immersion without self-intersections and other sorts of non-local topological irregularities. If m ≥ n, and f : M → N has rank n at p ∈ M, then f is called a submersion at p. The implicit function theorem states that if f is a submersion at p, then M is locally a product of N and Rm−n near p. In formal terms, there exist coordinates (y1, ..., yn) in a neighborhood of f(p) in N, and m − n functions x1, ..., xm−n defined in a neighborhood of p in M such that ( y 1 ∘ f , … , y n ∘ f , x 1 , … , x m − n ) {\displaystyle (y_{1}\circ f,\dotsc ,y_{n}\circ f,x_{1},\dotsc ,x_{m-n})} is a system of local coordinates of M in a neighborhood of p. Submersions form the foundation of the theory of fibrations and fibre bundles. === Lie derivative === A Lie derivative, named after Sophus Lie, is a derivation on the algebra of tensor fields over a manifold M. The vector space of all Lie derivatives on M forms an infinite dimensional Lie algebra with respect to the Lie bracket defined by [ A , B ] := L A B = − L B A .
{\displaystyle [A,B]:={\mathcal {L}}_{A}B=-{\mathcal {L}}_{B}A.} The Lie derivatives are represented by vector fields, as infinitesimal generators of flows (active diffeomorphisms) on M. Looking at it the other way around, the group of diffeomorphisms of M has the associated Lie algebra structure, of Lie derivatives, in a way directly analogous to the Lie group theory. === Exterior calculus === The exterior calculus allows for a generalization of the gradient, divergence and curl operators. The bundle of differential forms, at each point, consists of all totally antisymmetric multilinear maps on the tangent space at that point. It is naturally divided into n-forms for each n at most equal to the dimension of the manifold; an n-form is an n-variable form, also called a form of degree n. The 1-forms are the cotangent vectors, while the 0-forms are just scalar functions. In general, an n-form is a tensor with cotangent rank n and tangent rank 0. But not every such tensor is a form, as a form must be antisymmetric. ==== Exterior derivative ==== The exterior derivative is a linear operator on the graded vector space of all smooth differential forms on a smooth manifold M {\displaystyle M} . It is usually denoted by d {\displaystyle d} . More precisely, if n = dim ⁡ ( M ) {\displaystyle n=\dim(M)} , for 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} the operator d {\displaystyle d} maps the space Ω k ( M ) {\displaystyle \Omega ^{k}(M)} of k {\displaystyle k} -forms on M {\displaystyle M} into the space Ω k + 1 ( M ) {\displaystyle \Omega ^{k+1}(M)} of ( k + 1 ) {\displaystyle (k+1)} -forms (if k > n {\displaystyle k>n} there are no non-zero k {\displaystyle k} -forms on M {\displaystyle M} so the map d {\displaystyle d} is identically zero on n {\displaystyle n} -forms). 
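On 0-forms the operator d produces the differential of the function, and applying d twice gives zero because mixed partial derivatives commute. A sympy check of this in coordinates (my own illustration, not part of the article):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) * y + x * z**3        # an arbitrary smooth 0-form (my choice)

coords = [x, y, z]
df = [sp.diff(f, c) for c in coords]     # components of the 1-form df

# The components of d(df) are the antisymmetrized second derivatives
# d_i(df)_j - d_j(df)_i; they all vanish because mixed partials commute.
ddf = [sp.simplify(sp.diff(df[j], coords[i]) - sp.diff(df[i], coords[j]))
       for i in range(3) for j in range(3)]
print(all(c == 0 for c in ddf))          # True
```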
For example, the exterior differential of a smooth function f {\displaystyle f} is given in local coordinates x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} , with associated local co-frame d x 1 , … , d x n {\displaystyle dx_{1},\ldots ,dx_{n}} , by the formula d f = ∑ i = 1 n ∂ f ∂ x i d x i . {\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}dx_{i}.} The exterior differential satisfies the following identity, similar to a product rule with respect to the wedge product of forms: d ( ω ∧ η ) = d ω ∧ η + ( − 1 ) deg ⁡ ω ω ∧ d η . {\displaystyle d(\omega \wedge \eta )=d\omega \wedge \eta +(-1)^{\deg \omega }\omega \wedge d\eta .} The exterior derivative also satisfies the identity d ∘ d = 0 {\displaystyle d\circ d=0} . That is, if ω {\displaystyle \omega } is a k {\displaystyle k} -form then the ( k + 2 ) {\displaystyle (k+2)} -form d ( d ω ) {\displaystyle d(d\omega )} is identically vanishing. A form ω {\displaystyle \omega } such that d ω = 0 {\displaystyle d\omega =0} is called closed, while a form ω {\displaystyle \omega } such that ω = d η {\displaystyle \omega =d\eta } for some other form η {\displaystyle \eta } is called exact. Another formulation of the identity d ∘ d = 0 {\displaystyle d\circ d=0} is that an exact form is closed. This allows one to define de Rham cohomology of the manifold M {\displaystyle M} , where the k {\displaystyle k} th cohomology group is the quotient group of the closed forms on M {\displaystyle M} by the exact forms on M {\displaystyle M} . == Topology of differentiable manifolds == === Relationship with topological manifolds === Suppose that M {\displaystyle M} is a topological n {\displaystyle n} -manifold.
If given any smooth atlas { ( U α , ϕ α ) } α ∈ A {\displaystyle \{(U_{\alpha },\phi _{\alpha })\}_{\alpha \in A}} , it is easy to find a smooth atlas which defines a different smooth manifold structure on M ; {\displaystyle M;} consider a homeomorphism Φ : M → M {\displaystyle \Phi :M\to M} which is not smooth relative to the given atlas; for instance, one can modify the identity map using a localized non-smooth bump. Then consider the new atlas { ( Φ − 1 ( U α ) , ϕ α ∘ Φ ) } α ∈ A , {\displaystyle \{(\Phi ^{-1}(U_{\alpha }),\phi _{\alpha }\circ \Phi )\}_{\alpha \in A},} which is easily verified as a smooth atlas. However, the charts in the new atlas are not smoothly compatible with the charts in the old atlas, since this would require that ϕ α ∘ Φ ∘ ϕ β − 1 {\displaystyle \phi _{\alpha }\circ \Phi \circ \phi _{\beta }^{-1}} and ϕ α ∘ Φ − 1 ∘ ϕ β − 1 {\displaystyle \phi _{\alpha }\circ \Phi ^{-1}\circ \phi _{\beta }^{-1}} are smooth for any α {\displaystyle \alpha } and β , {\displaystyle \beta ,} with these conditions being exactly the definition that both Φ {\displaystyle \Phi } and Φ − 1 {\displaystyle \Phi ^{-1}} are smooth, in contradiction to how Φ {\displaystyle \Phi } was selected. 
With this observation as motivation, one can define an equivalence relation on the space of smooth atlases on M {\displaystyle M} by declaring that smooth atlases { ( U α , ϕ α ) } α ∈ A {\displaystyle \{(U_{\alpha },\phi _{\alpha })\}_{\alpha \in A}} and { ( V β , ψ β ) } β ∈ B {\displaystyle \{(V_{\beta },\psi _{\beta })\}_{\beta \in B}} are equivalent if there is a homeomorphism Φ : M → M {\displaystyle \Phi :M\to M} such that { ( Φ − 1 ( U α ) , ϕ α ∘ Φ ) } α ∈ A {\displaystyle \{(\Phi ^{-1}(U_{\alpha }),\phi _{\alpha }\circ \Phi )\}_{\alpha \in A}} is smoothly compatible with { ( V β , ψ β ) } β ∈ B , {\displaystyle \{(V_{\beta },\psi _{\beta })\}_{\beta \in B},} and such that { ( Φ ( V β ) , ψ β ∘ Φ − 1 ) } β ∈ B {\displaystyle \{(\Phi (V_{\beta }),\psi _{\beta }\circ \Phi ^{-1})\}_{\beta \in B}} is smoothly compatible with { ( U α , ϕ α ) } α ∈ A . {\displaystyle \{(U_{\alpha },\phi _{\alpha })\}_{\alpha \in A}.} More briefly, one could say that two smooth atlases are equivalent if there exists a diffeomorphism M → M , {\displaystyle M\to M,} in which one smooth atlas is taken for the domain and the other smooth atlas is taken for the range. Note that this equivalence relation is a refinement of the equivalence relation which defines a smooth manifold structure, as any two smoothly compatible atlases are also compatible in the present sense; one can take Φ {\displaystyle \Phi } to be the identity map. If the dimension of M {\displaystyle M} is 1, 2, or 3, then there exists a smooth structure on M {\displaystyle M} , and all distinct smooth structures are equivalent in the above sense. The situation is more complicated in higher dimensions, although it isn't fully understood. Some topological manifolds admit no smooth structures, as was originally shown with a ten-dimensional example by Kervaire (1960). 
A major application of partial differential equations in differential geometry due to Simon Donaldson, in combination with results of Michael Freedman, shows that many simply-connected compact topological 4-manifolds do not admit smooth structures. A well-known particular example is the E8 manifold. Some topological manifolds admit many smooth structures which are not equivalent in the sense given above. This was originally discovered by John Milnor in the form of the exotic 7-spheres. === Classification === Every one-dimensional connected smooth manifold is diffeomorphic to either R {\displaystyle \mathbb {R} } or S 1 , {\displaystyle S^{1},} each with their standard smooth structures. For a classification of smooth 2-manifolds, see surface. A particular result is that every two-dimensional connected compact smooth manifold is diffeomorphic to one of the following: S 2 , {\displaystyle S^{2},} or ( S 1 × S 1 ) ♯ ⋯ ♯ ( S 1 × S 1 ) , {\displaystyle (S^{1}\times S^{1})\sharp \cdots \sharp (S^{1}\times S^{1}),} or R P 2 ♯ ⋯ ♯ R P 2 . {\displaystyle \mathbb {RP} ^{2}\sharp \cdots \sharp \mathbb {RP} ^{2}.} The situation is more subtle if one considers a complex-differentiable structure instead of a smooth structure. The situation in three dimensions is quite a bit more complicated, and known results are more indirect. A remarkable result, proved in 2002 by methods of partial differential equations, is the geometrization conjecture, stating loosely that any compact smooth 3-manifold can be split up into different parts, each of which admits Riemannian metrics which possess many symmetries. There are also various "recognition results" for geometrizable 3-manifolds, such as Mostow rigidity and Sela's algorithm for the isomorphism problem for hyperbolic groups. The classification of n-manifolds for n greater than three is known to be impossible, even up to homotopy equivalence.
Given any finitely presented group, one can construct a closed 4-manifold having that group as fundamental group. Since there is no algorithm to decide the isomorphism problem for finitely presented groups, there is no algorithm to decide whether two 4-manifolds have the same fundamental group. Since the previously described construction results in a class of 4-manifolds that are homeomorphic if and only if their groups are isomorphic, the homeomorphism problem for 4-manifolds is undecidable. In addition, since even recognizing the trivial group is undecidable, it is not even possible in general to decide whether a manifold has trivial fundamental group, i.e. is simply connected. Simply connected 4-manifolds have been classified up to homeomorphism by Freedman using the intersection form and Kirby–Siebenmann invariant. Smooth 4-manifold theory is known to be much more complicated, as the exotic smooth structures on R4 demonstrate. However, the situation becomes more tractable for simply connected smooth manifolds of dimension ≥ 5, where the h-cobordism theorem can be used to reduce the classification to a classification up to homotopy equivalence, and surgery theory can be applied. This has been carried out to provide an explicit classification of simply connected 5-manifolds by Dennis Barden. == Structures on smooth manifolds == === (Pseudo-)Riemannian manifolds === A Riemannian manifold consists of a smooth manifold together with a positive-definite inner product on each of the individual tangent spaces. This collection of inner products is called the Riemannian metric, and is naturally a symmetric 2-tensor field. This "metric" identifies a natural vector space isomorphism T p M → T p ∗ M {\displaystyle T_{p}M\to T_{p}^{\ast }M} for each p ∈ M . {\displaystyle p\in M.} On a Riemannian manifold one can define notions of length, volume, and angle. Any smooth manifold can be given many different Riemannian metrics. 
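Numerically, a Riemannian metric at a single point is just a symmetric positive-definite matrix in local coordinates. A small numpy sketch (with an invented metric, not from the article) of how length and angle come from it:

```python
import numpy as np

# A hypothetical Riemannian metric at one point, written in local coordinates
# as a symmetric positive-definite matrix (numbers invented for illustration).
g = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def inner(u, v):
    """The inner product g_p(u, v) of two tangent vectors at the point."""
    return float(u @ g @ v)

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

length_u = np.sqrt(inner(u, u))   # length of u, here sqrt(2)
cos_angle = inner(u, v) / np.sqrt(inner(u, u) * inner(v, v))
print(round(length_u, 3), cos_angle)   # 1.414 0.5
```

Note that with the Euclidean metric (the identity matrix) the same coordinate vectors u and v would be orthogonal; with this g they are not.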
A pseudo-Riemannian manifold (also called a semi-Riemannian manifold) is a generalization of the notion of Riemannian manifold where the inner products are allowed to have an indefinite signature, as opposed to being positive-definite; they are still required to be non-degenerate. Every smooth pseudo-Riemannian and Riemannian manifold defines a number of associated tensor fields, such as the Riemann curvature tensor. Lorentzian manifolds are pseudo-Riemannian manifolds of signature ( n − 1 , 1 ) {\displaystyle (n-1,1)} ; the case n = 4 {\displaystyle n=4} is fundamental in general relativity. Not every smooth manifold can be given a non-Riemannian pseudo-Riemannian structure; there are topological restrictions on doing so. A Finsler manifold is a different generalization of a Riemannian manifold, in which the inner product is replaced with a vector norm; as such, this allows the definition of length, but not angle. === Symplectic manifolds === A symplectic manifold is a manifold equipped with a closed, nondegenerate 2-form. This condition forces symplectic manifolds to be even-dimensional, due to the fact that skew-symmetric ( 2 n + 1 ) × ( 2 n + 1 ) {\displaystyle (2n+1)\times (2n+1)} matrices all have zero determinant. There are two basic examples: Cotangent bundles, which arise as phase spaces in Hamiltonian mechanics, are a motivating example, since they admit a natural symplectic form. All oriented two-dimensional Riemannian manifolds ( M , g ) {\displaystyle (M,g)} are, in a natural way, symplectic, by defining the form ω ( u , v ) = g ( u , J ( v ) ) {\displaystyle \omega (u,v)=g(u,J(v))} where, for any v ∈ T p M , {\displaystyle v\in T_{p}M,} J ( v ) {\displaystyle J(v)} denotes the vector such that v , J ( v ) {\displaystyle v,J(v)} is an oriented g p {\displaystyle g_{p}} -orthonormal basis of T p M .
{\displaystyle T_{p}M.} === Lie groups === A Lie group consists of a C∞ manifold G {\displaystyle G} together with a group structure on G {\displaystyle G} such that the product and inversion maps m : G × G → G {\displaystyle m:G\times G\to G} and i : G → G {\displaystyle i:G\to G} are smooth as maps of manifolds. These objects often arise naturally in describing (continuous) symmetries, and they form an important source of examples of smooth manifolds. Many otherwise familiar examples of smooth manifolds, however, cannot be given a Lie group structure, since given a Lie group G {\displaystyle G} and any g ∈ G {\displaystyle g\in G} , one could consider the map m ( g , ⋅ ) : G → G {\displaystyle m(g,\cdot ):G\to G} which sends the identity element e {\displaystyle e} to g {\displaystyle g} and hence, by considering the differential T e G → T g G , {\displaystyle T_{e}G\to T_{g}G,} gives a natural identification between any two tangent spaces of a Lie group. In particular, by considering an arbitrary nonzero vector in T e G , {\displaystyle T_{e}G,} one can use these identifications to give a smooth non-vanishing vector field on G . {\displaystyle G.} This shows, for instance, that no even-dimensional sphere can support a Lie group structure. The same argument shows, more generally, that every Lie group must be parallelizable. == Alternative definitions == === Pseudogroups === The notion of a pseudogroup provides a flexible generalization of atlases in order to allow a variety of different structures to be defined on manifolds in a uniform way. A pseudogroup consists of a topological space S and a collection Γ consisting of homeomorphisms from open subsets of S to other open subsets of S such that If f ∈ Γ, and U is an open subset of the domain of f, then the restriction f|U is also in Γ. 
If f is a homeomorphism from a union of open subsets of S, ∪ i U i {\displaystyle \cup _{i}\,U_{i}} , to an open subset of S, then f ∈ Γ provided f | U i ∈ Γ {\displaystyle f|_{U_{i}}\in \Gamma } for every i. For every open U ⊂ S, the identity transformation of U is in Γ. If f ∈ Γ, then f−1 ∈ Γ. The composition of two elements of Γ is in Γ. These last three conditions are analogous to the definition of a group. Note that Γ need not be a group, however, since the functions are not globally defined on S. For example, the collection of all local Ck diffeomorphisms on Rn forms a pseudogroup. All biholomorphisms between open sets in Cn form a pseudogroup. More examples include: orientation-preserving maps of Rn, symplectomorphisms, Möbius transformations, affine transformations, and so on. Thus, a wide variety of function classes determine pseudogroups. An atlas (Ui, φi) of homeomorphisms φi from Ui ⊂ M to open subsets of a topological space S is said to be compatible with a pseudogroup Γ provided that the transition functions φj ∘ φi−1 : φi(Ui ∩ Uj) → φj(Ui ∩ Uj) are all in Γ. A differentiable manifold is then an atlas compatible with the pseudogroup of Ck functions on Rn. A complex manifold is an atlas compatible with the biholomorphic functions on open sets in Cn. And so forth. Thus, pseudogroups provide a single framework in which to describe many structures on manifolds of importance to differential geometry and topology. === Structure sheaf === Sometimes, it can be useful to use an alternative approach to endow a manifold with a Ck-structure. Here k = 1, 2, ..., ∞, or ω for real analytic manifolds. Instead of considering coordinate charts, it is possible to start with functions defined on the manifold itself. The structure sheaf of M, denoted Ck, is a sort of functor that defines, for each open set U ⊂ M, an algebra Ck(U) of continuous functions U → R.
A structure sheaf Ck is said to give M the structure of a Ck manifold of dimension n provided that, for any p ∈ M, there exists a neighborhood U of p and n functions x1, ..., xn ∈ Ck(U) such that the map f = (x1, ..., xn) : U → Rn is a homeomorphism onto an open set in Rn, and such that Ck|U is the pullback of the sheaf of k-times continuously differentiable functions on Rn. In particular, this latter condition means that any function h in Ck(V), for V an open subset of U, can be written uniquely as h(x) = H(x1(x), ..., xn(x)), where H is a k-times differentiable function on f(V) (an open set in Rn). Thus, the sheaf-theoretic viewpoint is that the functions on a differentiable manifold can be expressed in local coordinates as differentiable functions on Rn, and a fortiori this is sufficient to characterize the differential structure on the manifold. ==== Sheaves of local rings ==== A similar, but more technical, approach to defining differentiable manifolds can be formulated using the notion of a ringed space. This approach is strongly influenced by the theory of schemes in algebraic geometry, but uses local rings of the germs of differentiable functions. It is especially popular in the context of complex manifolds. We begin by describing the basic structure sheaf on Rn. If U is an open set in Rn, let O(U) = Ck(U, R) consist of all real-valued k-times continuously differentiable functions on U. As U varies, this determines a sheaf of rings on Rn. The stalk Op for p ∈ Rn consists of germs of functions near p, and is an algebra over R. In particular, this is a local ring whose unique maximal ideal consists of those functions that vanish at p. The pair (Rn, O) is an example of a locally ringed space: it is a topological space equipped with a sheaf whose stalks are each local rings.
A differentiable manifold (of class Ck) consists of a pair (M, OM) where M is a second countable Hausdorff space, and OM is a sheaf of local R-algebras defined on M, such that the locally ringed space (M, OM) is locally isomorphic to (Rn, O). In this way, differentiable manifolds can be thought of as schemes modeled on Rn. This means that for each point p ∈ M, there is a neighborhood U of p, and a pair of functions (f, f#), where f : U → f(U) ⊂ Rn is a homeomorphism onto an open set in Rn. f#: O|f(U) → f∗ (OM|U) is an isomorphism of sheaves. The localization of f# is an isomorphism of local rings f#f(p) : Of(p) → OM,p. There are a number of important motivations for studying differentiable manifolds within this abstract framework. First, there is no a priori reason that the model space needs to be Rn. For example (in particular in algebraic geometry), one could take this to be the space of complex numbers Cn equipped with the sheaf of holomorphic functions (thus arriving at the spaces of complex analytic geometry), or the sheaf of polynomials (thus arriving at the spaces of interest in complex algebraic geometry). In broader terms, this concept can be adapted for any suitable notion of a scheme (see topos theory). Second, coordinates are no longer explicitly necessary to the construction. The analog of a coordinate system is the pair (f, f#), but these merely quantify the idea of local isomorphism rather than being central to the discussion (as in the case of charts and atlases). Third, the sheaf OM is not manifestly a sheaf of functions at all. Rather, it emerges as a sheaf of functions as a consequence of the construction (via the quotients of local rings by their maximal ideals). Hence, it is a more primitive definition of the structure (see synthetic differential geometry). A final advantage of this approach is that it allows for natural direct descriptions of many of the fundamental objects of study in differential geometry and topology.
The cotangent space at a point is Ip/Ip2, where Ip is the maximal ideal of the stalk OM,p. In general, the entire cotangent bundle can be obtained by a related technique (see cotangent bundle for details). Taylor series (and jets) can be approached in a coordinate-independent manner using the Ip-adic filtration on OM,p. The tangent bundle (or more precisely its sheaf of sections) can be identified with the sheaf of morphisms of OM into the ring of dual numbers. == Generalizations == The category of smooth manifolds with smooth maps lacks certain desirable properties, and people have tried to generalize smooth manifolds in order to rectify this. Diffeological spaces use a different notion of chart known as a "plot". Frölicher spaces and orbifolds are other attempts. A rectifiable set generalizes the idea of a piece-wise smooth or rectifiable curve to higher dimensions; however, rectifiable sets are not in general manifolds. Banach manifolds and Fréchet manifolds, in particular manifolds of mappings are infinite dimensional differentiable manifolds. === Non-commutative geometry === For a Ck manifold M, the set of real-valued Ck functions on the manifold forms an algebra under pointwise addition and multiplication, called the algebra of scalar fields or simply the algebra of scalars. This algebra has the constant function 1 as the multiplicative identity, and is a differentiable analog of the ring of regular functions in algebraic geometry. It is possible to reconstruct a manifold from its algebra of scalars, first as a set, but also as a topological space – this is an application of the Banach–Stone theorem, and is more formally known as the spectrum of a C*-algebra. First, there is a one-to-one correspondence between the points of M and the algebra homomorphisms φ: Ck(M) → R, as such a homomorphism φ corresponds to a codimension one ideal in Ck(M) (namely the kernel of φ), which is necessarily a maximal ideal. 
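The correspondence between points and algebra homomorphisms can be made tangible in plain Python (a toy model of mine, with scalar functions represented as callables): evaluation at a point respects addition and multiplication, so it is an algebra homomorphism onto R, and its kernel is the ideal of functions vanishing at that point.

```python
import math

# Scalar functions on a manifold, modelled as plain Python callables on
# coordinate tuples (a toy model, not the article's construction).
f = lambda p: math.sin(p[0]) + p[1]
g = lambda p: p[0] * p[1]

point = (0.5, 2.0)
phi = lambda h: h(point)     # evaluation at the point: C^k(M) -> R

# phi preserves pointwise sums and products, so it is an algebra
# homomorphism; its kernel is the maximal ideal of functions vanishing there.
add = lambda a, b: (lambda q: a(q) + b(q))
mul = lambda a, b: (lambda q: a(q) * b(q))
print(phi(add(f, g)) == phi(f) + phi(g))   # True
print(phi(mul(f, g)) == phi(f) * phi(g))   # True
```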
Conversely, every maximal ideal in this algebra is an ideal of functions vanishing at a single point, which demonstrates that MSpec (the maximal spectrum) of Ck(M) recovers M as a point set, though in fact it recovers M as a topological space. One can define various geometric structures algebraically in terms of the algebra of scalars, and these definitions often generalize to algebraic geometry (interpreting rings geometrically) and operator theory (interpreting Banach spaces geometrically). For example, the tangent bundle to M can be defined as the derivations of the algebra of smooth functions on M. This "algebraization" of a manifold (replacing a geometric object with an algebra) leads to the notion of a C*-algebra – a commutative C*-algebra being precisely the ring of scalars of a manifold, by Banach–Stone, and allows one to consider noncommutative C*-algebras as non-commutative generalizations of manifolds. This is the basis of the field of noncommutative geometry. == See also == == References == == Bibliography ==
Wikipedia/Differential_manifold
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes. The shapes studied in geometric modeling are mostly two- or three-dimensional (solid figures), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing. Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering, architecture, geology and medical image processing. Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth. Notable awards of the area are the John A. Gregory Memorial Award and the Bézier award. == See also == 2D geometric modeling Architectural geometry Computational conformal geometry Computational topology Computer-aided engineering Computer-aided manufacturing Digital geometry Geometric modeling kernel List of interactive geometry software Parametric equation Parametric surface Solid modeling Space partitioning == References == == Further reading == General textbooks: Jean Gallier (1999). 
Curves and Surfaces in Geometric Modeling: Theory and Algorithms. Morgan Kaufmann. This book is out of print and freely available from the author. Gerald E. Farin (2002). Curves and Surfaces for CAGD: A Practical Guide (5th ed.). Morgan Kaufmann. ISBN 978-1-55860-737-8. Michael E. Mortenson (2006). Geometric Modeling (3rd ed.). Industrial Press. ISBN 978-0-8311-3298-9. Ronald Goldman (2009). An Integrated Introduction to Computer Graphics and Geometric Modeling (1st ed.). CRC Press. ISBN 978-1-4398-0334-9. Nikolay N. Golovanov (2014). Geometric Modeling: The mathematics of shapes. CreateSpace Independent Publishing Platform. ISBN 978-1497473195. For multi-resolution (multiple level of detail) geometric modeling : Armin Iske; Ewald Quak; Michael S. Floater (2002). Tutorials on Multiresolution in Geometric Modelling: Summer School Lecture Notes. Springer Science & Business Media. ISBN 978-3-540-43639-3. Neil Dodgson; Michael S. Floater; Malcolm Sabin (2006). Advances in Multiresolution for Geometric Modelling. Springer Science & Business Media. ISBN 978-3-540-26808-6. Subdivision methods (such as subdivision surfaces): Joseph D. Warren; Henrik Weimer (2002). Subdivision Methods for Geometric Design: A Constructive Approach. Morgan Kaufmann. ISBN 978-1-55860-446-9. Jörg Peters; Ulrich Reif (2008). Subdivision Surfaces. Springer Science & Business Media. ISBN 978-3-540-76405-2. Lars-Erik Andersson; Neil Frederick Stewart (2010). Introduction to the Mathematics of Subdivision Surfaces. SIAM. ISBN 978-0-89871-761-7. == External links == Geometry and Algorithms for CAD (Lecture Note, TU Darmstadt)
Wikipedia/Geometric_modelling
In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various equivalent ways. Some of these definitions are geometric in nature, while others are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply to any algebraic set. Some are intrinsic, being independent of any embedding of the variety into an affine or projective space, while others are tied to such an embedding.

== Dimension of an affine algebraic set ==

Let K be a field, and L ⊇ K an algebraically closed extension. An affine algebraic set V is the set of the common zeros in L^n of the elements of an ideal I in a polynomial ring R = K[x_1, …, x_n]. Let A = R/I be the K-algebra of the polynomial functions over V. The dimension of V is any of the following integers. It does not change if K is enlarged, if L is replaced by another algebraically closed extension of K, or if I is replaced by another ideal having the same zeros (that is, having the same radical). The dimension is also independent of the choice of coordinates; in other words, it does not change if the x_i are replaced by linearly independent linear combinations of them. The dimension of V is:

The maximal length d of the chains V_0 ⊂ V_1 ⊂ … ⊂ V_d of distinct nonempty (irreducible) subvarieties of V. This definition generalizes a property of the dimension of a Euclidean space or a vector space. It is thus probably the definition that gives the easiest intuitive description of the notion.

The Krull dimension of the coordinate ring A.
This is the transcription of the preceding definition into the language of commutative algebra, the Krull dimension being the maximal length of the chains p_0 ⊂ p_1 ⊂ … ⊂ p_d of prime ideals of A.

The maximal Krull dimension of the local rings at the points of V. This definition shows that the dimension is a local property if V is irreducible. If V is irreducible, it turns out that all the local rings at points of V have the same Krull dimension; thus:

If V is a variety, the Krull dimension of the local ring at any point of V. This rephrases the previous definition in more geometric language.

The maximal dimension of the tangent vector spaces at the nonsingular points of V. This relates the dimension of a variety to that of a differentiable manifold. More precisely, if V is defined over the reals, then the set of its real regular points, if it is not empty, is a differentiable manifold that has the same dimension as a variety and as a manifold.

If V is a variety, the dimension of the tangent vector space at any nonsingular point of V. This is the algebraic analogue of the fact that a connected manifold has a constant dimension. It can also be deduced from the result stated below the third definition, together with the fact that the dimension of the tangent space equals the Krull dimension at any nonsingular point (see Zariski tangent space).

The number of hyperplanes or hypersurfaces in general position which are needed to have an intersection with V that is reduced to a nonzero finite number of points. This definition is not intrinsic, as it applies only to algebraic sets that are explicitly embedded in an affine or projective space.

The maximal length of a regular sequence in the coordinate ring A. This is the algebraic translation of the preceding definition.

The difference between n and the maximal length of the regular sequences contained in I.
This is the algebraic translation of the fact that the intersection of n − d general hypersurfaces is an algebraic set of dimension d.

The degree of the Hilbert polynomial of A.

The degree of the denominator of the Hilbert series of A. This allows the dimension of the algebraic set defined by a given system of polynomial equations to be computed through a Gröbner basis computation. Moreover, the dimension is not changed if the polynomials of the Gröbner basis are replaced with their leading monomials, and if these leading monomials are replaced with their radicals (the monomials obtained by removing exponents). So:

The Krull dimension of the Stanley–Reisner ring R/J, where J is the radical of the initial ideal of I for any admissible monomial ordering (the initial ideal of I is the ideal generated by the leading monomials of the elements of I).

The dimension of the simplicial complex defined by this Stanley–Reisner ring.

If I is a prime ideal (i.e. V is an algebraic variety), the transcendence degree over K of the field of fractions of A. This makes it easy to prove that the dimension is invariant under birational equivalence.

== Dimension of a projective algebraic set ==

Let V be a projective algebraic set defined as the set of the common zeros of a homogeneous ideal I in a polynomial ring R = K[x_0, x_1, …, x_n] over a field K, and let A = R/I be the graded algebra of the polynomials over V. All the definitions of the previous section apply, with the change that, when A or I appears explicitly in the definition, the value of the dimension must be reduced by one. For example, the dimension of V is one less than the Krull dimension of A.

== Computation of the dimension ==

Given a system of polynomial equations over an algebraically closed field K, it may be difficult to compute the dimension of the algebraic set that it defines.
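As a small worked illustration of the Hilbert-series definition (a standard hand computation, not taken from the article), consider the ideal I = (xy) in K[x, y]:

```latex
% A = K[x,y]/(xy): a basis of the degree-d part is {1} for d = 0
% and {x^d, y^d} for d >= 1, so the Hilbert series is
\[
  HS_A(t) = 1 + \sum_{d \ge 1} 2\,t^d
          = 1 + \frac{2t}{1 - t}
          = \frac{1 + t}{1 - t}.
\]
% The denominator (1 - t) has degree 1, so the zero set V(xy),
% the union of the two coordinate axes, has dimension 1,
% as expected geometrically.
```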
Without further information on the system, there is only one practical method: computing a Gröbner basis and deducing the degree of the denominator of the Hilbert series of the ideal generated by the equations.

The second step, which is usually the fastest, may be accelerated in the following way. First, the Gröbner basis is replaced by the list of its leading monomials (this is already done for the computation of the Hilbert series). Then each monomial x_1^{e_1} ⋯ x_n^{e_n} is replaced by the product of the variables occurring in it, x_1^{min(e_1, 1)} ⋯ x_n^{min(e_n, 1)}. The dimension is then the maximal size of a subset S of the variables such that none of these products of variables depends only on the variables in S. This algorithm is implemented in several computer algebra systems. For example, in Maple this is the function Groebner[HilbertDimension], and in Macaulay2 it is the function dim.

== Real dimension ==

The real dimension of a set of real points, typically a semialgebraic set, is the dimension of its Zariski closure. For a semialgebraic set S, the real dimension is any of the following equal integers:

The dimension of the Zariski closure of S.
The maximal integer d such that there is a homeomorphism of [0, 1]^d into S.
The maximal integer d such that there is a projection of S onto a d-dimensional subspace with a non-empty interior.

For an algebraic set defined over the reals (that is, defined by polynomials with real coefficients), it may occur that the real dimension of the set of its real points is smaller than its dimension as an algebraic set.
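The combinatorial step just described is easy to sketch in code. The following is an illustrative Python implementation (the function name and input encoding are invented for this sketch; it is not the Maple or Macaulay2 code). Each leading monomial is represented only by its support, i.e. its set of variables, which is exactly the "radical" monomial with exponents removed:

```python
from itertools import combinations

def dimension_from_leading_monomials(variables, supports):
    """Dimension of an algebraic set, given the supports of the leading
    monomials of a Groebner basis of its defining ideal (each support is
    the set of variables of one leading monomial, exponents dropped).

    Returns the maximal size of a subset S of the variables such that no
    leading monomial depends only on variables of S; returns -1 when no
    such subset exists (the case where 1 belongs to the ideal).
    """
    supports = [frozenset(s) for s in supports]
    # Try subsets from largest to smallest; the first admissible one
    # gives the dimension.
    for k in range(len(variables), -1, -1):
        for subset in combinations(variables, k):
            s = set(subset)
            if not any(supp <= s for supp in supports):
                return k
    return -1

# Ideal (x*y) in K[x, y]: the two coordinate axes, of dimension 1.
print(dimension_from_leading_monomials(["x", "y"], [{"x", "y"}]))  # 1
# Ideal (x) in K[x, y, z]: a coordinate plane, of dimension 2.
print(dimension_from_leading_monomials(["x", "y", "z"], [{"x"}]))  # 2
```

The brute-force subset search is exponential in the number of variables; real computer algebra systems use much more refined combinatorics, but the criterion tested is the same.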
For example, the algebraic surface of equation x^2 + y^2 + z^2 = 0 is an algebraic variety of dimension two, which has only one real point, (0, 0, 0), and thus has real dimension zero.

The real dimension is more difficult to compute than the algebraic dimension. For the case of a real hypersurface (that is, the set of real solutions of a single polynomial equation), there exists a probabilistic algorithm for computing its real dimension.

== See also ==

Dimension (vector space)
Dimension theory (algebra)
Dimension of a scheme

== References ==
Wikipedia/Dimension_of_an_algebraic_variety
John Wiley & Sons, Inc., commonly known as Wiley, is an American multinational publishing company that focuses on academic publishing and instructional materials. The company was founded in 1807 and produces books, journals, and encyclopedias, in print and electronically, as well as online products and services, training materials, and educational materials for undergraduate, graduate, and continuing education students. == History == The company was established in 1807 when Charles Wiley opened a print shop in Manhattan. The company was the publisher of 19th-century American literary figures like James Fenimore Cooper, Washington Irving, Herman Melville, and Edgar Allan Poe, as well as of legal, religious, and other non-fiction titles. Wiley later shifted its focus to scientific, technical, and engineering subject areas, abandoning its literary interests. Wiley's son John (born in Flatbush, New York, October 4, 1808; died in East Orange, New Jersey, February 21, 1891) took over the business when Charles Wiley died in 1826. The firm was successively named Wiley, Lane & Co., then Wiley & Putnam, and then John Wiley. The company acquired its present name in 1876, when John's second son William H. Wiley joined his brother Charles in the business. Through the 20th century, the company expanded its publishing activities in the sciences and higher education. In 1960 Wiley set up a European branch in London, which later moved to Chichester. In 1982, Wiley acquired the publishing operations of the British firm Heyden & Son. In 1989, Wiley acquired the life science publisher Liss. In 1996, Wiley acquired the German technical publisher VCH. In 1997, Wiley acquired the professional publisher Van Nostrand Reinhold (the successor to the company started by David Van Nostrand) from Thomson Learning. In 1999, Wiley acquired the professional publisher Jossey-Bass from Pearson.
In 2001, Wiley acquired the publisher Hungry Minds (formerly IDG Books, including most titles formerly published by Macmillan General Reference) from International Data Group. In 2005, Wiley acquired the British medical publisher Whurr. Wiley marked its bicentennial in 2007. In conjunction with the anniversary, the company published Knowledge for Generations: Wiley and the Global Publishing Industry, 1807–2007, depicting Wiley's role in the evolution of publishing against a social, cultural, and economic backdrop. Wiley has also created an online community called Wiley Living History, offering excerpts from Knowledge for Generations and a forum for visitors and Wiley employees to post their comments and anecdotes. In 2021, Wiley acquired Hindawi and J&J Editorial. In 2023, Academic Partnerships acquired Wiley's online education business for $150 million. === High-growth and emerging markets === In December 2010, Wiley opened an office in Dubai. Wiley established publishing operations in India in 2006 (though it has had a sales presence since 1966), and has established a presence in North Africa through sales contracts with academic institutions in Tunisia, Libya, and Egypt. On April 16, 2012, the company announced the establishment of Wiley Brasil Editora LTDA in São Paulo, Brazil, effective May 1, 2012. === Strategic acquisition and divestiture === Wiley's scientific, technical, and medical business was expanded by the acquisition of Blackwell Publishing in February 2007 for US$1.12 billion, its largest purchase to that time. The combined business, named Scientific, Technical, Medical, and Scholarly (also known as Wiley-Blackwell), publishes, in print and online, 1,600 scholarly peer-reviewed journals and an extensive collection of books, reference works, databases, and laboratory manuals in the life and physical sciences, medicine and allied health, engineering, the humanities, and the social sciences. 
Through a backfile initiative completed in 2007, 8.2 million pages of journal content have been made available online, a collection dating back to 1799. Wiley-Blackwell also publishes on behalf of about 700 professional and scholarly societies; among them are the American Cancer Society (ACS), for which it publishes Cancer, the flagship ACS journal; the Sigma Theta Tau International Honor Society of Nursing; and the American Anthropological Association. Other journals published include Angewandte Chemie, Advanced Materials, Hepatology, International Finance and Liver Transplantation. Launched as a pilot in 1997 with fifty journals and expanded through 1998, Wiley Interscience provided online access to Wiley journals, reference works, and books, including backfile content. Journals previously from Blackwell Publishing were available online from Blackwell Synergy until they were integrated into Wiley Interscience on June 30, 2008. In December 2007, Wiley also began distributing its technical titles through the Safari Books Online e-reference service. Interscience was supplanted by Wiley Online Library in 2010. On February 17, 2012, Wiley announced the acquisition of Inscape Holdings Inc., which provides DISC assessments and training for interpersonal business skills. A month later, Wiley announced its intention to divest assets in the areas of travel (including the Frommer's brand), culinary, general interest, nautical, pets, and crafts, as well as the Webster's New World and CliffsNotes brands. The planned divestiture was aligned with Wiley's "increased strategic focus on content and services for research, learning, and professional practices, and on lifelong learning through digital technology". In May 2012, the company acquired publishing company Harlan Davidson, Inc., which is a family-owned business based in Illinois. 
On August 13 of the same year, Wiley announced it entered into a definitive agreement to sell all of its travel assets, including all of its interests in the Frommer's brand, to Google Inc. On November 6, 2012, Houghton Mifflin Harcourt acquired Wiley's cookbooks, dictionaries and study guides. In 2013, Wiley sold its pets, crafts and general interest lines to Turner Publishing Company and its nautical line to Fernhurst Books. HarperCollins acquired parts of Wiley Canada's trade operations in 2013; the remaining Canadian trade operations were merged into Wiley U.S. In 2021, Wiley acquired the Hindawi publishing firm for $298 million in cash to expand its open access journals portfolio. Wiley stated it would keep the Hindawi journals under their previous brand and continue developing the open source publishing platform Phenom. In 2023, after more than 7,000 article retractions in Hindawi journals related to the publication of articles originating from paper mills, Wiley announced that it would cease using the Hindawi brand and would integrate Hindawi's 200 remaining journals into its main portfolio. The Wiley CEO who initiated the Hindawi acquisition stepped down in the wake of those announcements. In 2021, Wiley announced the acquisition of eJournalPress (EJP), a provider of web-based technology solutions for scholarly publishing companies. == Products == === Brands and partnerships === Wiley's Professional Development brands include For Dummies, Jossey-Bass, Pfeiffer, Wrox Press, J.K. Lasser, Sybex, Fisher Investments Press, and Bloomberg Press. The STMS business is also known as Wiley-Blackwell, formed following the acquisition of Blackwell Publishing in February 2007. Brands include The Cochrane Library and more than 1,500 journals.
Wiley has publishing alliances with partners including Microsoft, CFA Institute, the Culinary Institute of America, the American Institute of Architects, the National Geographic Society, and the Institute of Electrical and Electronics Engineers (IEEE). Wiley-Blackwell also publishes journals on behalf of more than 700 professional and scholarly society partners including the New York Academy of Sciences, American Cancer Society, The Physiological Society, British Ecological Society, American Association of Anatomists, Society for the Psychological Study of Social Issues and The London School of Economics and Political Science, making it the world's largest society publisher. Wiley partners with GreyCampus to provide professional learning solutions around big data and digital literacy. Wiley has also partnered with five other higher-education publishers to create CourseSmart, a company developed to sell college textbooks in eTextbook format on a common platform. In 2002, Wiley created a partnership with French publisher Anuman Interactive to launch a series of e-books adapted from the For Dummies collection. In 2013, Wiley partnered with American Graphics Institute to create an online education video and e-book subscription service called The Digital Classroom. In 2016, Wiley launched a worldwide partnership with Christian H. Cooper to create a program for candidates taking the Financial Risk Manager exam offered by the Global Association of Risk Professionals. The program will be built on the existing Wiley efficient learning platform and Christian's legacy Financial Risk Manager product. The partnership is built on the view that the FRM designation will rapidly grow to be one of the premier financial designations for practitioners, tracking the growth of the Chartered Financial Analyst designation.
The program will serve tens of thousands of FRM candidates worldwide and is based on the adaptive learning technology of Wiley's efficient learning platform and Christian's unique writing style and legacy book series. With the integration of digital technology and the traditional print medium, Wiley has stated that in the near future its customers will be able to search across all its content regardless of original medium and assemble a custom product in the format of choice. Web resources are also enabling new types of publisher-customer interactions within the company's various businesses. === Open access === In 2016, Wiley started a collaboration with the open access publisher Hindawi to help convert nine Wiley journals to full open access. In 2018 a further announcement was made indicating that the Wiley-Hindawi collaboration would launch an additional four new fully open access journals. On January 18, 2019, Wiley signed a contract with Project DEAL to begin open access to its academic journals for more than 700 academic institutions. It is the first contract between a publisher and a leading research nation (Germany) toward open access to scientific research. === Higher education === Higher Education's "WileyPLUS" is an online product that combines electronic versions of texts with media resources and tools for instructors and students. It is intended to provide a single source from which instructors can manage their courses, create presentations, and assign and grade homework and tests; students can receive hints and explanations as they work on homework, and link back to relevant sections of the text. "Wiley Custom Select" launched in February 2009 as a custom textbook system allowing instructors to combine content from different Wiley textbooks and lab manuals and add in their own material. The company has begun to make content from its STMS business available to instructors through the system, with content from its Professional/Trade business to follow. 
In September 2019, Wiley entered into a collaboration with IIM Lucknow to offer analytics courses for finance executives. === Online Program Management === In November 2011, Wiley Education Services announced the purchase of Deltak for $220 million. Wiley later acquired The Learning House in 2018. This made Wiley one of the largest OPM providers at the time, with 60 university partners and more than 700 online programs. In June 2023, Wiley announced they would divest several business units, including Wiley University Services. The unit's 2023 full-year revenue was $208 million, an 8% reduction from the prior year. In 2020, Wiley reported $232 million in OPM revenue with organic growth of 11% compared to the prior year. In November 2023, Academic Partnerships announced they would purchase Wiley's OPM business for $110 million. === Medicine === In January 2008, Wiley launched a new version of its evidence-based medicine (EBM) product, InfoPOEMs with InfoRetriever, under the name Essential Evidence Plus, providing primary-care clinicians with point-of-care access to the most extensive source of EBM information via their PDAs/handheld devices and desktop computers. Essential Evidence Plus includes the InfoPOEMs daily EBM content alerting service and two new content resources—EBM Guidelines, a collection of practice guidelines, evidence summaries, and images, and e-Essential Evidence, a reference for general practitioners, nurses, and physician assistants providing first-contact care. === Architecture and design === In October 2008, Wiley launched a new online service providing continuing education units (CEU) and professional development hour (PDH) credits to architects and designers. The initial courses are adapted from Wiley books, extending their reach into the digital space. Wiley is an accredited AIA continuing education provider.
=== Wiley Online Library === Wiley Online Library is a subscription-based library of John Wiley & Sons that launched on August 7, 2010, replacing Wiley Interscience. It is a collection of online resources covering life, health, and physical sciences as well as social science and the humanities. To its members, Wiley Online Library delivers access to over 4 million articles from 1,600 journals, more than 22,000 books, and hundreds of reference works, laboratory protocols, and databases from John Wiley & Sons and its imprints, including Wiley-Blackwell, Wiley-VCH, and Jossey-Bass. The online library is implemented on top of the Literatum platform, developed by Atypon, which Wiley acquired in 2016. == Corporate structure == === Governance and operations === While the company is led by an independent management team and Board of Directors, the involvement of the Wiley family is ongoing, with sixth-generation members (and siblings) Peter Booth Wiley as the non-executive chairman of the board and Bradford Wiley II as a Director and past chairman of the board. Seventh-generation members Jesse and Nate Wiley work in the company's Professional/Trade and Scientific, Technical, Medical, and Scholarly businesses, respectively. Wiley has been publicly owned since 1962, and listed on the New York Stock Exchange since 1995; its stock is traded under the symbols NYSE: WLY (for its Class A stock) and NYSE: WLYB (for its Class B stock). Wiley's operations are organized into three business divisions: Scientific, Technical, Medical, and Scholarly (STMS), also known as Wiley-Blackwell; Professional Development; and Global Education. The company has approximately 10,000 employees worldwide, with headquarters in Hoboken, New Jersey, since 2002. === Corporate culture === In 2008, Wiley was named for the second consecutive year to Forbes magazine's annual list of the "400 Best Big Companies in America".
In 2007, Book Business magazine cited Wiley as "One of the 20 Best Book Publishing Companies to Work For". For two consecutive years, 2006 and 2005, Fortune magazine named Wiley one of the "100 Best Companies to Work For". Wiley Canada was named to Canadian Business magazine's 2006 list of "Best Workplaces in Canada", and Wiley Australia has received the Australian government's "Employer of Choice for Women" citation every year since its inception in 2001. In 2004, Wiley was named to the U.S. Environmental Protection Agency's "Best Workplaces for Commuters" list. Working Mother magazine in 2003 listed Wiley as one of the "100 Best Companies for Working Mothers", and that same year, the company received the Enterprise Award from the New Jersey Business & Industry Association in recognition of its contribution to the state's economic growth. In 1998, Financial Times selected Wiley as one of the "most respected companies" with a "strong and well thought out strategy" in its global survey of CEOs. In August 2009, the company announced a proposed reduction of Wiley-Blackwell staff in content management operations in the UK and Australia by approximately 60, in conjunction with an increase of staff in Asia. In March 2010, it announced a similar reorganization of its Wiley-Blackwell central marketing operations that would lay off approximately 40 employees. The company's position was that the primary goal of this restructuring was to increase workflow efficiency. In June 2012, it announced the proposed closing of its Edinburgh facility in June 2013 with the intention of relocating journal content management activities currently performed there to Oxford and Asia. The move would lay off approximately 50 employees. Wiley is a signatory of the SDG Publishers Compact, and has taken steps to support the achievement of the Sustainable Development Goals (SDGs) in the publishing industry. These include becoming carbon neutral and supporting reforestation. 
Wiley's Natural Resources Forum was one of six out of 100 journals to receive the highest possible "Five Wheel" impact rating from an SDG Impact Intensity journal rating system analyzing data from 2016 to 2020. === Gender pay gap === Wiley reported a mean 2017 gender pay gap of 21.1% for its UK workforce, while the median was 21.5%. The gender bonus gaps are far higher, at 50.7% for the median measure and 42.3% for the mean. Wiley said: "Our mean and median bonus gaps are driven by our highest earners, who are predominantly male." == Controversies == === Forced inclusion of authors' content in AI LLMs === In August 2024, it was reported that Wiley was projected to earn $44 million (£33 million) from partnerships with artificial intelligence (AI) firms that utilize authors' content to train large language models (LLMs). Authors are not provided with an opt-out option for these deals. === Journal protests === In 2020, the entire editorial board of the European Law Journal resigned over a dispute about contract terms and the behavior of its publisher, Wiley. Wiley did not allow the editorial board members to decide on editorial appointments and decisions. A majority of the editorial board of the journal Diversity & Distributions resigned in 2018 after Wiley allegedly blocked the publication of a letter protesting the publisher's decision to make the journal entirely open access. === Publication practices === According to Retraction Watch, Wiley makes some articles disappear from its journals without any explanation. === Manipulation of bibliometrics === According to Goodhart's law and concerned academics such as the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics such as the journal impact factor, which is often used as a proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics.
Five Wiley journals, which exhibited unusual levels of self-citation, had their 2019 journal impact factors suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total. === Publication of "Paper Mill" generated papers === In April 2022, the journal Science revealed that a Ukrainian company, International Publisher Ltd., run by Ksenia Badziun, operates a Russian website where academics can purchase authorships in soon-to-be-published academic papers. Over a two-year period, researchers found that at least 419 articles "appeared to match manuscripts that later appeared in dozens of different journals" and that "more than 100 of these identified papers were published in 68 journals run by established publishers, including Elsevier, Oxford University Press, Springer Nature, Taylor & Francis, Wolters Kluwer, and Wiley-Blackwell." Wiley-Blackwell stated that it was examining the specific papers that were identified and brought to its attention. In 2024, Wiley closed down 19 of the about 250 journals it had acquired in the Hindawi deal, after retracting "more than 11,300 'compromised' studies over the past two years"; Wiley had earlier shuttered four journals for publishing fake articles coming from paper mills. === COI between climate research and fossil fuel industry === Wiley is a publisher of climate change research, but also publishes a journal dedicated to fossil fuel exploration. Climate scientists are concerned that this conflict of interest could undermine the credibility of climate science because they believe that fossil fuel extraction and climate action are incompatible. == Copyright cases == === Hindawi case === In 2021, Wiley purchased the open-access publisher Hindawi. Shortly after, many articles published by Hindawi were retracted, and Scopus removed all of them from its database.
=== Photographer copyrights === A 2013 lawsuit brought by a stock photo agency for alleged violation of a 1997 license was dismissed for procedural reasons. A 2014 ruling by the District Court for the Southern District of New York, later affirmed by the Second Circuit, says that Wiley infringed on the copyright of photographer Tom Bean by using his photos beyond the scope of the license it had purchased. The case was connected to a larger set of copyright infringement cases brought by photo agency DRK against various publishers. A 2015 9th Circuit Court of Appeals opinion established that another photo agency had standing to sue Wiley for its usage of photos beyond the scope of the license acquired. === Used books === In 2018, a Southern District of New York court upheld the award of over $39 million to Wiley and other textbook publishers in litigation against Book Dog Books, a re-seller of used books which was found to hold and distribute counterfeit copies. The court found that circumstantial evidence was sufficient to establish distribution of the 116 titles for which counterfeit copies had been presented, as well as of 5 other titles. It also found that unchallenged testimony on how the publishers usually acquired licenses from authors was sufficient to establish the publishers' copyright on the books in question. === Kirtsaeng v. John Wiley & Sons === In 2008, John Wiley & Sons filed suit against Thailand native Supap Kirtsaeng over the sale of textbooks made outside of the United States and then imported into the country. In 2013, the U.S. Supreme Court held 6–3 that the first-sale doctrine applied to copies of copyrighted works made and sold abroad at lower prices, reversing the Second Circuit decision which had favored Wiley.
=== Internet Archive lawsuit === In June 2020, Wiley was one of a group of publishers who sued the Internet Archive, arguing that its collection of e-books was denying authors and publishers revenue and accusing the library of "willful mass copyright infringement". == Antitrust cases == In September 2024, Lucina Uddin, a neuroscience professor at UCLA, sued John Wiley & Sons along with five other academic journal publishers in a proposed class-action lawsuit, alleging that the publishers violated antitrust law by agreeing not to compete against each other for manuscripts and by denying scholars payment for peer review services. == References == == Further reading == The First One Hundred and Fifty Years: A History of John Wiley and Sons Incorporated 1807–1957. New York: John Wiley & Sons. 1957. Moore, John Hammond (1982). Wiley: One Hundred and Seventy Five Years of Publishing. New York: John Wiley & Sons. ISBN 978-0-471-86082-2. Munroe, Mary H. (2004). "John Wiley Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on October 20, 2014 – via Northern Illinois University. Wiley, Peter Booth; Chaves, Frances; Grolier Club (2010). John Wiley & Sons: 200 years of publishing (PDF). Hoboken, NJ: John Wiley & Sons. Wright, Robert E.; Jacobson, Timothy C.; Smith, George David (2007). Knowledge for Generations: Wiley and the Global Publishing Industry, 1807–2007. Hoboken, New Jersey: John Wiley & Sons. ISBN 978-0-471-75721-4. == External links == Official website
Wikipedia/Wiley-Interscience
In commutative algebra and algebraic geometry, elimination theory is the classical name for algorithmic approaches to eliminating some variables between polynomials of several variables, in order to solve systems of polynomial equations. Classical elimination theory culminated with the work of Francis Macaulay on multivariate resultants, as described in the chapter on Elimination theory in the first editions (1930) of Bartel van der Waerden's Moderne Algebra. After that, elimination theory was ignored by most algebraic geometers for almost thirty years, until the introduction of new methods for solving polynomial equations, such as Gröbner bases, which were needed for computer algebra. == History and connection to modern theories == The field of elimination theory was motivated by the need for methods for solving systems of polynomial equations. One of the first results was Bézout's theorem, which bounds the number of solutions (in the case of two polynomials in two variables, in Bézout's time). Apart from Bézout's theorem, the general approach was to eliminate variables in order to reduce the problem to a single equation in one variable. The case of linear equations was completely solved by Gaussian elimination, whereas the older method of Cramer's rule does not proceed by elimination and works only when the number of equations equals the number of variables. In the 19th century, this was extended to linear Diophantine equations and abelian groups with the Hermite normal form and Smith normal form. Before the 20th century, different types of eliminants were introduced, including resultants and various kinds of discriminants. In general, these eliminants are invariant under various changes of variables, and they are also fundamental in invariant theory. All these concepts are effective, in the sense that their definitions include a method of computation. 
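The linear case lends itself to a concrete sketch. The following illustrative pure-Python routine (the function name is ours, not a standard API) eliminates one variable per step by forward elimination and then recovers the solution by back-substitution, using exact rational arithmetic:

```python
from fractions import Fraction

def gaussian_elimination(A, b):
    """Solve the square system A x = b by eliminating one variable per step."""
    n = len(A)
    # augmented matrix with exact rational entries
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for col in range(n):
        # pivot: find a row with a nonzero entry in this column
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # eliminate the variable `col` from all later equations
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    # back-substitution on the triangular system
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 5 and x - y = 1 have the unique solution x = 2, y = 1
print(gaussian_elimination([[2, 1], [1, -1]], [5, 1]))
```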
Around 1890, David Hilbert introduced non-effective methods, and this was seen as a revolution that led most algebraic geometers of the first half of the 20th century to try to "eliminate elimination". Nevertheless, Hilbert's Nullstellensatz may be considered to belong to elimination theory, as it asserts that a system of polynomial equations has no solution if and only if one may eliminate all unknowns to obtain the constant equation 1 = 0. Elimination theory culminated with the work of Leopold Kronecker, and finally Macaulay, who introduced multivariate resultants and U-resultants, providing complete elimination methods for systems of polynomial equations, which are described in the chapter on Elimination theory in the first editions (1930) of van der Waerden's Moderne Algebra. Later, elimination theory was considered old-fashioned and was removed from subsequent editions of Moderne Algebra. It was generally ignored until the introduction of computers, and more specifically of computer algebra, which again made relevant the design of efficient elimination algorithms, rather than merely existence and structural results. The main methods for this renewal of elimination theory are Gröbner bases and cylindrical algebraic decomposition, introduced around 1970. == Connection to logic == There is also a logical facet to elimination theory, as seen in the Boolean satisfiability problem. In the worst case, it is presumably computationally hard to eliminate variables. Quantifier elimination is a term used in mathematical logic to express that, in some theories, every formula is equivalent to a formula without quantifiers. This is the case for the theory of polynomials over an algebraically closed field, where elimination theory may be viewed as the theory of methods for making quantifier elimination algorithmically effective. Quantifier elimination over the reals is another example, which is fundamental in computational algebraic geometry. 
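A minimal, standard illustration of quantifier elimination over the reals: for a monic quadratic, the existential statement that a real root exists is equivalent to a quantifier-free condition on the coefficients,

```latex
\exists x \, \left( x^{2} + bx + c = 0 \right) \iff b^{2} - 4c \geq 0 .
```

Eliminating the quantified variable leaves a polynomial inequality in the free variables b and c, which is exactly the kind of output that Tarski's procedure and cylindrical algebraic decomposition produce in general.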
== See also == Buchberger's algorithm Faugère's F4 and F5 algorithms Resultant Triangular decomposition Main theorem of elimination theory == References == Israel Gelfand, Mikhail Kapranov, Andrey Zelevinsky, Discriminants, resultants, and multidimensional determinants. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1994. x+523 pp. ISBN 0-8176-3660-9 Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 David Cox, John Little, Donal O'Shea, Using Algebraic Geometry. Revised second edition. Graduate Texts in Mathematics, vol. 185. Springer-Verlag, 2005, xii+558 pp., ISBN 978-0-387-20733-9
Wikipedia/Elimination_theory
In mathematics, a parametric equation expresses several quantities, such as the coordinates of a point, as functions of one or several variables called parameters. In the case of a single parameter, parametric equations are commonly used to express the trajectory of a moving point, in which case, the parameter is often, but not necessarily, time, and the point describes a curve, called a parametric curve. In the case of two parameters, the point describes a surface, called a parametric surface. In all cases, the equations are collectively called a parametric representation, or parametric system, or parameterization (also spelled parametrization, parametrisation) of the object. For example, the equations x = cos ⁡ t y = sin ⁡ t {\displaystyle {\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}} form a parametric representation of the unit circle, where t is the parameter: A point (x, y) is on the unit circle if and only if there is a value of t such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors: ( x , y ) = ( cos ⁡ t , sin ⁡ t ) . {\displaystyle (x,y)=(\cos t,\sin t).} Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations. In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.). 
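As a minimal sketch (standard library only; the function name is ours), sampling the parameter of the representation above generates points that all satisfy the circle's implicit equation x^2 + y^2 = 1:

```python
import math

def circle_point(t):
    """Parametric point (cos t, sin t) on the unit circle."""
    return (math.cos(t), math.sin(t))

# every parameter value lands on the curve x^2 + y^2 = 1
for k in range(12):
    x, y = circle_point(2 * math.pi * k / 12)
    assert abs(x**2 + y**2 - 1) < 1e-12
```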
Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled t; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve. == Implicitization == Converting a set of parametric equations to a single implicit equation involves eliminating the variable t from the simultaneous equations x = f ( t ) , y = g ( t ) . {\displaystyle x=f(t),\ y=g(t).} This process is called implicitization. If one of these equations can be solved for t, the expression obtained can be substituted into the other equation to obtain an equation involving x and y only: Solving y = g ( t ) {\displaystyle y=g(t)} to obtain t = g − 1 ( y ) {\displaystyle t=g^{-1}(y)} and using this in x = f ( t ) {\displaystyle x=f(t)} gives the explicit equation x = f ( g − 1 ( y ) ) , {\displaystyle x=f(g^{-1}(y)),} while more complicated cases will give an implicit equation of the form h ( x , y ) = 0. {\displaystyle h(x,y)=0.} If the parametrization is given by rational functions x = p ( t ) r ( t ) , y = q ( t ) r ( t ) , {\displaystyle x={\frac {p(t)}{r(t)}},\qquad y={\frac {q(t)}{r(t)}},} where p, q, and r are set-wise coprime polynomials, a resultant computation allows one to implicitize. More precisely, the implicit equation is the resultant with respect to t of xr(t) – p(t) and yr(t) – q(t). In higher dimensions (either more than two coordinates or more than one parameter), the implicitization of rational parametric equations may be done with Gröbner basis computation; see Gröbner basis § Implicitization in higher dimension. 
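The resultant-based implicitization just described can be sketched in miniature. The snippet below (illustrative; the helper names `det` and `implicit_value` are ours) takes the rational parametrization x = (1 − t^2)/(1 + t^2), y = 2t/(1 + t^2) of the unit circle, forms the Sylvester matrix of x·r(t) − p(t) and y·r(t) − q(t) with respect to t, and evaluates its determinant:

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row (fine for 4x4)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def implicit_value(x, y):
    """Resultant w.r.t. t of x*r(t) - p(t) and y*r(t) - q(t), where
    p = 1 - t^2, q = 2t, r = 1 + t^2 (rational unit-circle parametrization).
    Coefficient lists run from the t^2 term down to the constant term."""
    f = [x + 1, 0, x - 1]   # x*(1 + t^2) - (1 - t^2)
    g = [y, -2, y]          # y*(1 + t^2) - 2t
    sylvester = [f + [0], [0] + f, g + [0], [0] + g]
    return det(sylvester)

# the determinant vanishes exactly on the implicit curve x^2 + y^2 = 1
assert implicit_value(Fraction(3, 5), Fraction(4, 5)) == 0  # point on the circle
assert implicit_value(0, 2) != 0                            # point off the circle
```

Here the resultant works out to the polynomial 4(x^2 + y^2 − 1), a constant multiple of the implicit equation of the circle.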
To take the example of the circle of radius a, the parametric equations x = a cos ⁡ ( t ) y = a sin ⁡ ( t ) {\displaystyle {\begin{aligned}x&=a\cos(t)\\y&=a\sin(t)\end{aligned}}} can be implicitized in terms of x and y by way of the Pythagorean trigonometric identity. With x a = cos ⁡ ( t ) y a = sin ⁡ ( t ) {\displaystyle {\begin{aligned}{\frac {x}{a}}&=\cos(t)\\{\frac {y}{a}}&=\sin(t)\\\end{aligned}}} and cos ⁡ ( t ) 2 + sin ⁡ ( t ) 2 = 1 , {\displaystyle \cos(t)^{2}+\sin(t)^{2}=1,} we get ( x a ) 2 + ( y a ) 2 = 1 , {\displaystyle \left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{a}}\right)^{2}=1,} and thus x 2 + y 2 = a 2 , {\displaystyle x^{2}+y^{2}=a^{2},} which is the standard equation of a circle centered at the origin. == Parametric plane curves == === Parabola === The simplest equation for a parabola, y = x 2 {\displaystyle y=x^{2}} can be (trivially) parameterized by using a free parameter t, and setting x = t , y = t 2 f o r − ∞ < t < ∞ . {\displaystyle x=t,y=t^{2}\quad \mathrm {for} -\infty <t<\infty .} === Explicit equations === More generally, any curve given by an explicit equation y = f ( x ) {\displaystyle y=f(x)} can be (trivially) parameterized by using a free parameter t, and setting x = t , y = f ( t ) f o r − ∞ < t < ∞ . {\displaystyle x=t,y=f(t)\quad \mathrm {for} -\infty <t<\infty .} === Circle === A more sophisticated example is the following. Consider the unit circle which is described by the ordinary (Cartesian) equation x 2 + y 2 = 1. {\displaystyle x^{2}+y^{2}=1.} This equation can be parameterized as follows: ( x , y ) = ( cos ⁡ ( t ) , sin ⁡ ( t ) ) f o r 0 ≤ t < 2 π . {\displaystyle (x,y)=(\cos(t),\;\sin(t))\quad \mathrm {for} \ 0\leq t<2\pi .} With the Cartesian equation it is easier to check whether a point lies on the circle or not. With the parametric version it is easier to obtain points on a plot. 
In some contexts, parametric equations involving only rational functions (that is fractions of two polynomials) are preferred, if they exist. In the case of the circle, such a rational parameterization is x = 1 − t 2 1 + t 2 y = 2 t 1 + t 2 . {\displaystyle {\begin{aligned}x&={\frac {1-t^{2}}{1+t^{2}}}\\y&={\frac {2t}{1+t^{2}}}\,.\end{aligned}}} With this pair of parametric equations, the point (−1, 0) is not represented by a real value of t, but by the limit of x and y when t tends to infinity. === Ellipse === An ellipse in canonical position (center at origin, major axis along the x-axis) with semi-axes a and b can be represented parametrically as x = a cos ⁡ t y = b sin ⁡ t . {\displaystyle {\begin{aligned}x&=a\,\cos t\\y&=b\,\sin t\,.\end{aligned}}} An ellipse in general position can be expressed as x = X c + a cos ⁡ t cos ⁡ φ − b sin ⁡ t sin ⁡ φ y = Y c + a cos ⁡ t sin ⁡ φ + b sin ⁡ t cos ⁡ φ {\displaystyle {\begin{alignedat}{4}x={}&&X_{\mathrm {c} }&+a\,\cos t\,\cos \varphi {}&&-b\,\sin t\,\sin \varphi \\y={}&&Y_{\mathrm {c} }&+a\,\cos t\,\sin \varphi {}&&+b\,\sin t\,\cos \varphi \end{alignedat}}} as the parameter t varies from 0 to 2π. Here (Xc , Yc) is the center of the ellipse, and φ is the angle between the x-axis and the major axis of the ellipse. Both parameterizations may be made rational by using the tangent half-angle formula and setting tan ⁡ t 2 = u . {\textstyle \tan {\frac {t}{2}}=u\,.} === Lissajous curve === A Lissajous curve is similar to an ellipse, but the x and y sinusoids are not in phase. In canonical position, a Lissajous curve is given by x = a cos ⁡ ( k x t ) y = b sin ⁡ ( k y t ) {\displaystyle {\begin{aligned}x&=a\,\cos(k_{x}t)\\y&=b\,\sin(k_{y}t)\end{aligned}}} where kx and ky are constants describing the number of lobes of the figure. 
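As a small exact-arithmetic sketch (the function name is ours), the rational parametrization of the circle given above sends every rational value of t to a rational point of the unit circle; the excluded point (−1, 0) appears only as the limit t → ∞:

```python
from fractions import Fraction

def rational_circle_point(t):
    """Rational parametrization of the unit circle: a rational t gives a
    rational point on x^2 + y^2 = 1 (the point (-1, 0) is never reached)."""
    t = Fraction(t)
    d = 1 + t**2
    return ((1 - t**2) / d, 2 * t / d)

for t in [0, 1, Fraction(1, 2), -3]:
    x, y = rational_circle_point(t)
    assert x**2 + y**2 == 1   # exact equality, not just approximate
```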
=== Hyperbola === An east-west opening hyperbola can be represented parametrically by x = a sec ⁡ t + h y = b tan ⁡ t + k , {\displaystyle {\begin{aligned}x&=a\sec t+h\\y&=b\tan t+k\,,\end{aligned}}} or, rationally x = a 1 + t 2 1 − t 2 + h y = b 2 t 1 − t 2 + k . {\displaystyle {\begin{aligned}x&=a{\frac {1+t^{2}}{1-t^{2}}}+h\\y&=b{\frac {2t}{1-t^{2}}}+k\,.\end{aligned}}} A north-south opening hyperbola can be represented parametrically as x = b tan ⁡ t + h y = a sec ⁡ t + k , {\displaystyle {\begin{aligned}x&=b\tan t+h\\y&=a\sec t+k\,,\end{aligned}}} or, rationally x = b 2 t 1 − t 2 + h y = a 1 + t 2 1 − t 2 + k . {\displaystyle {\begin{aligned}x&=b{\frac {2t}{1-t^{2}}}+h\\y&=a{\frac {1+t^{2}}{1-t^{2}}}+k\,.\end{aligned}}} In all these formulae (h , k) are the center coordinates of the hyperbola, a is the length of the semi-major axis, and b is the length of the semi-minor axis. Note that in the rational forms of these formulae, the points (−a , 0) and (0 , −a), respectively, are not represented by a real value of t, but are the limit of x and y as t tends to infinity. === Hypotrochoid === A hypotrochoid is a curve traced by a point attached to a circle of radius r rolling around the inside of a fixed circle of radius R, where the point is at a distance d from the center of the interior circle. The parametric equations for the hypotrochoids are: x ( θ ) = ( R − r ) cos ⁡ θ + d cos ⁡ ( R − r r θ ) y ( θ ) = ( R − r ) sin ⁡ θ − d sin ⁡ ( R − r r θ ) . {\displaystyle {\begin{aligned}x(\theta )&=(R-r)\cos \theta +d\cos \left({R-r \over r}\theta \right)\\y(\theta )&=(R-r)\sin \theta -d\sin \left({R-r \over r}\theta \right)\,.\end{aligned}}} Some examples: == Parametric space curves == === Helix === Parametric equations are convenient for describing curves in higher-dimensional spaces. 
For example: x = a cos ⁡ ( t ) y = a sin ⁡ ( t ) z = b t {\displaystyle {\begin{aligned}x&=a\cos(t)\\y&=a\sin(t)\\z&=bt\,\end{aligned}}} describes a three-dimensional curve, the helix, with a radius of a and rising by 2πb units per turn. The equations are identical in the plane to those for a circle. Such expressions as the one above are commonly written as r ( t ) = ( x ( t ) , y ( t ) , z ( t ) ) = ( a cos ⁡ ( t ) , a sin ⁡ ( t ) , b t ) , {\displaystyle {\begin{aligned}\mathbf {r} (t)&=(x(t),y(t),z(t))\\&=(a\cos(t),a\sin(t),bt)\,,\end{aligned}}} where r is a three-dimensional vector. === Parametric surfaces === A torus with major radius R and minor radius r may be defined parametrically as x = cos ⁡ ( t ) ( R + r cos ⁡ ( u ) ) , y = sin ⁡ ( t ) ( R + r cos ⁡ ( u ) ) , z = r sin ⁡ ( u ) . {\displaystyle {\begin{aligned}x&=\cos(t)\left(R+r\cos(u)\right),\\y&=\sin(t)\left(R+r\cos(u)\right),\\z&=r\sin(u)\,.\end{aligned}}} where the two parameters t and u both vary between 0 and 2π. As u varies from 0 to 2π the point on the surface moves about a short circle passing through the hole in the torus. As t varies from 0 to 2π the point on the surface moves about a long circle around the hole in the torus. === Straight line === The parametric equation of the line through the point ( x 0 , y 0 , z 0 ) {\displaystyle \left(x_{0},y_{0},z_{0}\right)} and parallel to the vector a i ^ + b j ^ + c k ^ {\displaystyle a{\hat {\mathbf {i} }}+b{\hat {\mathbf {j} }}+c{\hat {\mathbf {k} }}} is x = x 0 + a t y = y 0 + b t z = z 0 + c t {\displaystyle {\begin{aligned}x&=x_{0}+at\\y&=y_{0}+bt\\z&=z_{0}+ct\end{aligned}}} == Applications == === Kinematics === In kinematics, objects' paths through space are commonly described as parametric curves, with each spatial coordinate depending explicitly on an independent parameter (usually time). Used in this way, the set of parametric equations for the object's coordinates collectively constitute a vector-valued function for position. 
Such parametric curves can then be integrated and differentiated termwise. Thus, if a particle's position is described parametrically as r ( t ) = ( x ( t ) , y ( t ) , z ( t ) ) , {\displaystyle \mathbf {r} (t)=(x(t),y(t),z(t))\,,} then its velocity can be found as v ( t ) = r ′ ( t ) = ( x ′ ( t ) , y ′ ( t ) , z ′ ( t ) ) , {\displaystyle {\begin{aligned}\mathbf {v} (t)&=\mathbf {r} '(t)\\&=(x'(t),y'(t),z'(t))\,,\end{aligned}}} and its acceleration as a ( t ) = v ′ ( t ) = r ″ ( t ) = ( x ″ ( t ) , y ″ ( t ) , z ″ ( t ) ) . {\displaystyle {\begin{aligned}\mathbf {a} (t)&=\mathbf {v} '(t)=\mathbf {r} ''(t)\\&=(x''(t),y''(t),z''(t))\,.\end{aligned}}} === Computer-aided design === Another important use of parametric equations is in the field of computer-aided design (CAD). For example, consider the following three representations, all of which are commonly used to describe planar curves. Each representation has advantages and drawbacks for CAD applications. The explicit representation may be very complicated, or even may not exist. Moreover, it does not behave well under geometric transformations, and in particular under rotations. On the other hand, as a parametric equation and an implicit equation may easily be deduced from an explicit representation, when a simple explicit representation exists, it has the advantages of both other representations. Implicit representations may make it difficult to generate points on the curve, and even to decide whether there are real points. On the other hand, they are well suited for deciding whether a given point is on a curve, or whether it is inside or outside of a closed curve. Such decisions may be difficult with a parametric representation, but parametric representations are best suited for generating points on a curve, and for plotting it. === Integer geometry === Numerous problems in integer geometry can be solved using parametric equations. 
A classical such solution is Euclid's parametrization of right triangles such that the lengths of their sides a, b and their hypotenuse c are coprime integers. As a and b are not both even (otherwise a, b and c would not be coprime), one may exchange them so that a is even, and the parameterization is then a = 2 m n b = m 2 − n 2 c = m 2 + n 2 , {\displaystyle {\begin{aligned}a&=2mn\\b&=m^{2}-n^{2}\\c&=m^{2}+n^{2}\,,\end{aligned}}} where the parameters m and n are positive coprime integers that are not both odd. By multiplying a, b and c by an arbitrary positive integer, one gets a parametrization of all right triangles whose three sides have integer lengths. == Underdetermined linear systems == A system of m linear equations in n unknowns is underdetermined if it has more than one solution. This occurs when the matrix of the system and its augmented matrix have the same rank r and r < n. In this case, one can select n − r unknowns as parameters and represent all solutions as a parametric equation where all unknowns are expressed as linear combinations of the selected ones. That is, if the unknowns are x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} one can reorder them for expressing the solutions as x 1 = β 1 + ∑ j = r + 1 n α 1 , j x j ⋮ x r = β r + ∑ j = r + 1 n α r , j x j x r + 1 = x r + 1 ⋮ x n = x n . {\displaystyle {\begin{aligned}x_{1}&=\beta _{1}+\sum _{j=r+1}^{n}\alpha _{1,j}x_{j}\\\vdots \\x_{r}&=\beta _{r}+\sum _{j=r+1}^{n}\alpha _{r,j}x_{j}\\x_{r+1}&=x_{r+1}\\\vdots \\x_{n}&=x_{n}.\end{aligned}}} Such a parametric equation is called a parametric form of the solution of the system. The standard method for computing a parametric form of the solution is to use Gaussian elimination for computing a reduced row echelon form of the augmented matrix. 
Then the unknowns that can be used as parameters are the ones that correspond to columns not containing any leading entry (that is, the leftmost nonzero entry in a row of the matrix), and the parametric form can be straightforwardly deduced. == See also == Curve Parametric estimating Position vector Vector-valued function Parametrization by arc length Parametric derivative == Notes == == External links == Web application to draw parametric curves on the plane
Wikipedia/Parametric_equation
In graph theory, the degree (or valency) of a vertex of a graph is the number of edges that are incident to the vertex; in a multigraph, a loop contributes 2 to a vertex's degree, for the two ends of the edge. The degree of a vertex v {\displaystyle v} is denoted deg ⁡ ( v ) {\displaystyle \deg(v)} or deg ⁡ v {\displaystyle \deg v} . The maximum degree of a graph G {\displaystyle G} is denoted by Δ ( G ) {\displaystyle \Delta (G)} , and is the maximum of G {\displaystyle G} 's vertices' degrees. The minimum degree of a graph is denoted by δ ( G ) {\displaystyle \delta (G)} , and is the minimum of G {\displaystyle G} 's vertices' degrees. In the multigraph shown on the right, the maximum degree is 5 and the minimum degree is 0. In a regular graph, every vertex has the same degree, and so we can speak of the degree of the graph. A complete graph (denoted K n {\displaystyle K_{n}} , where n {\displaystyle n} is the number of vertices in the graph) is a special kind of regular graph where all vertices have the maximum possible degree, n − 1 {\displaystyle n-1} . In a signed graph, the number of positive edges incident to the vertex v {\displaystyle v} is called its positive degree, and the number of incident negative edges its negative degree. == Handshaking lemma == The degree sum formula states that, given a graph G = ( V , E ) {\displaystyle G=(V,E)} , ∑ v ∈ V deg ⁡ ( v ) = 2 | E | {\displaystyle \sum _{v\in V}\deg(v)=2|E|\,} . The formula implies that in any undirected graph, the number of vertices with odd degree is even. This statement (as well as the degree sum formula) is known as the handshaking lemma. The latter name comes from a popular mathematical problem, which is to prove that in any group of people, the number of people who have shaken hands with an odd number of other people from the group is even. 
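The degree sum formula is easy to verify computationally. A minimal sketch (names ours) over a simple graph given as an edge list:

```python
from collections import Counter

def degrees(n, edges):
    """Degree of each of the n vertices of a simple graph given as an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [deg[v] for v in range(n)]

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]         # a triangle plus a pendant edge
d = degrees(5, edges)                            # vertex 4 is isolated
assert sum(d) == 2 * len(edges)                  # degree sum formula
assert sum(1 for x in d if x % 2 == 1) % 2 == 0  # handshaking lemma
```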
== Degree sequence == The degree sequence of an undirected graph is the non-increasing sequence of its vertex degrees; for the above graph it is (5, 3, 3, 2, 2, 1, 0). The degree sequence is a graph invariant, so isomorphic graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a graph; in some cases, non-isomorphic graphs have the same degree sequence. The degree sequence problem is the problem of finding some or all graphs with the degree sequence being a given non-increasing sequence of positive integers. (Trailing zeroes may be ignored since they are trivially realized by adding an appropriate number of isolated vertices to the graph.) A sequence which is the degree sequence of some simple graph, i.e. for which the degree sequence problem has a solution, is called a graphic or graphical sequence. As a consequence of the degree sum formula, any sequence with an odd sum, such as (3, 3, 1), cannot be realized as the degree sequence of a graph. The converse is also true: if a sequence has an even sum, it is the degree sequence of a multigraph. The construction of such a graph is straightforward: connect vertices with odd degrees in pairs (forming a matching), and fill out the remaining even degree counts by self-loops. The question of whether a given degree sequence can be realized by a simple graph is more challenging. This problem is also called the graph realization problem and can be solved by either the Erdős–Gallai theorem or the Havel–Hakimi algorithm. The problem of finding or estimating the number of graphs with a given degree sequence is a problem from the field of graph enumeration. More generally, the degree sequence of a hypergraph is the non-increasing sequence of its vertex degrees. A sequence is k {\displaystyle k} -graphic if it is the degree sequence of some simple k {\displaystyle k} -uniform hypergraph. In particular, a 2 {\displaystyle 2} -graphic sequence is graphic. 
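The Havel–Hakimi algorithm mentioned above admits a compact sketch: repeatedly remove the largest remaining degree d and decrement the next d largest entries; the sequence is graphic exactly when the process terminates with all zeros. An illustrative, unoptimized implementation:

```python
def is_graphic(seq):
    """Havel-Hakimi test: can `seq` be the degree sequence of a simple graph?"""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False      # not enough remaining vertices to connect to
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False  # would need an edge to a vertex of degree 0
        seq.sort(reverse=True)
    return all(x == 0 for x in seq)

assert is_graphic([3, 3, 2, 2, 1, 1])   # realizable by a simple graph
assert not is_graphic([3, 3, 1])        # odd sum, cannot be realized
```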
Deciding if a given sequence is k {\displaystyle k} -graphic is doable in polynomial time for k = 2 {\displaystyle k=2} via the Erdős–Gallai theorem but is NP-complete for all k ≥ 3 {\displaystyle k\geq 3} . == Special values == A vertex with degree 0 is called an isolated vertex. A vertex with degree 1 is called a leaf vertex or end vertex or a pendant vertex, and the edge incident with that vertex is called a pendant edge. In the graph on the right, {3,5} is a pendant edge. This terminology is common in the study of trees in graph theory and especially trees as data structures. A vertex with degree n − 1 in a graph on n vertices is called a dominating vertex. == Global properties == If each vertex of the graph has the same degree k, the graph is called a k-regular graph and the graph itself is said to have degree k. Similarly, a bipartite graph in which every two vertices on the same side of the bipartition as each other have the same degree is called a biregular graph. An undirected, connected graph has an Eulerian path if and only if it has either 0 or 2 vertices of odd degree. If it has 0 vertices of odd degree, the Eulerian path is an Eulerian circuit. A directed graph is a directed pseudoforest if and only if every vertex has outdegree at most 1. A functional graph is a special case of a pseudoforest in which every vertex has outdegree exactly 1. By Brooks' theorem, any graph G other than a clique or an odd cycle has chromatic number at most Δ(G), and by Vizing's theorem any graph has chromatic index at most Δ(G) + 1. A k-degenerate graph is a graph in which each subgraph has a vertex of degree at most k. == See also == Indegree, outdegree for digraphs Degree distribution Degree sequence for bipartite graphs == Notes == == References == Erdős, P.; Gallai, T. (1960). "Gráfok előírt fokszámú pontokkal" (PDF). Matematikai Lapok (in Hungarian). 11: 264–274.. Havel, Václav (1955). "A remark on the existence of finite graphs". 
Časopis Pro Pěstování Matematiky (in Czech). 80 (4): 477–480. doi:10.21136/CPM.1955.108220. Hakimi, S. L. (1962). "On realizability of a set of integers as degrees of the vertices of a linear graph. I". Journal of the Society for Industrial and Applied Mathematics. 10 (3): 496–506. doi:10.1137/0110037. MR 0148049.. Sierksma, Gerard; Hoogeveen, Han (1991). "Seven criteria for integer sequences being graphic". Journal of Graph Theory. 15 (2): 223–231. doi:10.1002/jgt.3190150209. MR 1106533..
Wikipedia/Degree_(graph_theory)
A molecule editor is a computer program for creating and modifying representations of chemical structures. Molecule editors can manipulate chemical structure representations in either a simulated two-dimensional space or three-dimensional space, via 2D computer graphics or 3D computer graphics, respectively. Two-dimensional output is used as illustrations or to query chemical databases. Three-dimensional output is used to build molecular models, usually as part of molecular modelling software packages. Database molecular editors such as Leatherface, RECAP, and Molecule Slicer allow large numbers of molecules to be modified automatically according to rules such as 'deprotonate carboxylic acids' or 'break exocyclic bonds' that can be specified by a user. Molecule editors typically support reading and writing at least one file format or line notation. Examples of each include Molfile and simplified molecular input line entry specification (SMILES), respectively. Files generated by molecule editors can be displayed by molecular graphics tools. == Standalone programs == === 2D structure editing === === 3D structure editing === == Java Applets == == JavaScript embeddable editors == == See also == ChemSpider Comparison of software for molecular mechanics modeling Molecular design software == Notes and references == == External links == Molecular structure input on the web The Chemical Structure Editor: Bridging Chemistry and Cheminformatics
Wikipedia/Molecule_editor
In theoretical computer science, the subgraph isomorphism problem is a computational task in which two graphs G {\displaystyle G} and H {\displaystyle H} are given as input, and one must determine whether G {\displaystyle G} contains a subgraph that is isomorphic to H {\displaystyle H} . Subgraph isomorphism is a generalization of both the maximum clique problem and the problem of testing whether a graph contains a Hamiltonian cycle, and is therefore NP-complete. However, certain other cases of subgraph isomorphism may be solved in polynomial time. Sometimes the name subgraph matching is also used for the same problem. This name puts emphasis on finding such a subgraph as opposed to the bare decision problem. == Decision problem and computational complexity == To prove subgraph isomorphism is NP-complete, it must be formulated as a decision problem. The input to the decision problem is a pair of graphs G {\displaystyle G} and H. The answer to the problem is positive if H is isomorphic to a subgraph of G, and negative otherwise. Formal question: Let G = ( V , E ) {\displaystyle G=(V,E)} , H = ( V ′ , E ′ ) {\displaystyle H=(V^{\prime },E^{\prime })} be graphs. Is there a subgraph G 0 = ( V 0 , E 0 ) ∣ V 0 ⊆ V , E 0 ⊆ E ∩ ( V 0 × V 0 ) {\displaystyle G_{0}=(V_{0},E_{0})\mid V_{0}\subseteq V,E_{0}\subseteq E\cap (V_{0}\times V_{0})} such that G 0 ≅ H {\displaystyle G_{0}\cong H} ? That is, does there exist a bijection f : V 0 → V ′ {\displaystyle f\colon V_{0}\rightarrow V^{\prime }} such that { v 1 , v 2 } ∈ E 0 ⟺ { f ( v 1 ) , f ( v 2 ) } ∈ E ′ {\displaystyle \{\,v_{1},v_{2}\,\}\in E_{0}\iff \{\,f(v_{1}),f(v_{2})\,\}\in E^{\prime }} ? The proof that subgraph isomorphism is NP-complete is simple and based on a reduction from the clique problem, an NP-complete decision problem in which the input is a single graph G and a number k, and the question is whether G contains a complete subgraph with k vertices. 
To translate this to a subgraph isomorphism problem, simply let H be the complete graph K_k; then the answer to the subgraph isomorphism problem for G and H is equal to the answer to the clique problem for G and k. Since the clique problem is NP-complete, this polynomial-time many-one reduction shows that subgraph isomorphism is also NP-complete. An alternative reduction from the Hamiltonian cycle problem translates a graph G which is to be tested for Hamiltonicity into the pair of graphs G and H, where H is a cycle having the same number of vertices as G. Because the Hamiltonian cycle problem is NP-complete even for planar graphs, this shows that subgraph isomorphism remains NP-complete even in the planar case. Subgraph isomorphism is a generalization of the graph isomorphism problem, which asks whether G is isomorphic to H: the answer to the graph isomorphism problem is true if and only if G and H both have the same numbers of vertices and edges and the subgraph isomorphism problem for G and H is true. However, the complexity-theoretic status of graph isomorphism remains an open question. In the context of the Aanderaa–Karp–Rosenberg conjecture on the query complexity of monotone graph properties, Gröger (1992) showed that any subgraph isomorphism problem has query complexity Ω(n^{3/2}); that is, solving subgraph isomorphism requires an algorithm to check the presence or absence in the input of Ω(n^{3/2}) different edges in the graph. == Algorithms == Ullmann (1976) describes a recursive backtracking procedure for solving the subgraph isomorphism problem. Although its running time is, in general, exponential, it takes polynomial time for any fixed choice of H (with a polynomial that depends on the choice of H). When G is a planar graph (or more generally a graph of bounded expansion) and H is fixed, the running time of subgraph isomorphism can be reduced to linear time. Ullmann (2010) is a substantial update to the 1976 subgraph isomorphism algorithm paper. 
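As a concrete baseline (and an illustration of why the naive approach is exponential; this is not Ullmann's algorithm), one can simply try every injective assignment of H's vertices to G's vertices:

```python
from itertools import permutations

def has_subgraph_iso(g_edges, g_n, h_edges, h_n):
    """Is H (h_n vertices, h_edges) isomorphic to a subgraph of G?
    Brute force over injective maps V(H) -> V(G); exponential in h_n."""
    g = {frozenset(e) for e in g_edges}
    for image in permutations(range(g_n), h_n):
        # f(v) = image[v]; require every edge of H to map onto an edge of G
        if all(frozenset((image[u], image[v])) in g for u, v in h_edges):
            return True
    return False

# the clique reduction above in action: G has a triangle iff K_3 embeds into it
square = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle, no triangle
k3 = [(0, 1), (1, 2), (0, 2)]
assert not has_subgraph_iso(square, 4, k3, 3)
assert has_subgraph_iso(square + [(0, 2)], 4, k3, 3)
```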
Cordella (2004) proposed another algorithm based on Ullmann's, VF2, which improves the refinement process using different heuristics and uses significantly less memory. Bonnici & Giugno (2013) proposed a better algorithm, which improves the initial order of the vertices using some heuristics. The current state-of-the-art solver for moderately sized, hard instances is the Glasgow Subgraph Solver (McCreesh, Prosser & Trimble (2020)). This solver adopts a constraint programming approach, using bit-parallel data structures and specialized propagation algorithms for performance. It supports most common variations of the problem and is capable of counting or enumerating solutions as well as deciding whether one exists. For large graphs, state-of-the-art algorithms include CFL-Match and Turboiso, and extensions thereupon such as DAF by Han et al. (2019). == Applications == Subgraph isomorphism has been applied in the area of cheminformatics to find similarities between chemical compounds from their structural formula; often in this area the term substructure search is used. A query structure is often defined graphically using a structure editor program; SMILES-based database systems typically define queries using SMARTS, a SMILES extension. The closely related problem of counting the number of isomorphic copies of a graph H in a larger graph G has been applied to pattern discovery in databases, the bioinformatics of protein-protein interaction networks, and in exponential random graph methods for mathematically modeling social networks. Ohlrich et al. (1993) describe an application of subgraph isomorphism in the computer-aided design of electronic circuits. Subgraph matching is also a substep in graph rewriting (the most runtime-intensive), and thus offered by graph rewrite tools. 
The problem is also of interest in artificial intelligence, where it is considered part of an array of graph pattern matching problems; an extension of subgraph isomorphism known as graph mining is also of interest in that area. == See also == Frequent subtree mining Induced subgraph isomorphism problem Maximum common edge subgraph problem Maximum common subgraph isomorphism problem == Notes == == References == Cook, S. A. (1971), "The complexity of theorem-proving procedures", Proc. 3rd ACM Symposium on Theory of Computing, pp. 151–158, doi:10.1145/800157.805047, S2CID 7573663. Eppstein, David (1999), "Subgraph isomorphism in planar graphs and related problems" (PDF), Journal of Graph Algorithms and Applications, 3 (3): 1–27, arXiv:cs.DS/9911003, doi:10.7155/jgaa.00014, S2CID 2303110. Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, ISBN 978-0-7167-1045-5. A1.4: GT48, pg.202. Gröger, Hans Dietmar (1992), "On the randomized complexity of monotone graph properties" (PDF), Acta Cybernetica, 10 (3): 119–127. Han, Myoungji; Kim, Hyunjoon; Gu, Geonmo; Park, Kunsoo; Han, Wookshin (2019), "Efficient Subgraph Matching: Harmonizing Dynamic Programming, Adaptive Matching Order, and Failing Set Together", SIGMOD, doi:10.1145/3299869.3319880, S2CID 195259296 Kuramochi, Michihiro; Karypis, George (2001), "Frequent subgraph discovery", 1st IEEE International Conference on Data Mining, p. 313, CiteSeerX 10.1.1.22.4992, doi:10.1109/ICDM.2001.989534, ISBN 978-0-7695-1119-1, S2CID 8684662. Ohlrich, Miles; Ebeling, Carl; Ginting, Eka; Sather, Lisa (1993), "SubGemini: identifying subcircuits using a fast subgraph isomorphism algorithm", Proceedings of the 30th international Design Automation Conference, pp. 31–37, doi:10.1145/157485.164556, ISBN 978-0-89791-577-9, S2CID 5889119.
Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), "18.3 The subgraph isomorphism problem and Boolean queries", Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Springer, pp. 400–401, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058. Pržulj, N.; Corneil, D. G.; Jurisica, I. (2006), "Efficient estimation of graphlet frequency distributions in protein–protein interaction networks", Bioinformatics, 22 (8): 974–980, doi:10.1093/bioinformatics/btl030, PMID 16452112. Snijders, T. A. B.; Pattison, P. E.; Robins, G.; Handcock, M. S. (2006), "New specifications for exponential random graph models", Sociological Methodology, 36 (1): 99–153, CiteSeerX 10.1.1.62.7975, doi:10.1111/j.1467-9531.2006.00176.x, S2CID 10800726. Ullmann, Julian R. (1976), "An algorithm for subgraph isomorphism", Journal of the ACM, 23 (1): 31–42, doi:10.1145/321921.321925, S2CID 17268751. Jamil, Hasan (2011), "Computing Subgraph Isomorphic Queries using Structural Unification and Minimum Graph Structures", 26th ACM Symposium on Applied Computing, pp. 1058–1063. Ullmann, Julian R. (2010), "Bit-vector algorithms for binary constraint satisfaction and subgraph isomorphism", Journal of Experimental Algorithmics, 15: 1.1, CiteSeerX 10.1.1.681.8766, doi:10.1145/1671970.1921702, S2CID 15021184. Cordella, Luigi P. (2004), "A (sub) graph isomorphism algorithm for matching large graphs", IEEE Transactions on Pattern Analysis and Machine Intelligence, 26 (10): 1367–1372, CiteSeerX 10.1.1.101.5342, doi:10.1109/tpami.2004.75, PMID 15641723, S2CID 833657 Bonnici, V.; Giugno, R. (2013), "A subgraph isomorphism algorithm and its application to biochemical data", BMC Bioinformatics, 14(Suppl7) (13): S13, doi:10.1186/1471-2105-14-s7-s13, PMC 3633016, PMID 23815292 Carletti, V.; Foggia, P.; Saggese, A.; Vento, M. 
(2018), "Challenging the time complexity of exact subgraph isomorphism for huge and dense graphs with VF3", IEEE Transactions on Pattern Analysis and Machine Intelligence, 40 (4): 804–818, doi:10.1109/TPAMI.2017.2696940, PMID 28436848, S2CID 3709576 Solnon, Christine (2019), "Experimental Evaluation of Subgraph Isomorphism Solvers", Graph-Based Representations in Pattern Recognition - 12th IAPR-TC-15 International Workshop, GbRPR 2019, Tours, France, June 19-21, 2019, Proceedings, Lecture Notes in Computer Science, vol. 11510, Springer, pp. 1–13, doi:10.1007/978-3-030-20081-7_1, ISBN 978-3-030-20080-0, S2CID 128270779 McCreesh, Ciaran; Prosser, Patrick; Trimble, James (2020), "The Glasgow Subgraph Solver: Using Constraint Programming to Tackle Hard Subgraph Isomorphism Problem Variants", Graph Transformation - 13th International Conference, ICGT 2020, Held as Part of STAF 2020, Bergen, Norway, June 25-26, 2020, Proceedings, Lecture Notes in Computer Science, vol. 12150, Springer, pp. 316–324, doi:10.1007/978-3-030-51372-6_19, ISBN 978-3-030-51371-9
Wikipedia/Subgraph_isomorphism_problem
In graph theory, an undirected graph H is called a minor of the graph G if H can be formed from G by deleting edges and vertices and by contracting edges. The theory of graph minors began with Wagner's theorem that a graph is planar if and only if its minors include neither the complete graph K5 nor the complete bipartite graph K3,3. The Robertson–Seymour theorem implies that an analogous forbidden minor characterization exists for every property of graphs that is preserved by deletions and edge contractions. For every fixed graph H, it is possible to test whether H is a minor of an input graph G in polynomial time; together with the forbidden minor characterization this implies that every graph property preserved by deletions and contractions may be recognized in polynomial time. Other results and conjectures involving graph minors include the graph structure theorem, according to which the graphs that do not have H as a minor may be formed by gluing together simpler pieces, and Hadwiger's conjecture relating the inability to color a graph to the existence of a large complete graph as a minor of it. Important variants of graph minors include the topological minors and immersion minors. == Definitions == An edge contraction is an operation that removes an edge from a graph while simultaneously merging the two vertices it used to connect. An undirected graph H is a minor of another undirected graph G if a graph isomorphic to H can be obtained from G by contracting some edges, deleting some edges, and deleting some isolated vertices. The order in which a sequence of such contractions and deletions is performed on G does not affect the resulting graph H. Graph minors are often studied in the more general context of matroid minors.
In this context, it is common to assume that all graphs are connected, with self-loops and multiple edges allowed (that is, they are multigraphs rather than simple graphs); the contraction of a loop and the deletion of a cut-edge are forbidden operations. This point of view has the advantage that edge deletions leave the rank of a graph unchanged, and edge contractions always reduce the rank by one. In other contexts (such as with the study of pseudoforests) it makes more sense to allow the deletion of a cut-edge, and to allow disconnected graphs, but to forbid multigraphs. In this variation of graph minor theory, a graph is always simplified after any edge contraction to eliminate its self-loops and multiple edges. A function f is referred to as "minor-monotone" if, whenever H is a minor of G, one has f(H) ≤ f(G). == Example == In the following example, graph H is a minor of graph G. To see this, first construct a subgraph of G by deleting the dashed edges (and the resulting isolated vertex), and then contract the gray edge (merging the two vertices it connects). == Major results and conjectures == It is straightforward to verify that the graph minor relation forms a partial order on the isomorphism classes of finite undirected graphs: it is transitive (a minor of a minor of G is a minor of G itself), and G and H can only be minors of each other if they are isomorphic because any nontrivial minor operation removes edges or vertices. A deep result by Neil Robertson and Paul Seymour states that this partial order is actually a well-quasi-ordering: if an infinite list (G1, G2, …) of finite graphs is given, then there always exist two indices i < j such that Gi is a minor of Gj. Another equivalent way of stating this is that any set of graphs can have only a finite number of minimal elements under the minor ordering.
This result proved a conjecture formerly known as Wagner's conjecture, after Klaus Wagner; Wagner had conjectured it long earlier, but only published it in 1970. In the course of their proof, Robertson and Seymour also prove the graph structure theorem, in which they determine, for any fixed graph H, the rough structure of any graph that does not have H as a minor. The statement of the theorem is itself long and involved, but in short it establishes that such a graph must have the structure of a clique-sum of smaller graphs that are modified in small ways from graphs embedded on surfaces of bounded genus. Thus, their theory establishes fundamental connections between graph minors and topological embeddings of graphs. For any graph H, the simple H-minor-free graphs must be sparse, which means that the number of edges is less than some constant multiple of the number of vertices. More specifically, if H has h vertices, then an n-vertex simple H-minor-free graph can have at most O(nh√(log h)) edges, and some Kh-minor-free graphs have at least this many edges. Thus, if H has h vertices, then H-minor-free graphs have average degree O(h√(log h)) and furthermore degeneracy O(h√(log h)). Additionally, the H-minor-free graphs have a separator theorem similar to the planar separator theorem for planar graphs: for any fixed H, and any n-vertex H-minor-free graph G, it is possible to find a subset of O(√n) vertices whose removal splits G into two (possibly disconnected) subgraphs with at most 2n/3 vertices per subgraph. Even stronger, for any fixed H, H-minor-free graphs have treewidth O(√n).
The Hadwiger conjecture in graph theory proposes that if a graph G does not contain a minor isomorphic to the complete graph on k vertices, then G has a proper coloring with k – 1 colors. The case k = 5 is a restatement of the four color theorem. The Hadwiger conjecture has been proven for k ≤ 6, but is unknown in the general case. Bollobás, Catlin & Erdős (1980) call it "one of the deepest unsolved problems in graph theory." Another result relating the four-color theorem to graph minors is the snark theorem announced by Robertson, Sanders, Seymour, and Thomas, a strengthening of the four-color theorem conjectured by W. T. Tutte and stating that any bridgeless 3-regular graph that requires four colors in an edge coloring must have the Petersen graph as a minor. == Minor-closed graph families == Many families of graphs F have the property that every minor of a graph in F is also in F; such a family is said to be minor-closed. For instance, in any planar graph, or any embedding of a graph on a fixed topological surface, neither the removal of edges nor the contraction of edges can increase the genus of the embedding; therefore, planar graphs and the graphs embeddable on any fixed surface form minor-closed families. If F is a minor-closed family, then (because of the well-quasi-ordering property of minors) among the graphs that do not belong to F there is a finite set X of minor-minimal graphs. These graphs are forbidden minors for F: a graph belongs to F if and only if it does not contain as a minor any graph in X. That is, every minor-closed family F can be characterized as the family of X-minor-free graphs for some finite set X of forbidden minors. The best-known example of a characterization of this type is Wagner's theorem characterizing the planar graphs as the graphs having neither K5 nor K3,3 as minors. In some cases, the properties of the graphs in a minor-closed family may be closely connected to the properties of their excluded minors.
For example, a minor-closed graph family F has bounded pathwidth if and only if its forbidden minors include a forest, F has bounded tree-depth if and only if its forbidden minors include a disjoint union of path graphs, F has bounded treewidth if and only if its forbidden minors include a planar graph, and F has bounded local treewidth (a functional relationship between diameter and treewidth) if and only if its forbidden minors include an apex graph (a graph that can be made planar by the removal of a single vertex). If H can be drawn in the plane with only a single crossing (that is, it has crossing number one) then the H-minor-free graphs have a simplified structure theorem in which they are formed as clique-sums of planar graphs and graphs of bounded treewidth. For instance, both K5 and K3,3 have crossing number one, and as Wagner showed the K5-free graphs are exactly the 3-clique-sums of planar graphs and the eight-vertex Wagner graph, while the K3,3-free graphs are exactly the 2-clique-sums of planar graphs and K5. == Variations == === Topological minors === A graph H is called a topological minor of a graph G if a subdivision of H is isomorphic to a subgraph of G. Every topological minor is also a minor. The converse, however, is not true in general (for instance the complete graph K5 is a minor of the Petersen graph but not a topological minor of it), but it holds for graphs with maximum degree not greater than three. The topological minor relation is not a well-quasi-ordering on the set of finite graphs and hence the result of Robertson and Seymour does not apply to topological minors. However it is straightforward to construct finite forbidden topological minor characterizations from finite forbidden minor characterizations by replacing every branch set with k outgoing edges by every tree on k leaves that has down degree at least two.
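The deletion and contraction operations that underlie all of these minor variants can be made concrete in a few lines. The following is a minimal Python sketch under my own representation (a dict of adjacency sets, simple-graph semantics, as in the variant that discards self-loops and parallel edges); the helper names are not from the article.

```python
# The two basic minor operations on an undirected simple graph stored
# as a dict of adjacency sets (representation and names are my own;
# self-loops and parallel edges are discarded, as in the simple-graph
# variant of minor theory).

def delete_edge(G, u, v):
    H = {x: set(nbrs) for x, nbrs in G.items()}
    H[u].discard(v)
    H[v].discard(u)
    return H

def contract_edge(G, u, v):
    """Merge v into u along the edge u-v."""
    H = {x: {u if y == v else y for y in nbrs} - {x}
         for x, nbrs in G.items() if x != v}
    H[u] |= {y for y in G[v] if y != v}
    H[u].discard(u)
    return H

# Contracting the middle edge of the path a-b-c-d yields the path a-b-d.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(contract_edge(path, 'b', 'c'))
```

Any minor of G is then reachable by composing these two operations with isolated-vertex deletion, in any order.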
=== Induced minors === A graph H is called an induced minor of a graph G if it can be obtained from an induced subgraph of G by contracting edges. Otherwise, G is said to be H-induced minor-free. === Immersion minor === A graph operation called lifting is central in a concept called immersions. The lifting is an operation on adjacent edges. Given three vertices v, u, and w, where (v,u) and (u,w) are edges in the graph, the lifting of vuw, or equivalently of (v,u), (u,w), is the operation that deletes the two edges (v,u) and (u,w) and adds the edge (v,w). In the case where (v,w) was already present, v and w will now be connected by more than one edge, and hence this operation is intrinsically a multigraph operation. In the case where a graph H can be obtained from a graph G by a sequence of lifting operations (on G) and then finding an isomorphic subgraph, we say that H is an immersion minor of G. There is yet another way of defining immersion minors, which is equivalent to the lifting operation. We say that H is an immersion minor of G if there exists an injective mapping from vertices in H to vertices in G where the images of adjacent elements of H are connected in G by edge-disjoint paths. The immersion minor relation is a well-quasi-ordering on the set of finite graphs and hence the result of Robertson and Seymour applies to immersion minors. This furthermore means that every immersion minor-closed family is characterized by a finite family of forbidden immersion minors. In graph drawing, immersion minors arise as the planarizations of non-planar graphs: from a drawing of a graph in the plane, with crossings, one can form an immersion minor by replacing each crossing point by a new vertex, and in the process also subdividing each crossed edge into a path. This allows drawing methods for planar graphs to be extended to non-planar graphs.
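The lifting operation just defined can be sketched directly on a multigraph represented as a multiset of edges. This is an illustrative Python fragment; the representation (a Counter of frozenset edges) and the function name are my own.

```python
# Illustrative lifting operation on a multigraph stored as a multiset
# of undirected edges (Counter of frozensets; representation and name
# are mine).
from collections import Counter

def lift(edges, v, u, w):
    """Lift vuw: delete one copy each of (v,u) and (u,w), add (v,w)."""
    e = Counter(edges)
    for pair in (frozenset((v, u)), frozenset((u, w))):
        if e[pair] == 0:
            raise ValueError("edge not present")
        e[pair] -= 1
    e[frozenset((v, w))] += 1
    return +e  # unary + drops zero-count entries

# Lifting abc in the path a-b-c-d leaves the edges a-c and c-d.
path = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d")]]
lifted = lift(path, "a", "b", "c")
```

Because edge multiplicities are tracked, a lift that recreates an existing edge simply raises its count, matching the multigraph semantics described above.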
=== Shallow minors === A shallow minor of a graph G is a minor in which the edges of G that were contracted to form the minor form a collection of disjoint subgraphs with low diameter. Shallow minors interpolate between the theories of graph minors and subgraphs, in that shallow minors with high depth coincide with the usual type of graph minor, while the shallow minors with depth zero are exactly the subgraphs. They also allow the theory of graph minors to be extended to classes of graphs such as the 1-planar graphs that are not closed under taking minors. === Parity conditions === An alternative and equivalent definition of a graph minor is that H is a minor of G whenever the vertices of H can be represented by a collection of vertex-disjoint subtrees of G, such that if two vertices are adjacent in H, there exists an edge with its endpoints in the corresponding two trees in G. An odd minor restricts this definition by adding parity conditions to these subtrees. If H is represented by a collection of subtrees of G as above, then H is an odd minor of G whenever it is possible to assign two colors to the vertices of G in such a way that each edge of G within a subtree is properly colored (its endpoints have different colors) and each edge of G that represents an adjacency between two subtrees is monochromatic (both its endpoints are the same color). Unlike for the usual kind of graph minors, graphs with forbidden odd minors are not necessarily sparse. The Hadwiger conjecture, that k-chromatic graphs necessarily contain k-vertex complete graphs as minors, has also been studied from the point of view of odd minors. A different parity-based extension of the notion of graph minors is the concept of a bipartite minor, which produces a bipartite graph whenever the starting graph is bipartite. 
A graph H is a bipartite minor of another graph G whenever H can be obtained from G by deleting vertices, deleting edges, and collapsing pairs of vertices that are at distance two from each other along a peripheral cycle of the graph. A form of Wagner's theorem applies for bipartite minors: A bipartite graph G is a planar graph if and only if it does not have the utility graph K3,3 as a bipartite minor. == Algorithms == The problem of deciding whether a graph G contains H as a minor is NP-complete in general; for instance, if H is a cycle graph with the same number of vertices as G, then H is a minor of G if and only if G contains a Hamiltonian cycle. However, when G is part of the input but H is fixed, it can be solved in polynomial time. More specifically, the running time for testing whether H is a minor of G in this case is O(n^3), where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H; since the original Graph Minors result, this algorithm has been improved to O(n^2) time. Thus, by applying the polynomial time algorithm for testing whether a given graph contains any of the forbidden minors, it is theoretically possible to recognize the members of any minor-closed family in polynomial time. This result is not used in practice since the hidden constant is so huge (needing three layers of Knuth's up-arrow notation to express) as to rule out any application, making it a galactic algorithm. Furthermore, in order to apply this result constructively, it is necessary to know what the forbidden minors of the graph family are. In some cases, the forbidden minors are known, or can be computed. When H is a fixed planar graph, we can test in linear time whether H is a minor of an input graph G. When H is not fixed, faster algorithms are known in the case where G is planar. == Notes == == References == == External links == Weisstein, Eric W. "Graph Minor". MathWorld.
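For intuition, the minor relation can be tested by brute force on tiny graphs using the branch-set characterization (each vertex of H is represented by a nonempty, connected, vertex-disjoint subset of V(G), with branch sets of adjacent H-vertices joined by an edge). The sketch below is my own and runs in roughly (|V(H)|+1)^|V(G)| time; it bears no resemblance to the polynomial-time machinery discussed above.

```python
# A naive exponential-time minor test for tiny graphs (my own sketch,
# for intuition only).  It assigns each vertex of G to one branch set
# per vertex of H, or to None (deleted), then checks that every branch
# set is connected and that adjacent H-vertices get adjacent branch sets.
from itertools import product

def _connected(G, verts):
    verts = set(verts)
    if not verts:
        return False            # every vertex of H needs a branch set
    seen, stack = set(), [next(iter(verts))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(G[x] & verts)
    return seen == verts

def has_minor(G, H):
    g_nodes, h_nodes = list(G), list(H)
    for assign in product([None] + h_nodes, repeat=len(g_nodes)):
        branch = {h: {g for g, a in zip(g_nodes, assign) if a == h}
                  for h in h_nodes}
        if not all(_connected(G, b) for b in branch.values()):
            continue
        if all(any(y in G[x] for x in branch[h1] for y in branch[h2])
               for h1 in h_nodes for h2 in H[h1]):
            return True
    return False

# A 4-cycle has K3 as a minor (contract one edge) but not K4.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
K3 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
print(has_minor(C4, K3))  # True
```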
Wikipedia/Minor_(graph_theory)
An algorithm is fundamentally a set of rules or defined procedures that is typically designed and used to solve a specific problem or a broad set of problems. Broadly, algorithms define process(es), sets of rules, or methodologies that are to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations. With the increasing automation of services, more and more decisions are being made by algorithms. Some general examples are risk assessments, anticipatory policing, and pattern recognition technology. The following is a list of well-known algorithms. == Automated planning == == Combinatorial algorithms == === General combinatorial algorithms === Brent's algorithm: finds a cycle in function value iterations using only two iterators Floyd's cycle-finding algorithm: finds a cycle in function value iterations Gale–Shapley algorithm: solves the stable matching problem Pseudorandom number generators (uniformly distributed—see also List of pseudorandom number generators for other PRNGs with varying degrees of convergence and varying statistical quality): ACORN generator Blum Blum Shub Lagged Fibonacci generator Linear congruential generator Mersenne Twister === Graph algorithms === Coloring algorithm: Graph coloring algorithm. Hopcroft–Karp algorithm: convert a bipartite graph to a maximum cardinality matching Hungarian algorithm: algorithm for finding a perfect matching Prüfer coding: conversion between a labeled tree and its Prüfer sequence Tarjan's off-line lowest common ancestors algorithm: computes lowest common ancestors for pairs of nodes in a tree Topological sort: finds linear order of nodes (e.g. jobs) based on their dependencies.
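Floyd's cycle-finding algorithm from the list above admits a compact sketch. The following Python version returns the cycle length and the index of the cycle's start in the iterated sequence; the function and variable names are my own.

```python
# Illustrative sketch of Floyd's cycle-finding algorithm: two iterators
# moving at different speeds detect the cycle in the sequence
# x0, f(x0), f(f(x0)), ...  (names are mine).

def floyd(f, x0):
    """Return (lam, mu): cycle length and index of the cycle's start."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # phase 2: locate mu, the index where the cycle begins
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # phase 3: measure the cycle length lam
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

f = lambda x: (x * x + 1) % 255
print(floyd(f, 3))  # (6, 2): a 6-cycle starting at index 2 of the orbit
```

Only two values of the sequence are held at any time, which is the point of the algorithm.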
==== Graph drawing ==== Force-based algorithms (also known as force-directed algorithms or spring-based algorithm) Spectral layout ==== Network theory ==== Network analysis Link analysis Girvan–Newman algorithm: detect communities in complex systems Web link analysis Hyperlink-Induced Topic Search (HITS) (also known as Hubs and authorities) PageRank TrustRank Flow networks Dinic's algorithm: a strongly polynomial algorithm for computing the maximum flow in a flow network. Edmonds–Karp algorithm: implementation of Ford–Fulkerson Ford–Fulkerson algorithm: computes the maximum flow in a graph Karger's algorithm: a Monte Carlo method to compute the minimum cut of a connected graph Push–relabel algorithm: computes a maximum flow in a graph ==== Routing for graphs ==== Edmonds' algorithm (also known as Chu–Liu/Edmonds' algorithm): find maximum or minimum branchings Euclidean minimum spanning tree: algorithms for computing the minimum spanning tree of a set of points in the plane Longest path problem: find a simple path of maximum length in a given graph Minimum spanning tree Borůvka's algorithm Kruskal's algorithm Prim's algorithm Reverse-delete algorithm Nonblocking minimal spanning switch, e.g. for a telephone exchange Shortest path problem Bellman–Ford algorithm: computes shortest paths in a weighted graph (where some of the edge weights may be negative) Dijkstra's algorithm: computes shortest paths in a graph with non-negative edge weights Floyd–Warshall algorithm: solves the all pairs shortest path problem in a weighted, directed graph Johnson's algorithm: all pairs shortest path algorithm in sparse weighted directed graph Transitive closure problem: find the transitive closure of a given binary relation Traveling salesman problem Christofides algorithm Nearest neighbour algorithm Vehicle routing problem Clarke and Wright Saving algorithm Warnsdorff's rule: a heuristic method for solving the Knight's tour problem ==== Graph search ==== A*: special case of
best-first search that uses heuristics to improve speed B*: a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals) Backtracking: abandons partial solutions when they are found not to satisfy a complete solution Beam search: is a heuristic search algorithm that is an optimization of best-first search that reduces its memory requirement Beam stack search: integrates backtracking with beam search Best-first search: traverses a graph in the order of likely importance using a priority queue Bidirectional search: find the shortest path from an initial vertex to a goal vertex in a directed graph Breadth-first search: traverses a graph level by level Brute-force search: an exhaustive and reliable search method, but computationally inefficient in many applications D*: an incremental heuristic search algorithm Depth-first search: traverses a graph branch by branch Dijkstra's algorithm: a special case of A* for which no heuristic function is used General Problem Solver: a seminal theorem-proving algorithm intended to work as a universal problem solver machine. 
Iterative deepening depth-first search (IDDFS): a state space search strategy Jump point search: an optimization to A* which may reduce computation time by an order of magnitude using further heuristics Lexicographic breadth-first search (also known as Lex-BFS): a linear time algorithm for ordering the vertices of a graph SSS*: state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm Uniform-cost search: a tree search that finds the lowest-cost route where costs vary ==== Subgraphs ==== Cliques Bron–Kerbosch algorithm: a technique for finding maximal cliques in an undirected graph MaxCliqueDyn maximum clique algorithm: find a maximum clique in an undirected graph Strongly connected components Kosaraju's algorithm Path-based strong component algorithm Tarjan's strongly connected components algorithm Subgraph isomorphism problem === Sequence algorithms === ==== Approximate sequence matching ==== Bitap algorithm: fuzzy algorithm that determines if strings are approximately equal. 
Phonetic algorithms Daitch–Mokotoff Soundex: a Soundex refinement which allows matching of Slavic and Germanic surnames Double Metaphone: an improvement on Metaphone Match rating approach: a phonetic algorithm developed by Western Airlines Metaphone: an algorithm for indexing words by their sound, when pronounced in English NYSIIS: phonetic algorithm, improves on Soundex Soundex: a phonetic algorithm for indexing names by sound, as pronounced in English String metrics: computes a similarity or dissimilarity (distance) score between two text strings Damerau–Levenshtein distance: computes a distance measure between two strings, improves on Levenshtein distance Dice's coefficient (also known as the Dice coefficient): a similarity measure related to the Jaccard index Hamming distance: number of positions at which the corresponding symbols differ Jaro–Winkler distance: a measure of similarity between two strings Levenshtein edit distance: computes a metric for the amount of difference between two sequences Trigram search: search for text when the exact syntax or spelling of the target object is not precisely known ==== Selection algorithms ==== Introselect Quickselect ==== Sequence search ==== Linear search: locates an item in an unsorted sequence Selection algorithm: finds the kth largest item in a sequence Sorted lists Binary search algorithm: locates an item in a sorted sequence Eytzinger binary search: cache friendly binary search algorithm Fibonacci search technique: search a sorted sequence using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers Jump search (or block search): linear search on a smaller subset of the sequence Predictive search: binary-like search which factors in magnitude of search term versus the high and low values in the search. Sometimes called dictionary search or interpolated search.
Uniform binary search: an optimization of the classic binary search algorithm Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa ==== Sequence merging ==== k-way merge algorithm Simple merge algorithm Union (merge, with elements on the output not repeated) ==== Sequence permutations ==== Fisher–Yates shuffle (also known as the Knuth shuffle): randomly shuffle a finite set Heap's permutation generation algorithm: interchange elements to generate next permutation Schensted algorithm: constructs a pair of Young tableaux from a permutation Steinhaus–Johnson–Trotter algorithm (also known as the Johnson–Trotter algorithm): generates permutations by transposing elements ==== Sequence combinations ==== ==== Sequence alignment ==== Dynamic time warping: measure similarity between two sequences which may vary in time or speed Hirschberg's algorithm: finds the least cost sequence alignment between two sequences, as measured by their Levenshtein distance Needleman–Wunsch algorithm: find global alignment between two sequences Smith–Waterman algorithm: find local sequence alignment ==== Sequence sorting ==== Exchange sorts Bubble sort: for each pair of indices, swap the items if out of order Cocktail shaker sort or bidirectional bubble sort, a bubble sort traversing the list alternately from front to back and back to front Comb sort Gnome sort Odd–even sort Quicksort: divide list into two, with all items on the first list coming before all items on the second list; then sort the two lists.
Often the method of choice Humorous or ineffective Bogosort: the list is randomly shuffled until it happens to be sorted Slowsort Stalin sort: all elements that are not in order are removed from the list Stooge sort Hybrid Flashsort Introsort: begin with quicksort and switch to heapsort when the recursion depth exceeds a certain level Timsort: adaptive algorithm derived from merge sort and insertion sort. Used in Python 2.3 and up, and Java SE 7. Insertion sorts Cycle sort: in-place with theoretically optimal number of writes Insertion sort: determine where the current item belongs in the list of sorted ones, and insert it there Library sort Patience sorting Shell sort: an attempt to improve insertion sort Tree sort (binary tree sort): build binary tree, then traverse it to create sorted list Merge sorts Merge sort: sort the first and second half of the list separately, then merge the sorted lists Slowsort Strand sort Non-comparison sorts Bead sort Bucket sort Burstsort: build a compact, cache efficient burst trie and then traverse it to create sorted output Counting sort Pigeonhole sort Postman sort: variant of Bucket sort which takes advantage of hierarchical structure Radix sort: sorts strings letter by letter Selection sorts Heapsort: convert the list into a heap, keep removing the largest element from the heap and adding it to the end of the list Selection sort: pick the smallest of the remaining elements, add it to the end of the sorted list Smoothsort Other Bitonic sorter Pancake sorting Spaghetti sort Topological sort Unknown class Samplesort ==== Subsequences ==== Longest common subsequence problem: Find the longest subsequence common to all sequences in a set of sequences Longest increasing subsequence problem: Find the longest increasing subsequence of a given sequence Ruzzo–Tompa algorithm: Find all non-overlapping, contiguous, maximal scoring subsequences in a sequence of real numbers Shortest common supersequence problem: Find the shortest
supersequence that contains two or more sequences as subsequences ==== Substrings ==== Kadane's algorithm: finds the contiguous subarray with largest sum in an array of numbers Longest common substring problem: find the longest string (or strings) that is a substring (or are substrings) of two or more strings Matching wildcards Krauss matching wildcards algorithm: an open-source non-recursive algorithm Rich Salz' wildmat: a widely used open-source recursive algorithm Substring search Aho–Corasick string matching algorithm: trie based algorithm for finding all substring matches to any of a finite set of strings Boyer–Moore–Horspool algorithm: Simplification of Boyer–Moore Boyer–Moore string-search algorithm: amortized linear (sublinear in most times) algorithm for substring search Knuth–Morris–Pratt algorithm: substring search which bypasses reexamination of matched characters Rabin–Karp string search algorithm: searches multiple patterns efficiently Zhu–Takaoka string matching algorithm: a variant of Boyer–Moore Ukkonen's algorithm: a linear-time, online algorithm for constructing suffix trees == Computational mathematics == === Abstract algebra === Chien search: a recursive algorithm for determining roots of polynomials defined over a finite field Schreier–Sims algorithm: computing a base and strong generating set (BSGS) of a permutation group Todd–Coxeter algorithm: Procedure for generating cosets. 
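Several of the substring-search entries above share the same shape; as one concrete illustration, here is a short sketch of the Knuth–Morris–Pratt algorithm (variable names are mine). The failure function records, for each prefix of the pattern, the length of its longest proper prefix that is also a suffix, so matched characters are never re-examined.

```python
# Illustrative Knuth–Morris–Pratt substring search (names are mine).

def kmp_search(text, pattern):
    """Return the start indices of all occurrences of pattern in text."""
    if not pattern:
        return list(range(len(text) + 1))
    # build the failure function
    fail, k = [0] * len(pattern), 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # scan the text, falling back via the failure function on mismatch
    out, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            out.append(i - k + 1)
            k = fail[k - 1]
    return out

print(kmp_search("abracadabra", "abra"))  # [0, 7]
```

Because the fallback after a full match uses the failure function, overlapping occurrences are also reported.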
=== Computer algebra ===
- Buchberger's algorithm: finds a Gröbner basis
- Cantor–Zassenhaus algorithm: factor polynomials over finite fields
- Faugère F4 algorithm: finds a Gröbner basis (see also the F5 algorithm)
- Gosper's algorithm: find sums of hypergeometric terms that are themselves hypergeometric terms
- Knuth–Bendix completion algorithm: for rewriting rule systems
- Multivariate division algorithm: for polynomials in several indeterminates
- Pollard's kangaroo algorithm (also known as Pollard's lambda algorithm): an algorithm for solving the discrete logarithm problem
- Polynomial long division: an algorithm for dividing a polynomial by another polynomial of the same or lower degree
- Risch algorithm: an algorithm for the calculus operation of indefinite integration (i.e. finding antiderivatives)

=== Geometry ===
- Closest pair problem: find the pair of points (from a set of points) with the smallest distance between them
- Collision detection algorithms: check for the collision or intersection of two given solids
- Cone algorithm: identify surface points
- Convex hull algorithms: determining the convex hull of a set of points
  - Chan's algorithm
  - Gift wrapping algorithm or Jarvis march
  - Graham scan
  - Kirkpatrick–Seidel algorithm
  - Quickhull
- Euclidean distance transform: computes the distance between every point in a grid and a discrete collection of points
- Geometric hashing: a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation
- Gilbert–Johnson–Keerthi distance algorithm: determining the smallest distance between two convex shapes
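Of the convex hull algorithms listed above, gift wrapping (Jarvis march) is the simplest to state: repeatedly pick the point that all remaining points lie to the left of. A minimal sketch (function names are our own; no collinear-point handling):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); negative means b is clockwise of a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_jarvis(points):
    """Gift wrapping (Jarvis march): O(nh) convex hull of 2-D points,
    where h is the number of hull vertices."""
    points = sorted(set(points))
    if len(points) < 3:
        return points
    hull = []
    start = points[0]          # the leftmost point is certainly on the hull
    p = start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            # Choose r if it lies strictly to the right of the ray p->q.
            if q == p or cross(p, q, r) < 0:
                q = r
        p = q
        if p == start:
            break
    return hull
```

Each wrap step scans all n points, so the total cost is O(nh); Graham scan improves this to O(n log n) regardless of hull size.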
- Jump-and-Walk algorithm: an algorithm for point location in triangulations
- Laplacian smoothing: an algorithm to smooth a polygonal mesh
- Line segment intersection: finding whether lines intersect, usually with a sweep line algorithm
  - Bentley–Ottmann algorithm
  - Shamos–Hoey algorithm
- Minimum bounding box algorithms: find the oriented minimum bounding box enclosing a set of points
- Nearest neighbor search: find the nearest point or points to a query point
- Nesting algorithm: make the most efficient use of material or space
- Point in polygon algorithms: tests whether a given point lies within a given polygon
- Point set registration algorithms: finds the transformation between two point sets to optimally align them
- Rotating calipers: determine all antipodal pairs of points and vertices on a convex polygon or convex hull
- Shoelace algorithm: determine the area of a polygon whose vertices are described by ordered pairs in the plane
- Triangulation
  - Delaunay triangulation
    - Chew's second algorithm: create quality constrained Delaunay triangulations
    - Ruppert's algorithm (also known as Delaunay refinement): create quality Delaunay triangulations
  - Marching triangles: reconstruct two-dimensional surface geometry from an unstructured point cloud
  - Polygon triangulation algorithms: decompose a polygon into a set of triangles
  - Quasitriangulation
  - Voronoi diagrams, geometric dual of Delaunay triangulation
    - Bowyer–Watson algorithm: create a Voronoi diagram in any number of dimensions
    - Fortune's algorithm: create a Voronoi diagram

=== Number theoretic algorithms ===
- Binary GCD algorithm: efficient way of calculating GCD
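The binary GCD algorithm (Stein's algorithm) listed above replaces the divisions of the Euclidean algorithm with shifts, subtractions, and parity tests. A sketch:

```python
def binary_gcd(a, b):
    """Stein's binary GCD: uses only shifts, subtraction, and parity tests."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the power of two common to both operands.
    shift = 0
    while (a | b) & 1 == 0:
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:          # make a odd
        a >>= 1
    while b != 0:
        while b & 1 == 0:      # gcd(odd, even) = gcd(odd, even / 2)
            b >>= 1
        if a > b:              # keep a <= b so the subtraction is non-negative
            a, b = b, a
        b -= a                 # gcd(a, b) = gcd(a, b - a) for odd a, b
    return a << shift          # restore the common power of two
```

On hardware where division is expensive, this can outperform the classical Euclidean algorithm, which is why it is singled out in the list.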
- Booth's multiplication algorithm
- Chakravala method: a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation
- Discrete logarithm:
  - Baby-step giant-step
  - Index calculus algorithm
  - Pohlig–Hellman algorithm
  - Pollard's rho algorithm for logarithms
- Euclidean algorithm: computes the greatest common divisor
- Extended Euclidean algorithm: also computes the Bézout coefficients x and y with ax + by = gcd(a, b)
- Integer factorization: breaking an integer into its prime factors
  - Congruence of squares
  - Dixon's algorithm
  - Fermat's factorization method
  - General number field sieve
  - Lenstra elliptic curve factorization
  - Pollard's p − 1 algorithm
  - Pollard's rho algorithm: prime factorization algorithm
  - Quadratic sieve
  - Shor's algorithm
  - Special number field sieve
  - Trial division
- Lenstra–Lenstra–Lovász algorithm (also known as LLL algorithm): find a short, nearly orthogonal lattice basis in polynomial time
- Modular square root: computing square roots modulo a prime number
  - Berlekamp's root finding algorithm
  - Cipolla's algorithm
  - Tonelli–Shanks algorithm
- Multiplication algorithms: fast multiplication of two numbers
  - Karatsuba algorithm
  - Schönhage–Strassen algorithm
  - Toom–Cook multiplication
- Odlyzko–Schönhage algorithm: calculates nontrivial zeroes of the Riemann zeta function
- Primality tests: determining whether a given number is prime
  - AKS primality test
  - Baillie–PSW primality test
  - Fermat primality test
  - Lucas primality test
  - Miller–Rabin primality test
  - Sieve of Atkin
  - Sieve of Eratosthenes
  - Sieve of Sundaram

=== Numerical algorithms ===

==== Differential equation solving ====
- Backward Euler method
- Euler method
- Linear multistep methods
- Multigrid methods (MG methods), a group of algorithms for solving differential equations using a hierarchy of discretizations
- Partial differential equation:
  - Crank–Nicolson method for diffusion equations
  - Finite difference method
  - Lax–Wendroff for wave equations
- Runge–Kutta methods
  - Euler integration
- Trapezoidal rule (differential equations)
- Verlet integration (French pronunciation:
  [vɛʁˈlɛ]): integrate Newton's equations of motion

==== Elementary and special functions ====
- Computation of π:
  - Bailey–Borwein–Plouffe formula (BBP formula): a spigot algorithm for the computation of the nth binary digit of π
  - Borwein's algorithm: an algorithm to calculate the value of 1/π
  - Chudnovsky algorithm: a fast method for calculating the digits of π
  - Gauss–Legendre algorithm: computes the digits of π
- Division algorithms: for computing the quotient and/or remainder of two numbers
  - Goldschmidt division
  - Long division
  - Newton–Raphson division: uses Newton's method to find the reciprocal of D, and multiplies that reciprocal by N to find the final quotient Q
  - Non-restoring division
  - Restoring division
  - SRT division
- Exponentiation:
  - Addition-chain exponentiation: exponentiation by positive integer powers that requires a minimal number of multiplications
  - Exponentiation by squaring: an algorithm used for the fast computation of large integer powers of a number
- Hyperbolic and trigonometric functions:
  - BKM algorithm: computes elementary functions using a table of logarithms
  - CORDIC: computes hyperbolic and trigonometric functions using a table of arctangents
- Montgomery reduction: an algorithm that allows modular arithmetic to be performed efficiently when the modulus is large
- Multiplication algorithms: fast multiplication of two numbers
  - Booth's multiplication algorithm: a multiplication algorithm that multiplies two signed binary numbers in two's complement notation
  - Fürer's algorithm: an integer multiplication algorithm for very large numbers possessing a very low asymptotic complexity
  - Karatsuba algorithm: an efficient procedure for multiplying large numbers
  - Schönhage–Strassen algorithm: an asymptotically fast multiplication algorithm for large integers
  - Toom–Cook multiplication (Toom3): a multiplication algorithm for large integers
- Multiplicative inverse algorithms: for computing a number's multiplicative inverse (reciprocal)
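Exponentiation by squaring, listed above, reduces b^e from e − 1 multiplications to O(log e) by processing the bits of the exponent. A sketch, with an optional modulus since the technique is most often used in modular arithmetic (the name `power_mod` is our own):

```python
def power_mod(base, exp, mod=None):
    """Exponentiation by squaring: base**exp in O(log exp) multiplications,
    optionally reduced modulo mod at every step."""
    result = 1
    while exp > 0:
        if exp & 1:            # low bit of the exponent set: fold base in
            result = result * base if mod is None else (result * base) % mod
        base = base * base if mod is None else (base * base) % mod
        exp >>= 1              # move to the next bit of the exponent
    return result
```

Squaring at each step corresponds to the binary expansion of the exponent: b^13 = b^8 · b^4 · b^1.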
  - Newton's method
- Rounding functions: the classic ways to round numbers
- Spigot algorithm: a way to compute the value of a mathematical constant without knowing preceding digits
- Square and nth root of a number:
  - Alpha max plus beta min algorithm: an approximation of the square root of the sum of two squares
  - Methods of computing square roots
  - nth root algorithm
- Summation:
  - Binary splitting: a divide-and-conquer technique which speeds up the numerical evaluation of many types of series with rational terms
  - Kahan summation algorithm: a more accurate method of summing floating-point numbers
- Unrestricted algorithm

==== Geometric ====
- Filtered back-projection: efficiently computes the inverse 2-dimensional Radon transform
- Level set method (LSM): a numerical technique for tracking interfaces and shapes

==== Interpolation and extrapolation ====
- Birkhoff interpolation: an extension of polynomial interpolation
- Cubic interpolation
- Hermite interpolation
- Lagrange interpolation: interpolation using Lagrange polynomials
- Linear interpolation: a method of curve fitting using linear polynomials
- Monotone cubic interpolation: a variant of cubic interpolation that preserves monotonicity of the data set being interpolated
- Multivariate interpolation
  - Bicubic interpolation: a generalization of cubic interpolation to two dimensions
  - Bilinear interpolation: an extension of linear interpolation for interpolating functions of two variables on a regular grid
  - Lanczos resampling ("Lanzosh"): a multivariate interpolation method used to compute new values for any digitally sampled data
  - Nearest-neighbor interpolation
  - Tricubic interpolation: a generalization of cubic interpolation to three dimensions
- Pareto interpolation: a method of estimating the median and other properties of a population that follows a Pareto distribution
- Polynomial interpolation
  - Neville's algorithm
- Spline interpolation: reduces error with Runge's phenomenon
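The Kahan summation algorithm mentioned above is short enough to show. Each addition's rounding error is recovered algebraically and carried into the next step; a sketch:

```python
def kahan_sum(values):
    """Kahan compensated summation: recovers low-order bits that naive
    left-to-right addition of floats would discard."""
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # apply the correction carried from the last step
        t = total + y        # low-order digits of y may be lost here...
        c = (t - total) - y  # ...but are recovered algebraically into c
        total = t
    return total
```

The naive sum of n terms can accumulate O(n) rounding error; Kahan summation keeps the error bounded by a small constant number of ulps, independent of n.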
  - De Boor algorithm: B-splines
  - De Casteljau's algorithm: Bézier curves
- Trigonometric interpolation

==== Linear algebra ====
- Eigenvalue algorithms
  - Arnoldi iteration
  - Inverse iteration
  - Jacobi method
  - Lanczos iteration
  - Power iteration
  - QR algorithm
  - Rayleigh quotient iteration
- Gram–Schmidt process: orthogonalizes a set of vectors
- Krylov methods (for large sparse matrix problems; third most-important numerical method class of the 20th century as ranked by SISC, after the fast Fourier transform and fast multipole methods)
- Matrix multiplication algorithms
  - Cannon's algorithm: a distributed algorithm for matrix multiplication especially suitable for computers laid out in an N × N mesh
  - Coppersmith–Winograd algorithm: square matrix multiplication
  - Freivalds' algorithm: a randomized algorithm used to verify matrix multiplication
  - Strassen algorithm: faster matrix multiplication
- Solving systems of linear equations
  - Biconjugate gradient method: solves systems of linear equations
  - Conjugate gradient: an algorithm for the numerical solution of particular systems of linear equations
  - Gauss–Jordan elimination: solves systems of linear equations
  - Gauss–Seidel method: solves systems of linear equations iteratively
  - Gaussian elimination
  - Levinson recursion: solves equations involving a Toeplitz matrix
  - Stone's method: also known as the strongly implicit procedure or SIP, an algorithm for solving a sparse linear system of equations
  - Successive over-relaxation (SOR): method used to speed up convergence of the Gauss–Seidel method
  - Tridiagonal matrix algorithm (Thomas algorithm): solves systems of tridiagonal equations
- Sparse matrix algorithms
  - Cuthill–McKee algorithm: reduce the bandwidth of a symmetric sparse matrix
  - Minimum degree algorithm: permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition
  - Symbolic Cholesky decomposition: efficient way of storing a sparse matrix

==== Monte Carlo ====
- Gibbs sampling: generates a sequence of samples from the joint probability distribution of
  two or more random variables
- Hybrid Monte Carlo: generates a sequence of samples using Hamiltonian weighted Markov chain Monte Carlo, from a probability distribution which is difficult to sample directly
- Metropolis–Hastings algorithm: used to generate a sequence of samples from the probability distribution of one or more variables
- Wang and Landau algorithm: an extension of Metropolis–Hastings sampling

==== Numerical integration ====
- MISER algorithm: Monte Carlo simulation, numerical integration

==== Root finding ====
- Bisection method
- False position method and Illinois method: 2-point, bracketing
- Halley's method: uses first and second derivatives
- ITP method: minmax optimal and superlinear convergence simultaneously
- Muller's method: 3-point, quadratic interpolation
- Newton's method: finds zeros of functions with calculus
- Ridders' method: 3-point, exponential scaling
- Secant method: 2-point, 1-sided

=== Optimization algorithms ===
- Hybrid algorithms
- Alpha–beta pruning: search to reduce number of nodes in minimax algorithm
- A hybrid BFGS-like method (see https://doi.org/10.1016/j.cam.2024.115857)
- Branch and bound
- Bruss algorithm: see odds algorithm
- Chain matrix multiplication
- Combinatorial optimization: optimization problems where the set of feasible solutions is discrete
  - Greedy randomized adaptive search procedure (GRASP): successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search
  - Hungarian method: a combinatorial optimization algorithm which solves the assignment problem in polynomial time
- Conjugate gradient methods (see https://doi.org/10.1016/j.jksus.2022.101923)
- Constraint satisfaction
  - AC-3 algorithm: a general algorithm for constraint satisfaction
  - Chaff algorithm: an algorithm for solving instances of the Boolean satisfiability problem
  - Davis–Putnam algorithm: check the validity of a first-order logic formula
  - Difference map algorithm: a general algorithm for constraint
  satisfaction
  - Davis–Putnam–Logemann–Loveland algorithm (DPLL): an algorithm for deciding the satisfiability of propositional logic formulas in conjunctive normal form, i.e. for solving the CNF-SAT problem
  - Exact cover problem
    - Algorithm X: a nondeterministic algorithm
    - Dancing Links: an efficient implementation of Algorithm X
  - Min conflicts algorithm: a general algorithm for constraint satisfaction
- Cross-entropy method: a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling
- Differential evolution
- Dynamic programming: problems exhibiting the properties of overlapping subproblems and optimal substructure
- Ellipsoid method: an algorithm for solving convex optimization problems
- Evolutionary computation: optimization inspired by biological mechanisms of evolution
  - Evolution strategy
  - Gene expression programming
  - Genetic algorithms
    - Fitness proportionate selection – also known as roulette-wheel selection
    - Stochastic universal sampling
    - Tournament selection
    - Truncation selection
  - Memetic algorithm
  - Swarm intelligence
    - Ant colony optimization
    - Bees algorithm: a search algorithm which mimics the food foraging behavior of swarms of honey bees
    - Particle swarm
- Frank–Wolfe algorithm: an iterative first-order optimization algorithm for constrained convex optimization
- Golden-section search: an algorithm for finding the maximum of a real function
- Gradient descent
- Grid search
- Harmony search (HS): a metaheuristic algorithm mimicking the improvisation process of musicians
- A hybrid HS-LS conjugate gradient algorithm (see https://doi.org/10.1016/j.cam.2023.115304)
- Interior point method
- Line search
- Linear programming
  - Benson's algorithm: an algorithm for solving linear vector optimization problems
  - Dantzig–Wolfe decomposition: an algorithm for solving linear programming problems with special structure
  - Delayed column generation
  - Integer linear programming: solve linear programming problems where some or all the unknowns are restricted to integer
  values
    - Branch and cut
    - Cutting-plane method
  - Karmarkar's algorithm: the first reasonably efficient algorithm that solves the linear programming problem in polynomial time
  - Simplex algorithm: an algorithm for solving linear programming problems
- Local search: a metaheuristic for solving computationally hard optimization problems
  - Random-restart hill climbing
  - Tabu search
- Minimax: used in game programming
- Nearest neighbor search (NNS): find closest points in a metric space
  - Best Bin First: find an approximate solution to the nearest neighbor search problem in very-high-dimensional spaces
- Newton's method in optimization
- Nonlinear optimization
  - BFGS method: a nonlinear optimization algorithm
  - Gauss–Newton algorithm: an algorithm for solving nonlinear least squares problems
  - Levenberg–Marquardt algorithm: an algorithm for solving nonlinear least squares problems
  - Nelder–Mead method (downhill simplex method): a nonlinear optimization algorithm
- Odds algorithm (Bruss algorithm): finds the optimal strategy to predict a last specific event in a random sequence
- Random search
- Simulated annealing
- Stochastic tunneling
- Subset sum algorithm

== Computational science ==

=== Astronomy ===
- Doomsday algorithm: day of the week
- Easter algorithms: various algorithms used to calculate the day of Easter
- Zeller's congruence: an algorithm to calculate the day of the week for any Julian or Gregorian calendar date

=== Bioinformatics ===
- Basic Local Alignment Search Tool, also known as BLAST: an algorithm for comparing primary biological sequence information
- Bloom filter: probabilistic data structure used to test for the existence of an element within a set. Primarily used in bioinformatics to test for the existence of a k-mer in a sequence or sequences.
- Kabsch algorithm: calculate the optimal alignment of two sets of points in order to compute the root mean squared deviation between two protein structures
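Zeller's congruence, listed above under astronomy/calendrical algorithms, is a one-line formula. A sketch for the Gregorian calendar:

```python
def zeller_gregorian(year, month, day):
    """Zeller's congruence for the Gregorian calendar.
    Returns 0 = Saturday, 1 = Sunday, ..., 6 = Friday."""
    if month < 3:              # January and February are counted as
        month += 12            # months 13 and 14 of the previous year
        year -= 1
    k = year % 100             # year of the century
    j = year // 100            # zero-based century
    return (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
```

The month shift exists so that the leap day, if any, falls at the end of the bookkeeping year, which is what makes the linear term (13(m + 1))/5 track the irregular month lengths.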
- Maximum parsimony (phylogenetics): an algorithm for finding the simplest phylogenetic tree to explain a given character matrix
- Sorting by signed reversals: an algorithm for understanding genomic evolution
- UPGMA: a distance-based phylogenetic tree construction algorithm
- Velvet: a set of algorithms manipulating de Bruijn graphs for genomic sequence assembly

=== Geoscience ===
- Geohash: a public domain algorithm that encodes a decimal latitude/longitude pair as a hash string
- Vincenty's formulae: a fast algorithm to calculate the distance between two latitude/longitude points on an ellipsoid

=== Linguistics ===
- Lesk algorithm: word sense disambiguation
- Stemming algorithm: a method of reducing words to their stem, base, or root form
- Sukhotin's algorithm: a statistical classification algorithm for classifying characters in a text as vowels or consonants

=== Medicine ===
- ESC algorithm for the diagnosis of heart failure
- Manning Criteria for irritable bowel syndrome
- Pulmonary embolism diagnostic algorithms
- Texas Medication Algorithm Project

=== Physics ===
- Constraint algorithm: a class of algorithms for satisfying constraints for bodies that obey Newton's equations of motion
- Demon algorithm: a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy
- Featherstone's algorithm: computes the effects of forces applied to a structure of joints and links
- Glauber dynamics: a method for simulating the Ising model on a computer
- Ground state approximation
  - Variational method
    - Ritz method
- n-body problems
  - Barnes–Hut simulation: solves the n-body problem in an approximate way that has order O(n log n) instead of O(n²) as in a direct-sum simulation
- Fast multipole method (FMM): speeds up the calculation of long-ranged forces
- Rainflow-counting algorithm: reduces a complex stress history to a count of elementary stress-reversals for use in fatigue analysis
- Sweep and prune: a broad phase algorithm used during collision detection to limit the number of pairs of solids that need to be checked for collision
- VEGAS algorithm: a method for reducing error in Monte Carlo simulations

=== Statistics ===
- Algorithms for calculating variance: avoiding instability and numerical overflow
- Approximate counting algorithm: allows counting large numbers of events in a small register
- Bayesian statistics
  - Nested sampling algorithm: a computational approach to the problem of comparing models in Bayesian statistics
- Clustering algorithms
  - Average-linkage clustering: a simple agglomerative clustering algorithm
  - Canopy clustering algorithm: an unsupervised pre-clustering algorithm related to the k-means algorithm
  - Chinese whispers
  - Complete-linkage clustering: a simple agglomerative clustering algorithm
  - DBSCAN: a density-based clustering algorithm
  - Expectation-maximization algorithm
  - Fuzzy clustering: a class of clustering algorithms where each point has a degree of belonging to clusters
    - FLAME clustering (Fuzzy clustering by Local Approximation of MEmberships): define clusters in the dense parts of a dataset and perform cluster assignment solely based on the neighborhood relationships among objects
    - Fuzzy c-means
  - k-means clustering: cluster objects based on attributes into partitions
  - k-means++: a variation of this, using modified random seeds
  - k-medoids: similar to k-means, but chooses datapoints or medoids as centers
  - KHOPCA clustering algorithm: a local clustering algorithm, which produces hierarchical multi-hop clusters in static and mobile environments
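The "algorithms for calculating variance" entry above refers to single-pass methods such as Welford's online algorithm, which avoids the catastrophic cancellation of the naive sum-of-squares formula. A sketch (the function name is our own):

```python
def online_variance(values):
    """Welford's online algorithm: single-pass mean and population variance,
    numerically stable even when the variance is tiny relative to the mean."""
    n = 0
    mean = 0.0
    m2 = 0.0                       # sum of squared deviations from the mean
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n          # update the running mean
        m2 += delta * (x - mean)   # uses both old and new mean
    variance = m2 / n if n else float("nan")
    return mean, variance
```

The naive formula E[x²] − E[x]² subtracts two large, nearly equal numbers when the data's spread is small compared to its magnitude; Welford's update never forms those large intermediates.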
  - Linde–Buzo–Gray algorithm: a vector quantization algorithm to derive a good codebook
  - Lloyd's algorithm (Voronoi iteration or relaxation): group data points into a given number of categories; a popular algorithm for k-means clustering
  - OPTICS: a density-based clustering algorithm with a visual evaluation method
  - Single-linkage clustering: a simple agglomerative clustering algorithm
  - SUBCLU: a subspace clustering algorithm
  - WACA clustering algorithm: a local clustering algorithm with potentially multi-hop structures; for dynamic networks
  - Ward's method: an agglomerative clustering algorithm, extended to more general Lance–Williams algorithms
- Estimation theory
  - Expectation-maximization algorithm: a class of related algorithms for finding maximum likelihood estimates of parameters in probabilistic models
    - Ordered subset expectation maximization (OSEM): used in medical imaging for positron emission tomography, single-photon emission computed tomography and X-ray computed tomography
  - Kalman filter: estimate the state of a linear dynamic system from a series of noisy measurements
  - Odds algorithm (Bruss algorithm): optimal online search for a distinguished value in sequential random input
- False nearest neighbor algorithm (FNN): estimates fractal dimension
- Hidden Markov model
  - Baum–Welch algorithm: computes maximum likelihood estimates and posterior mode estimates for the parameters of a hidden Markov model
  - Forward-backward algorithm: a dynamic programming algorithm for computing the probability of a particular observation sequence
  - Viterbi algorithm: find the most likely sequence of hidden states in a hidden Markov model
- Partial least squares regression: finds a linear model describing some predicted variables in terms of other observable variables
- Queuing theory
  - Buzen's algorithm: an algorithm for calculating the normalization constant G(K) in the Gordon–Newell theorem
- RANSAC (an abbreviation for "RANdom SAmple Consensus"): an iterative method to estimate parameters of a mathematical
  model from a set of observed data which contains outliers
- Scoring algorithm: a form of Newton's method used to solve maximum likelihood equations numerically
- Yamartino method: calculate an approximation to the standard deviation σθ of wind direction θ during a single pass through the incoming data
- Ziggurat algorithm: generates random numbers from a non-uniform distribution

== Computer science ==

=== Computer architecture ===
- Tomasulo algorithm: allows sequential instructions that would normally be stalled due to certain dependencies to execute non-sequentially

=== Computer graphics ===
- Binary space partitioning
- Clipping
  - Line clipping
    - Cohen–Sutherland
    - Cyrus–Beck
    - Fast-clipping
    - Liang–Barsky
    - Nicholl–Lee–Nicholl
  - Polygon clipping
    - Sutherland–Hodgman
    - Vatti
    - Weiler–Atherton
- Contour lines and isosurfaces
  - Marching cubes: extract a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels)
  - Marching squares: generates contour lines for a two-dimensional scalar field
  - Marching tetrahedra: an alternative to marching cubes
- Discrete Green's theorem: an algorithm for computing the double integral over a generalized rectangular domain in constant time; a natural extension of the summed area table algorithm
- Flood fill: fills a connected region of a multi-dimensional array with a specified symbol
- Global illumination algorithms: considers direct illumination and reflection from other objects
  - Ambient occlusion
  - Beam tracing
  - Cone tracing
  - Image-based lighting
  - Metropolis light transport
  - Path tracing
  - Photon mapping
  - Radiosity
  - Ray tracing
- Hidden-surface removal or visual surface determination
  - Newell's algorithm: eliminate polygon cycles in the depth sorting required in hidden-surface removal
  - Painter's algorithm: detects visible parts of a 3-dimensional scenery
  - Scanline rendering: constructs an image by moving an imaginary line over the image
  - Warnock algorithm
- Line drawing: graphical algorithm for approximating a line segment on discrete graphical media
  - Bresenham's line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses decision variables)
  - DDA line algorithm: plots points of a 2-dimensional array to form a straight line between specified points
  - Xiaolin Wu's line algorithm: algorithm for line antialiasing
- Midpoint circle algorithm: an algorithm used to determine the points needed for drawing a circle
- Ramer–Douglas–Peucker algorithm: given a 'curve' composed of line segments, find a curve not too dissimilar but that has fewer points
- Shading
  - Gouraud shading: an algorithm to simulate the differing effects of light and colour across the surface of an object in 3D computer graphics
  - Phong shading: an algorithm to interpolate surface normal-vectors for surface shading in 3D computer graphics
- Slerp (spherical linear interpolation): quaternion interpolation for the purpose of animating 3D rotation
- Summed area table (also known as an integral image): an algorithm for computing the sum of values in a rectangular subset of a grid in constant time

=== Cryptography ===
- Asymmetric (public key) encryption:
  - ElGamal
  - Elliptic curve cryptography
  - MAE1
  - NTRUEncrypt
  - RSA
- Digital signatures (asymmetric authentication):
  - DSA, and its variants: ECDSA and Deterministic ECDSA
  - EdDSA (Ed25519)
  - RSA
- Cryptographic hash functions (see also the section on message authentication codes):
  - BLAKE
  - MD5 – Note that there is now a method of
  generating collisions for MD5
  - RIPEMD-160
  - SHA-1 – Note that there is now a method of generating collisions for SHA-1
  - SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512)
  - SHA-3 (SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128, SHAKE256)
  - Tiger (TTH), usually used in Tiger tree hashes
  - WHIRLPOOL
- Cryptographically secure pseudo-random number generators
  - Blum Blum Shub – based on the hardness of factorization
  - Fortuna, intended as an improvement on the Yarrow algorithm
  - Linear-feedback shift register (note: many LFSR-based algorithms are weak or have been broken)
  - Yarrow algorithm
- Key exchange
  - Diffie–Hellman key exchange
  - Elliptic-curve Diffie–Hellman (ECDH)
- Key derivation functions, often used for password hashing and key stretching
  - Argon2
  - bcrypt
  - PBKDF2
  - scrypt
- Message authentication codes (symmetric authentication algorithms, which take a key as a parameter):
  - HMAC: keyed-hash message authentication
  - Poly1305
  - SipHash
- Secret sharing, secret splitting, key splitting, M of N algorithms
  - Blakley's scheme
  - Shamir's secret sharing
- Symmetric (secret key) encryption:
  - Advanced Encryption Standard (AES), winner of NIST competition, also known as Rijndael
  - Blowfish
  - ChaCha20, updated variant of Salsa20
  - Data Encryption Standard (DES), sometimes DE Algorithm, winner of NBS selection competition, replaced by AES for most purposes
  - IDEA
  - RC4 (cipher)
  - Salsa20
  - Threefish
  - Tiny Encryption Algorithm (TEA)
  - Twofish
- Post-quantum cryptography
- Proof-of-work algorithms

=== Digital logic ===
- Boolean minimization
  - Espresso heuristic logic minimizer: a fast algorithm for Boolean function minimization
  - Petrick's method: another algorithm for Boolean simplification
  - Quine–McCluskey algorithm: also called the Q-M algorithm, a programmable method for simplifying Boolean equations

=== Machine learning and statistical classification ===
- Almeida–Pineda recurrent backpropagation: adjust a matrix of synaptic weights to generate desired outputs given its inputs
- ALOPEX: a correlation-based machine-learning algorithm
- Association rule learning: discover interesting relations between variables, used in data mining
  - Apriori algorithm
  - Eclat algorithm
  - FP-growth algorithm
  - One-attribute rule
  - Zero-attribute rule
- Boosting (meta-algorithm): use many weak learners to boost effectiveness
  - AdaBoost: adaptive boosting
  - BrownBoost: a boosting algorithm that may be robust to noisy datasets
  - LogitBoost: logistic regression boosting
  - LPBoost: linear programming boosting
- Bootstrap aggregating (bagging): technique to improve stability and classification accuracy
- Clustering: a class of unsupervised learning algorithms for grouping and bucketing related input vectors
- Computer vision
  - GrabCut, based on graph cuts
- Decision trees
  - C4.5 algorithm: an extension to ID3
  - ID3 algorithm (Iterative Dichotomiser 3): use heuristic to generate small decision trees
- k-nearest neighbors (k-NN): a non-parametric method for classifying objects based on the closest training examples in the feature space
- Linde–Buzo–Gray algorithm: a vector quantization algorithm used to derive a good codebook
- Locality-sensitive hashing (LSH): a method of performing probabilistic dimension reduction of high-dimensional data
- Neural networks
  - Backpropagation: a supervised learning method which requires a teacher that knows, or can calculate, the desired output for any given input
  - Hopfield net: a recurrent neural network in which all connections are symmetric
  - Perceptron: the simplest kind of feedforward neural network: a linear classifier
  - Pulse-coupled neural networks (PCNN): neural models proposed by modeling a cat's visual cortex and developed for high-performance biomimetic image processing
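The perceptron listed above is the simplest trainable linear classifier: weights are adjusted only on misclassified examples. A sketch on 2-D inputs (function names, the bias-in-`w[0]` convention, and the epoch count are our own illustrative choices):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Perceptron learning rule on 2-D data.
    samples: list of ((x1, x2), label) with label in {-1, +1}.
    Returns weights [bias, w1, w2]."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), label in samples:
            activation = w[0] + w[1] * x1 + w[2] * x2
            predicted = 1 if activation >= 0 else -1
            if predicted != label:       # update only on mistakes
                w[0] += lr * label
                w[1] += lr * label * x1
                w[2] += lr * label * x2
    return w

def predict(w, x):
    return 1 if w[0] + w[1] * x[0] + w[2] * x[1] >= 0 else -1
```

By the perceptron convergence theorem, this terminates with a separating hyperplane whenever the data are linearly separable; the Winnow algorithm mentioned below differs only in using multiplicative rather than additive updates.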
- Radial basis function network: an artificial neural network that uses radial basis functions as activation functions
- Self-organizing map: an unsupervised network that produces a low-dimensional representation of the input space of the training samples
- Random forest: classify using many decision trees
- Reinforcement learning:
  - Q-learning: learns an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter
  - State–Action–Reward–State–Action (SARSA): learn a Markov decision process policy
  - Temporal difference learning
- Relevance-Vector Machine (RVM): similar to SVM, but provides probabilistic classification
- Supervised learning: learning by examples (labelled data-set split into training-set and test-set)
- Support Vector Machine (SVM): a set of methods which divide multidimensional data by finding a dividing hyperplane with the maximum margin between the two sets
  - Structured SVM: allows training of a classifier for general structured output labels
- Winnow algorithm: related to the perceptron, but uses a multiplicative weight-update scheme

=== Programming language theory ===
- C3 linearization: an algorithm used primarily to obtain a consistent linearization of a multiple inheritance hierarchy in object-oriented programming
- Chaitin's algorithm: a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric
- Hindley–Milner type inference algorithm
- Rete algorithm: an efficient pattern matching algorithm for implementing production rule systems
- Sethi–Ullman algorithm: generates optimal code for arithmetic expressions

==== Parsing ====
- CYK algorithm: an O(n³) algorithm for parsing context-free grammars in Chomsky normal form
- Earley parser: another O(n³) algorithm for parsing any context-free grammar
- GLR parser: an algorithm for parsing any context-free grammar, by Masaru Tomita. It is tuned for deterministic grammars, on which it performs in almost linear time, and O(n³) in the worst case.
- Inside-outside algorithm: an O(n³) algorithm for re-estimating production probabilities in probabilistic context-free grammars
- Lexical analysis
- LL parser: a relatively simple linear-time parsing algorithm for a limited class of context-free grammars
- LR parser: a more complex linear-time parsing algorithm for a larger class of context-free grammars. Variants:
  - Canonical LR parser
  - LALR (look-ahead LR) parser
  - Operator-precedence parser
  - Simple LR parser
  - Simple precedence parser
- Packrat parser: a linear-time parsing algorithm supporting some context-free grammars and parsing expression grammars
- Pratt parser
- Recursive descent parser: a top-down parser suitable for LL(k) grammars
- Shunting-yard algorithm: converts an infix-notation math expression to postfix

=== Quantum algorithms ===
- Deutsch–Jozsa algorithm: determines whether a black-box Boolean function is constant or balanced
- Grover's algorithm: provides quadratic speedup for many search problems
- Shor's algorithm: provides exponential speedup (relative to currently known non-quantum algorithms) for factoring a number
- Simon's algorithm: provides a provably exponential speedup (relative to any non-quantum algorithm) for a black-box problem

=== Theory of computation and automata ===
- Hopcroft's algorithm, Moore's algorithm, and Brzozowski's algorithm: algorithms for minimizing the number of states in a deterministic finite automaton
- Powerset construction: algorithm to convert a nondeterministic automaton to a deterministic automaton
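The powerset construction listed above can be sketched directly: each DFA state is the set of NFA states reachable so far. A minimal version without epsilon-transitions (function name and data representation are our own):

```python
def nfa_to_dfa(alphabet, delta, start, accepts):
    """Powerset (subset) construction: NFA -> DFA.
    delta maps (state, symbol) -> set of successor states.
    DFA states are frozensets of NFA states; only reachable subsets are built."""
    dfa_start = frozenset([start])
    dfa_states = {dfa_start}
    dfa_delta = {}
    worklist = [dfa_start]
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            # Union of the NFA moves from every state in the current subset.
            successor = frozenset(
                t for s in current for t in delta.get((s, symbol), set())
            )
            dfa_delta[(current, symbol)] = successor
            if successor not in dfa_states:
                dfa_states.add(successor)
                worklist.append(successor)
    # A subset accepts iff it contains at least one accepting NFA state.
    dfa_accepts = {s for s in dfa_states if s & set(accepts)}
    return dfa_states, dfa_delta, dfa_start, dfa_accepts
```

In the worst case the DFA has 2^n states, but building only the reachable subsets, as above, usually yields far fewer in practice.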
Tarski–Kuratowski algorithm: a non-deterministic algorithm which provides an upper bound for the complexity of formulas in the arithmetical hierarchy and analytical hierarchy == Information theory and signal processing == === Coding theory === ==== Error detection and correction ==== BCH Codes Berlekamp–Massey algorithm Peterson–Gorenstein–Zierler algorithm Reed–Solomon error correction BCJR algorithm: decoding of error correcting codes defined on trellises (principally convolutional codes) Forward error correction Gray code Hamming codes Hamming(7,4): a Hamming code that encodes 4 bits of data into 7 bits by adding 3 parity bits Hamming distance: the number of positions at which two strings of equal length differ Hamming weight (population count): find the number of 1 bits in a binary word Redundancy checks Adler-32 Cyclic redundancy check Damm algorithm Fletcher's checksum Longitudinal redundancy check (LRC) Luhn algorithm: a method of validating identification numbers Luhn mod N algorithm: extension of Luhn to non-numeric characters Parity: simple/fast error detection technique Verhoeff algorithm ==== Lossless compression algorithms ==== Burrows–Wheeler transform: preprocessing useful for improving lossless compression Context tree weighting Delta encoding: aid to compression of data in which sequential data occurs frequently Dynamic Markov compression: Compression using predictive arithmetic coding Dictionary coders Byte pair encoding (BPE) Deflate Lempel–Ziv LZ77 and LZ78 Lempel–Ziv Jeff Bonwick (LZJB) Lempel–Ziv–Markov chain algorithm (LZMA) Lempel–Ziv–Oberhumer (LZO): speed oriented Lempel–Ziv Ross Williams (LZRW) Lempel–Ziv–Stac (LZS) Lempel–Ziv–Storer–Szymanski (LZSS) Lempel–Ziv–Welch (LZW) LZWL: syllable-based variant LZX Entropy encoding: coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols Arithmetic coding: advanced entropy coding Range encoding: same as arithmetic coding, but looked at in a slightly different way
Huffman coding: simple lossless compression taking advantage of relative character frequencies Adaptive Huffman coding: adaptive coding technique based on Huffman coding Package-merge algorithm: Optimizes Huffman coding subject to a length restriction on code strings Shannon–Fano coding Shannon–Fano–Elias coding: precursor to arithmetic encoding Entropy coding with known entropy characteristics Golomb coding: form of entropy coding that is optimal for alphabets following geometric distributions Rice coding: form of entropy coding that is optimal for alphabets following geometric distributions Truncated binary encoding Unary coding: code that represents a number n with n ones followed by a zero Universal codes: encodes positive integers into binary code words Elias delta, gamma, and omega coding Exponential-Golomb coding Fibonacci coding Levenshtein coding Fast Efficient & Lossless Image Compression System (FELICS): a lossless image compression algorithm Incremental encoding: delta encoding applied to sequences of strings Prediction by partial matching (PPM): an adaptive statistical data compression technique based on context modeling and prediction Run-length encoding: lossless data compression taking advantage of strings of repeated characters SEQUITUR algorithm: lossless compression by incremental grammar inference on a string ==== Lossy compression algorithms ==== 3Dc: a lossy data compression algorithm for normal maps Audio and Speech compression A-law algorithm: standard companding algorithm Code-excited linear prediction (CELP): low bit-rate speech compression Linear predictive coding (LPC): lossy compression by representing the spectral envelope of a digital signal of speech in compressed form Mu-law algorithm: standard analog signal compression or companding algorithm Warped Linear Predictive Coding (WLPC) Image compression Block Truncation Coding (BTC): a type of lossy image compression technique for greyscale images Embedded Zerotree Wavelet (EZW) Fast 
Cosine Transform algorithms (FCT algorithms): computes Discrete Cosine Transform (DCT) efficiently Fractal compression: method used to compress images using fractals Set Partitioning in Hierarchical Trees (SPIHT) Wavelet compression: form of data compression well suited for image compression (sometimes also video compression and audio compression) Transform coding: type of data compression for "natural" data like audio signals or photographic images Vector quantization: technique often used in lossy data compression Video compression === Digital signal processing === Adaptive-additive algorithm (AA algorithm): find the spatial frequency phase of an observed wave source Discrete Fourier transform: determines the frequencies contained in a (segment of a) signal Bluestein's FFT algorithm Bruun's FFT algorithm Cooley–Tukey FFT algorithm Fast Fourier transform Prime-factor FFT algorithm Rader's FFT algorithm Fast folding algorithm: an efficient algorithm for the detection of approximately periodic events within time series data Gerchberg–Saxton algorithm: Phase retrieval algorithm for optical planes Goertzel algorithm: identify a particular frequency component in a signal. Can be used for DTMF digit decoding. Karplus-Strong string synthesis: physical modelling synthesis to simulate the sound of a hammered or plucked string or some types of percussion ==== Image processing ==== Adaptive histogram equalization: histogram equalization which adapts to local changes in contrast - Contrast Enhancement Blind deconvolution: image de-blurring algorithm when point spread function is unknown. Connected-component labeling: find and label disjoint regions Dithering and half-toning Error diffusion Floyd–Steinberg dithering Ordered dithering Riemersma dithering Elser difference-map algorithm: a search algorithm for general constraint satisfaction problems. 
Originally used for X-Ray diffraction microscopy Feature detection Canny edge detector: detect a wide range of edges in images Generalised Hough transform Hough transform Marr–Hildreth algorithm: an early edge detection algorithm SIFT (Scale-invariant feature transform): is an algorithm to detect and describe local features in images. SURF (Speeded Up Robust Features): is a robust local feature detector, first presented by Herbert Bay et al. in 2006, that can be used in computer vision tasks like object recognition or 3D reconstruction. It is partly inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT and claimed by its authors to be more robust against different image transformations than SIFT. Histogram equalization: use histogram to improve image contrast - Contrast Enhancement Richardson–Lucy deconvolution: image de-blurring algorithm Median filtering Seam carving: content-aware image resizing algorithm Segmentation: partition a digital image into two or more regions GrowCut algorithm: an interactive segmentation algorithm Random walker algorithm Region growing Watershed transformation: a class of algorithms based on the watershed analogy == Software engineering == Cache algorithms CHS conversion: converting between disk addressing systems Double dabble: convert binary numbers to BCD Hash function: convert a large, possibly variable-sized amount of data into a small datum, usually a single integer that may serve as an index into an array Fowler–Noll–Vo hash function: fast with low collision rate Pearson hashing: computes 8-bit value only, optimized for 8-bit computers Zobrist hashing: used in the implementation of transposition tables Unicode collation algorithm Xor swap algorithm: swaps the values of two variables without using a buffer == Database algorithms == Algorithms for Recovery and Isolation Exploiting Semantics (ARIES): transaction recovery Join algorithms Block nested loop Hash join Nested loop join 
Sort-Merge Join The Chase == Distributed systems algorithms == Clock synchronization Berkeley algorithm Cristian's algorithm Intersection algorithm Marzullo's algorithm Consensus (computer science): agreeing on a single value or history among unreliable processors Chandra–Toueg consensus algorithm Paxos algorithm Raft (computer science) Detection of Process Termination Dijkstra–Scholten algorithm Huang's algorithm Lamport ordering: a partial ordering of events based on the happened-before relation Leader election: a method for dynamically selecting a coordinator Bully algorithm Mutual exclusion Lamport's Distributed Mutual Exclusion Algorithm Naimi–Trehel's log(n) Algorithm Maekawa's Algorithm Raymond's Algorithm Ricart–Agrawala Algorithm Snapshot algorithm: record a consistent global state for an asynchronous system Chandy–Lamport algorithm Vector clocks: generate a partial ordering of events in a distributed system and detect causality violations === Memory allocation and deallocation algorithms === Buddy memory allocation: an algorithm to allocate memory with reduced fragmentation Garbage collectors Cheney's algorithm: an improvement on the Semi-space collector Generational garbage collector: Fast garbage collectors that segregate memory by age Mark-compact algorithm: a combination of the mark-sweep algorithm and Cheney's copying algorithm Mark and sweep Semi-space collector: an early copying collector Reference counting == Networking == Karn's algorithm: addresses the problem of getting accurate estimates of the round-trip time for messages when using TCP Luleå algorithm: a technique for storing and searching internet routing tables efficiently Network congestion Exponential backoff Nagle's algorithm: improve the efficiency of TCP/IP networks by coalescing packets Truncated binary exponential backoff == Operating systems algorithms == Banker's algorithm: algorithm used for deadlock avoidance Page replacement algorithms: for selecting the victim page under low
memory conditions Adaptive replacement cache: better performance than LRU Clock with Adaptive Replacement (CAR): a page replacement algorithm with performance comparable to adaptive replacement cache === Process synchronization === Dekker's algorithm Lamport's Bakery algorithm Peterson's algorithm === Scheduling === Earliest deadline first scheduling Fair-share scheduling Least slack time scheduling List scheduling Multi level feedback queue Rate-monotonic scheduling Round-robin scheduling Shortest job next Shortest remaining time Top-nodes algorithm: resource calendar management === I/O scheduling === ==== Disk scheduling ==== Elevator algorithm: Disk scheduling algorithm that works like an elevator. Shortest seek first: Disk scheduling algorithm to reduce seek time. == See also == List of data structures List of machine learning algorithms List of pathfinding algorithms List of algorithm general topics List of terms relating to algorithms and data structures Heuristic == References ==
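Several of the checksum entries above, such as the Luhn algorithm, are compact enough to sketch directly. A minimal Python sketch of Luhn validation (the function name and interface are illustrative, not taken from any listed implementation):

```python
def luhn_valid(number: str) -> bool:
    """Check an identification number with the Luhn formula."""
    digits = [int(c) for c in number]
    total = 0
    # Walk the digits from the right, doubling every second one;
    # a doubled digit above 9 has 9 subtracted (its digit sum).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    # The number is valid when the checksum is divisible by 10.
    return total % 10 == 0
```

The widely used Luhn test number 79927398713 passes this check, while any single-digit alteration of it fails.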
Wikipedia/Graph_algorithm
In combinatorics, an area of mathematics, graph enumeration describes a class of combinatorial enumeration problems in which one must count undirected or directed graphs of certain types, typically as a function of the number of vertices of the graph. These problems may be solved either exactly (as an algebraic enumeration problem) or asymptotically. The pioneers in this area of mathematics were George Pólya, Arthur Cayley and J. Howard Redfield. == Labeled vs unlabeled problems == In some graphical enumeration problems, the vertices of the graph are considered to be labeled in such a way as to be distinguishable from each other, while in other problems any permutation of the vertices is considered to form the same graph, so the vertices are considered identical or unlabeled. In general, labeled problems tend to be easier. As with combinatorial enumeration more generally, the Pólya enumeration theorem is an important tool for reducing unlabeled problems to labeled ones: each unlabeled class is considered as a symmetry class of labeled objects. No closed-form formula is known for the number of unlabelled graphs with n vertices, but since almost all graphs are asymmetric this number is asymptotic to 2^{n(n−1)/2}/n!. == Exact enumeration formulas == Some important results in this area include the following. The number of labeled n-vertex simple undirected graphs is 2^{n(n−1)/2}. The number of labeled n-vertex simple directed graphs is 2^{n(n−1)}. The number Cn of connected labeled n-vertex undirected graphs satisfies the recurrence relation
{\displaystyle C_{n}=2^{n \choose 2}-{\frac {1}{n}}\sum _{k=1}^{n-1}k{n \choose k}2^{n-k \choose 2}C_{k}.} from which one may easily calculate, for n = 1, 2, 3, ..., that the values for Cn are 1, 1, 4, 38, 728, 26704, 1866256, ... (sequence A001187 in the OEIS). The number of labeled n-vertex free trees is n^{n−2} (Cayley's formula). The number of unlabeled n-vertex caterpillars is 2^{n−4} + 2^{⌊(n−4)/2⌋}. == Graph database == Various research groups have provided searchable databases that list graphs with certain properties for small sizes, for example The House of Graphs and the Small Graph Database. == References ==
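The recurrence for the number of connected labeled graphs lends itself to direct computation. A short Python sketch (the helper name is illustrative) that reproduces the values quoted above:

```python
from math import comb

def connected_labeled_graph_counts(n_max: int) -> list[int]:
    """Compute C_1..C_{n_max} from the recurrence
    C_n = 2^C(n,2) - (1/n) * sum_{k=1}^{n-1} k * C(n,k) * 2^C(n-k,2) * C_k."""
    C = {1: 1}
    for n in range(2, n_max + 1):
        s = sum(k * comb(n, k) * 2 ** comb(n - k, 2) * C[k]
                for k in range(1, n))
        C[n] = 2 ** comb(n, 2) - s // n  # the sum is always divisible by n
    return [C[n] for n in range(1, n_max + 1)]

# Matches OEIS A001187: 1, 1, 4, 38, 728, 26704, 1866256
print(connected_labeled_graph_counts(7))
```

The integer division is safe because each C_n is an integer, so the sum must be divisible by n.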
Wikipedia/Graph_enumeration
In computer science, graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically. It has numerous applications, ranging from software engineering (software construction and also software verification) to layout algorithms and picture generation. Graph transformations can be used as a computation abstraction. The basic idea is that if the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph. Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L → R, with L being called the pattern graph (or left-hand side) and R being called the replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching, thus solving the subgraph isomorphism problem) and by replacing the found occurrence by an instance of the replacement graph. Rewrite rules can be further regulated in the case of labeled graphs, such as in string-regulated graph grammars. Sometimes graph grammar is used as a synonym for graph rewriting system, especially in the context of formal languages; the different wording is used to emphasize the goal of constructions, like the enumeration of all graphs from some starting graph, i.e. the generation of a graph language – instead of simply transforming a given state (host graph) into a new state. == Graph rewriting approaches == === Algebraic approach === The algebraic approach to graph rewriting is based upon category theory.
The algebraic approach is further divided into sub-approaches, the most common of which are the double-pushout (DPO) approach and the single-pushout (SPO) approach. Other sub-approaches include the sesqui-pushout and the pullback approach. From the perspective of the DPO approach a graph rewriting rule is a pair of morphisms in the category of graphs and graph homomorphisms between them: r = (L ← K → R), also written L ⊇ K ⊆ R, where K → L is injective. The graph K is called invariant or sometimes the gluing graph. A rewriting step or application of a rule r to a host graph G is defined by two pushout diagrams both originating in the same morphism k : K → D, where D is a context graph (this is where the name double-pushout comes from). Another graph morphism m : L → G models an occurrence of L in G and is called a match. Practical understanding of this is that L is a subgraph that is matched from G (see subgraph isomorphism problem), and after a match is found, L is replaced with R in the host graph G, where K serves as an interface, containing the nodes and edges which are preserved when applying the rule. The graph K is needed to attach the pattern being matched to its context: if it is empty, the match can only designate a whole connected component of the graph G. In contrast, a graph rewriting rule of the SPO approach is a single morphism in the category of labeled multigraphs and partial mappings that preserve the multigraph structure: r : L → R. Thus a rewriting step is defined by a single pushout diagram. Practical understanding of this is similar to the DPO approach.
The difference is, that there is no interface between the host graph G and the graph G' being the result of the rewriting step. From the practical perspective, the key distinction between DPO and SPO is how they deal with the deletion of nodes with adjacent edges, in particular, how they avoid that such deletions may leave behind "dangling edges". The DPO approach only deletes a node when the rule specifies the deletion of all adjacent edges as well (this dangling condition can be checked for a given match), whereas the SPO approach simply disposes the adjacent edges, without requiring an explicit specification. There is also another algebraic-like approach to graph rewriting, based mainly on Boolean algebra and an algebra of matrices, called matrix graph grammars. === Determinate graph rewriting === Yet another approach to graph rewriting, known as determinate graph rewriting, came out of logic and database theory. In this approach, graphs are treated as database instances, and rewriting operations as a mechanism for defining queries and views; therefore, all rewriting is required to yield unique results (up to isomorphism), and this is achieved by applying any rewriting rule concurrently throughout the graph, wherever it applies, in such a way that the result is indeed uniquely defined. === Term graph rewriting === Another approach to graph rewriting is term graph rewriting, which involves the processing or transformation of term graphs (also known as abstract semantic graphs) by a set of syntactic rewrite rules. Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler's operational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. 
Term graphs can support automated verification and logic programming since they are well-suited to representing quantified statements in first-order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings. The TERMGRAPH conference focuses entirely on research into term graph rewriting and its applications. == Classes of graph grammar and graph rewriting system == Graph rewriting systems naturally group into classes according to the kind of representation of graphs that are used and how the rewrites are expressed. The term graph grammar, otherwise equivalent to graph rewriting system or graph replacement system, is most often used in classifications. Some common types are: Attributed graph grammars, typically formalised using either the single-pushout approach or the double-pushout approach to characterising replacements, mentioned in the above section on the algebraic approach to graph rewriting. Hypergraph grammars, including as more restrictive subclasses port graph grammars, linear graph grammars and interaction nets. == Implementations and applications == Graphs are an expressive, visual and mathematically precise formalism for modelling of objects (entities) linked by relations; objects are represented by nodes and relations between them by edges. Nodes and edges are commonly typed and attributed. Computations are described in this model by changes in the relations between the entities or by attribute changes of the graph elements. They are encoded in graph rewrite/graph transformation rules and executed by graph rewrite systems/graph transformation tools. Tools that are application domain neutral: AGG, the attributed graph grammar system (Java). GP 2 is a visual rule-based graph programming language designed to facilitate formal reasoning over graph programs.
GMTE Archived 2018-03-13 at the Wayback Machine, the Graph Matching and Transformation Engine for graph matching and transformation. It is an implementation of an extension of Messmer’s algorithm using C++. GrGen.NET, the graph rewrite generator, a graph transformation tool emitting C#-code or .NET-assemblies. GROOVE, a Java-based tool set for editing graphs and graph transformation rules, exploring the state spaces of graph grammars, and model checking those state spaces; can also be used as a graph transformation engine. Verigraph, a software specification and verification system based on graph rewriting (Haskell). Tools that solve software engineering tasks (mainly MDA) with graph rewriting: eMoflon, an EMF-compliant model-transformation tool with support for Story-Driven Modeling and Triple Graph Grammars. EMorF, a graph rewriting system based on EMF, supporting in-place and model-to-model transformation. Fujaba uses Story driven modelling, a graph rewrite language based on PROGRES. Graph databases often support dynamic rewriting of graphs. GReAT. Gremlin, a graph-based programming language (see Graph Rewriting). Henshin, a graph rewriting system based on EMF, supporting in-place and model-to-model transformation, critical pair analysis, and model checking. PROGRES, an integrated environment and very high level language for PROgrammed Graph REwriting Systems. VIATRA. Mechanical engineering tools GraphSynth is an interpreter and UI environment for creating unrestricted graph grammars as well as testing and searching the resultant language variant. It saves graphs and graph grammar rules as XML files and is written in C#. Soley Studio, is an integrated development environment for graph transformation systems. Its main application focus is data analytics in the field of engineering.
Biology applications Functional-structural plant modeling with a graph grammar based language Multicellular development modeling with string-regulated graph grammars Kappa is a rule-based language for modeling systems of interacting agents, primarily motivated by molecular systems biology. Artificial Intelligence/Natural Language Processing OpenCog provides a basic pattern matcher (on hypergraphs) which is used to implement various AI algorithms. RelEx is an English-language parser that employs graph re-writing to convert a link parse into a dependency parse. Computer programming language The Clean programming language is implemented using graph rewriting. == See also == Graph theory Shape grammar Formal grammar Abstract rewriting — a generalization of graph rewriting == References == === Citations === === Sources ===
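The practical difference between DPO- and SPO-style node deletion described in the algebraic-approach section can be illustrated on a naive set-based graph representation. This is a toy sketch under simplifying assumptions (plain node/edge sets, a rule that deletes only a node), not the API of any tool listed above:

```python
def delete_node_dpo(nodes, edges, v):
    """DPO-style deletion: the dangling condition forbids deleting a node
    whose incident edges are not also deleted by the rule."""
    if any(v in e for e in edges):
        raise ValueError(f"dangling condition violated: {v} has incident edges")
    return nodes - {v}, set(edges)

def delete_node_spo(nodes, edges, v):
    """SPO-style deletion: incident edges are simply disposed of."""
    return nodes - {v}, {e for e in edges if v not in e}

nodes = {"a", "b", "c"}
edges = {("a", "b"), ("b", "c")}
# SPO succeeds: node "b" and both of its would-be dangling edges disappear.
spo_nodes, spo_edges = delete_node_spo(nodes, edges, "b")
# DPO refuses the same match, because the rule deletes only the node:
# delete_node_dpo(nodes, edges, "b") raises ValueError.
```

On a node with no incident edges the two behave identically, which mirrors the fact that the approaches differ only in how they treat dangling edges.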
Wikipedia/Graph_transformation
In graph theory, the strong perfect graph theorem is a forbidden graph characterization of the perfect graphs as being exactly the graphs that have neither odd holes (odd-length induced cycles of length at least 5) nor odd antiholes (complements of odd holes). It was conjectured by Claude Berge in 1961. A proof by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas was announced in 2002 and published by them in 2006. The proof of the strong perfect graph theorem earned its authors a $10,000 prize offered by Gérard Cornuéjols of Carnegie Mellon University and the 2009 Fulkerson Prize. == Statement == A perfect graph is a graph in which, for every induced subgraph, the size of the maximum clique equals the minimum number of colors in a coloring of the graph; perfect graphs include many well-known graph classes including the bipartite graphs, chordal graphs, and comparability graphs. In his 1961 and 1963 works, which first defined this class of graphs, Claude Berge observed that it is impossible for a perfect graph to contain an odd hole, an induced subgraph in the form of an odd-length cycle graph of length five or more, because odd holes have clique number two and chromatic number three. Similarly, he observed that perfect graphs cannot contain odd antiholes, induced subgraphs complementary to odd holes: an odd antihole with 2k + 1 vertices has clique number k and chromatic number k + 1, which is again impossible for perfect graphs. The graphs having neither odd holes nor odd antiholes became known as the Berge graphs. Berge conjectured that every Berge graph is perfect, or equivalently that the perfect graphs and the Berge graphs define the same class of graphs. This became known as the strong perfect graph conjecture, until its proof in 2002, when it was renamed the strong perfect graph theorem.
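Berge's two observations can be verified by brute force on small graphs. The following Python sketch (exponential-time, for illustration only) checks the clique and chromatic numbers of the 5-hole and of the odd antihole on 7 vertices (the complement of C7, so k = 3):

```python
from itertools import combinations, product

def clique_number(n, adj):
    """Largest k such that some k vertices are pairwise adjacent."""
    return max(k for k in range(1, n + 1)
               if any(all(adj(u, v) for u, v in combinations(S, 2))
                      for S in combinations(range(n), k)))

def chromatic_number(n, adj):
    """Smallest k admitting a proper k-coloring (exhaustive search)."""
    for k in range(1, n + 1):
        for colors in product(range(k), repeat=n):
            if all(colors[u] != colors[v]
                   for u, v in combinations(range(n), 2) if adj(u, v)):
                return k

# Odd hole C5: clique number 2, chromatic number 3.
c5 = lambda u, v: (u - v) % 5 in (1, 4)
# Odd antihole on 2k+1 = 7 vertices: clique number k = 3, chromatic number k+1 = 4.
anti_c7 = lambda u, v: (u - v) % 7 not in (0, 1, 6)
```

In both graphs the clique number and chromatic number differ, so neither can occur as an induced subgraph of a perfect graph, exactly as the statement above requires.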
== Relation to the weak perfect graph theorem == Another conjecture of Berge, proved in 1972 by László Lovász, is that the complement of every perfect graph is also perfect. This became known as the perfect graph theorem, or (to distinguish it from the strong perfect graph conjecture/theorem) the weak perfect graph theorem. Because Berge's forbidden graph characterization is self-complementary, the weak perfect graph theorem follows immediately from the strong perfect graph theorem. == Proof ideas == The proof of the strong perfect graph theorem by Chudnovsky et al. follows an outline conjectured in 2001 by Conforti, Cornuéjols, Robertson, Seymour, and Thomas, according to which every Berge graph either forms one of five types of basic building block (special classes of perfect graphs) or it has one of four different types of structural decomposition into simpler graphs. A minimally imperfect Berge graph cannot have any of these decompositions, from which it follows that no counterexample to the theorem can exist. This idea was based on previous conjectured structural decompositions of similar type that would have implied the strong perfect graph conjecture but turned out to be false. The five basic classes of perfect graphs that form the base case of this structural decomposition are the bipartite graphs, line graphs of bipartite graphs, complementary graphs of bipartite graphs, complements of line graphs of bipartite graphs, and double split graphs. It is easy to see that bipartite graphs are perfect: in any induced subgraph with at least one edge, the clique number and chromatic number are both two and therefore are equal. The perfection of complements of bipartite graphs, and of complements of line graphs of bipartite graphs, are both equivalent to Kőnig's theorem relating the sizes of maximum matchings, maximum independent sets, and minimum vertex covers in bipartite graphs.
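Kőnig's theorem, invoked above, can be checked exhaustively on small bipartite graphs. A brute-force Python sketch (illustrative only, with a hypothetical example graph):

```python
from itertools import combinations

def max_matching_size(edges):
    """Largest set of pairwise vertex-disjoint edges (brute force)."""
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            used = [v for e in M for v in e]
            if len(used) == len(set(used)):  # no vertex repeated
                return k
    return 0

def min_vertex_cover_size(edges):
    """Smallest vertex set touching every edge (brute force)."""
    verts = sorted({v for e in edges for v in e})
    for k in range(len(verts) + 1):
        for cover in combinations(verts, k):
            if all(u in cover or v in cover for u, v in edges):
                return k

# A small bipartite example: left side u1..u3, right side v1, v2.
edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u3", "v2")]
# Kőnig's theorem: in bipartite graphs the two quantities coincide.
assert max_matching_size(edges) == min_vertex_cover_size(edges) == 2
```

Here the cover {v1, v2} meets every edge, and a matching such as {u2-v1, u3-v2} shows the bound is tight.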
The perfection of line graphs of bipartite graphs can be stated equivalently as the fact that bipartite graphs have chromatic index equal to their maximum degree, proven by Kőnig (1916). Thus, all four of these basic classes are perfect. The double split graphs are a relative of the split graphs that can also be shown to be perfect. The four types of decompositions considered in this proof are 2-joins, complements of 2-joins, balanced skew partitions, and homogeneous pairs. A 2-join is a partition of the vertices of a graph into two subsets, with the property that the edges spanning the cut between these two subsets form two vertex-disjoint complete bipartite graphs. When a graph has a 2-join, it may be decomposed into induced subgraphs called "blocks", by replacing one of the two subsets of vertices by a shortest path within that subset that connects one of the two complete bipartite graphs to the other; when no such path exists, the block is formed instead by replacing one of the two subsets of vertices by two vertices, one for each complete bipartite subgraph. A graph with a 2-join is perfect if and only if its two blocks are both perfect. Therefore, if a minimally imperfect graph has a 2-join, it must equal one of its blocks, from which it follows that it must be an odd cycle and not Berge. For the same reason, a minimally imperfect graph whose complement has a 2-join cannot be Berge. A skew partition is a partition of a graph's vertices into two subsets, one of which induces a disconnected subgraph and the other of which has a disconnected complement; Chvátal (1985) had conjectured that no minimal counterexample to the strong perfect graph conjecture could have a skew partition. Chudnovsky et al. introduced some technical constraints on skew partitions, and were able to show that Chvátal's conjecture is true for the resulting "balanced skew partitions". The full conjecture is a corollary of the strong perfect graph theorem.
A homogeneous pair is related to a modular decomposition of a graph. It is a partition of the graph into three subsets V1, V2, and V3 such that V1 and V2 together contain at least three vertices, V3 contains at least two vertices, and for each vertex v in V3 and each i in {1,2} either v is adjacent to all vertices in Vi or to none of them. It is not possible for a minimally imperfect graph to have a homogeneous pair. Subsequent to the proof of the strong perfect graph conjecture, Chudnovsky (2006) simplified it by showing that homogeneous pairs could be eliminated from the set of decompositions used in the proof. The proof that every Berge graph falls into one of the five basic classes or has one of the four types of decomposition follows a case analysis, according to whether certain configurations exist within the graph: a "stretcher", a subgraph that can be decomposed into three induced paths subject to certain additional constraints, the complement of a stretcher, and a "proper wheel", a configuration related to a wheel graph, consisting of an induced cycle together with a hub vertex adjacent to at least three cycle vertices and obeying several additional constraints. For each possible choice of whether a stretcher or its complement or a proper wheel exists within the given Berge graph, the graph can be shown to be in one of the basic classes or to be decomposable. This case analysis completes the proof. == Notes == == References == Berge, Claude (1961), "Färbung von Graphen, deren sämtliche bzw. deren ungerade Kreise starr sind", Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg Math.-Natur. Reihe, 10: 114. Berge, Claude (1963), "Perfect graphs", Six Papers on Graph Theory, Calcutta: Indian Statistical Institute, pp. 1–21. Chudnovsky, Maria (2006), "Berge trigraphs", Journal of Graph Theory, 53 (1): 1–55, doi:10.1002/jgt.20165, MR 2245543. 
Chudnovsky, Maria; Robertson, Neil; Seymour, Paul; Thomas, Robin (2006), "The strong perfect graph theorem", Annals of Mathematics, 164 (1): 51–229, arXiv:math/0212070, doi:10.4007/annals.2006.164.51, MR 2233847. Chudnovsky, Maria; Robertson, Neil; Seymour, Paul; Thomas, Robin (2003), "Progress on perfect graphs", Mathematical Programming, Series B., 97 (1–2): 405–422, CiteSeerX 10.1.1.137.3013, doi:10.1007/s10107-003-0449-8, MR 2004404. Chvátal, Václav (1985), "Star-cutsets and perfect graphs", Journal of Combinatorial Theory, Series B, 39 (3): 189–199, doi:10.1016/0095-8956(85)90049-8, MR 0815391. Chvátal, Václav; Sbihi, Najiba (1987), "Bull-free Berge graphs are perfect", Graphs and Combinatorics, 3 (2): 127–139, doi:10.1007/BF01788536, MR 0932129. Cornuéjols, Gérard (2002), "The strong perfect graph conjecture", Proceedings of the International Congress of Mathematicians, Vol. III (Beijing, 2002) (PDF), Beijing: Higher Ed. Press, pp. 547–559, MR 1957560. Cornuéjols, G.; Cunningham, W. H. (1985), "Compositions for perfect graphs", Discrete Mathematics, 55 (3): 245–254, doi:10.1016/S0012-365X(85)80001-7, MR 0802663. Hougardy, S. (1991), Counterexamples to three conjectures concerning perfect graphs, Technical Report RR870-M, Grenoble, France: Laboratoire Artemis-IMAG, Universitá Joseph Fourier. As cited by Roussel, Rusu & Thuillier (2009). Kőnig, Dénes (1916), "Gráfok és alkalmazásuk a determinánsok és a halmazok elméletére", Matematikai és Természettudományi Értesítő, 34: 104–119. Lovász, László (1972a), "Normal hypergraphs and the perfect graph conjecture", Discrete Mathematics, 2 (3): 253–267, doi:10.1016/0012-365X(72)90006-4. Lovász, László (1972b), "A characterization of perfect graphs", Journal of Combinatorial Theory, Series B, 13 (2): 95–98, doi:10.1016/0095-8956(72)90045-7. Mackenzie, Dana (July 5, 2002), "Mathematics: Graph theory uncovers the roots of perfection", Science, 297 (5578): 38, doi:10.1126/science.297.5578.38, PMID 12098683. Reed, B. A. 
(1986), A semi-strong perfect graph theorem, Ph.D. thesis, Montréal, Québec, Canada: Department of Computer Science, McGill University. As cited by Roussel, Rusu & Thuillier (2009). Roussel, F.; Rusu, I.; Thuillier, H. (2009), "The strong perfect graph conjecture: 40 years of attempts, and its resolution", Discrete Mathematics, 309 (20): 6092–6113, doi:10.1016/j.disc.2009.05.024, MR 2552645. Rusu, Irena (1997), "Building counterexamples", Discrete Mathematics, 171 (1–3): 213–227, doi:10.1016/S0012-365X(96)00081-7, MR 1454452. Seymour, Paul (2006), "How the proof of the strong perfect graph conjecture was found" (PDF), Gazette des Mathématiciens (109): 69–83, MR 2245898. == External links == The Strong Perfect Graph Theorem, Václav Chvátal Weisstein, Eric W. "Strong Perfect Graph Theorem". MathWorld.
Wikipedia/Strong_perfect_graph_theorem
In graph theory, the Kelmans–Seymour conjecture states that every 5-vertex-connected graph that is not planar contains a subdivision of the 5-vertex complete graph K5. It is named for Paul Seymour and Alexander Kelmans, who independently described the conjecture; Seymour in 1977 and Kelmans in 1979. A proof was announced in 2016, and published in four papers in 2020. == Formulation == A graph is 5-vertex-connected when it has more than five vertices and there is no set of at most four vertices whose removal leaves a disconnected graph. The complete graph K5 is the graph with an edge between every pair of its five vertices, and a subdivision of a complete graph modifies this by replacing some of its edges by longer paths. So a graph G contains a subdivision of K5 if it is possible to pick out five vertices of G, and a set of ten paths connecting these five vertices in pairs, without any two of the paths sharing edges or interior vertices. In any drawing of the graph on the Euclidean plane, at least two of the ten paths must cross each other, so a graph G that contains a K5 subdivision cannot be a planar graph. In the other direction, by Kuratowski's theorem, a graph that is not planar necessarily contains a subdivision of either K5 or of the complete bipartite graph K3,3. The Kelmans–Seymour conjecture refines this theorem by providing a condition under which only one of these two subdivisions, the subdivision of K5, can be guaranteed to exist. It states that, if a non-planar graph is 5-vertex-connected, then it contains a subdivision of K5. == Related results == A related result, Wagner's theorem, states that every 4-vertex-connected nonplanar graph contains a copy of K5 as a graph minor. One way of restating this result is that, in these graphs, it is always possible to perform a sequence of edge contraction operations so that the resulting graph contains a K5 subdivision. The Kelmans–Seymour conjecture states that, with a higher order of connectivity, these contractions are not required.
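Kuratowski's dichotomy mentioned above can be explored computationally: NetworkX's planarity test can return a Kuratowski subgraph (a subdivision of K5 or K3,3) as a certificate of nonplanarity. A small illustrative sketch, using K6 because it is both nonplanar and 5-vertex-connected:

```python
import networkx as nx

# K6 is nonplanar and 5-connected, so the Kelmans-Seymour conjecture
# predicts a K5 subdivision (here K5 itself already appears as a subgraph).
G = nx.complete_graph(6)

# With counterexample=True, a nonplanar graph yields a Kuratowski subgraph.
is_planar, cert = nx.check_planarity(G, counterexample=True)
print(is_planar)                # False
print(nx.node_connectivity(G))  # 5

# The certificate is a subdivision of K5 or of K3,3 (Kuratowski's theorem).
print(cert.number_of_nodes(), cert.number_of_edges())
```

This only checks the hypotheses and Kuratowski's theorem on one example; it is not, of course, a proof of the conjecture.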
An earlier conjecture of Gabriel Andrew Dirac (1964), proven in 2001 by Wolfgang Mader, states that every n-vertex graph with at least 3n − 5 edges contains a subdivision of K5. Because planar graphs have at most 3n − 6 edges, the graphs with at least 3n − 5 edges must be nonplanar. However, they need not be 5-connected, and 5-connected graphs can have as few as 2.5n edges. == Claimed proof == In 2016, a proof of the Kelmans–Seymour conjecture was claimed by Xingxing Yu of the Georgia Institute of Technology and his Ph.D. students Dawei He and Yan Wang. A sequence of four papers proving this conjecture appeared in Journal of Combinatorial Theory, Series B. == See also == Four-color theorem Hajós' conjecture == References ==
Wikipedia/Kelmans–Seymour_conjecture
In graph theory, an adjacent vertex of a vertex v in a graph is a vertex that is connected to v by an edge. The neighbourhood of a vertex v in a graph G is the subgraph of G induced by all vertices adjacent to v, i.e., the graph composed of the vertices adjacent to v and all edges connecting vertices adjacent to v. The neighbourhood is often denoted ⁠ N G ( v ) {\displaystyle N_{G}(v)} ⁠ or (when the graph is unambiguous) ⁠ N ( v ) {\displaystyle N(v)} ⁠. The same neighbourhood notation may also be used to refer to sets of adjacent vertices rather than the corresponding induced subgraphs. The neighbourhood described above does not include v itself, and is more specifically the open neighbourhood of v; it is also possible to define a neighbourhood in which v itself is included, called the closed neighbourhood and denoted by ⁠ N G [ v ] {\displaystyle N_{G}[v]} ⁠. When stated without any qualification, a neighbourhood is assumed to be open. Neighbourhoods may be used to represent graphs in computer algorithms, via the adjacency list and adjacency matrix representations. Neighbourhoods are also used in the clustering coefficient of a graph, which is a measure of the average density of its neighbourhoods. In addition, many important classes of graphs may be defined by properties of their neighbourhoods, or by symmetries that relate neighbourhoods to each other. An isolated vertex has no adjacent vertices. The degree of a vertex is equal to the number of adjacent vertices. A special case is a loop that connects a vertex to itself; if such an edge exists, the vertex belongs to its own neighbourhood. == Local properties in graphs == If all vertices in G have neighbourhoods that are isomorphic to the same graph H, G is said to be locally H, and if all vertices in G have neighbourhoods that belong to some graph family F, G is said to be locally F. 
For instance, in the octahedron graph, shown in the figure, each vertex has a neighbourhood isomorphic to a cycle of four vertices, so the octahedron is locally C4. For example: Any complete graph Kn is locally Kn-1. The only graphs that are locally complete are disjoint unions of complete graphs. A Turán graph T(rs,r) is locally T((r-1)s,r-1). More generally any Turán graph is locally Turán. Every planar graph is locally outerplanar. However, not every locally outerplanar graph is planar. A graph is triangle-free if and only if it is locally independent. Every k-chromatic graph is locally (k-1)-chromatic. Every locally k-chromatic graph has chromatic number O ( k n ) {\displaystyle O({\sqrt {kn}})} . If a graph family F is closed under the operation of taking induced subgraphs, then every graph in F is also locally F. For instance, every chordal graph is locally chordal; every perfect graph is locally perfect; every comparability graph is locally comparable; every (k)-(ultra)-homogeneous graph is locally (k)-(ultra)-homogeneous. A graph is locally cyclic if every neighbourhood is a cycle. For instance, the octahedron is the unique connected locally C4 graph, the icosahedron is the unique connected locally C5 graph, and the Paley graph of order 13 is locally C6. Locally cyclic graphs other than K4 are exactly the underlying graphs of Whitney triangulations, embeddings of graphs on surfaces in such a way that the faces of the embedding are the cliques of the graph. Locally cyclic graphs can have as many as n 2 − o ( 1 ) {\displaystyle n^{2-o(1)}} edges. Claw-free graphs are the graphs that are locally co-triangle-free; that is, for all vertices, the complement graph of the neighbourhood of the vertex does not contain a triangle. A graph that is locally H is claw-free if and only if the independence number of H is at most two; for instance, the graph of the regular icosahedron is claw-free because it is locally C5 and C5 has independence number two. 
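The "locally C4" property of the octahedron noted above can be verified mechanically: take each vertex's open neighbourhood, form the induced subgraph, and test isomorphism with a 4-cycle. A sketch using NetworkX:

```python
import networkx as nx

# Check that the octahedron is locally C4: the subgraph induced by every
# open neighbourhood is isomorphic to the 4-vertex cycle.
G = nx.octahedral_graph()
C4 = nx.cycle_graph(4)
for v in G:
    N = G.subgraph(G.neighbors(v))  # induced subgraph on v's neighbours
    assert nx.is_isomorphic(N, C4)
print("octahedron is locally C4")
```

The same pattern (induce on `G.neighbors(v)`, then test a property) works for any of the local properties listed here.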
The locally linear graphs are the graphs in which every neighbourhood is an induced matching. The Johnson graphs are locally grid, meaning that each neighborhood is a rook's graph. == Neighbourhood of a set == For a set A of vertices, the neighbourhood of A is the union of the neighbourhoods of the vertices, and so it is the set of all vertices adjacent to at least one member of A. A set A of vertices in a graph is said to be a module if every vertex in A has the same set of neighbours outside of A. Any graph has a unique recursive decomposition into modules, its modular decomposition, which can be constructed from the graph in linear time; modular decomposition algorithms have applications in other graph algorithms including the recognition of comparability graphs. == See also == Markov blanket Moore neighbourhood Von Neumann neighbourhood Second neighborhood problem Vertex figure, a related concept in polyhedra Link (simplicial complex), a generalization of the neighborhood to simplicial complexes == Notes == == References == Cohen, Arjeh M. (1990), "Local recognition of graphs, buildings, and related geometries" (PDF), in Kantor, William M.; Liebler, Robert A.; Payne, Stanley E.; Shult, Ernest E. (eds.), Finite Geometries, Buildings, and Related Topics: Papers from the Conference on Buildings and Related Geometries held in Pingree Park, Colorado, July 17–23, 1988, Oxford Science Publications, Oxford University Press, pp. 85–94, MR 1072157; see in particular pp. 89–90 Fronček, Dalibor (1989), "Locally linear graphs", Mathematica Slovaca, 39 (1): 3–6, hdl:10338.dmlcz/136481, MR 1016323 Hartsfield, Nora; Ringel, Gerhard (1991), "Clean triangulations", Combinatorica, 11 (2): 145–155, doi:10.1007/BF01206358, S2CID 28144260. Hell, Pavol (1978), "Graphs with given neighborhoods I", Problèmes combinatoires et théorie des graphes, Colloques internationaux C.N.R.S., vol. 260, pp. 219–223. Larrión, F.; Neumann-Lara, V.; Pizaña, M. A.
(2002), "Whitney triangulations, local girth and iterated clique graphs", Discrete Mathematics, 258 (1–3): 123–135, doi:10.1016/S0012-365X(02)00266-2. Malnič, Aleksander; Mohar, Bojan (1992), "Generating locally cyclic triangulations of surfaces", Journal of Combinatorial Theory, Series B, 56 (2): 147–164, doi:10.1016/0095-8956(92)90015-P. Sedláček, J. (1983), "On local properties of finite graphs", Graph Theory, Lagów, Lecture Notes in Mathematics, vol. 1018, Springer-Verlag, pp. 242–247, doi:10.1007/BFb0071634, ISBN 978-3-540-12687-4. Seress, Ákos; Szabó, Tibor (1995), "Dense graphs with cycle neighborhoods", Journal of Combinatorial Theory, Series B, 63 (2): 281–293, doi:10.1006/jctb.1995.1020. Wigderson, Avi (1983), "Improving the performance guarantee for approximate graph coloring", Journal of the ACM, 30 (4): 729–735, doi:10.1145/2157.2158, S2CID 32214512.
Wikipedia/Neighbourhood_(graph_theory)
NetworkX is a Python library for studying graphs and networks. NetworkX is free software released under the BSD-new license. == History == NetworkX began development in 2002 by Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart. It is supported by the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory. The package was crafted with the aim of creating tools to analyze data and intervention strategies for controlling the epidemic spread of disease, while also exploring the structure and dynamics of more general social, biological, and infrastructural systems. Inspired by Guido van Rossum's 1998 essay on Python graph representation, NetworkX made its public debut at the 2004 SciPy annual conference. In April 2005, NetworkX was made available as open source software. Several Python packages focusing on graph theory, including igraph, graph-tool, and numerous others, are available. As of April 2024, NetworkX had over 50 million downloads, surpassing the download count of the second most popular package, igraph, by more than 50-fold. This substantial adoption rate could potentially be attributed to NetworkX's early release and its continued evolution within the SciPy ecosystem. In 2008, SageMath, an open source mathematics system, incorporated NetworkX into its package and added support for more graphing algorithms and functions. == Features == Classes for graphs and digraphs. Conversion of graphs to and from several formats. Ability to construct random graphs or construct them incrementally. Ability to find subgraphs, cliques, k-cores. Explore adjacency, degree, diameter, radius, center, betweenness, etc. Draw networks in 2D and 3D. == Supported graph types == === Overview === Graphs, in this context, represent collections of vertices (nodes) and edges (connections) between them. NetworkX provides support for several types of graphs, each suited for different applications and scenarios. 
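As a minimal sketch, the four graph classes summarized here can be constructed directly; the node names are arbitrary illustrative choices:

```python
import networkx as nx

# The four basic graph classes.
G = nx.Graph()          # undirected, no parallel edges
D = nx.DiGraph()        # directed
M = nx.MultiGraph()     # undirected, parallel edges allowed
MD = nx.MultiDiGraph()  # directed, parallel edges allowed

M.add_edge("a", "b")
M.add_edge("a", "b")    # a second, parallel edge between the same nodes
D.add_edge("a", "b")    # directed: a -> b only

print(M.number_of_edges())   # 2: MultiGraph keeps both parallel edges
print(D.has_edge("b", "a"))  # False: direction matters in a DiGraph
```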
=== Directed graphs (DiGraph) === Directed graphs, or DiGraphs, consist of nodes connected by directed edges. In a directed graph, edges have a direction indicating the flow or relationship between nodes. === Undirected graphs (Graph) === Undirected graphs, simply referred to as graphs in NetworkX, are graphs where edges have no inherent direction. The connections between nodes are symmetrical, meaning if node A is connected to node B, then node B is also connected to node A. === MultiGraphs === MultiGraphs allow multiple edges between the same pair of nodes. In other words, MultiGraphs permit parallel edges, where more than one edge can exist between two nodes. === MultiDiGraphs === MultiDiGraphs are directed graphs that allow multiple directed edges between the same pair of nodes. Similar to MultiGraphs, MultiDiGraphs enable the modeling of scenarios where multiple directed relationships exist between nodes. === Challenges in visualization === While NetworkX provides powerful tools for graph creation and analysis, producing visualizations of complex graphs can be challenging. Visualizing large or densely connected graphs may require specialized techniques and external libraries beyond the capabilities of NetworkX alone. == Graph layouts == NetworkX provides various layout algorithms for visualizing graphs in two-dimensional space. These layout algorithms determine the positions of nodes and edges in a graph visualization, aiming to reveal its structure and relationships effectively. === Spring layout === The Spring Layout in NetworkX is a popular way to visualize graphs using a force-directed algorithm. It’s based on the Fruchterman-Reingold model, which works like a virtual physics simulation. Each node in your graph is a charged particle that repels other nodes, while the edges act like springs that pull connected nodes closer together. This balance creates a layout where the graph naturally spreads out into an informative shape. 
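The force-directed behaviour just described is exposed through the spring_layout function; a minimal sketch, with parameter values chosen purely for illustration:

```python
import networkx as nx

# Spring (force-directed) layout of a small social network.
G = nx.karate_club_graph()

# seed fixes the random initial positions so the result is reproducible;
# k (optimal node distance) and iterations are the main tuning knobs.
pos = nx.spring_layout(G, k=0.3, iterations=100, seed=42)

print(len(pos))     # one position per node (34 for the karate club graph)
print(len(pos[0]))  # 2 coordinates per node by default (dim=2)
```

The resulting `pos` dictionary can be passed to `nx.draw(G, pos=pos)` for plotting.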
As the algorithm runs, it tries to reduce the overall "energy" of the system by adjusting the positions of the nodes step by step. The result often highlights patterns in the graph—like clusters or groups of nodes that are tightly connected. It works best for small to medium-sized graphs, where clarity and appearance are important. You can create this layout in NetworkX using the spring_layout() function, found in networkx.drawing.layout. The function gives you a few options to customize the layout: you can control the distance between nodes with the k parameter or decide how many iterations the simulation should run. It also lets you lay out the graph in more than two dimensions by setting the dim parameter. This layout method is used for interactive and exploratory visualizations. The spring layout often reveals the structure of the graph in an intuitive and readable way. === Spectral layout === The Spectral layout is based on the spectral properties of the graph's adjacency matrix. It uses the eigenvalues and eigenvectors of the adjacency matrix to position nodes in a low-dimensional space. Spectral layout tends to emphasize the global structure of the graph, making it useful for identifying clusters and communities. ==== How the spectral layout works ==== Construct the Laplacian matrix of the graph. A Laplacian matrix L of a graph is defined as L = D − A {\displaystyle L=D-A} , where D {\displaystyle D} is the degree matrix (a diagonal matrix where D i i {\displaystyle D_{ii}} is the degree of vertex i) and A is the adjacency matrix. For a graph G {\displaystyle G} with n vertices, the adjacency matrix A is an n × n matrix where A i j = 1 {\displaystyle A_{ij}=1} if there is an edge between vertex i and vertex j and A i j = 0 {\displaystyle A_{ij}=0} otherwise. The Laplacian matrix has eigenvalues and corresponding eigenvectors that encode the graph's structural properties. Compute the eigenvectors corresponding to the smallest non-zero eigenvalues.
Use these eigenvectors as coordinate values for positioning vertices. X-coordinates come from the second eigenvector (associated with the second-smallest eigenvalue), and Y-coordinates come from the third eigenvector. Scale and center the resulting layout as needed. ==== Why it reveals the network's structure ==== Nodes in dense clusters have similar eigenvector entries, causing them to group spatially. The Fiedler vector (second eigenvector) minimizes the ratio cut, separating the graph into clusters with minimal interconnections. The spectral layout thus helps capture the global and communal structures embedded in a graph: compared with the spring layout, it keeps nodes belonging to the same community closely packed, capturing communities more easily. === Circular layout === The Circular layout arranges nodes evenly around a circle, with edges drawn as straight lines connecting them. This layout is particularly suitable for visualizing cyclic or symmetric graphs, where the arrangement of nodes along the circle reflects the underlying topology of the graph. === Shell layout === The Shell layout organizes nodes into concentric circles or shells based on their distance from a specified center. Nodes within the same shell have the same distance from the center, while edges are drawn radially between nodes in adjacent shells. Shell layout is often used for visualizing hierarchical or tree structures. === Kamada–Kawai layout === The Kamada–Kawai layout algorithm positions nodes based on their pairwise distances, aiming to minimize the total energy of the system. It takes into account both the graph's topology and edge lengths, resulting in a layout that emphasizes geometric accuracy and readability. === Random layout === Random layout assigns each node a random position inside the unit square (or specified box).
It’s trivial and fast—O(n)—but gives no information about graph structure. Use it as a baseline to compare against more meaningful layouts, or when you just need an initial seeding for iterative algorithms. It’s also handy for stress-testing your rendering pipeline. === Planar layout === Planar layout attempts to compute an embedding for planar graphs (graphs with no edge crossings) using graph combinatorial embedding. If the graph isn’t planar, it raises an exception. Planar embeddings exactly preserve the topology of planar networks—useful for circuit schematics, maps, or any truly planar structure. When it works, edges don’t cross, giving a clean representation. It’s linear‐time but only applicable to planar graphs. === Fruchterman–Reingold layout === Although “spring layout” and “Fruchterman–Reingold” are often used interchangeably, NetworkX exposes both via spring_layout and fruchterman_reingold_layout. Internally they share the same physics-based algorithm. You can tweak attraction/repulsion constants, number of iterations, and temperature schedules. It produces an “organic” network view that highlights dense subgraphs. Use it when you want a familiar force-directed aesthetic with customizable parameters. === Spiral layout === Spiral layout places nodes along an outward spiral, one after another in node order. It’s deterministic and extremely fast—useful for very large graphs where more expensive layouts would be too slow. Although it doesn’t use graph connectivity, you can impose ordering to reveal sequences or ranking. Because the spiral arms grow apart as they wind out, it can reduce overlap compared to a pure line or circle. It’s primarily a novelty, but useful for displaying long chains or temporal orderings. === Bipartite layout === Bipartite layout is specialized for two‐set graphs: it places one node set on a horizontal line at y=0 and the other at y=1. You must supply the two partitions (e.g. via a node attribute or explicit list).
Edges then connect vertically between layers, making the bipartite structure immediately clear. It’s ideal for affiliation networks, two‐mode data, or any graph with exactly two types of nodes. Positioning along the x-axis is usually automatic, but you can override it via the align parameter or manual x-coordinates. === Multipartite layout === Multipartite layout generalizes bipartite: it places nodes in multiple horizontal layers (or “partitions”) based on a node attribute (e.g. layer). Each partition is drawn on its own y-coordinate rung, and edges run between adjacent layers by default. This makes it perfect for multilevel DAGs, workflow diagrams, or any graph with more than two strata. You control the vertical spacing and ordering within each layer. It’s computed in linear time after grouping nodes by their partition key. === BFS layout === BFS layout (which can be implemented via a shell layout on BFS layers) arranges nodes by their distance from a source node. All nodes at distance d go to shell d. This visually encodes hop count from the root, so you see exactly how far each node lies in the graph. It’s perfect for shortest-path or reachability visualizations. Because it uses pure breadth-first search, it’s linear (O(n + m)) to compute. === Usage === NetworkX provides functions for applying different layout algorithms to graphs and visualizing the results using Matplotlib or other plotting libraries. Users can specify the desired layout algorithm when calling the drawing functions, allowing for flexible and customizable graph visualizations. == Suitability == NetworkX is suitable for operation on large real-world graphs: e.g., graphs in excess of 10 million nodes and 100 million edges. Due to its dependence on a pure-Python "dictionary of dictionary" data structure, NetworkX is a reasonably efficient, very scalable, highly portable framework for network and social network analysis.
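The usage pattern described above, computing positions with a layout function and handing them to a drawing routine via pos, can be sketched as follows; the shell partition is an arbitrary illustrative choice:

```python
import networkx as nx

# Each layout function maps nodes to coordinates; the resulting dict can
# be passed to nx.draw(G, pos=...) for plotting.
G = nx.cycle_graph(6)

pos_circ = nx.circular_layout(G)
pos_shell = nx.shell_layout(G, nlist=[[0, 1, 2], [3, 4, 5]])  # two shells
pos_rand = nx.random_layout(G, seed=1)  # random positions in the unit square

print(sorted(pos_circ))  # every node gets a position: [0, 1, 2, 3, 4, 5]
```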
== Applications == NetworkX was designed to be easy to use and learn, as well as a powerful and sophisticated tool for network analysis. It is used widely on many levels, ranging from computer science and data analysis education to large-scale scientific studies. NetworkX has applications in any field that studies data as graphs or networks, such as mathematics, physics, biology, computer science and social science. The nodes in a NetworkX graph can be specialized to hold any data, and the data stored in edges is arbitrary, further making it widely applicable to different fields. It is able to read in networks from data and randomly generate networks with specified qualities. This allows it to be used to explore changes across wide amounts of networks. The figure below demonstrates a simple example of the software's ability to create and modify variations across large amounts of networks. NetworkX has many network and graph analysis algorithms, aiding in a wide array of data analysis purposes. One important example of this is its various options for shortest path algorithms. The following algorithms are included in NetworkX, with time complexities given in terms of the number of vertices (V) and edges (E) in the graph: Dijkstra: O((V+E) log V) Bellman–Ford: O(V * E) Goldberg–Radzik: O(V * E) Johnson: O(V^2 log(V) + VE) Floyd–Warshall: O(V^3) A*: O((V+E) log V) An example of the use of NetworkX graph algorithms can be seen in a 2018 study, in which it was used to analyze the resilience of livestock production networks to the spread of epidemics. The study used a computer model to predict and study trends in epidemics throughout American hog production networks, taking into account all livestock industry roles. In the study, NetworkX was used to find information on degree, shortest paths, clustering, and k-cores as the model introduced infections and simulated their spread. This was then used to determine which networks are most susceptible to epidemics.
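The shortest-path options listed above are exposed through a family of functions; a minimal sketch on a tiny weighted graph:

```python
import networkx as nx

# A 3-node weighted graph: the direct a-c edge (weight 5) is more
# expensive than the two-hop route a-b-c (weight 1 + 2 = 3).
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 2), ("a", "c", 5)])

print(nx.shortest_path(G, "a", "c", weight="weight"))         # ['a', 'b', 'c']
print(nx.shortest_path_length(G, "a", "c", weight="weight"))  # 3
print(nx.dijkstra_path(G, "a", "c"))                          # ['a', 'b', 'c']
```

`nx.shortest_path` also accepts a `method` argument to select, for example, Bellman–Ford instead of the default Dijkstra.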
In addition to network creation and analysis, NetworkX also has many visualization capabilities. It provides hooks into Matplotlib and GraphViz for 2D visuals, and VTK and UbiGraph for 3D visuals. This makes the package useful for easily demonstrating and reporting network analysis and data, and allows for the simplification of networks for visual processing. == Comparison with MATLAB == MATLAB also allows the user to graph networks, and is widely used for this purpose by mathematicians, physicists, biologists, and computer scientists. MATLAB graphs can be more convenient than NetworkX graphs under some circumstances. === Dealing with large data === The main issue with NetworkX is memory usage when dealing with large graphs. NetworkX stores graph data in Python objects, which makes it difficult to handle tens of millions of objects without exhausting the computer's memory; this can cause out-of-memory errors when working with large graphs. MATLAB, on the other hand, processes large data sets more efficiently by integrating them with its existing infrastructure. The user can scale up and run MATLAB code interactively using parallel processing, as well as in deployed production mode, and can run MATLAB code on large data sets hosted on cloud platforms such as Databricks, Domino Data Lab, and Google BigQuery. === Cost === Python is an open-source programming language that can be downloaded and used free of charge, and many widely used Python libraries and packages (including NetworkX) are also completely free. MATLAB, by contrast, has six pricing editions, from $49 to $2,150, depending on the type of license; for example, high school and college students can get a MATLAB license at a lower cost than individuals and business owners. == Getting access to NetworkX through MATLAB == MATLAB provides interoperability with other languages such as Python.
MATLAB provides access to many programming languages, including C/C++, Java, and Python. NetworkX can be called from MATLAB, which gives the user the advantage of using it within the MATLAB workflow: the user can call Python-NetworkX code directly from MATLAB. == Applications to pure mathematics == Networks are useful tools for visualization, which aids analysis and prediction and helps researchers generalize ideas. They give a clear picture when dealing with a finite set, and can be used in different fields of mathematics such as set theory, abstract algebra, and number theory. === Graph the subgroup lattice of a group === The lattice of subgroups can be graphed for finite groups of reasonable order. === Graph ordered relations === Ordered relations on finite sets of reasonable size (cardinality) can be graphed. === Graph equivalence relations === An equivalence relation on a finite set of reasonable size can be graphed with NetworkX. This is a helpful way to visualize it, as it clearly shows the different equivalence classes. == Integration == NetworkX is integrated into SageMath. == See also == Social network analysis software JGraph == References == == External links == Official website: networkx.github.io NetworkX discussion group Survey of existing graph theory software NetworkX on StackOverflow networkx on GitHub
Wikipedia/NetworkX
In graph theory, two graphs G {\displaystyle G} and G ′ {\displaystyle G'} are homeomorphic if there is a graph isomorphism from some subdivision of G {\displaystyle G} to some subdivision of G ′ {\displaystyle G'} . If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in diagrams), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if their diagrams are homeomorphic in the topological sense. == Subdivision and smoothing == In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G. The subdivision of some edge e with endpoints {u,v} yields a graph containing one new vertex w, and with an edge set replacing e by two new edges, {u,w} and {w,v}. For directed edges, this operation preserves their direction. For example, the edge e, with endpoints {u,v}: can be subdivided into two edges, e1 and e2, connecting to a new vertex w of degree-2, or indegree-1 and outdegree-1 for the directed edge: Determining whether, for graphs G and H, H is homeomorphic to a subgraph of G is an NP-complete problem. === Reversion === The reverse operation, smoothing out or smoothing a vertex w with regard to the pair of edges (e1, e2) incident on w, removes both edges containing w and replaces (e1, e2) with a new edge that connects the other endpoints of the pair. Here, it is emphasized that only degree-2 (i.e., 2-valent) vertices can be smoothed. The limit of this operation is realized by the graph that has no more degree-2 vertices. For example, the simple connected graph with two edges, e1 {u,w} and e2 {w,v}: has a vertex (namely w) that can be smoothed away, resulting in: === Barycentric subdivisions === The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph.
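The barycentric subdivision, subdividing every edge at once, can be sketched in a few lines; this helper is illustrative, not a NetworkX built-in:

```python
import networkx as nx

# Barycentric subdivision: replace every edge {u, v} by a path u - w - v
# through a fresh degree-2 vertex w.
def barycentric_subdivision(G):
    H = nx.Graph()
    H.add_nodes_from(G)
    for u, v in G.edges():
        w = (u, v)  # a fresh vertex, named here after the edge it splits
        H.add_edge(u, w)
        H.add_edge(w, v)
    return H

K3 = nx.complete_graph(3)        # a triangle
S = barycentric_subdivision(K3)  # becomes a 6-cycle
print(nx.is_bipartite(S))        # True: barycentric subdivisions are bipartite
print(nx.is_isomorphic(S, nx.cycle_graph(6)))  # True
```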
This procedure can be repeated, so that the nth barycentric subdivision is the barycentric subdivision of the (n−1)st barycentric subdivision of the graph. The second such subdivision is always a simple graph. == Embedding on a surface == It is evident that subdividing a graph preserves planarity. Kuratowski's theorem states that a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (complete graph on five vertices) or K3,3 (complete bipartite graph on six vertices, three of which connect to each of the other three). In fact, a graph homeomorphic to K5 or K3,3 is called a Kuratowski subgraph. A generalization, following from the Robertson–Seymour theorem, asserts that for each integer g, there is a finite obstruction set of graphs L ( g ) = { G i ( g ) } {\displaystyle L(g)=\left\{G_{i}^{(g)}\right\}} such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any of the G i ( g ) {\displaystyle G_{i}^{(g)\!}} . For example, L ( 0 ) = { K 5 , K 3 , 3 } {\displaystyle L(0)=\left\{K_{5},K_{3,3}\right\}} consists of the Kuratowski subgraphs. == Example == In the following example, graph G and graph H are homeomorphic. If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H, then G′ and H′ have a similar graph drawing: Therefore, there exists an isomorphism between G' and H', meaning G and H are homeomorphic. === Mixed graphs === The following mixed graphs are homeomorphic. The directed edges are shown to have an intermediate arrow head. == See also == Minor (graph theory) Edge contraction == References == == Further reading == Yellen, Jay; Gross, Jonathan L. (2005), Graph Theory and Its Applications, Discrete Mathematics and Its Applications (2nd ed.), Chapman & Hall/CRC, ISBN 978-1-58488-505-4
Wikipedia/Subdivision_(graph_theory)
Graph Theory, 1736–1936 is a book in the history of mathematics on graph theory. It focuses on the foundational documents of the field, beginning with the 1736 paper of Leonhard Euler on the Seven Bridges of Königsberg and ending with the first textbook on the subject, published in 1936 by Dénes Kőnig. Graph Theory, 1736–1936 was edited by Norman L. Biggs, E. Keith Lloyd, and Robin J. Wilson, and published in 1976 by the Clarendon Press. The Oxford University Press published a paperback second edition in 1986, with a corrected reprint in 1998. == Topics == Graph Theory, 1736–1936 contains copies, extracts, and translations of 37 original sources in graph theory, grouped into ten chapters and punctuated by commentary on their meaning and context. It begins with Euler's 1736 paper "Solutio problematis ad geometriam situs pertinentis" on the seven bridges of Königsberg (both in the original Latin and in English translation) and ends with Dénes Kőnig's book Theorie der endlichen und unendlichen Graphen. The source material touches on recreational mathematics, chemical graph theory, the analysis of electrical circuits, and applications of graph theory in abstract algebra. Also included are background material on, and portraits of, the mathematicians who originally developed this material. The chapters of the book organize the material into topics within graph theory, rather than being strictly chronological. The first chapter, on paths, includes maze-solving algorithms as well as Euler's work on Euler tours. Next, a chapter on circuits includes material on knight's tours in chess (a topic that long predates Euler), Hamiltonian cycles, and the work of Thomas Kirkman on polyhedral graphs. Next follow chapters on spanning trees and Cayley's formula, chemical graph theory and graph enumeration, and planar graphs, Kuratowski's theorem, and Euler's polyhedral formula.
There are three chapters on the four color theorem and graph coloring, a chapter on algebraic graph theory, and a final chapter on graph factorization. Appendices provide a brief update on graph history since 1936, biographies of the authors of the works included in the book, and a comprehensive bibliography. == Audience and reception == Reviewer Ján Plesník names the book the first ever published on the history of graph theory, and although Hazel Perfect notes that parts of it can be difficult to read, Plesník states that it can also be used as "a self-contained introduction" to the field, and Edward Maziarz suggests its use as a textbook for graph theory courses. Perfect calls the book "fascinating ... full of information", thoroughly researched and carefully written, and Maziarz finds inspiring the ways in which it describes serious mathematics as arising from frivolous starting points. Fernando Q. Gouvêa calls it a "must-have" for anyone interested in graph theory, and Philip Peak also recommends it to anyone interested more generally in the history of mathematics. == References == == External links == Full text of first edition at the Internet Archive
Wikipedia/Graph_Theory,_1736–1936
In combinatorics, an area of mathematics, graph enumeration describes a class of combinatorial enumeration problems in which one must count undirected or directed graphs of certain types, typically as a function of the number of vertices of the graph. These problems may be solved either exactly (as an algebraic enumeration problem) or asymptotically. The pioneers in this area of mathematics were George Pólya, Arthur Cayley and J. Howard Redfield. == Labeled vs unlabeled problems == In some graphical enumeration problems, the vertices of the graph are considered to be labeled in such a way as to be distinguishable from each other, while in other problems any permutation of the vertices is considered to form the same graph, so the vertices are considered identical or unlabeled. In general, labeled problems tend to be easier. As with combinatorial enumeration more generally, the Pólya enumeration theorem is an important tool for reducing unlabeled problems to labeled ones: each unlabeled class is considered as a symmetry class of labeled objects. The number of unlabelled graphs with n vertices is still not known in a closed-form solution, but as almost all graphs are asymmetric this number is asymptotic to {\displaystyle {\frac {2^{\tbinom {n}{2}}}{n!}}.} == Exact enumeration formulas == Some important results in this area include the following. The number of labeled n-vertex simple undirected graphs is 2^{n(n−1)/2}. The number of labeled n-vertex simple directed graphs is 2^{n(n−1)}. The number Cn of connected labeled n-vertex undirected graphs satisfies the recurrence relation
{\displaystyle C_{n}=2^{n \choose 2}-{\frac {1}{n}}\sum _{k=1}^{n-1}k{n \choose k}2^{n-k \choose 2}C_{k},} from which one may easily calculate, for n = 1, 2, 3, ..., that the values of Cn are 1, 1, 4, 38, 728, 26704, 1866256, ... (sequence A001187 in the OEIS). The number of labeled n-vertex free trees is n^{n−2} (Cayley's formula). The number of unlabeled n-vertex caterpillars is {\displaystyle 2^{n-4}+2^{\lfloor (n-4)/2\rfloor }.} == Graph database == Various research groups have provided searchable databases that list graphs of small sizes with certain properties, for example:
The House of Graphs
Small Graph Database
== References ==
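The recurrence for Cn above is easy to evaluate directly; the following sketch (function name ours) reproduces the listed values:

```python
from math import comb

def connected_labeled_graphs(n: int) -> int:
    """Number of connected labeled simple graphs on n vertices, via the
    recurrence C_n = 2^C(n,2) - (1/n) * sum_{k=1}^{n-1} k*C(n,k)*2^C(n-k,2)*C_k."""
    C = [0, 1]  # C[1] = 1: the one-vertex graph is connected
    for m in range(2, n + 1):
        total = sum(k * comb(m, k) * 2 ** comb(m - k, 2) * C[k]
                    for k in range(1, m))
        C.append(2 ** comb(m, 2) - total // m)  # total is always divisible by m
    return C[n]

# n = 1..6 gives 1, 1, 4, 38, 728, 26704, matching A001187
```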
Wikipedia/Enumeration_of_graphs
In mathematics, topological graph theory is a branch of graph theory. It studies the embedding of graphs in surfaces, spatial embeddings of graphs, and graphs as topological spaces. It also studies immersions of graphs. Embedding a graph in a surface means that we want to draw the graph on a surface, a sphere for example, without two edges intersecting. A basic embedding problem often presented as a mathematical puzzle is the three utilities problem. Other applications can be found in printing electronic circuits where the aim is to print (embed) a circuit (the graph) on a circuit board (the surface) without two connections crossing each other and resulting in a short circuit. == Graphs as topological spaces == To an undirected graph we may associate an abstract simplicial complex C with a single-element set per vertex and a two-element set per edge. The geometric realization |C| of the complex consists of a copy of the unit interval [0,1] per edge, with the endpoints of these intervals glued together at vertices. In this view, embeddings of graphs into a surface or as subdivisions of other graphs are both instances of topological embedding, homeomorphism of graphs is just the specialization of topological homeomorphism, the notion of a connected graph coincides with topological connectedness, and a connected graph is a tree if and only if its fundamental group is trivial. Other simplicial complexes associated with graphs include the Whitney complex or clique complex, with a set per clique of the graph, and the matching complex, with a set per matching of the graph (equivalently, the clique complex of the complement of the line graph). The matching complex of a complete bipartite graph is called a chessboard complex, as it can be also described as the complex of sets of nonattacking rooks on a chessboard. == Example studies == John Hopcroft and Robert Tarjan derived a means of testing the planarity of a graph in time linear to the number of edges. 
Their algorithm does this by constructing a graph embedding which they term a "palm tree". Efficient planarity testing is fundamental to graph drawing. Fan Chung et al studied the problem of embedding a graph into a book with the graph's vertices in a line along the spine of the book. Its edges are drawn on separate pages in such a way that edges residing on the same page do not cross. This problem abstracts layout problems arising in the routing of multilayer printed circuit boards. Graph embeddings are also used to prove structural results about graphs, via graph minor theory and the graph structure theorem. == See also == Crossing number (graph theory) Genus Planar graph Real tree Toroidal graph Topological combinatorics Voltage graph == Notes ==
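The non-crossing condition in a book embedding as described above reduces to checking that no two edges on the same page have interleaved endpoints along the spine; a minimal sketch (names ours, assuming a linear spine order):

```python
def edges_cross(e, f):
    """Two chords over a linear spine cross iff their endpoints interleave."""
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def valid_book_embedding(spine, pages):
    """spine: vertices in spine order; pages: lists of edges (u, v).
    Valid iff no two edges assigned to the same page cross."""
    pos = {v: i for i, v in enumerate(spine)}
    for page in pages:
        chords = [(pos[u], pos[v]) for u, v in page]
        for i in range(len(chords)):
            for j in range(i + 1, len(chords)):
                if edges_cross(chords[i], chords[j]):
                    return False
    return True
```

For example, K4 fits in a two-page book: with spine order 0, 1, 2, 3, the edge (1, 3) must go on a different page from (0, 2), since those two chords interleave.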
Wikipedia/Topological_graph_theory
In graph theory, the Erdős–Faber–Lovász conjecture is a problem about graph coloring, named after Paul Erdős, Vance Faber, and László Lovász, who formulated it in 1972. It says: If k complete graphs, each having exactly k vertices, have the property that every pair of complete graphs has at most one shared vertex, then the union of the graphs can be properly colored with k colors. The conjecture for all sufficiently large values of k was proved by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. == Equivalent formulations == Haddad & Tardif (2004) introduced the problem with a story about seating assignment in committees: suppose that, in a university department, there are k committees, each consisting of k faculty members, and that all committees meet in the same room, which has k chairs. Suppose also that at most one person belongs to the intersection of any two committees. Is it possible to assign the committee members to chairs in such a way that each member sits in the same chair for all the different committees to which he or she belongs? In this model of the problem, the faculty members correspond to graph vertices, committees correspond to complete graphs, and chairs correspond to vertex colors. A linear hypergraph (also known as partial linear space) is a hypergraph with the property that every two hyperedges have at most one vertex in common. A hypergraph is said to be uniform if all of its hyperedges have the same number of vertices as each other. The n cliques of size n in the Erdős–Faber–Lovász conjecture may be interpreted as the hyperedges of an n-uniform linear hypergraph that has the same vertices as the underlying graph. In this language, the Erdős–Faber–Lovász conjecture states that, given any n-uniform linear hypergraph with n hyperedges, one may n-color the vertices such that each hyperedge has one vertex of each color. 
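The statement can be checked by brute force on small instances; here is a sketch (names ours) for k = 3 cliques of 3 vertices sharing a single common vertex:

```python
from itertools import combinations, product

def union_of_cliques(cliques):
    """Edge set of the union of the given cliques (each a set of vertices)."""
    edges = set()
    for c in cliques:
        edges |= {frozenset(p) for p in combinations(c, 2)}
    return edges

def colorable(vertices, edges, k):
    """Brute force: does a proper coloring with k colors exist?"""
    vs = sorted(vertices)
    for assignment in product(range(k), repeat=len(vs)):
        color = dict(zip(vs, assignment))
        if all(color[u] != color[v] for e in edges for u, v in [tuple(e)]):
            return True
    return False

# three 3-cliques, pairwise intersecting in the single vertex 0
cliques = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}]
edges = union_of_cliques(cliques)
vertices = set().union(*cliques)
# the union is 3-colorable, as the conjecture predicts (and not 2-colorable,
# since it contains triangles)
```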
A simple hypergraph is a hypergraph in which at most one hyperedge connects any pair of vertices and there are no hyperedges of size at most one. In the graph coloring formulation of the Erdős–Faber–Lovász conjecture, it is safe to remove vertices that belong to a single clique, as their coloring presents no difficulty; once this is done, the hypergraph that has a vertex for each clique, and a hyperedge for each graph vertex, forms a simple hypergraph. And, the hypergraph dual of vertex coloring is edge coloring. Thus, the Erdős–Faber–Lovász conjecture is equivalent to the statement that any simple hypergraph with n vertices has chromatic index (edge coloring number) at most n. The graph of the Erdős–Faber–Lovász conjecture may be represented as an intersection graph of sets: to each vertex of the graph, correspond the set of the cliques containing that vertex, and connect any two vertices by an edge whenever their corresponding sets have a nonempty intersection. Using this description of the graph, the conjecture may be restated as follows: if some family of sets has n total elements, and any two sets intersect in at most one element, then the intersection graph of the sets may be n-colored. The intersection number of a graph G is the minimum number of elements in a family of sets whose intersection graph is G, or equivalently the minimum number of vertices in a hypergraph whose line graph is G. Klein & Margraf (2003) define the linear intersection number of a graph, similarly, to be the minimum number of vertices in a linear hypergraph whose line graph is G. As they observe, the Erdős–Faber–Lovász conjecture is equivalent to the statement that the chromatic number of any graph is at most equal to its linear intersection number. Haddad & Tardif (2004) present another yet equivalent formulation, in terms of the theory of clones. 
== History, partial results, and eventual proof == Paul Erdős, Vance Faber, and László Lovász formulated the harmless-looking conjecture at a party in Boulder, Colorado in September 1972. Its difficulty was realised only slowly. Paul Erdős originally offered US$50 for proving the conjecture in the affirmative, and later raised the reward to US$500. In fact, Paul Erdős considered this to be one of his three favourite combinatorial problems. Chiang & Lawler (1988) proved that the chromatic number of the graphs in the conjecture is at most {\displaystyle {\tfrac {3}{2}}k-2}, and Kahn (1992) improved this to k + o(k). In 2023, almost 50 years after the original conjecture was stated, it was resolved for all sufficiently large n by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. == Related problems == It is also of interest to consider the chromatic number of graphs formed as the union of k cliques of k vertices each, without restricting how big the intersections of pairs of cliques can be. In this case, the chromatic number of their union is at most 1 + k√(k − 1), and some graphs formed in this way require this many colors. A version of the conjecture that uses the fractional chromatic number in place of the chromatic number is known to be true. That is, if a graph G is formed as the union of k k-cliques that intersect pairwise in at most one vertex, then G can be fractionally k-colored. In the framework of edge coloring simple hypergraphs, Hindman (1981) defines a number L from a simple hypergraph as the number of hypergraph vertices that belong to a hyperedge of three or more vertices. He shows that, for any fixed value of L, a finite calculation suffices to verify that the conjecture is true for all simple hypergraphs with that value of L. Based on this idea, he shows that the conjecture is indeed true for all simple hypergraphs with L ≤ 10.
In the formulation of coloring graphs formed by unions of cliques, Hindman's result shows that the conjecture is true whenever at most ten of the cliques contain a vertex that belongs to three or more cliques. In particular, it is true for n ≤ 10. == See also == List of conjectures by Paul Erdős == Notes == == References ==
Wikipedia/Erdős–Faber–Lovász_conjecture
In computer science, persistence refers to the characteristic of state of a system that outlives (persists for longer than) the process that created it. This is achieved in practice by storing the state as data in computer data storage. Programs have to transfer data to and from storage devices and have to provide mappings from the native programming-language data structures to the storage device data structures. Picture editing programs or word processors, for example, achieve state persistence by saving their documents to files. == Orthogonal or transparent persistence == Persistence is said to be "orthogonal" or "transparent" when it is implemented as an intrinsic property of the execution environment of a program. An orthogonal persistence environment does not require any specific actions by programs running in it to retrieve or save their state. Non-orthogonal persistence requires data to be written and read to and from storage using specific instructions in a program, resulting in the use of persist as a transitive verb: On completion, the program persists the data. The advantage of orthogonal persistence environments is simpler and less error-prone programs. The term "persistent" was first introduced by Atkinson and Morrison in the sense of orthogonal persistence: they used an adjective rather than a verb to emphasize persistence as a property of the data, as distinct from an imperative action performed by a program. The use of the transitive verb "persist" (describing an action performed by a program) is a back-formation. === Adoption === Orthogonal persistence is widely adopted in operating systems for hibernation and in platform virtualization systems such as VMware and VirtualBox for state saving. Research prototype languages such as PS-algol, Napier88, Fibonacci and pJama, successfully demonstrated the concepts along with the advantages to programmers. 
== Persistence techniques == === System images === Using system images is the simplest persistence strategy. Notebook hibernation is an example of orthogonal persistence using a system image because it does not require any actions by the programs running on the machine. An example of non-orthogonal persistence using a system image is a simple text editing program executing specific instructions to save an entire document to a file. Shortcomings: Requires enough RAM to hold the entire system state. State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems, so images are not used as the single persistence technique for critical systems. === Journals === Using journals is the second simplest persistence technique. Journaling is the process of storing events in a log before each one is applied to a system. Such logs are called journals. On startup, the journal is read and each event is reapplied to the system, avoiding data loss in the case of system failure or shutdown. The entire "Undo/Redo" history of user commands in a picture editing program, for example, when written to a file, constitutes a journal capable of recovering the state of an edited picture at any point in time. Journals are used by journaling file systems, prevalent systems and database management systems where they are also called "transaction logs" or "redo logs". Shortcomings: When journals are used exclusively, the entire (potentially large) history of all system events must be reapplied on every system startup. As a result, journals are often combined with other persistence techniques. === Dirty writes === This technique is the writing to storage of only those portions of system state that have been modified (are dirty) since their last write. 
Sophisticated document editing applications, for example, will use dirty writes to save only those portions of a document that were actually changed since the last save. Shortcomings: This technique requires state changes to be intercepted within a program. This is achieved in a non-transparent way by requiring specific storage-API calls or in a transparent way with automatic program transformation. This results in code that is slower than native code and more complicated to debug. == Persistence layers == Any software layer that makes it easier for a program to persist its state is generically called a persistence layer. Most persistence layers will not achieve persistence directly but will use an underlying database management system. == System prevalence == System prevalence is a technique that combines system images and transaction journals, mentioned above, to overcome their limitations. Shortcomings: A prevalent system must have enough RAM to hold the entire system state. == Database management systems (DBMSs) == DBMSs use a combination of the dirty writes and transaction journaling techniques mentioned above. They provide not only persistence but also other services such as queries, auditing and access control. == Persistent operating systems == Persistent operating systems are operating systems that remain persistent even after a crash or unexpected shutdown. Operating systems that employ this ability include:
KeyKOS
EROS, the successor to KeyKOS
Coyotos, successor to EROS
Multics with its single-level store
Phantom
IBM System/38
IBM i
Grasshopper OS
Lua OS
tahrpuppy-6.0.5
== See also ==
Persistent data
Persistent data structure
Persistent identifier
Persistent memory
Copy-on-write
CRUD
Java Data Objects
Java Persistence API
System prevalence
Orthogonality
Service Data Object
Snapshot (computer storage)
== References ==
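The journaling technique described above can be sketched as a toy class (names and file format ours) that appends each event to a log before applying it, and replays the log on startup to recover state:

```python
import json
import os

class JournaledCounter:
    """Toy journal-based persistence: every event is appended to a log
    before being applied; on startup the log is replayed to rebuild state."""

    def __init__(self, path):
        self.path = path
        self.value = 0
        if os.path.exists(path):          # recovery: replay the journal
            with open(path) as f:
                for line in f:
                    self._apply(json.loads(line))

    def add(self, amount):
        event = {"op": "add", "amount": amount}
        with open(self.path, "a") as f:   # journal first ...
            f.write(json.dumps(event) + "\n")
            f.flush()
            os.fsync(f.fileno())          # force the event onto stable storage
        self._apply(event)                # ... then apply to in-memory state

    def _apply(self, event):
        if event["op"] == "add":
            self.value += event["amount"]
```

A new instance constructed on the same journal file recovers the same value, which is the "reapply each event on startup" behaviour described above.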
Wikipedia/Persistence_(computer_science)
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph, or a planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points. Every graph that can be drawn on a plane can be drawn on the sphere as well, and vice versa, by means of stereographic projection. Plane graphs can be encoded by combinatorial maps or rotation systems. An equivalence class of topologically equivalent drawings on the sphere, usually with additional assumptions such as the absence of isthmuses, is called a planar map. Although a plane graph has an external or unbounded face, none of the faces of a planar map has a particular status. Planar graphs generalize to graphs drawable on a surface of a given genus. In this terminology, planar graphs have genus 0, since the plane (and the sphere) are surfaces of genus 0. See "graph embedding" for other related topics. == Planarity criteria == === Kuratowski's and Wagner's theorems === The Polish mathematician Kazimierz Kuratowski provided a characterization of planar graphs in terms of forbidden graphs, now known as Kuratowski's theorem: A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of the complete graph K5 or the complete bipartite graph K3,3 (utility graph). A subdivision of a graph results from inserting vertices into edges (for example, changing an edge • —— • to • — • — • ) zero or more times. 
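The subdivision operation just described is straightforward to express on an edge list; a minimal sketch (names ours):

```python
def subdivide(edges, edge, new_vertex):
    """Replace edge (u, v) by a path u - new_vertex - v."""
    u, v = edge
    rest = [e for e in edges if set(e) != {u, v}]
    return rest + [(u, new_vertex), (new_vertex, v)]

# subdividing one edge of a triangle yields a 4-cycle
triangle = [(0, 1), (1, 2), (0, 2)]
square = subdivide(triangle, (0, 2), 3)
```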
Instead of considering subdivisions, Wagner's theorem deals with minors: A finite graph is planar if and only if it does not have K5 or K3,3 as a minor. A minor of a graph results from taking a subgraph and repeatedly contracting an edge into a vertex, with each neighbor of the original end-vertices becoming a neighbor of the new vertex. Klaus Wagner asked more generally whether any minor-closed class of graphs is determined by a finite set of "forbidden minors". This is now the Robertson–Seymour theorem, proved in a long series of papers. In the language of this theorem, K5 and K3,3 are the forbidden minors for the class of finite planar graphs. === Other criteria === In practice, it is difficult to use Kuratowski's criterion to quickly decide whether a given graph is planar. However, there exist fast algorithms for this problem: for a graph with n vertices, it is possible to determine in time O(n) (linear time) whether the graph may be planar or not (see planarity testing). For a simple, connected, planar graph with v vertices and e edges and f faces, the following simple conditions hold for v ≥ 3: Theorem 1. e ≤ 3v − 6; Theorem 2. If there are no cycles of length 3, then e ≤ 2v − 4. Theorem 3. f ≤ 2v − 4. In this sense, planar graphs are sparse graphs, in that they have only O(v) edges, asymptotically smaller than the maximum O(v2). The graph K3,3, for example, has 6 vertices, 9 edges, and no cycles of length 3. Therefore, by Theorem 2, it cannot be planar. These theorems provide necessary conditions for planarity that are not sufficient conditions, and therefore can only be used to prove a graph is not planar, not that it is planar. If both theorem 1 and 2 fail, other methods may be used. 
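Theorems 1 and 2 above give quick certificates of non-planarity; a sketch (function name ours):

```python
def may_be_planar(v, e, triangle_free=False):
    """Necessary (not sufficient) conditions for a simple connected
    graph with v >= 3 vertices and e edges to be planar."""
    if triangle_free:
        return e <= 2 * v - 4   # Theorem 2
    return e <= 3 * v - 6       # Theorem 1

# K5:   v = 5, e = 10 -> 10 > 3*5 - 6 = 9, so K5 is not planar
# K3,3: v = 6, e = 9, triangle-free -> 9 > 2*6 - 4 = 8, so K3,3 is not planar
# K4:   v = 4, e = 6 passes the test (and is in fact planar)
```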
Whitney's planarity criterion gives a characterization based on the existence of an algebraic dual; Mac Lane's planarity criterion gives an algebraic characterization of finite planar graphs, via their cycle spaces; The Fraysseix–Rosenstiehl planarity criterion gives a characterization based on the existence of a bipartition of the cotree edges of a depth-first search tree. It is central to the left-right planarity testing algorithm; Schnyder's theorem gives a characterization of planarity in terms of partial order dimension; Colin de Verdière's planarity criterion gives a characterization based on the maximum multiplicity of the second eigenvalue of certain Schrödinger operators defined by the graph. The Hanani–Tutte theorem states that a graph is planar if and only if it has a drawing in which each independent pair of edges crosses an even number of times; it can be used to characterize the planar graphs via a system of equations modulo 2. == Properties == === Euler's formula === Euler's formula states that if a finite, connected, planar graph is drawn in the plane without any edge intersections, and v is the number of vertices, e is the number of edges and f is the number of faces (regions bounded by edges, including the outer, infinitely large region), then v − e + f = 2. {\displaystyle v-e+f=2.} As an illustration, in the butterfly graph given above, v = 5, e = 6 and f = 3. In general, if the property holds for all planar graphs of f faces, any change to the graph that creates an additional face while keeping the graph planar would keep v − e + f an invariant. Since the property holds for all graphs with f = 2, by mathematical induction it holds for all cases. Euler's formula can also be proved as follows: if the graph isn't a tree, then remove an edge which completes a cycle. This lowers both e and f by one, leaving v − e + f constant. Repeat until the remaining graph is a tree; trees have v = e + 1 and f = 1, yielding v − e + f = 2, i. 
e., the Euler characteristic is 2. In a finite, connected, simple, planar graph, any face (except possibly the outer one) is bounded by at least three edges and every edge touches at most two faces, so 3f ≤ 2e; using Euler's formula, one can then show that these graphs are sparse in the sense that if v ≥ 3: e ≤ 3 v − 6. {\displaystyle e\leq 3v-6.} Euler's formula is also valid for convex polyhedra. This is no coincidence: every convex polyhedron can be turned into a connected, simple, planar graph by using the Schlegel diagram of the polyhedron, a perspective projection of the polyhedron onto a plane with the center of perspective chosen near the center of one of the polyhedron's faces. Not every planar graph corresponds to a convex polyhedron in this way: the trees do not, for example. Steinitz's theorem says that the polyhedral graphs formed from convex polyhedra are precisely the finite 3-connected simple planar graphs. More generally, Euler's formula applies to any polyhedron whose faces are simple polygons that form a surface topologically equivalent to a sphere, regardless of its convexity. === Average degree === Connected planar graphs with more than one edge obey the inequality 2e ≥ 3f, because each face has at least three face-edge incidences and each edge contributes exactly two incidences. It follows via algebraic transformations of this inequality with Euler's formula v − e + f = 2 that for finite planar graphs the average degree is strictly less than 6. Graphs with higher average degree cannot be planar. === Coin graphs === We say that two circles drawn in a plane kiss (or osculate) whenever they intersect in exactly one point. A "coin graph" is a graph formed by a set of circles, no two of which have overlapping interiors, by making a vertex for each circle and an edge for each pair of circles that kiss. The circle packing theorem, first proved by Paul Koebe in 1936, states that a graph is planar if and only if it is a coin graph. 
This result provides an easy proof of Fáry's theorem, that every simple planar graph can be embedded in the plane in such a way that its edges are straight line segments that do not cross each other. If one places each vertex of the graph at the center of the corresponding circle in a coin graph representation, then the line segments between centers of kissing circles do not cross any of the other edges. === Planar graph density === The meshedness coefficient or density D of a planar graph, or network, is the ratio of the number f − 1 of bounded faces (the same as the circuit rank of the graph, by Mac Lane's planarity criterion) by its maximal possible values 2v − 5 for a graph with v vertices: D = f − 1 2 v − 5 {\displaystyle D={\frac {f-1}{2v-5}}} The density obeys 0 ≤ D ≤ 1, with D = 0 for a completely sparse planar graph (a tree), and D = 1 for a completely dense (maximal) planar graph. === Dual graph === Given an embedding G of a (not necessarily simple) connected graph in the plane without edge intersections, we construct the dual graph G* as follows: we choose one vertex in each face of G (including the outer face) and for each edge e in G we introduce a new edge in G* connecting the two vertices in G* corresponding to the two faces in G that meet at e. Furthermore, this edge is drawn so that it crosses e exactly once and that no other edge of G or G* is intersected. Then G* is again the embedding of a (not necessarily simple) planar graph; it has as many edges as G, as many vertices as G has faces and as many faces as G has vertices. The term "dual" is justified by the fact that G** = G; here the equality is the equivalence of embeddings on the sphere. If G is the planar graph corresponding to a convex polyhedron, then G* is the planar graph corresponding to the dual polyhedron. 
Duals are useful because many properties of the dual graph are related in simple ways to properties of the original graph, enabling results to be proven about graphs by examining their dual graphs. While the dual constructed for a particular embedding is unique (up to isomorphism), graphs may have different (i.e. non-isomorphic) duals, obtained from different (i.e. non-homeomorphic) embeddings. == Families of planar graphs == === Maximal planar graphs === A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property. All faces (including the outer one) are then bounded by three edges, explaining the alternative term plane triangulation (which technically means a plane drawing of the graph). The alternative names "triangular graph" or "triangulated graph" have also been used, but are ambiguous, as they more commonly refer to the line graph of a complete graph and to the chordal graphs respectively. Every maximal planar graph on more than 3 vertices is at least 3-connected. If a maximal planar graph has v vertices with v > 2, then it has precisely 3v − 6 edges and 2v − 4 faces. Apollonian networks are the maximal planar graphs formed by repeatedly splitting triangular faces into triples of smaller triangles. Equivalently, they are the planar 3-trees. Strangulated graphs are the graphs in which every peripheral cycle is a triangle. In a maximal planar graph (or more generally a polyhedral graph) the peripheral cycles are the faces, so maximal planar graphs are strangulated. The strangulated graphs include also the chordal graphs, and are exactly the graphs that can be formed by clique-sums (without deleting edges) of complete graphs and maximal planar graphs. === Outerplanar graphs === Outerplanar graphs are graphs with an embedding in the plane such that all vertices belong to the unbounded face of the embedding. 
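The edge and face counts for maximal planar graphs stated above (3v − 6 edges and 2v − 4 faces) can be checked by growing an Apollonian network as described; a sketch (names ours):

```python
import random

def apollonian(splits, seed=0):
    """Grow a maximal planar graph by repeatedly splitting a triangular
    face of K4 with a new vertex joined to the face's three corners.
    Returns (num_vertices, num_edges, list_of_triangular_faces)."""
    rng = random.Random(seed)
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # the faces of K4
    v, e = 4, 6
    for _ in range(splits):
        a, b, c = faces.pop(rng.randrange(len(faces)))
        faces += [(a, b, v), (a, c, v), (b, c, v)]  # v is the new vertex
        v, e = v + 1, e + 3
    return v, e, faces

# each split adds 1 vertex, 3 edges, and a net 2 faces, so the invariants
# e == 3v - 6 and len(faces) == 2v - 4 hold throughout
v, e, faces = apollonian(10)
```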
Every outerplanar graph is planar, but the converse is not true: K4 is planar but not outerplanar. A theorem similar to Kuratowski's states that a finite graph is outerplanar if and only if it does not contain a subdivision of K4 or of K2,3. The above is a direct corollary of the fact that a graph G is outerplanar if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph. A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. For k > 1 a planar embedding is k-outerplanar if removing the vertices on the outer face results in a (k − 1)-outerplanar embedding. A graph is k-outerplanar if it has a k-outerplanar embedding. === Halin graphs === A Halin graph is a graph formed from an undirected plane tree (with no degree-two nodes) by connecting its leaves into a cycle, in the order given by the plane embedding of the tree. Equivalently, it is a polyhedral graph in which one face is adjacent to all the others. Every Halin graph is planar. Like outerplanar graphs, Halin graphs have low treewidth, making many algorithmic problems on them more easily solved than in unrestricted planar graphs. === Upward planar graphs === An upward planar graph is a directed acyclic graph that can be drawn in the plane with its edges as non-crossing curves that are consistently oriented in an upward direction. Not every planar directed acyclic graph is upward planar, and it is NP-complete to test whether a given graph is upward planar. === Convex planar graphs === A planar graph is said to be convex if all of its faces (including the outer face) are convex polygons. Not all planar graphs have a convex embedding (e.g. the complete bipartite graph K2,4). A sufficient condition that a graph can be drawn convexly is that it is a subdivision of a 3-vertex-connected planar graph. 
Tutte's spring theorem even states that for simple 3-vertex-connected planar graphs the position of the inner vertices can be chosen to be the average of its neighbors. === Word-representable planar graphs === Word-representable planar graphs include triangle-free planar graphs and, more generally, 3-colourable planar graphs, as well as certain face subdivisions of triangular grid graphs, and certain triangulations of grid-covered cylinder graphs. == Theorems == === Enumeration of planar graphs === The asymptotic for the number of (labeled) planar graphs on n {\displaystyle n} vertices is g ⋅ n − 7 / 2 ⋅ γ n ⋅ n ! {\displaystyle g\cdot n^{-7/2}\cdot \gamma ^{n}\cdot n!} , where γ ≈ 27.22687 {\displaystyle \gamma \approx 27.22687} and g ≈ 0.43 × 10 − 5 {\displaystyle g\approx 0.43\times 10^{-5}} . Almost all planar graphs have an exponential number of automorphisms. The number of unlabeled (non-isomorphic) planar graphs on n {\displaystyle n} vertices is between 27.2 n {\displaystyle 27.2^{n}} and 30.06 n {\displaystyle 30.06^{n}} . === Other results === The four color theorem states that every planar graph is 4-colorable (i.e., 4-partite). Fáry's theorem states that every simple planar graph admits a representation as a planar straight-line graph. A universal point set is a set of points such that every planar graph with n vertices has such an embedding with all vertices in the point set; there exist universal point sets of quadratic size, formed by taking a rectangular subset of the integer lattice. Every simple outerplanar graph admits an embedding in the plane such that all vertices lie on a fixed circle and all edges are straight line segments that lie inside the disk and don't intersect, so n-vertex regular polygons are universal for outerplanar graphs. Scheinerman's conjecture (now a theorem) states that every planar graph can be represented as an intersection graph of line segments in the plane. 
The planar separator theorem states that every n-vertex planar graph can be partitioned into two subgraphs of size at most 2n/3 by the removal of O(√n) vertices. As a consequence, planar graphs also have treewidth and branch-width O(√n). The planar product structure theorem states that every planar graph is a subgraph of the strong graph product of a graph of treewidth at most 8 and a path. This result has been used to show that planar graphs have bounded queue number, bounded non-repetitive chromatic number, and universal graphs of near-linear size. It also has applications to vertex ranking and p-centered colouring of planar graphs. For two planar graphs with v vertices, it is possible to determine in time O(v) whether they are isomorphic or not (see also graph isomorphism problem). Any planar graph on n nodes has at most 8(n-2) maximal cliques, which implies that the class of planar graphs is a class with few cliques. == Generalizations == An apex graph is a graph that may be made planar by the removal of one vertex, and a k-apex graph is a graph that may be made planar by the removal of at most k vertices. A 1-planar graph is a graph that may be drawn in the plane with at most one simple crossing per edge, and a k-planar graph is a graph that may be drawn with at most k simple crossings per edge. A map graph is a graph formed from a set of finitely many simply-connected interior-disjoint regions in the plane by connecting two regions when they share at least one boundary point. When at most three regions meet at a point, the result is a planar graph, but when four or more regions meet at a point, the result can be nonplanar (for example, if one thinks of a circle divided into sectors, with the sectors being the regions, then the corresponding map graph is the complete graph as all the sectors have a common boundary point - the centre point). A toroidal graph is a graph that can be embedded without crossings on the torus. 
More generally, the genus of a graph is the minimum genus of a two-dimensional surface into which the graph may be embedded; planar graphs have genus zero and nonplanar toroidal graphs have genus one. Every graph can be embedded without crossings into some (orientable, connected) closed two-dimensional surface (sphere with handles) and thus the genus of a graph is well defined. Obviously, if the graph can be embedded without crossings into a (orientable, connected, closed) surface with genus g, it can be embedded without crossings into all (orientable, connected, closed) surfaces with greater or equal genus. There are also other concepts in graph theory that are called "X genus" with "X" some qualifier; in general these differ from the above defined concept of "genus" without any qualifier. Especially the non-orientable genus of a graph (using non-orientable surfaces in its definition) is different for a general graph from the genus of that graph (using orientable surfaces in its definition). Any graph may be embedded into three-dimensional space without crossings. In fact, any graph can be drawn without crossings in a two plane setup, where two planes are placed on top of each other and the edges are allowed to "jump up" and "drop down" from one plane to the other at any place (not just at the graph vertices) so that the edges can avoid intersections with other edges. 
This can be interpreted as saying that any electrical network can be built on a two-sided circuit board, where connections between the two sides of the board can be made (as with typical real-life circuit boards: connections on the top side are made through pieces of wire, tracks of copper are printed on the bottom side, and the two sides are joined by drilling holes, passing the wires through the holes, and soldering them to the tracks). One can also interpret this as saying that, in order to build any road network, one needs only bridges or only tunnels, not both (two levels are enough; three are not needed). Also, in three dimensions the question of drawing the graph without crossings is trivial. However, a three-dimensional analogue of the planar graphs is provided by the linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles are topologically linked with each other. In analogy to Kuratowski's and Wagner's characterizations of the planar graphs as being the graphs that do not contain K5 or K3,3 as a minor, the linklessly embeddable graphs may be characterized as the graphs that do not contain as a minor any of the seven graphs in the Petersen family. In analogy to the characterizations of the outerplanar and planar graphs as being the graphs with Colin de Verdière graph invariant at most two or three, the linklessly embeddable graphs are the graphs that have Colin de Verdière invariant at most four.
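Several of the planarity notions above can be exercised in code. The sketch below assumes the third-party networkx library is available; it tests planarity directly and implements the apex-vertex criterion for outerplanarity quoted in the "Outerplanar graphs" section (the helper name and the apex label are invented for this example):

```python
import networkx as nx

def is_outerplanar(g):
    """Apex-vertex criterion from the text: G is outerplanar when the graph
    formed from G by adding a new vertex adjacent to every vertex is planar."""
    h = g.copy()
    apex = "__apex__"  # assumes this label is not already a vertex of g
    h.add_edges_from((apex, v) for v in list(g.nodes))
    planar, _ = nx.check_planarity(h)
    return planar

print(nx.check_planarity(nx.complete_graph(4))[0])  # True: K4 is planar...
print(is_outerplanar(nx.complete_graph(4)))         # False: ...but not outerplanar
print(is_outerplanar(nx.cycle_graph(6)))            # True: cycles are outerplanar
print(nx.check_planarity(nx.complete_graph(5))[0])  # False: K5 is not planar
```

Note that adding the apex to K4 yields K5, which is exactly why K4 is planar but not outerplanar, as stated above.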
== See also == Combinatorial map a combinatorial object that can encode plane graphs Planarization, a planar graph formed from a drawing with crossings by replacing each crossing point by a new vertex Thickness (graph theory), the smallest number of planar graphs into which the edges of a given graph may be partitioned Planarity, a puzzle computer game in which the objective is to embed a planar graph onto a plane Sprouts (game), a pencil-and-paper game where a planar graph subject to certain constraints is constructed as part of the game play Three utilities problem, a popular puzzle == Notes == == References == Kuratowski, Kazimierz (1930), "Sur le problème des courbes gauches en topologie" (PDF), Fundamenta Mathematicae (in French), 15: 271–283, doi:10.4064/fm-15-1-271-283. Wagner, K. (1937), "Über eine Eigenschaft der ebenen Komplexe", Mathematische Annalen (in German), 114: 570–590, doi:10.1007/BF01594196, S2CID 123534907. Boyer, John M.; Myrvold, Wendy J. (2005), "On the cutting edge: Simplified O(n) planarity by edge addition" (PDF), Journal of Graph Algorithms and Applications, 8 (3): 241–273, doi:10.7155/jgaa.00091. McKay, Brendan; Brinkmann, Gunnar, A useful planar graph generator. de Fraysseix, H.; Ossona de Mendez, P.; Rosenstiehl, P. (2006), "Trémaux trees and planarity", International Journal of Foundations of Computer Science, 17 (5): 1017–1029, arXiv:math/0610935, doi:10.1142/S0129054106004248, S2CID 40107560. Special Issue on Graph Drawing. Bader, D.A.; Sreshta, S. (October 1, 2003), A New Parallel Algorithm for Planarity Testing (Technical report), UNM-ECE Technical Report 03-002, archived from the original on 2016-03-16 Fisk, Steve (1978), "A short proof of Chvátal's watchman theorem", Journal of Combinatorial Theory, Series B, 24 (3): 374, doi:10.1016/0095-8956(78)90059-X. 
== External links == Edge Addition Planarity Algorithm Source Code, version 1.0 — Free C source code for reference implementation of Boyer–Myrvold planarity algorithm, which provides both a combinatorial planar embedder and Kuratowski subgraph isolator. An open source project with free licensing provides the Edge Addition Planarity Algorithms, current version. Public Implementation of a Graph Algorithm Library and Editor — GPL graph algorithm library including planarity testing, planarity embedder and Kuratowski subgraph exhibition in linear time. Boost Graph Library tools for planar graphs, including linear time planarity testing, embedding, Kuratowski subgraph isolation, and straight-line drawing. 3 Utilities Puzzle and Planar Graphs NetLogo Planarity model — NetLogo version of John Tantalo's game
Wikipedia/Planarity_(graph_theory)
Graphviz (short for Graph Visualization Software) is a package of open-source tools initiated by AT&T Labs Research for drawing graphs (as in nodes and edges, not as in bar charts) specified in DOT language scripts having the file name extension "gv". It also provides libraries for software applications to use the tools. Graphviz is free software licensed under the Eclipse Public License.
== Tools ==
dot: a command-line tool to produce layered graph drawings in a variety of output formats, such as PostScript, PDF, SVG, and annotated text.
neato: useful for undirected graphs up to about 1000 nodes. Its "spring model" layout minimizes global energy.
fdp: force-directed graph drawing similar to the "spring model", but minimizes forces instead of energy. Useful for undirected graphs.
sfdp: a multiscale version of fdp for the layout of large undirected graphs.
twopi: for radial graph layouts. Nodes are placed on concentric circles depending on their distance from a given root node.
circo: circular layout. Suitable for certain diagrams of multiple cyclic structures, such as certain telecommunications networks.
dotty: a graphical user interface to visualize and edit graphs.
lefty: a programmable (in a language inspired by EZ) widget that displays DOT graphs and allows the user to perform actions on them with the mouse. Lefty can therefore be used as the view in a model–view–controller GUI application that uses graphs.
gml2gv, gv2gml: convert to/from GML, another graph file format.
graphml2gv: convert a GraphML file to the DOT format.
gxl2gv, gv2gxl: convert to/from GXL, another graph file format.
== Applications that use Graphviz ==
Notable applications of Graphviz include: ArgoUML's alternative UML diagram rendering called argouml-graphviz. AsciiDoc can embed Graphviz syntax as a diagram. Bison is able to output the grammar as dot for visualization of the language. Confluence has a Graphviz plugin to render diagrams from text descriptions. ConnectedText has a Graphviz plugin.
Doxygen uses Graphviz to generate diagrams, including class hierarchies, collaboration and call trees for source code. FreeCAD uses Graphviz to display the dependencies between objects in documents. Gephi has a Graphviz plugin. Gramps uses Graphviz to create genealogical (family tree) diagrams. Graph-tool a Python library for graph manipulation and visualization. OmniGraffle version 5 and later uses the Graphviz engine, with a limited set of commands, for automatically laying out graphs. Org-mode can work with DOT source code blocks. PlantUML uses Graphviz to generate UML diagrams from text descriptions. Puppet can produce DOT resource graphs that can be viewed with Graphviz. Scribus is an open-source DTP program that can use Graphviz to render graphs by using its internal editor in a special frame type called render frame. Sphinx is a documentation generator that can use Graphviz to embed graphs in documents. Terraform an infrastructure-as-code tool from Hashicorp allows output of an execution plan as a DOT resource graph TOra a free-software database development and administration GUI, available under the GNU GPL. Trac wiki has a Graphviz plugin. Zim includes a plugin that allows adding and editing in-page diagrams using the Graphviz dot language. == See also == Graph drawing Graph theory Microsoft Automatic Graph Layout == References == == External links == Official website graphviz on GitLab Graphviz, Projects & Software Page, AT&T Labs Research An Introduction to Graphviz and dot (M. Simionato, 2004) Create relationship diagrams with Graphviz (Shashank Sharma, 2005) Archived 2011-08-13 at the Wayback Machine
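The DOT scripts that the tools above consume are plain text, so applications typically generate them programmatically, as many of the listed integrations do. A minimal, illustrative sketch (the function and the edge names are invented; only the `digraph`/`->` syntax is actual DOT):

```python
def to_dot(edges, name="G"):
    """Build a minimal DOT-language description of a directed graph.
    The resulting text can be rendered with, e.g., `dot -Tsvg graph.gv -o graph.svg`."""
    lines = ["digraph %s {" % name]
    lines += ['    "%s" -> "%s";' % (u, v) for u, v in edges]
    lines.append("}")
    return "\n".join(lines)

print(to_dot([("parse", "check"), ("check", "emit")]))
```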
Wikipedia/Graphviz
In statistical physics and mathematics, percolation theory describes the behavior of a network when nodes or links are added. This is a geometric type of phase transition: at a critical fraction of addition, the many small, disconnected clusters merge into a significantly larger connected, so-called spanning cluster. The applications of percolation theory to materials science and in many other disciplines are discussed here and in the articles Network theory and Percolation (cognitive psychology). == Introduction == A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of n × n × n vertices, usually called "sites", in which the edges or "bonds" between each two neighbors may be open (allowing the liquid through) with probability p, or closed with probability 1 – p, and they are assumed to be independent. Therefore, for a given p, what is the probability that an open path (meaning a path each of whose links is an "open" bond) exists from the top to the bottom? The behavior for large n is of primary interest. This problem, now called bond percolation, was introduced in the mathematics literature by Broadbent & Hammersley (1957), and has been studied intensively by mathematicians and physicists since then. In a slightly different mathematical model for obtaining a random graph, a site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 – p; the corresponding problem is called site percolation. The question is the same: for a given p, what is the probability that a path exists between top and bottom? Similarly, one can ask, given a connected graph, at what fraction 1 – p of failures the graph will become disconnected (no large component).
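The bond-percolation question just posed is easy to explore by simulation. The sketch below is a simplified two-dimensional analogue of the n × n × n model (the function name and parameters are invented for illustration): each bond between neighbouring sites is open with probability p, and we ask whether an open path joins the top row to the bottom row.

```python
import random

def percolates(n, p, rng):
    """Bond percolation on an n-by-n square grid of sites: each bond is open
    with probability p.  Returns True when an open path connects the top row
    to the bottom row."""
    # Bond states: right[r][c] joins (r, c)-(r, c+1); down[r][c] joins (r, c)-(r+1, c).
    right = [[rng.random() < p for _ in range(n - 1)] for _ in range(n)]
    down = [[rng.random() < p for _ in range(n)] for _ in range(n - 1)]
    frontier = [(0, c) for c in range(n)]   # search starts from the whole top row
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == n - 1:
            return True                     # reached the bottom row
        nbrs = []
        if c + 1 < n and right[r][c]:
            nbrs.append((r, c + 1))
        if c > 0 and right[r][c - 1]:
            nbrs.append((r, c - 1))
        if r + 1 < n and down[r][c]:
            nbrs.append((r + 1, c))
        if r > 0 and down[r - 1][c]:
            nbrs.append((r - 1, c))
        for q in nbrs:
            if q not in seen:
                seen.add(q)
                frontier.append(q)
    return False

rng = random.Random(0)
for p in (0.3, 0.5, 0.7):
    frac = sum(percolates(20, p, rng) for _ in range(200)) / 200
    print(p, frac)  # the crossing probability rises sharply around p = 1/2
```

Even at this modest grid size, the estimated crossing probability jumps from near 0 to near 1 over a short span of p, which is the sharp transition discussed below.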
The same questions can be asked for any lattice dimension. As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the corresponding question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length "through" the network? By Kolmogorov's zero–one law, for any given p, the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of p (proof via coupling argument), there must be a critical p (denoted by pc) below which the probability is always 0 and above which the probability is always 1. In practice, this criticality is very easy to observe. Even for n as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of p. == History == The Flory–Stockmayer theory was the first theory investigating percolation processes. The history of the percolation model as we know it has its root in the coal industry. Since the industrial revolution, the economical importance of this source of energy fostered many scientific studies to understand its composition and optimize its use. During the 1930s and 1940s, the qualitative analysis by organic chemistry left more and more room to more quantitative studies. In this context, the British Coal Utilisation Research Association (BCURA) was created in 1938. It was a research association funded by the coal mines owners. In 1942, Rosalind Franklin, who then recently graduated in chemistry from the university of Cambridge, joined the BCURA. She started research on the density and porosity of coal. During the Second World War, coal was an important strategic resource. It was used as a source of energy, but also was the main constituent of gas masks. Coal is a porous medium. 
To measure its 'real' density, one way is to sink it in a liquid or a gas whose molecules are small enough to fill its microscopic pores. While trying to measure the density of coal using several gases (helium, methanol, hexane, benzene), and finding different values depending on the gas used, Rosalind Franklin showed that the pores of coal are made of microstructures of various lengths that act as a microscopic sieve to discriminate the gases. She also discovered that the size of these structures depends on the temperature of carbonation during the coal production. With this research, she obtained a PhD degree and left the BCURA in 1946. In the mid-1950s, Simon Broadbent worked at the BCURA as a statistician. Among other interests, he studied the use of coal in gas masks. One question is to understand how a fluid can diffuse in the coal pores, modeled as a random maze of open or closed tunnels. In 1954, during a symposium on Monte Carlo methods, he asked John Hammersley about the use of numerical methods to analyze this model. In their 1957 article, Broadbent and Hammersley introduced a mathematical model of this phenomenon: percolation. == Computation of the critical parameter == For most infinite lattice graphs, pc cannot be calculated exactly, though in some cases an exact value is known. For example: for the square lattice ℤ2 in two dimensions, pc = 1/2 for bond percolation, a fact which was an open question for more than 20 years and was finally resolved by Harry Kesten in the early 1980s, see Kesten (1982). For site percolation on the square lattice, the value of pc is not known from analytic derivation but only via simulations of large lattices, which provide the estimate pc = 0.59274621 ± 0.00000013. A limit case for lattices in high dimensions is given by the Bethe lattice, whose threshold is at pc = 1/(z − 1) for a coordination number z.
In other words: for the regular tree of degree z, pc is equal to 1/(z − 1). For a random tree-like network without degree–degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by pc = 1/g1′(1), where g1(z) is the generating function corresponding to the excess degree distribution. So, for random Erdős–Rényi networks of average degree ⟨k⟩, pc = 1/⟨k⟩. In networks with low clustering, 0 < C ≪ 1, the critical point gets scaled by (1 − C)^(−1) such that pc = (1/(1 − C)) · (1/g1′(1)). This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network with the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable. == Phases == === Subcritical and supercritical === The main fact in the subcritical phase is "exponential decay". That is, when p < pc, the probability that a specific point (for example, the origin) is contained in an open cluster (meaning a maximal connected set of "open" edges of the graph) of size r decays to zero exponentially in r. This was proved for percolation in three and more dimensions by Menshikov (1986) and independently by Aizenman & Barsky (1987). In two dimensions, it formed part of Kesten's proof that pc = 1/2. The dual graph of the square lattice ℤ2 is also the square lattice.
It follows that, in two dimensions, the supercritical phase is dual to a subcritical percolation process. This provides essentially full information about the supercritical model with d = 2. The main result for the supercritical phase in three and more dimensions is that, for sufficiently large N, there is almost surely an infinite open cluster in the slab ℤ2 × [0, N]^(d − 2). This was proved by Grimmett & Marstrand (1990). In two dimensions with p < 1/2, there is with probability one a unique infinite closed cluster (a closed cluster is a maximal connected set of "closed" edges of the graph). Thus the subcritical phase may be described as finite open islands in an infinite closed ocean. When p > 1/2 just the opposite occurs, with finite closed islands in an infinite open ocean. The picture is more complicated when d ≥ 3, since pc < 1/2, and there is coexistence of infinite open and closed clusters for p between pc and 1 − pc. === Criticality === Percolation has a singularity at the critical point p = pc, and many properties behave as a power law in p − pc near pc. Scaling theory predicts the existence of critical exponents, depending on the number d of dimensions, that determine the class of the singularity. When d = 2 these predictions are backed up by arguments from conformal field theory and Schramm–Loewner evolution, and include predicted numerical values for the exponents. Most of these predictions are conjectural except when the number d of dimensions satisfies either d = 2 or d ≥ 6. They include: There are no infinite clusters (open or closed). The probability that there is an open path from some fixed point (say the origin) to a distance of r decreases polynomially, i.e. is on the order of r^α for some α. The exponent α does not depend on the particular lattice chosen, or on other local parameters; it depends only on the dimension d (this is an instance of the universality principle).
αd decreases from d = 2 until d = 6 and then stays fixed, with α2 = −5/48 and α6 = −1. The shape of a large cluster in two dimensions is conformally invariant. See Grimmett (1999). In 11 or more dimensions, these facts are largely proved using a technique known as the lace expansion. It is believed that a version of the lace expansion should be valid for 7 or more dimensions, perhaps with implications also for the threshold case of 6 dimensions. The connection of percolation to the lace expansion is found in Hara & Slade (1990). In two dimensions, the first fact ("no percolation in the critical phase") is proved for many lattices, using duality. Substantial progress has been made on two-dimensional percolation through the conjecture of Oded Schramm that the scaling limit of a large cluster may be described in terms of a Schramm–Loewner evolution. This conjecture was proved by Smirnov (2001) in the special case of site percolation on the triangular lattice. == Different models == Directed percolation, which models the effect of gravitational forces acting on the liquid, was also introduced in Broadbent & Hammersley (1957), and has connections with the contact process. The first model studied was Bernoulli percolation. In this model all bonds are independent. This model is called bond percolation by physicists. A generalization was next introduced as the Fortuin–Kasteleyn random cluster model, which has many connections with the Ising model and other Potts models. Bernoulli (bond) percolation on complete graphs is an example of a random graph. The critical probability is p = 1/N, where N is the number of vertices (sites) of the graph. Bootstrap percolation removes active cells from clusters when they have too few active neighbors, and looks at the connectivity of the remaining cells. First passage percolation. Invasion percolation.
== Applications == === In biology, biochemistry, and physical virology === Percolation theory has been used to successfully predict the fragmentation of biological virus shells (capsids), with the fragmentation threshold of Hepatitis B virus capsid predicted and detected experimentally. When a critical number of subunits has been randomly removed from the nanoscopic shell, it fragments and this fragmentation may be detected using Charge Detection Mass Spectroscopy (CDMS) among other single-particle techniques. This is a molecular analog to the common board game Jenga, and has relevance to the broader study of virus disassembly. More stable viral particles (tilings with greater fragmentation thresholds) are found in greater abundance in nature. === In ecology === Percolation theory has been applied to studies of how environment fragmentation impacts animal habitats and models of how the plague bacterium Yersinia pestis spreads. == See also == == References == Aizenman, Michael; Barsky, David (1987), "Sharpness of the phase transition in percolation models", Communications in Mathematical Physics, 108 (3): 489–526, Bibcode:1987CMaPh.108..489A, doi:10.1007/BF01212322, S2CID 35592821 Menshikov, Mikhail (1986), "Coincidence of critical points in percolation problems", Soviet Mathematics - Doklady, 33: 856–859 === Further reading === == External links == PercoVIS: a macOS program to visualize percolation on networks in real time Interactive Percolation Nanohub online course on Percolation Theory
Wikipedia/Percolation_theory
In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only incoming flow. A flow network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. As such, efficient algorithms for solving network flows can also be applied to solve problems that can be reduced to a flow network, including survey design, airline scheduling, image segmentation, and the matching problem. == Definition == A network is a directed graph G = (V, E) with a non-negative capacity function c for each edge, and without multiple arcs (i.e. edges with the same source and target nodes). Without loss of generality, we may assume that if (u, v) ∈ E, then (v, u) is also a member of E. Additionally, if (v, u) ∉ E then we may add (v, u) to E and then set the c(v, u) = 0. If two nodes in G are distinguished – one as the source s and the other as the sink t – then (G, c, s, t) is called a flow network. == Flows == Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as what is the maximum number of units that can be transferred from the source node s to the sink node t? The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other. The excess function xf : V → ℝ represents the net flow entering a given node u (i.e. 
the sum of the flows entering u) and is defined by xf (u) = Σw∈V f (w, u) − Σw∈V f (u, w). A node u is said to be active if xf (u) > 0 (i.e. the node u consumes flow), deficient if xf (u) < 0 (i.e. the node u produces flow), or conserving if xf (u) = 0. In flow networks, the source s is deficient, and the sink t is active. Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions. A pseudo-flow is a function f of each edge in the network that satisfies the following two constraints for all nodes u and v: Skew symmetry constraint: The flow on an arc from u to v is equivalent to the negation of the flow on the arc from v to u, that is: f (u, v) = −f (v, u). The sign of the flow indicates the flow's direction. Capacity constraint: An arc's flow cannot exceed its capacity, that is: f (u, v) ≤ c(u, v). A pre-flow is a pseudo-flow that, for all v ∈ V \{s}, satisfies the additional constraint: Non-deficient flows: The net flow entering the node v is non-negative, except for the source, which "produces" flow. That is: xf (v) ≥ 0 for all v ∈ V \{s}. A feasible flow, or just a flow, is a pseudo-flow that, for all v ∈ V \{s, t}, satisfies the additional constraint: Flow conservation constraint: The total net flow entering a node v is zero for all nodes in the network except the source s and the sink t, that is: xf (v) = 0 for all v ∈ V \{s, t}. In other words, for all nodes in the network except the source s and the sink t, the total sum of the incoming flow of a node is equal to its outgoing flow (i.e. Σ(u,v)∈E f (u, v) = Σ(v,z)∈E f (v, z) for each vertex v ∈ V \{s, t}). The value |f| of a feasible flow f for a network is the net flow into the sink t of the flow network, that is: |f| = xf (t).
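The constraint definitions above translate directly into a checker. A minimal sketch (the helper name and the dict-based representation are invented; capacities and flows map node pairs to numbers, with the flow stored skew-symmetrically as in the text):

```python
def is_feasible_flow(nodes, cap, f, s, t):
    """Verify skew symmetry, capacity, and flow conservation for flow f."""
    get = lambda d, u, v: d.get((u, v), 0)
    for u in nodes:
        for v in nodes:
            if get(f, u, v) != -get(f, v, u):   # skew symmetry constraint
                return False
            if get(f, u, v) > get(cap, u, v):   # capacity constraint
                return False
    for v in nodes:
        if v in (s, t):
            continue
        # With skew symmetry, x_f(v) = sum over w of f(w, v); conservation
        # requires this excess to be zero at every internal node.
        if sum(get(f, w, v) for w in nodes) != 0:
            return False
    return True

cap = {("s", "a"): 3, ("a", "t"): 3}
flow = {("s", "a"): 2, ("a", "s"): -2, ("a", "t"): 2, ("t", "a"): -2}
print(is_feasible_flow(["s", "a", "t"], cap, flow, "s", "t"))  # True
```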
Note, the flow value in a network is also equal to the total outgoing flow of source s, that is: |f| = −xf (s). Also, if we define A as a set of nodes in G such that s ∈ A and t ∉ A, the flow value is equal to the total net flow going out of A (i.e. |f| = f out(A) − f in(A)). The flow value in a network is the total amount of flow from s to t. == Concepts useful to flow problems == === Flow decomposition === Flow decomposition is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that each edge in the flow equals the sum of all quantities of paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters. === Adding arcs and flows === We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc: Given any two nodes u and v, having two arcs from u to v with capacities c1(u,v) and c2(u,v) respectively is equivalent to considering only a single arc from u to v with a capacity equal to c1(u,v)+c2(u,v). Given any two nodes u and v, having two arcs from u to v with pseudo-flows f1(u,v) and f2(u,v) respectively is equivalent to considering only a single arc from u to v with a pseudo-flow equal to f1(u,v)+f2(u,v). Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with the capacity of zero. === Residuals === The residual capacity of an arc e with respect to a pseudo-flow f is denoted cf, and it is the difference between the arc's capacity and its flow. That is, cf (e) = c(e) − f(e). 
From this we can construct a residual network, denoted Gf (V, Ef), with a capacity function cf which models the amount of available capacity on the set of arcs in G = (V, E). More specifically, capacity function cf of each arc (u, v) in the residual network represents the amount of flow which can be transferred from u to v given the current state of the flow within the network. This concept is used in Ford–Fulkerson algorithm which computes the maximum flow in a flow network. Note that there can be an unsaturated path (a path with available capacity) from u to v in the residual network, even though there is no such path from u to v in the original network. Since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v. === Augmenting paths === An augmenting path is a path (u1, u2, ..., uk) in the residual network, where u1 = s, uk = t, and for all ui, ui + 1 (cf (ui, ui + 1) > 0) (1 ≤ i < k). More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual network Gf. The bottleneck is the minimum residual capacity of all the edges in a given augmenting path. See example explained in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow. The term "augmenting the flow" for an augmenting path means updating the flow f of each arc in this augmenting path to equal the capacity c of the bottleneck. 
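The residual-network and augmenting-path machinery just described is the basis of the Ford–Fulkerson method. The sketch below uses the Edmonds–Karp variant (shortest augmenting paths found by BFS); the dict-of-dicts capacity representation is invented for the example:

```python
from collections import deque

def max_flow(cap, s, t):
    """Repeatedly find a shortest augmenting path in the residual network
    and push the bottleneck amount along it; return the max-flow value."""
    nodes = set(cap) | {v for d in cap.values() for v in d}
    c = {u: {v: cap.get(u, {}).get(v, 0) for v in nodes} for u in nodes}
    f = {u: {v: 0 for v in nodes} for u in nodes}
    total = 0
    while True:
        # BFS over arcs with positive residual capacity c_f = c - f.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and c[u][v] - f[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total        # no augmenting path left: flow is maximum
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(c[u][v] - f[u][v] for u, v in path)  # bottleneck of the path
        for u, v in path:
            f[u][v] += b
            f[v][u] -= b        # skew symmetry: cancel flow on the reverse arc
        total += b

print(max_flow({"s": {"a": 3, "b": 2}, "a": {"t": 2, "b": 1}, "b": {"t": 3}}, "s", "t"))  # 5
```

Note how the update `f[v][u] -= b` creates residual capacity on reverse arcs, which is exactly the cancellation of opposite flows described above.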
Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck. === Multiple sources and/or sinks === Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink. == Example == In Figure 1 you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity are denoted f/c. Notice how the network upholds the capacity constraint and the flow conservation constraint. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. By the skew symmetry constraint, the flow from c to a is −2 because the flow from a to c is 2. In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge (d, c). This network is not at maximum flow. There is available capacity along the paths (s, a, c, t), (s, a, b, d, t) and (s, a, b, d, c, t), which are then the augmenting paths. The bottleneck of the (s, a, c, t) path is equal to min(c(s, a) − f(s, a), c(a, c) − f(a, c), c(c, t) − f(c, t)) = min(cf(s, a), cf(a, c), cf(c, t)) = min(5 − 3, 3 − 2, 2 − 1) = min(2, 1, 1) = 1.
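The augmenting-path search and bottleneck computation can be sketched in Python. This is an illustration, not the article's own algorithm: the residual network is represented as a dict from arcs to residual capacities, the breadth-first search is the path-finding step used by the Edmonds–Karp variant of Ford–Fulkerson, and the final lines recompute the bottleneck of the (s, a, c, t) path from the capacities and flows given in the example.

```python
from collections import deque

def augmenting_path(residual, s, t):
    """BFS for a path from s to t using only arcs with positive residual
    capacity; returns the path as a vertex list, or None if none exists."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:  # walk the parent pointers back to s
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for (a, b), cf in residual.items():
            if a == u and cf > 0 and b not in parent:
                parent[b] = u
                queue.append(b)
    return None

def bottleneck(residual, path):
    """Minimum residual capacity over the arcs of an augmenting path."""
    return min(residual[arc] for arc in zip(path, path[1:]))

# Capacities and flows of the path (s, a, c, t) from the example:
capacity = {("s", "a"): 5, ("a", "c"): 3, ("c", "t"): 2}
flow = {("s", "a"): 3, ("a", "c"): 2, ("c", "t"): 1}
residual = {arc: capacity[arc] - flow[arc] for arc in capacity}

path = augmenting_path(residual, "s", "t")
```

Running this yields the path (s, a, c, t) with bottleneck min(2, 1, 1) = 1, matching the computation above.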
== Applications == Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet. Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law. Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time. == Classifying flow problems == The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. 
Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another. In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through the same transportation network. In a minimum cost flow problem, each edge u , v {\displaystyle u,v} has a given cost k ( u , v ) {\displaystyle k(u,v)} , and the cost of sending the flow f ( u , v ) {\displaystyle f(u,v)} across the edge is f ( u , v ) ⋅ k ( u , v ) {\displaystyle f(u,v)\cdot k(u,v)} . The objective is to send a given amount of flow from the source to the sink, at the lowest possible price. In a circulation problem, you have a lower bound ℓ ( u , v ) {\displaystyle \ell (u,v)} on the edges, in addition to the upper bound c ( u , v ) {\displaystyle c(u,v)} . Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow with ℓ ( t , s ) {\displaystyle \ell (t,s)} and c ( t , s ) {\displaystyle c(t,s)} . The flow circulates through the network, hence the name of the problem. In a network with gains or generalized network each edge has a gain, a real number (not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail, then an amount gx flows out at the head. In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. 
This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks. == See also == Braess's paradox Centrality Ford–Fulkerson algorithm Edmonds–Karp algorithm Dinic's algorithm Traffic flow (computer networking) Flow graph (disambiguation) Max-flow min-cut theorem Oriented matroid Shortest path problem Nowhere-zero flow == References == == Further reading == George T. Heineman; Gary Pollice; Stanley Selkow (2008). "Chapter 8: Network Flow Algorithms". Algorithms in a Nutshell. O'Reilly Media. pp. 226–250. ISBN 978-0-596-51624-6. Ravindra K. Ahuja; Thomas L. Magnanti; James B. Orlin (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN 0-13-617549-X. Bollobás, Béla (1979). Graph Theory: An Introductory Course. Heidelberg: Springer-Verlag. ISBN 3-540-90399-2. Chartrand, Gary; Oellermann, Ortrud R. (1993). Applied and Algorithmic Graph Theory. New York: McGraw-Hill. ISBN 0-07-557101-3. Even, Shimon (1979). Graph Algorithms. Rockville, Maryland: Computer Science Press. ISBN 0-914894-21-8. Gibbons, Alan (1985). Algorithmic Graph Theory. Cambridge: Cambridge University Press. ISBN 0-521-28881-9. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001) [1990]. "26". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 696–697. ISBN 0-262-03293-7. == External links == Maximum Flow Problem Real graph instances Lemon C++ library with several maximum flow and minimum cost circulation algorithms QuickGraph Archived 2018-01-21 at the Wayback Machine, graph data structures and algorithms for .Net
Wikipedia/Flow_network
In graph theory, an intersection graph is a graph that represents the pattern of intersections of a family of sets. Any graph can be represented as an intersection graph, but some important special classes of graphs can be defined by the types of sets that are used to form an intersection representation of them. == Formal definition == Formally, an intersection graph G is an undirected graph formed from a family of sets Si, i = 0, 1, 2, … by creating one vertex vi for each set Si, and connecting two vertices vi and vj by an edge whenever the corresponding two sets have a nonempty intersection, that is, E(G) = {{vi, vj} | i ≠ j, Si ∩ Sj ≠ ∅}. == All graphs are intersection graphs == Any undirected graph G may be represented as an intersection graph. For each vertex vi of G, form a set Si consisting of the edges incident to vi; then two such sets have a nonempty intersection if and only if the corresponding vertices share an edge. Therefore, G is the intersection graph of the sets Si. Erdős, Goodman & Pósa (1966) provide a construction that is more efficient, in the sense that it requires a smaller total number of elements in all of the sets Si combined. For it, the total number of set elements is at most n²/4, where n is the number of vertices in the graph. They credit the observation that all graphs are intersection graphs to Szpilrajn-Marczewski (1945), but say to see also Čulík (1964). The intersection number of a graph is the minimum total number of elements in any intersection representation of the graph.
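The incident-edge construction above is easy to check computationally. In this sketch (the function names and the 4-cycle test case are mine, not from the article), building each vertex's set of incident edges and then forming the intersection graph of those sets recovers the original graph:

```python
from itertools import combinations

def intersection_graph(sets):
    """Edge {i, j} whenever sets[i] and sets[j] have nonempty intersection."""
    n = len(sets)
    return {frozenset((i, j)) for i, j in combinations(range(n), 2)
            if sets[i] & sets[j]}

def incident_edge_sets(n, edges):
    """For each vertex i, the set S_i of edges incident to i
    (the construction described above)."""
    return [{e for e in edges if i in e} for i in range(n)]

# A 4-cycle: the vertices' incident-edge sets intersect exactly where
# the cycle has edges, so the intersection graph is the cycle itself.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
sets = incident_edge_sets(4, edges)
```

Here `intersection_graph(sets)` reproduces `edges` exactly, as the argument in the text predicts.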
== Classes of intersection graphs == Many important graph families can be described as intersection graphs of more restricted types of set families, for instance sets derived from some kind of geometric configuration: An interval graph is defined as the intersection graph of intervals on the real line, or of connected subgraphs of a path graph. An indifference graph may be defined as the intersection graph of unit intervals on the real line. A circular arc graph is defined as the intersection graph of arcs on a circle. A polygon-circle graph is defined as the intersection graph of polygons with corners on a circle. One characterization of a chordal graph is as the intersection graph of connected subgraphs of a tree. A trapezoid graph is defined as the intersection graph of trapezoids formed from two parallel lines. They are a generalization of the notion of permutation graph; in turn they are a special case of the family of the complements of comparability graphs known as cocomparability graphs. A unit disk graph is defined as the intersection graph of unit disks in the plane. A circle graph is the intersection graph of a set of chords of a circle. The circle packing theorem states that planar graphs are exactly the intersection graphs of families of closed disks in the plane bounded by non-crossing circles. Scheinerman's conjecture (now a theorem) states that every planar graph can also be represented as an intersection graph of line segments in the plane. However, intersection graphs of line segments may be nonplanar as well, and recognizing intersection graphs of line segments is complete for the existential theory of the reals (Schaefer 2010). The line graph of a graph G is defined as the intersection graph of the edges of G, where we represent each edge as the set of its two endpoints. A string graph is the intersection graph of curves on a plane.
A graph has boxicity k if it is the intersection graph of multidimensional boxes of dimension k, but not of any smaller dimension. A clique graph is the intersection graph of maximal cliques of another graph. A block graph or clique tree is the intersection graph of biconnected components of another graph. Scheinerman (1985) characterized the intersection classes of graphs, families of finite graphs that can be described as the intersection graphs of sets drawn from a given family of sets. It is necessary and sufficient that the family have the following properties: Every induced subgraph of a graph in the family must also be in the family. Every graph formed from a graph in the family by replacing a vertex by a clique must also belong to the family. There exists an infinite sequence of graphs in the family, each of which is an induced subgraph of the next graph in the sequence, with the property that every graph in the family is an induced subgraph of a graph in the sequence. If the intersection graph representations have the additional requirement that different vertices must be represented by different sets, then the clique expansion property can be omitted. == Related concepts == An order-theoretic analog to the intersection graphs are the inclusion orders. In the same way that an intersection representation of a graph labels every vertex with a set so that vertices are adjacent if and only if their sets have nonempty intersection, so an inclusion representation f of a poset labels every element with a set so that for any x and y in the poset, x ≤ y if and only if f(x) ⊆ f(y). == See also == Contact graph == References == Čulík, K. (1964), "Applications of graph theory to mathematical logic and linguistics", Theory of Graphs and its Applications (Proc. Sympos. Smolenice, 1963), Prague: Publ. House Czechoslovak Acad. Sci., pp. 13–20, MR 0176940. Erdős, Paul; Goodman, A.
W.; Pósa, Louis (1966), "The representation of a graph by set intersections" (PDF), Canadian Journal of Mathematics, 18 (1): 106–112, doi:10.4153/CJM-1966-014-3, MR 0186575, S2CID 646660. Golumbic, Martin Charles (1980), Algorithmic Graph Theory and Perfect Graphs, Academic Press, ISBN 0-12-289260-7. McKee, Terry A.; McMorris, F. R. (1999), Topics in Intersection Graph Theory, SIAM Monographs on Discrete Mathematics and Applications, vol. 2, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 0-89871-430-3, MR 1672910. Szpilrajn-Marczewski, E. (1945), "Sur deux propriétés des classes d'ensembles", Fund. Math., 33: 303–307, doi:10.4064/fm-33-1-303-307, MR 0015448. Schaefer, Marcus (2010), "Complexity of some geometric and topological problems" (PDF), Graph Drawing, 17th International Symposium, GS 2009, Chicago, IL, USA, September 2009, Revised Papers, Lecture Notes in Computer Science, vol. 5849, Springer-Verlag, pp. 334–344, doi:10.1007/978-3-642-11805-0_32, ISBN 978-3-642-11804-3. Scheinerman, Edward R. (1985), "Characterizing intersection classes of graphs", Discrete Mathematics, 55 (2): 185–193, doi:10.1016/0012-365X(85)90047-0, MR 0798535. == Further reading == For an overview of both the theory of intersection graphs and important special classes of intersection graphs, see McKee & McMorris (1999). == External links == Jan Kratochvíl, A video lecture on intersection graphs (June 2007) E. Prisner, A Journey through Intersection Graph County
Wikipedia/Intersection_graph
In mathematics, a dense graph is a graph in which the number of edges is close to the maximal number of edges (where every pair of vertices is connected by one edge). The opposite, a graph with only a few edges, is a sparse graph. The distinction between a dense and a sparse graph is ill-defined, and is often represented by 'roughly equal to' statements. Due to this, the way that density is defined often depends on the context of the problem. The graph density of simple graphs is defined to be the ratio of the number of edges |E| to the maximum possible number of edges. For undirected simple graphs, the graph density is D = |E| / (|V| choose 2) = 2|E| / (|V|(|V| − 1)). For directed simple graphs, the maximum possible number of edges is twice that of undirected graphs (as there are two directions to an edge), so the density is D = |E| / (2(|V| choose 2)) = |E| / (|V|(|V| − 1)), where E is the number of edges and V is the number of vertices in the graph. The maximum number of edges for an undirected graph is (|V| choose 2) = |V|(|V| − 1)/2, so the maximal density is 1 (for complete graphs) and the minimal density is 0. For families of graphs of increasing size, one often calls them sparse if D → 0 as |V| → ∞. Sometimes, in computer science, a more restrictive definition of sparse is used, such as |E| = O(|V| log |V|) or even |E| = O(|V|). In this same context, a dense graph may be defined as any graph where |E| is "close" to |V|². == Upper density == Upper density is an extension of the concept of graph density defined above from finite graphs to infinite graphs.
Intuitively, an infinite graph has arbitrarily large finite subgraphs with any density less than its upper density, and does not have arbitrarily large finite subgraphs with density greater than its upper density. Formally, the upper density of a graph G is the infimum of the values α such that the finite subgraphs of G with density α have a bounded number of vertices. It can be shown using the Erdős–Stone theorem that the upper density can only be 1 or one of the superparticular ratios 0, 1/2, 2/3, 3/4, 4/5, …, n/(n + 1), … == Sparse and tight graphs == Lee & Streinu (2008) and Streinu & Theran (2009) define a graph as being (k, l)-sparse if every nonempty subgraph with n vertices has at most kn − l edges, and (k, l)-tight if it is (k, l)-sparse and has exactly kn − l edges. Thus trees are exactly the (1,1)-tight graphs, forests are exactly the (1,1)-sparse graphs, and graphs with arboricity k are exactly the (k,k)-sparse graphs. Pseudoforests are exactly the (1,0)-sparse graphs, and the Laman graphs arising in rigidity theory are exactly the (2,3)-tight graphs. Other graph families not characterized by their sparsity can also be described in this way. For instance the facts that any planar graph with n vertices has at most 3n − 6 edges (except for graphs with fewer than 3 vertices), and that any subgraph of a planar graph is planar, together imply that the planar graphs are (3,6)-sparse. However, not every (3,6)-sparse graph is planar. Similarly, outerplanar graphs are (2,3)-sparse and planar bipartite graphs are (2,4)-sparse. Streinu and Theran show that testing (k,l)-sparsity may be performed in polynomial time when k and l are integers and 0 ≤ l < 2k. For a graph family, the existence of k and l such that the graphs in the family are all (k,l)-sparse is equivalent to the graphs in the family having bounded degeneracy or having bounded arboricity.
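The (k, l)-sparsity condition can be tested naively by checking every vertex subset, reading the definition above literally. This brute-force sketch (names mine; exponential in the number of vertices, so only usable for tiny graphs — the polynomial-time pebble-game algorithms of Streinu and Theran are the practical approach) verifies that a path on four vertices, being a tree, is (1,1)-tight, and that closing it into a cycle breaks (1,1)-sparsity:

```python
from itertools import combinations

def is_kl_sparse(vertices, edges, k, l):
    """Check (k, l)-sparsity by brute force: every nonempty vertex subset
    of size n must span at most k*n - l induced edges."""
    for size in range(1, len(vertices) + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            induced = sum(1 for u, v in edges if u in s and v in s)
            if induced > k * size - l:
                return False
    return True

def is_kl_tight(vertices, edges, k, l):
    """(k, l)-tight: (k, l)-sparse with exactly k*n - l edges overall."""
    return (is_kl_sparse(vertices, edges, k, l)
            and len(edges) == k * len(vertices) - l)

# A path on 4 vertices is a tree, hence (1,1)-tight; adding the edge
# (0, 3) creates a cycle, so 4 vertices span 4 > 1*4 - 1 edges.
path_edges = [(0, 1), (1, 2), (2, 3)]
```

The path passes the tightness check; the cycle fails sparsity.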
More precisely, it follows from a result of Nash-Williams (1964) that the graphs of arboricity at most a are exactly the (a,a)-sparse graphs. Similarly, the graphs of degeneracy at most d are ( d , ( d + 1 2 ) ) {\displaystyle \left(d,{\binom {d+1}{2}}\right)} -sparse graphs. == Sparse and dense classes of graphs == Nešetřil & Ossona de Mendez (2010) considered that the sparsity/density dichotomy makes it necessary to consider infinite graph classes instead of single graph instances. They defined somewhere dense graph classes as those classes of graphs for which there exists a threshold t such that every complete graph appears as a t-subdivision in a subgraph of a graph in the class. To the contrary, if such a threshold does not exist, the class is nowhere dense. The classes of graphs with bounded degeneracy and of nowhere dense graphs are both included in the biclique-free graphs, graph families that exclude some complete bipartite graph as a subgraph. == See also == Bounded expansion Dense subgraph == Notes == == References == Coleman, Thomas F.; Moré, Jorge J. (1983), "Estimation of sparse Jacobian matrices and graph coloring Problems", SIAM Journal on Numerical Analysis, 20 (1): 187–209, doi:10.1137/0720013 Diestel, Reinhard (2005), Graph Theory, Graduate Texts in Mathematics, Springer-Verlag, ISBN 3-540-26183-4, OCLC 181535575 Lee, Audrey; Streinu, Ileana (2008), "Pebble game algorithms and sparse graphs", Discrete Mathematics, 308 (8): 1425–1437, arXiv:math/0702129, doi:10.1016/j.disc.2007.07.104, MR 2392060 Nash-Williams, C. St. J. A. 
(1964), "Decomposition of finite graphs into forests", Journal of the London Mathematical Society, 39 (1): 12, doi:10.1112/jlms/s1-39.1.12, MR 0161333 Lick, Don R.; White, Arthur T. (1970), "k-Degenerate graphs", Canadian Journal of Mathematics, 22 (5): 1082–1096 Preiss, Bruno R. (1998), Data Structures and Algorithms with Object-Oriented Design Patterns in C++, John Wiley & Sons, ISBN 0-471-24134-2 Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2010), "From Sparse Graphs to Nowhere Dense Structures: Decompositions, Independence, Dualities and Limits", European Congress of Mathematics, European Mathematical Society, pp. 135–165 Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Heidelberg: Springer, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058 Streinu, I.; Theran, L. (2009), "Sparse hypergraphs and pebble game algorithms", European Journal of Combinatorics, 30 (8): 1944–1964, arXiv:math/0703921, doi:10.1016/j.ejc.2008.12.018 Telle, Jan Arne; Villanger, Yngve (2012), "FPT algorithms for domination in biclique-free graphs", in Epstein, Leah; Ferragina, Paolo (eds.), Algorithms – ESA 2012: 20th Annual European Symposium, Ljubljana, Slovenia, September 10–12, 2012, Proceedings, Lecture Notes in Computer Science, vol. 7501, Springer, pp. 802–812, doi:10.1007/978-3-642-33090-2_69 == Further reading == Black, Paul E., "Sparse graph", Dictionary of Algorithms and Data Structures, NIST, retrieved 29 September 2005
Wikipedia/Sparse_graph
In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them. The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Their practical applications are found in all areas in which complex networks need to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph. == Models == A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise. Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert but often called the Erdős–Rényi model, denoted G(n,p). In it, every possible edge occurs independently with probability 0 < p < 1. The probability of obtaining any one particular random graph with m edges is p^m (1 − p)^(N−m) with the notation N = (n choose 2). A closely related model, also called the Erdős–Rényi model and denoted G(n,M), assigns equal probability to all graphs with exactly M edges. With 0 ≤ M ≤ N, G(n,M) has (N choose M) elements and every element occurs with probability 1/(N choose M).
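A G(n, p) sample can be drawn directly from the definition above: include each of the N = (n choose 2) possible edges independently with probability p. A minimal sketch (the function name and edge representation are mine):

```python
import random

def gnp_random_graph(n, p, seed=None):
    """Sample from the G(n, p) model: each of the C(n, 2) possible edges
    is included independently with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

# Edge cases: p = 1 yields the complete graph, p = 0 the empty graph.
```

For intermediate p the expected number of edges is p·(n choose 2).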
The G(n,M) model can be viewed as a snapshot at a particular time (M) of the random graph process G̃n, a stochastic process that starts with n vertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges. If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability 0 < p < 1, then we get an object G called an infinite random graph. Except in the trivial cases when p is 0 or 1, such a G almost surely has the following property: Given any n + m elements a1, …, an, b1, …, bm ∈ V, there is a vertex c in V that is adjacent to each of a1, …, an and is not adjacent to any of b1, …, bm. It turns out that if the vertex set is countable then there is, up to isomorphism, only a single graph with this property, namely the Rado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply the random graph. However, the analogous result is not true for uncountable graphs, of which there are many (nonisomorphic) graphs satisfying the above property. Another model, which generalizes Gilbert's random graph model, is the random dot-product model. A random dot-product graph associates with each vertex a real vector. The probability of an edge uv between any vertices u and v is some function of the dot product u • v of their respective vectors. The network probability matrix models random graphs through edge probabilities, which represent the probability pi,j that a given edge ei,j exists for a specified time period. This model is extensible to directed and undirected, weighted and unweighted, and static or dynamic graph structures.
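The random dot-product model described above can be sketched concretely. This is an illustration under assumptions of my own: the logistic link function is one arbitrary choice of "some function of the dot product", not prescribed by the article.

```python
import math
import random

def random_dot_product_graph(vectors,
                             link=lambda t: 1 / (1 + math.exp(-t)),
                             seed=None):
    """Random dot-product graph sketch: vertex i carries vectors[i], and the
    edge {i, j} appears with probability link(vectors[i] . vectors[j]).
    The logistic link is an illustrative choice, not from the article."""
    rng = random.Random(seed)
    n = len(vectors)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(a * b for a, b in zip(vectors[i], vectors[j]))
            if rng.random() < link(dot):
                edges.add((i, j))
    return edges
```

With a degenerate 0/1 link the sampling becomes deterministic, which makes the behavior easy to inspect: only pairs with positive dot product are joined.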
For M ≃ pN, where N is the maximal number of edges possible, the two most widely used models, G(n,M) and G(n,p), are almost interchangeable. Random regular graphs form a special case, with properties that may differ from random graphs in general. Once we have a model of random graphs, every function on graphs becomes a random variable. The study of such a model aims to determine whether, or at least to estimate the probability that, a property may occur. == Terminology == The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities, such that the error probabilities tend to zero. == Properties == The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask for a given value of n and p what the probability is that G(n, p) is connected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs: the values that various probabilities converge to as n grows very large. Percolation theory characterizes the connectedness of random graphs, especially infinitely large ones. Percolation is related to the robustness of the graph (also called the network). Given a random graph of n nodes with an average degree ⟨k⟩, we remove a randomly chosen fraction 1 − p of the nodes and leave only a fraction p. There exists a critical percolation threshold pc = 1/⟨k⟩ below which the network becomes fragmented, while above pc a giant connected component exists. Localized percolation refers to removing a node, its neighbors, next nearest neighbors, etc., until a fraction 1 − p of the nodes of the network is removed.
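The node-removal experiment just described can be simulated directly: keep each node independently with probability p and measure the largest surviving connected component. This sketch (names and the demo graph are mine) uses breadth-first search for the component sizes and records the threshold pc = 1/⟨k⟩ for an illustrative average degree of 4.

```python
import random
from collections import deque

def largest_component(nodes, adj):
    """Size of the largest connected component, found by BFS."""
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def percolate(nodes, adj, p, seed=None):
    """Keep each node independently with probability p (i.e. remove a
    random fraction 1 - p) and return the largest surviving component."""
    rng = random.Random(seed)
    kept = {u for u in nodes if rng.random() < p}
    sub = {u: [v for v in adj.get(u, ()) if v in kept] for u in kept}
    return largest_component(kept, sub)

# Demo graph: a path 0-1-2 plus a separate edge 3-4.
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
p_c = 1 / 4  # critical threshold for an average degree <k> = 4
```

Keeping every node (p = 1) recovers the full components; removing every node (p = 0) leaves nothing.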
It was shown that for a random graph with a Poisson distribution of degrees, pc = 1/⟨k⟩, exactly as for random removal. Random graphs are widely used in the probabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via the Szemerédi regularity lemma, the existence of that property on almost all graphs. In random regular graphs, G(n, r-reg) denotes the set of r-regular graphs with r = r(n) such that n and m are natural numbers, 3 ≤ r < n, and rn = 2m is even. The degree sequence of a graph G in Gn depends only on the number of edges in the sets Vn(2) = {ij : 1 ≤ j ≤ n, i ≠ j} ⊂ V(2), i = 1, …, n. If the number of edges M in a random graph GM is large enough to ensure that almost every GM has minimum degree at least 1, then almost every GM is connected and, if n is even, almost every GM has a perfect matching. In particular, the moment the last isolated vertex vanishes in almost every random graph, the graph becomes connected. Almost every graph process on an even number of vertices with the edge raising the minimum degree to 1 or a random graph with slightly more than (n/4) log(n) edges and with probability close to 1 ensures that the graph has a complete matching, with the exception of at most one vertex.
For some constant c, almost every labeled graph with n vertices and at least cn log(n) edges is Hamiltonian. With the probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian. Properties of random graphs may change or remain invariant under graph transformations. Mashaghi A. et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. == Colouring == Given a random graph G of order n with the vertex set V(G) = {1, ..., n}, by the greedy algorithm on the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1, vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2, etc.). The number of proper colorings of random graphs given q colors, called its chromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parameters n and the number of edges m or the connection probability p has been studied empirically using an algorithm based on symbolic pattern matching. == Random trees == A random tree is a tree or arborescence that is formed by a stochastic process. In a large range of random graphs of order n and size M(n) the distribution of the number of tree components of order k is asymptotically Poisson. Types of random trees include uniform spanning tree, random minimum spanning tree, random binary tree, treap, rapidly exploring random tree, Brownian tree, and random forest.
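As a concrete instance of a tree formed by a stochastic process, the following sketch grows a random recursive tree by attaching each new vertex to a uniformly chosen earlier vertex. The uniform-attachment rule is one simple illustrative choice of my own, not one of the specific types listed above.

```python
import random

def random_recursive_tree(n, seed=None):
    """Attach each vertex i = 1, ..., n-1 to a uniformly chosen earlier
    vertex. The result has n - 1 edges and is connected, hence a tree."""
    rng = random.Random(seed)
    return [(i, rng.randrange(i)) for i in range(1, n)]

edges = random_recursive_tree(6, seed=42)
```

Whatever the random choices, the output always has n − 1 edges and every child attaches to a strictly earlier vertex, which guarantees acyclicity and connectivity.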
== Conditional random graphs == Consider a given random graph model defined on the probability space (Ω, F, P) and let 𝒫(G) : Ω → R^m be a real valued function which assigns to each graph in Ω a vector of m properties. For a fixed p ∈ R^m, conditional random graphs are models in which the probability measure P assigns zero probability to all graphs such that 𝒫(G) ≠ p. Special cases are conditionally uniform random graphs, where P assigns equal probability to all the graphs having specified properties. They can be seen as a generalization of the Erdős–Rényi model G(n,M), when the conditioning information is not necessarily the number of edges M, but whatever other arbitrary graph property 𝒫(G). In this case very few analytical results are available and simulation is required to obtain empirical distributions of average properties. == History == The earliest use of a random graph model was by Helen Hall Jennings and Jacob Moreno in 1938, where a "chance sociogram" (a directed Erdős–Rényi model) was considered in comparing the fraction of reciprocated links in their network data with the random model. Another use, under the name "random net", was by Ray Solomonoff and Anatol Rapoport in 1951, using a model of directed graphs with fixed out-degree and randomly chosen attachments to other vertices. The Erdős–Rényi model of random graphs was first defined by Paul Erdős and Alfréd Rényi in their 1959 paper "On Random Graphs" and independently by Gilbert in his paper "Random graphs".
== See also ==

Bose–Einstein condensation: a network theory approach – model in network science
Cavity method – Mathematical method in statistical physics
Complex networks – Network with non-trivial topological features
Dual-phase evolution – Process that drives self-organization within complex adaptive systems
Erdős–Rényi model – Two closely related models for generating random graphs
Exponential random graph model – Statistical models for network analysis
Graph theory – Area of discrete mathematics
Interdependent networks – Subfield of network science
Network science – Academic field
Percolation – Filtration of fluids through porous materials
Percolation theory – Mathematical theory on behavior of connected clusters in a random graph
Random graph theory of gelation – Mathematical theory for sol–gel processes
Regular graph – Graph where each vertex has the same number of neighbors
Scale-free network – Network whose degree distribution follows a power law
Semilinear response – Extension of linear response theory in mesoscopic regimes
Stochastic block model – Concept in network science
Lancichinetti–Fortunato–Radicchi benchmark – Algorithm

== References ==
Wikipedia/Random_graph
Extremal graph theory is a branch of combinatorics, itself an area of mathematics, that lies at the intersection of extremal combinatorics and graph theory. In essence, extremal graph theory studies how global properties of a graph influence local substructure. Results in extremal graph theory deal with quantitative connections between various graph properties, both global (such as the number of vertices and edges) and local (such as the existence of specific subgraphs), and problems in extremal graph theory can often be formulated as optimization problems: how big or small can a parameter of a graph be, given some constraints that the graph has to satisfy? A graph that is an optimal solution to such an optimization problem is called an extremal graph, and extremal graphs are important objects of study in extremal graph theory. Extremal graph theory is closely related to fields such as Ramsey theory, spectral graph theory, computational complexity theory, and additive combinatorics, and frequently employs the probabilistic method.

== History ==

Mantel's Theorem (1907) and Turán's Theorem (1941) were some of the first milestones in the study of extremal graph theory. In particular, Turán's theorem would later become a motivation for results such as the Erdős–Stone theorem (1946). This result is surprising because it connects the chromatic number with the maximal number of edges in an H-free graph. An alternative proof of the Erdős–Stone theorem was given in 1975, and utilised the Szemerédi regularity lemma, an essential technique in the resolution of extremal graph theory problems.

== Topics and concepts ==

=== Graph coloring ===

A proper (vertex) coloring of a graph G is a coloring of the vertices of G such that no two adjacent vertices have the same color.
The minimum number of colors needed to properly color G is called the chromatic number of G, denoted χ(G). Determining the chromatic number of specific graphs is a fundamental question in extremal graph theory, because many problems in the area and related areas can be formulated in terms of graph coloring. Two simple lower bounds on the chromatic number of a graph G are given by the clique number ω(G) (all vertices of a clique must have distinct colors) and by |V(G)|/α(G), where α(G) is the independence number, because the set of vertices with a given color must form an independent set. A greedy coloring gives the upper bound χ(G) ≤ Δ(G) + 1, where Δ(G) is the maximum degree of G. When G is not an odd cycle or a clique, Brooks' theorem states that the upper bound can be reduced to Δ(G). When G is a planar graph, the four-color theorem states that G has chromatic number at most four. In general, determining whether a given graph has a coloring with a prescribed number of colors is known to be NP-hard. In addition to vertex coloring, other types of coloring are also studied, such as edge colorings. The chromatic index χ′(G) of a graph G is the minimum number of colors in a proper edge coloring of the graph, and Vizing's theorem states that the chromatic index is either Δ(G) or Δ(G) + 1.

=== Forbidden subgraphs ===

The forbidden subgraph problem is one of the central problems in extremal graph theory.
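The bounds above can be checked directly on a small example. The brute-force routine below is an illustrative sketch (exponential time, tiny graphs only; the function name is a choice made here):

```python
from itertools import product

def chromatic_number(n, edges):
    """Smallest k for which some assignment of k colours to the
    vertices 0..n-1 is proper (brute force over all k^n colourings)."""
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

# The 5-cycle has clique number 2 and maximum degree 2, yet chromatic
# number 3: the odd-cycle exception in Brooks' theorem (chi = Delta + 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
chi = chromatic_number(5, edges)
```

By contrast, the even cycle C4 is bipartite and satisfies Brooks' bound χ(G) ≤ Δ(G) = 2.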
Given a graph G, the forbidden subgraph problem asks for the maximal number of edges ex(n, G) in an n-vertex graph that does not contain a subgraph isomorphic to G. When G = K_r is a complete graph, Turán's theorem gives an exact value for ex(n, K_r) and characterizes all graphs attaining this maximum; such graphs are known as Turán graphs. For non-bipartite graphs G, the Erdős–Stone theorem gives an asymptotic value of ex(n, G) in terms of the chromatic number of G. The problem of determining the asymptotics of ex(n, G) when G is a bipartite graph is open; when G is a complete bipartite graph, this is known as the Zarankiewicz problem.

=== Homomorphism density ===

The homomorphism density t(H, G) of a graph H in a graph G describes the probability that a randomly chosen map from the vertex set of H to the vertex set of G is also a graph homomorphism. It is closely related to the subgraph density, which describes how often a graph H is found as a subgraph of G. The forbidden subgraph problem can be restated as maximizing the edge density of a graph with G-density zero, and this naturally leads to generalization in the form of graph homomorphism inequalities, which are inequalities relating t(H, G) for various graphs H.
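For the complete-graph case, the extremal value ex(n, K_r) can be computed directly from the structure of the Turán graph. A small sketch (the function name is illustrative; the formula counts edges of the balanced complete multipartite graph):

```python
def turan_edges(n, r):
    """ex(n, K_r): the edge count of the Turán graph T(n, r-1),
    the complete (r-1)-partite graph with parts as equal as possible."""
    q, rem = divmod(n, r - 1)
    parts = [q + 1] * rem + [q] * (r - 1 - rem)
    missing = sum(p * (p - 1) // 2 for p in parts)  # edges inside parts
    return n * (n - 1) // 2 - missing

# Mantel's theorem is the case r = 3: a triangle-free graph on n
# vertices has at most floor(n^2 / 4) edges.
m = turan_edges(10, 3)
```

The asymptotic edge density (1 - 1/(r-1)) predicted by Turán's theorem can be read off by dividing `turan_edges(n, r)` by n(n-1)/2 for large n.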
By extending the homomorphism density to graphons, which are objects that arise as a limit of dense graphs, the graph homomorphism density can be written in the form of integrals, and inequalities such as the Cauchy–Schwarz inequality and Hölder's inequality can be used to derive homomorphism inequalities. A major open problem relating homomorphism densities is Sidorenko's conjecture, which states a tight lower bound on the homomorphism density of a bipartite graph in a graph G in terms of the edge density of G.

=== Graph regularity ===

Szemerédi's regularity lemma states that all graphs are 'regular' in the following sense: the vertex set of any given graph can be partitioned into a bounded number of parts such that the bipartite graph between most pairs of parts behaves like a random bipartite graph. This partition gives a structural approximation to the original graph, which reveals information about the properties of the original graph. The regularity lemma is a central result in extremal graph theory, and also has numerous applications in the adjacent fields of additive combinatorics and computational complexity theory. In addition to (Szemerédi) regularity, closely related notions of graph regularity such as strong regularity and Frieze–Kannan weak regularity have also been studied, as well as extensions of regularity to hypergraphs. Applications of graph regularity often utilize forms of counting lemmas and removal lemmas. In its simplest form, the graph counting lemma uses regularity between pairs of parts in a regular partition to approximate the number of subgraphs, and the graph removal lemma states that given a graph with few copies of a given subgraph, we can remove a small number of edges to eliminate all copies of the subgraph.
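Returning to the homomorphism density defined earlier: for tiny graphs, t(H, G) can be computed by brute force over all vertex maps. A minimal sketch (function name and edge-list representation are illustrative):

```python
from itertools import product

def hom_density(h_edges, h_n, g_edges, g_n):
    """t(H, G): the fraction of all maps V(H) -> V(G) that carry
    every edge of H to an edge of G (brute force, tiny graphs only)."""
    adj = {(u, v) for u, v in g_edges} | {(v, u) for u, v in g_edges}
    homs = sum(
        all((phi[u], phi[v]) in adj for u, v in h_edges)
        for phi in product(range(g_n), repeat=h_n)
    )
    return homs / g_n ** h_n

# t(K2, G) is exactly the edge density 2|E(G)| / n^2; for the
# triangle K3 it equals 2/3.
t = hom_density([(0, 1)], 2, [(0, 1), (1, 2), (0, 2)], 3)
```

Graph homomorphism inequalities such as Sidorenko's conjecture relate quantities of precisely this form for different choices of H.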
== See also ==

Related fields
Ramsey theory
Ramsey–Turán theory
Spectral graph theory
Additive combinatorics
Computational complexity theory
Probabilistic combinatorics

Techniques and methods
Probabilistic method
Dependent random choice
Container method
Hypergraph regularity method

Theorems and conjectures (in addition to ones mentioned above)
Ore's theorem
Ruzsa–Szemerédi problem

== References ==
Wikipedia/Extremal_graph_theory
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in a wide variety of fields such as biology, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion. Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.: 1–4  While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issue of microscopically modeling the speed of irreversible processes that are driven by imbalances.: 3  Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic result obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation: a steady-state current flow in a system of many particles.: 572–573

== History ==

In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:

Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884

In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further. Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem. The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e.
statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871: "In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus." "Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework of classical mechanics; however, they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.

== Principles: mechanics and ensembles ==

In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:

The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics).

Using these two concepts, the state at any other time, past or future, can in principle be calculated. There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction).
Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in. Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix. As is usual for probabilities, the ensemble can be interpreted in different ways: an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials. These two meanings are equivalent for many purposes, and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). 
These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state. One special class of ensembles consists of those that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.

== Statistical thermodynamics ==

The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material. Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium); rather, only that the ensemble is not evolving.
=== Fundamental postulate ===

A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.). There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that

For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.

The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:

Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).

Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate.
One such formalism is based on the fundamental thermodynamic relation together with a set of postulates.

=== Three thermodynamic ensembles ===

There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.

Microcanonical ensemble: describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble: describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble: describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.

For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour.
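The defining feature of the canonical ensemble just described, state probabilities weighted by total energy, can be sketched for a two-level system (an illustrative example; units with the Boltzmann constant set to 1 are assumed):

```python
import math

def canonical_probs(energies, beta):
    """Canonical-ensemble probabilities p_i = exp(-beta * E_i) / Z,
    where Z is the partition function and beta = 1/T (k_B = 1)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

# Two-level system with energy gap 1: at high temperature (small beta)
# the states are nearly equally likely; at low temperature the ground
# state dominates.
p_hot = canonical_probs([0.0, 1.0], beta=0.01)
p_cold = canonical_probs([0.0, 1.0], beta=10.0)
```

In the microcanonical ensemble, by contrast, every accessible state would receive the same probability regardless of the temperature of any bath.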
It is then simply a matter of mathematical convenience which ensemble is used.: 227  The Gibbs theorem about equivalence of ensembles was developed into the theory of the concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology. Important cases where the thermodynamic ensembles do not give identical results include:

Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.

In these cases the correct thermodynamic ensemble must be chosen, as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.

=== Calculation methods ===

Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.

==== Exact ====

There are some cases which allow exact solutions. For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or an integral over all of phase space in classical mechanics).
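Direct enumeration of this kind can be carried out for a very small system. The sketch below computes the canonical partition function of an open 1D Ising chain by summing over all configurations and checks it against the known closed form (the model choice and the J = 1 convention are illustrative):

```python
import math
from itertools import product

def ising_partition_function(n, beta, j=1.0):
    """Z for an open 1D Ising chain of n spins, by enumerating all
    2^n configurations with E(s) = -j * sum_i s_i * s_{i+1}."""
    z = 0.0
    for spins in product((-1, 1), repeat=n):
        energy = -j * sum(spins[i] * spins[i + 1] for i in range(n - 1))
        z += math.exp(-beta * energy)
    return z

# The open chain has the closed form Z = 2 * (2*cosh(beta*j))**(n-1),
# which the enumeration reproduces.
z = ising_partition_function(6, beta=0.7)
```

The 2^n cost of the loop is exactly why such exact enumeration is limited to very small systems, as the text notes.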
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics. A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models; examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.

==== Monte Carlo ====

Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations. The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level. The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble; path integral Monte Carlo is also used to sample the canonical ensemble.

==== Other ====

For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
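The Metropolis–Hastings sampling of the canonical ensemble mentioned in the Monte Carlo subsection can be sketched for a small Ising chain (single-spin-flip proposals; J = 1, open boundaries, and the seeding are illustrative choices, not from the article):

```python
import math
import random

def metropolis_ising(n, beta, steps, rng):
    """Metropolis sampling of the canonical ensemble for an open 1D
    Ising chain: propose a single spin flip and accept it with
    probability min(1, exp(-beta * dE))."""
    spins = [1] * n
    for _ in range(steps):
        i = rng.randrange(n)
        left = spins[i - 1] if i > 0 else 0
        right = spins[i + 1] if i < n - 1 else 0
        d_e = 2 * spins[i] * (left + right)  # energy change of the flip
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i] = -spins[i]
    return spins

state = metropolis_ising(10, beta=0.5, steps=2000, rng=random.Random(1))
```

Averaging observables over the visited states approximates canonical-ensemble averages, with the sampling error shrinking as more steps are taken.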
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function. Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions. Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.

== Non-equilibrium statistical mechanics ==

Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:

heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.

All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.) In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain.
Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics. Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.

=== Stochastic methods ===

One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.

=== Near-equilibrium methods ===

Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium.
Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.: 664  This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics. A few of the theoretical tools used to make this connection include:

Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism

=== Hybrid methods ===

An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.

== Applications ==

The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:

propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.

Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasma, and the structural features of liquids. It underlies modern astrophysics and the virial theorem.
In solid-state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases). Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.

=== Quantum statistical mechanics ===

Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.

== Index of statistical mechanics topics ==

=== Physics ===

Probability amplitude
Statistical physics
Boltzmann factor
Feynman–Kac formula
Fluctuation theorem
Information entropy
Vacuum expectation value
Cosmic variance
Negative probability
Gibbs state
Master equation
Partition function (mathematics)
Quantum probability

=== Percolation theory ===

Percolation theory
Schramm–Loewner evolution

== See also ==

List of textbooks in thermodynamics and statistical mechanics
Laplace transform § Statistical mechanics

== References ==

== Further reading ==

Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Waveland Press. ISBN 978-1-4786-1005-2.
Müller-Kirsten, Harald J W.
(2013). Basics of Statistical Physics (PDF). doi:10.1142/8709. ISBN 978-981-4449-53-3. Kadanoff, Leo P. "Statistical Physics and other resources". Archived from the original on August 12, 2021. Retrieved June 18, 2023. Kadanoff, Leo P. (2000). Statistical Physics: Statics, Dynamics and Renormalization. World Scientific. ISBN 978-981-02-3764-6. Flamm, Dieter (1998). "History and outlook of statistical physics". arXiv:physics/9803005. == External links == Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy. Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter. Thermodynamics and Statistical Mechanics by Richard Fitzpatrick Cohen, Doron (2011). "Lecture Notes in Statistical Mechanics and Mesoscopics". arXiv:1107.0568 [quant-ph]. Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. The wiki site is down; see the archived version of the article from April 28, 2012.
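The defining properties of the density operator S described in the quantum statistical mechanics section (self-adjoint, non-negative, trace 1) can be checked numerically for a small example. The particular two-level mixture below is purely illustrative, not drawn from the article:

```python
import numpy as np

# A statistical ensemble for a two-level system: the pure states |0> and |+>
# mixed with classical weights 0.6 and 0.4 (illustrative numbers).
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.6 * np.outer(ket0, ket0) + 0.4 * np.outer(ketp, ketp)

# Density-operator axioms: trace one, self-adjoint, non-negative spectrum.
assert np.isclose(np.trace(rho), 1.0)
assert np.allclose(rho, rho.conj().T)
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)

# The ensemble average of an observable A is Tr(rho A); here A = Pauli Z.
pauli_z = np.diag([1.0, -1.0])
mean_z = np.trace(rho @ pauli_z)   # 0.6*(+1) + 0.4*0 = 0.6
```

For finite-dimensional systems this is all a density matrix is; the trace-class condition only becomes a genuine restriction on infinite-dimensional Hilbert spaces.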
Wikipedia/Statistical_physics
A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples. Semantic networks are used in neurolinguistics and natural language processing applications such as semantic parsing and word-sense disambiguation. Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even to map an entire research field. == History == Examples of the use of semantic networks in logic, and of directed acyclic graphs as a mnemonic tool, date back centuries; the earliest documented use is the Greek philosopher Porphyry's commentary on Aristotle's categories in the third century AD. In computing history, "Semantic Nets" for the propositional calculus were first implemented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an "interlingua" for machine translation of natural languages, although the importance of this work and of the Cambridge Language Research Unit was only belatedly realized. Semantic networks were also independently implemented by Robert F. Simmons and Sheldon Klein, using the first-order predicate calculus as a base, after being inspired by a demonstration of Victor Yngve. The "line of research was originated by the first President of the Association for Computational Linguistics, Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences.
Sheldon Klein and I about 1962–1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text." Other researchers, most notably M. Ross Quillian and others at System Development Corporation, helped contribute to their work in the early 1960s as part of the SYNTHEX project. It is these publications at System Development Corporation that most modern derivatives of the term "semantic network" cite as their background. Later prominent works were done by Allan M. Collins and Quillian (e.g., Collins and Quillian; Collins and Loftus). Still later, in 2006, Hermann Helbig fully described MultiNet. In the late 1980s, two universities in the Netherlands, Groningen and Twente, jointly began a project called Knowledge Graphs, which are semantic networks but with the added constraint that edges are restricted to be from a limited set of possible relations, to facilitate algebras on the graph. In the subsequent decades, the distinction between semantic networks and knowledge graphs was blurred. In 2012, Google gave their knowledge graph the name Knowledge Graph. The semantic link network was systematically studied as a semantic social networking method. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links and the reasoning rules on semantic links. The systematic theory and model were published in 2004. This research direction can be traced to the definition of inheritance rules for efficient model retrieval in 1998 and the Active Document Framework (ADF). Since 2003, research has developed toward social semantic networking. This work is a systematic innovation in the age of the World Wide Web and global social networking rather than an application or simple extension of the Semantic Net (Network).
Its purpose and scope are different from those of the Semantic Net (or network). The rules for reasoning and evolution and the automatic discovery of implicit links play an important role in the Semantic Link Network. Recently it has been developed to support Cyber-Physical-Social Intelligence. It was used for creating a general summarization method. The self-organised Semantic Link Network was integrated with a multi-dimensional category space to form a semantic space to support advanced applications with multi-dimensional abstractions and self-organised semantic links. It has been verified that the Semantic Link Network plays an important role in understanding and representation through text summarisation applications. The Semantic Link Network has been extended from cyberspace to cyber-physical-social space. Competition and symbiosis relations, as well as their roles in an evolving society, were studied in the emerging topic of Cyber-Physical-Social Intelligence. More specialized forms of semantic networks have been created for specific uses. For example, in 2008, Fawsy Bendeck's PhD thesis formalized the Semantic Similarity Network (SSN), which contains specialized relationships and propagation algorithms to simplify the semantic similarity representation and calculations. == Basics of semantic networks == A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based. They consist of arcs (spokes) and nodes (hubs), which can be organized into a taxonomic hierarchy. Different semantic networks can also be connected by bridge nodes. Semantic networks contributed to the ideas of spreading activation, inheritance, and nodes as proto-objects.
One process of constructing semantic networks, also known as co-occurrence networks, includes identifying keywords in the text, calculating the frequencies of co-occurrences, and analyzing the networks to find central words and clusters of themes in the network. == In linguistics == In the field of linguistics, semantic networks represent how the human mind handles associated concepts. Typically, concepts in a semantic network can have one of two different relationships: either semantic or associative. If semantic in relation, the two concepts are linked by any of the following semantic relationships: synonymy, antonymy, hypernymy, hyponymy, holonymy, meronymy, metonymy, or polysemy. These are not the only semantic relationships, but they are some of the most common. If associative in relation, the two concepts are linked based on the frequency with which they occur together. These associations are accidental, meaning that nothing about their individual meanings requires them to be associated with one another, only that they typically are. Examples of this would be pig and farm, pig and trough, or pig and mud. While nothing about the meaning of pig forces it to be associated with farms, as pigs can be wild, the fact that pigs are so frequently found on farms creates an accidental associative relationship. These thematic relationships are common within semantic networks and are notable results in free association tests. As the initial word is given, activation of the most closely related concepts begins, spreading outward to the less associated concepts. An example of this would be the initial word pig prompting mammal, then animal, and then breathes. This example shows that taxonomic relationships are inherent within semantic networks. The most closely related concepts typically share semantic features, which are determinants of semantic similarity scores.
Words with higher similarity scores are more closely related, and thus have a higher probability of being close in the semantic network. These relationships can be suggested to the brain through priming, where previous examples of the same relationship are shown before the target word is shown. The effect of priming on semantic network links can be seen through the speed of the reaction time to the word. Priming can help to reveal the structure of a semantic network and which words are most closely associated with the original word. Disruption of a semantic network can lead to a semantic deficit (not to be confused with semantic dementia). === In the brain === There is a physical manifestation of semantic relationships in the brain as well. Category-specific semantic circuits show that words belonging to different categories are processed in circuits located differently throughout the brain. For example, the semantic circuits for a word associated with the face or mouth (such as lick) are located in a different place of the brain than those for a word associated with the leg or foot (such as kick). This is a primary result of a 2013 study published by Friedemann Pulvermüller. These semantic circuits are directly tied to their sensorimotor areas of the brain. This is known as embodied semantics, a subtopic of embodied language processing. If brain damage occurs, the normal processing of semantic networks can be disrupted, leading to a preference for certain kinds of relationships to dominate the semantic network in the mind. == Examples == === In Lisp === The following code shows an example of a semantic network in the Lisp programming language using an association list. To extract all the information about the "canary" type, one would use the assoc function with a key of "canary". === WordNet === An example of a semantic network is WordNet, a lexical database of English.
It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined are meronymy (A is a meronym of B if A is part of B), holonymy (B is a holonym of A if B contains A), hyponymy (or troponymy) (A is subordinate of B; A is a kind of B), hypernymy (A is superordinate of B), synonymy (A denotes the same as B) and antonymy (A denotes the opposite of B). WordNet properties have been studied from a network theory perspective and compared to other semantic networks created from Roget's Thesaurus and word association tasks. From this perspective, all three have a small-world structure. === Other examples === It is also possible to represent logical descriptions using semantic networks such as the existential graphs of Charles Sanders Peirce or the related conceptual graphs of John F. Sowa. These have expressive power equal to or exceeding standard first-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. Some automated reasoners exploit the graph-theoretic features of the networks during processing. Other examples of semantic networks are Gellish models. Gellish English, with its Gellish English dictionary, is a formal language that is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, whereas multiple languages share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type.
Each relation type itself is a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish Dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable. SciCrunch is a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource IDentifiers, or RRIDs) for software, lab tools, etc., and it also provides options to create links between RRIDs and from communities. Another example of semantic networks, based on category theory, is ologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams are also prescribed to constrain the semantics. In the social sciences people sometimes use the term semantic network to refer to co-occurrence networks. The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks. This process includes identifying keywords in the text, constructing co-occurrence networks, and analyzing the networks to find central words and clusters of themes in the network. It is a particularly useful method to analyze large texts and big data. == Software tools == There are also elaborate types of semantic networks connected with corresponding sets of software tools used for lexical knowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro or the MultiNet paradigm of Hermann Helbig, especially suited for the semantic representation of natural language expressions and used in several NLP applications. Semantic networks are used in specialized information retrieval tasks, such as plagiarism detection.
They provide information on hierarchical relations in order to employ semantic compression to reduce language diversity and enable the system to match word meanings independently of the sets of words used. The Knowledge Graph proposed by Google in 2012 is an application of semantic networks to search engines. Modeling multi-relational data like semantic networks in low-dimensional spaces through forms of embedding has benefits in expressing entity relationships as well as extracting relations from mediums like text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE (NeurIPS 2013). Applications of embedding knowledge base data include social network analysis and relationship extraction. == See also == === Other examples === Cognition Network Technology Lexipedia OpenCog Open Mind Common Sense (OMCS) Schema.org Semantic computing SNOMED CT Universal Networking Language (UNL) Wikidata Freebase == References == == Further reading == Allen, J. and A. Frisch (1982). "What's in a Semantic Network". In: Proceedings of the 20th annual meeting of ACL, Toronto, pp. 19–27. John F. Sowa, Alexander Borgida (1991). Principles of Semantic Networks: Explorations in the Representation of Knowledge. Segev, E. (Ed.) (2022). Semantic Network Analysis in Social Sciences. New York: Routledge. == External links == "Semantic Networks" by John F. Sowa "Semantic Link Network" by Hai Zhuge
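The co-occurrence construction described in the article (identify keywords, count how often pairs appear in the same unit of text, then analyze the resulting weighted network) can be sketched as follows; the sample sentences echo the pig/farm examples from the linguistics section:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences, stopwords=frozenset()):
    """Build a weighted co-occurrence network: nodes are keywords and the
    weight of edge (a, b) counts the sentences containing both words."""
    edges = Counter()
    for sentence in sentences:
        words = set(sentence.lower().split()) - stopwords
        # Sorting gives each unordered pair a canonical (a, b) key.
        for pair in combinations(sorted(words), 2):
            edges[pair] += 1
    return edges

sentences = ["the pig is on the farm",
             "the pig eats from the trough",
             "a pig plays in the mud on the farm"]
net = cooccurrence_network(sentences,
                           stopwords={"the", "a", "is", "on", "in", "from"})
# The strongest tie is (farm, pig), which co-occur in two sentences.
```

Centrality measures and clustering would then be run on `net` with any graph library; this sketch only covers the construction step.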
Wikipedia/Semantic_network
igraph is a collection of libraries for creating and manipulating graphs and analyzing networks. It is written in C and also exists as Python and R packages. There is also an interface for Mathematica. The software is widely used in academic research in network science and related fields. The publication that introduces the software has 13,502 citations as of July 3, 2024, according to Google Scholar. igraph was originally developed by Gábor Csárdi and Tamás Nepusz. It is written in the C programming language in order to achieve good performance, and it is freely available under the GNU General Public License Version 2. == Basic properties == The three most important properties of igraph that shaped its development are as follows: it is capable of handling large networks efficiently; it can be used productively from a high-level programming language; and both interactive and non-interactive usage are supported. == Characteristics == The software is open source; its source code can be downloaded from the project's GitHub page. There are several open source software packages that use igraph functions. For example, the R packages tnet, igraphtosonia and cccd depend on the igraph R package. igraph can be used on many operating systems. The R and Python packages require the respective host software; otherwise igraph is portable. The C library of igraph is well documented, as are the R and Python packages. == Functions == igraph can be used to generate graphs, compute centrality measures and path-length-based properties, as well as graph components and graph motifs. It can also be used for degree-preserving randomization. igraph can read and write file formats such as Pajek, GraphML, LGL, NCOL, DIMACS, and GML, as well as simple edge lists. The library contains several layout tools as well. == References == == External links == Official website
Wikipedia/Igraph
In graph theory, a factor of a graph G is a spanning subgraph, i.e., a subgraph that has the same vertex set as G. A k-factor of a graph is a spanning k-regular subgraph, and a k-factorization partitions the edges of the graph into disjoint k-factors. A graph G is said to be k-factorable if it admits a k-factorization. In particular, a 1-factor is a perfect matching, and a 1-factorization of a k-regular graph is a proper edge coloring with k colors. A 2-factor is a collection of disjoint cycles that spans all vertices of the graph. == 1-factorization == If a graph is 1-factorable then it has to be a regular graph. However, not all regular graphs are 1-factorable. A k-regular graph is 1-factorable if and only if it has chromatic index k; examples of such graphs include: Any regular bipartite graph. Hall's marriage theorem can be used to show that a k-regular bipartite graph contains a perfect matching. One can then remove the perfect matching to obtain a (k − 1)-regular bipartite graph, and apply the same reasoning repeatedly. Any complete graph with an even number of nodes (see below). However, there are also k-regular graphs that have chromatic index k + 1, and these graphs are not 1-factorable; examples of such graphs include: Any regular graph with an odd number of nodes. The Petersen graph. === Complete graphs === A 1-factorization of a complete graph corresponds to pairings in a round-robin tournament. The 1-factorization of complete graphs is a special case of Baranyai's theorem concerning the 1-factorization of complete hypergraphs. One method for constructing a 1-factorization of a complete graph on an even number of vertices involves placing all but one of the vertices in a regular polygon, with the remaining vertex at the center. With this arrangement of vertices, one way of constructing a 1-factor of the graph is to choose an edge e from the center to a single polygon vertex together with all possible edges that lie on lines perpendicular to e.
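The circle construction just described can be sketched in Python (a minimal illustration; vertex n − 1 plays the role of the center, and the pairs symmetric about the spoke to vertex r stand in for the edges perpendicular to e):

```python
def one_factorization(n):
    """Circle-method 1-factorization of the complete graph K_n (n even).

    Vertices 0..n-2 sit on a regular polygon with vertex n-1 at the
    center.  Round r takes the spoke (center, r) plus the polygon edges
    perpendicular to it, i.e. the pairs symmetric about vertex r."""
    assert n % 2 == 0, "K_n is 1-factorable only for even n"
    m = n - 1                 # polygon vertices; also the number of 1-factors
    factors = []
    for r in range(m):
        matching = [(m, r)]   # the spoke e from the center to vertex r
        for i in range(1, n // 2):
            matching.append(((r + i) % m, (r - i) % m))
        factors.append(matching)
    return factors

# For K_8 this yields 7 perfect matchings partitioning all 28 edges.
```

Read as a round-robin schedule, round r of `one_factorization(n)` lists the pairings for that round of an n-player tournament.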
The 1-factors that can be constructed in this way form a 1-factorization of the graph. The number of distinct 1-factorizations of K2, K4, K6, K8, ... is 1, 1, 6, 6240, 1225566720, 252282619805368320, 98758655816833727741338583040, ... (OEIS: A000438). === 1-factorization conjecture === Let G be a k-regular graph with 2n nodes. If k is sufficiently large, it is known that G has to be 1-factorable: If k = 2n − 1, then G is the complete graph K2n, and hence 1-factorable (see above). If k = 2n − 2, then G can be constructed by removing a perfect matching from K2n. Again, G is 1-factorable. Chetwynd & Hilton (1985) show that if k ≥ 12n/7, then G is 1-factorable. The 1-factorization conjecture is a long-standing conjecture that states that k ≈ n is sufficient. In precise terms, the conjecture is: If n is odd and k ≥ n, then G is 1-factorable. If n is even and k ≥ n − 1 then G is 1-factorable. The overfull conjecture implies the 1-factorization conjecture. === Perfect 1-factorization === A perfect pair from a 1-factorization is a pair of 1-factors whose union induces a Hamiltonian cycle. A perfect 1-factorization (P1F) of a graph is a 1-factorization having the property that every pair of 1-factors is a perfect pair. A perfect 1-factorization should not be confused with a perfect matching (also called a 1-factor). In 1964, Anton Kotzig conjectured that every complete graph K2n where n ≥ 2 has a perfect 1-factorization. So far, it is known that the following graphs have a perfect 1-factorization: the infinite family of complete graphs K2p where p is an odd prime (by Anderson and also Nakamura, independently), the infinite family of complete graphs Kp+1 where p is an odd prime, and sporadic additional results, including K2n where 2n ∈ {16, 28, 36, 40, 50, 126, 170, 244, 344, 730, 1332, 1370, 1850, 2198, 3126, 6860, 12168, 16808, 29792}. Some newer results are collected here. 
If the complete graph Kn+1 has a perfect 1-factorization, then the complete bipartite graph Kn,n also has a perfect 1-factorization. == 2-factorization == If a graph is 2-factorable, then it has to be 2k-regular for some integer k. Julius Petersen showed in 1891 that this necessary condition is also sufficient: any 2k-regular graph is 2-factorable. If a connected graph is 2k-regular and has an even number of edges it may also be k-factored, by choosing each of the two factors to be an alternating subset of the edges of an Euler tour. This applies only to connected graphs; disconnected counterexamples include disjoint unions of odd cycles, or of copies of K2k+1. The Oberwolfach problem concerns the existence of 2-factorizations of complete graphs into isomorphic subgraphs. It asks for which subgraphs this is possible. This is known when the subgraph is connected (in which case it is a Hamiltonian cycle and this special case is the problem of Hamiltonian decomposition) but the general case remains open. == References == == Bibliography == == Further reading ==
Wikipedia/Graph_factorization
Geometric graph theory in the broader sense is a large and amorphous subfield of graph theory, concerned with graphs defined by geometric means. In a stricter sense, geometric graph theory studies combinatorial and geometric properties of geometric graphs, meaning graphs drawn in the Euclidean plane with possibly intersecting straight-line edges, and topological graphs, where the edges are allowed to be arbitrary continuous curves connecting the vertices; thus, it can be described as "the theory of geometric and topological graphs" (Pach 2013). Geometric graphs are also known as spatial networks. == Different types of geometric graphs == A planar straight-line graph is a graph in which the vertices are embedded as points in the Euclidean plane, and the edges are embedded as non-crossing line segments. Fáry's theorem states that any planar graph may be represented as a planar straight line graph. A triangulation is a planar straight line graph to which no more edges may be added, so called because every face is necessarily a triangle; a special case of this is the Delaunay triangulation, a graph defined from a set of points in the plane by connecting two points with an edge whenever there exists a circle containing only those two points. The 1-skeleton of a polyhedron or polytope is the set of vertices and edges of said polyhedron or polytope. The skeleton of any convex polyhedron is a planar graph, and the skeleton of any k-dimensional convex polytope is a k-connected graph. Conversely, Steinitz's theorem states that any 3-connected planar graph is the skeleton of a convex polyhedron; for this reason, this class of graphs is also known as the polyhedral graphs. A Euclidean graph is a graph in which the vertices represent points in the plane, and each edge is assigned the length equal to the Euclidean distance between its endpoints. The Euclidean minimum spanning tree is the minimum spanning tree of a Euclidean complete graph. 
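As a small illustration of the last definition, the Euclidean minimum spanning tree of a point set can be computed by running Prim's algorithm on the complete graph of pairwise distances (a quadratic-time sketch, fine for small inputs):

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph.
    Returns the MST as a list of index pairs (parent, child)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # For each vertex outside the tree, its cheapest link into the tree.
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    edges = []
    while best:
        j = min(best, key=lambda v: best[v][0])
        _, parent = best.pop(j)
        edges.append((parent, j))
        for v in best:                     # relax links through the new vertex
            if dist(j, v) < best[v][0]:
                best[v] = (dist(j, v), j)
    return edges
```

Practical implementations avoid the complete graph by exploiting the fact that the Euclidean MST is a subgraph of the Delaunay triangulation.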
It is also possible to define graphs by conditions on the distances; in particular, a unit distance graph is formed by connecting pairs of points that are a unit distance apart in the plane. The Hadwiger–Nelson problem concerns the chromatic number of these graphs. An intersection graph is a graph in which each vertex is associated with a set and in which vertices are connected by edges whenever the corresponding sets have a nonempty intersection. When the sets are geometric objects, the result is a geometric graph. For instance, the intersection graph of line segments in one dimension is an interval graph; the intersection graph of unit disks in the plane is a unit disk graph. The circle packing theorem states that the intersection graphs of non-crossing circles are exactly the planar graphs. Scheinerman's conjecture (proven in 2009) states that every planar graph can be represented as the intersection graph of line segments in the plane. A Levi graph of a family of points and lines has a vertex for each of these objects and an edge for every incident point-line pair. The Levi graphs of projective configurations lead to many important symmetric graphs and cages. The visibility graph of a closed polygon connects each pair of vertices by an edge whenever the line segment connecting the vertices lies entirely in the polygon. It is not known how to test efficiently whether an undirected graph can be represented as a visibility graph. A partial cube is a graph for which the vertices can be associated with the vertices of a hypercube, in such a way that distance in the graph equals Hamming distance between the corresponding hypercube vertices. Many important families of combinatorial structures, such as the acyclic orientations of a graph or the adjacencies between regions in a hyperplane arrangement, can be represented as partial cube graphs.
An important special case of a partial cube is the skeleton of the permutohedron, a graph in which vertices represent permutations of a set of ordered objects and edges represent swaps of objects adjacent in the order. Several other important classes of graphs including median graphs have related definitions involving metric embeddings (Bandelt & Chepoi 2008). A flip graph is a graph formed from the triangulations of a point set, in which each vertex represents a triangulation and two triangulations are connected by an edge if they differ by the replacement of one edge for another. It is also possible to define related flip graphs for partitions into quadrilaterals or pseudotriangles, and for higher-dimensional triangulations. The flip graph of triangulations of a convex polygon forms the skeleton of the associahedron or Stasheff polytope. The flip graph of the regular triangulations of a point set (projections of higher-dimensional convex hulls) can also be represented as a skeleton, of the so-called secondary polytope. == See also == Topological graph theory Chemical graph Spatial network == References == Bandelt, Hans-Jürgen; Chepoi, Victor (2008). "Metric graph theory and geometry: a survey" (PDF). Surveys on Discrete and Computational Geometry - Twenty Years Later. Contemporary Mathematics. Vol. 453. American Mathematical Society. pp. 49–86. Pach, János, ed. (2004). Towards a Theory of Geometric Graphs. Contemporary Mathematics. Vol. 342. American Mathematical Society. Pach, János (2013). "The beginnings of geometric graph theory". Erdös centennial. Bolyai Soc. Math. Stud. Vol. 25. Budapest: János Bolyai Math. Soc. pp. 465–484. doi:10.1007/978-3-642-39286-3_17. MR 3203608. Pisanski, Tomaž; Randić, Milan (2000). "Bridges between geometry and graph theory". In Gorini, C. A. (ed.). Geometry at Work: Papers in Applied Geometry. Washington, DC: Mathematical Association of America. pp. 174–194. Archived from the original on 2007-09-27. 
== External links == Media related to Geometric graph theory at Wikimedia Commons
Wikipedia/Geometric_graph_theory
In graph theory, the Hadwiger conjecture states that if G is loopless and has no Kt minor then its chromatic number satisfies χ(G) < t. It is known to be true for 1 ≤ t ≤ 6. The conjecture is a generalization of the four color theorem and is considered to be one of the most important and challenging open problems in the field. In more detail, if all proper colorings of an undirected graph G use k or more colors, then one can find k disjoint connected subgraphs of G such that each subgraph is connected by an edge to each other subgraph. Contracting the edges within each of these subgraphs so that each subgraph collapses to a single vertex produces a complete graph Kk on k vertices as a minor of G. The conjecture was made by Hugo Hadwiger in 1943. Bollobás, Catlin & Erdős (1980) call it "one of the deepest unsolved problems in graph theory". == Equivalent forms == An equivalent form of the Hadwiger conjecture (the contrapositive of the form stated above) is that, if there is no sequence of edge contractions (each merging the two endpoints of some edge into a single supervertex) that brings a graph G to the complete graph Kk, then G must have a vertex coloring with k − 1 colors. In a minimal k-coloring of any graph G, contracting each color class of the coloring to a single vertex will produce a complete graph Kk. However, this contraction process does not produce a minor of G because there is (by definition) no edge between any two vertices in the same color class, thus the contraction is not an edge contraction (which is required for minors).
Hadwiger's conjecture states that there exists a different way of properly edge contracting sets of vertices to single vertices, producing a complete graph Kk, in such a way that all the contracted sets are connected. If Fk denotes the family of graphs having the property that all minors of graphs in Fk can be (k − 1)-colored, then it follows from the Robertson–Seymour theorem that Fk can be characterized by a finite set of forbidden minors. Hadwiger's conjecture is that this set consists of a single forbidden minor, Kk. The Hadwiger number h(G) of a graph G is the size k of the largest complete graph Kk that is a minor of G (or equivalently can be obtained by contracting edges of G). It is also known as the contraction clique number of G. The Hadwiger conjecture can be stated in the simple algebraic form χ(G) ≤ h(G), where χ(G) denotes the chromatic number of G. == Special cases and partial results == The case k = 2 is trivial: a graph requires more than one color if and only if it has an edge, and that edge is itself a K2 minor. The case k = 3 is also easy: the graphs requiring three colors are the non-bipartite graphs, and every non-bipartite graph has an odd cycle, which can be contracted to a 3-cycle, that is, a K3 minor. In the same paper in which he introduced the conjecture, Hadwiger proved its truth for k = 4. The graphs with no K4 minor are the series–parallel graphs and their subgraphs.
Each graph of this type has a vertex with at most two incident edges; one can 3-color any such graph by removing one such vertex, coloring the remaining graph recursively, and then adding back and coloring the removed vertex. Because the removed vertex has at most two edges, one of the three colors will always be available to color it when the vertex is added back. The truth of the conjecture for k = 5 {\displaystyle k=5} implies the four color theorem: for, if the conjecture is true, every graph requiring five or more colors would have a K 5 {\displaystyle K_{5}} minor and would (by Wagner's theorem) be nonplanar. Klaus Wagner proved in 1937 that the case k = 5 {\displaystyle k=5} is actually equivalent to the four color theorem and therefore we now know it to be true. As Wagner showed, every graph that has no K 5 {\displaystyle K_{5}} minor can be decomposed via clique-sums into pieces that are either planar or an 8-vertex Möbius ladder, and each of these pieces can be 4-colored independently of each other, so the 4-colorability of a K 5 {\displaystyle K_{5}} -minor-free graph follows from the 4-colorability of each of the planar pieces. Robertson, Seymour & Thomas (1993) proved the conjecture for k = 6 {\displaystyle k=6} , also using the four color theorem; their paper with this proof won the 1994 Fulkerson Prize. It follows from their proof that linklessly embeddable graphs, a three-dimensional analogue of planar graphs, have chromatic number at most five. Due to this result, the conjecture is known to be true for k ≤ 6 {\displaystyle k\leq 6} , but it remains unsolved for all k > 6 {\displaystyle k>6} . For k = 7 {\displaystyle k=7} , some partial results are known: every 7-chromatic graph must contain either a K 7 {\displaystyle K_{7}} minor or both a K 4 , 4 {\displaystyle K_{4,4}} minor and a K 3 , 5 {\displaystyle K_{3,5}} minor. 
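The recursive 3-coloring argument for K 4 {\displaystyle K_{4}} -minor-free graphs described above (remove a vertex of degree at most two, color the rest, add the vertex back) can be sketched directly. This is an illustrative sketch, not code from any cited source; the graph representation as a dictionary of neighbour sets and the example fan graph are assumptions for the demonstration.

```python
from copy import deepcopy

def three_color(adj):
    """3-color a K4-minor-free (series-parallel) graph.

    adj: dict mapping each vertex to a set of neighbours.
    Relies on the fact that every such graph (and every subgraph of one)
    has a vertex with at most two incident edges.
    """
    adj = deepcopy(adj)
    stack = []
    while adj:
        # Find a vertex of degree at most 2; remove it and remember
        # which neighbours were still present at removal time.
        v = next(u for u in adj if len(adj[u]) <= 2)
        stack.append((v, set(adj[v])))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    coloring = {}
    # Re-insert vertices in reverse order; at most 2 colored neighbours
    # guarantee that one of the three colors is always free.
    for v, nbrs in reversed(stack):
        used = {coloring[u] for u in nbrs if u in coloring}
        coloring[v] = min({0, 1, 2} - used)
    return coloring

# Example (assumed): a fan graph, path 1-2-3-4 plus a hub 0 joined to all.
graph = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 3}}
colors = three_color(graph)
assert all(colors[u] != colors[v] for u in graph for v in graph[u])
```

Every edge is checked exactly once: the endpoint removed earlier records the one removed later as a neighbour, and the later one is colored first when the stack is unwound.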
Every graph G {\displaystyle G} has a vertex with at most O ( h ( G ) log ⁡ h ( G ) ) {\textstyle O{\bigl (}h(G){\sqrt {\log h(G)}}{\bigr )}} incident edges, from which it follows that a greedy coloring algorithm that removes this low-degree vertex, colors the remaining graph, and then adds back the removed vertex and colors it, will color the given graph with O ( h ( G ) log ⁡ h ( G ) ) {\textstyle O{\bigl (}h(G){\sqrt {\log h(G)}}{\bigr )}} colors. In the 1980s, Alexander V. Kostochka and Andrew Thomason independently proved that every graph with no K k {\displaystyle K_{k}} minor has average degree O ( k log ⁡ k ) {\textstyle O(k{\sqrt {\log k}})} and can thus be colored using O ( k log ⁡ k ) {\textstyle O(k{\sqrt {\log k}})} colors. A sequence of improvements to this bound has led to a proof of O ( k log ⁡ log ⁡ k ) {\displaystyle O(k\log \log k)} -colorability for graphs without K k {\displaystyle K_{k}} minors. == Generalizations == György Hajós conjectured (not to be confused with the Hajós conjecture on graph decomposition into cycles) that Hadwiger's conjecture could be strengthened to subdivisions rather than minors: that is, that every graph with chromatic number k {\displaystyle k} contains a subdivision of a complete graph K k {\displaystyle K_{k}} . Hajós' conjecture is true for k ≤ 4 {\displaystyle k\leq 4} , but Catlin (1979) found counterexamples to this strengthened conjecture for k ≥ 7 {\displaystyle k\geq 7} ; the cases k = 5 {\displaystyle k=5} and k = 6 {\displaystyle k=6} remain open.
Erdős & Fajtlowicz (1981) observed that Hajós' conjecture fails badly for random graphs: for any ε > 0 {\displaystyle \varepsilon >0} , in the limit as the number of vertices, n {\displaystyle n} , goes to infinity, the probability approaches one that a random n {\displaystyle n} -vertex graph has chromatic number ≥ ( 1 2 − ε ) n / log 2 ⁡ n {\displaystyle \geq ({\tfrac {1}{2}}-\varepsilon )n/\log _{2}n} , and that its largest clique subdivision has O ( n ) {\textstyle O({\sqrt {n}})} vertices. In this context, it is worth noting that the probability also approaches one that a random n {\displaystyle n} -vertex graph has Hadwiger number greater than or equal to its chromatic number, so the Hadwiger conjecture holds for random graphs with high probability; more precisely, the Hadwiger number is with high probability proportional to n / log ⁡ n {\textstyle n/{\sqrt {\log n}}} . Borowiecki (1993) asked whether Hadwiger's conjecture could be extended to list coloring. For k ≤ 4 {\displaystyle k\leq 4} , every graph with list chromatic number k {\displaystyle k} has a k {\displaystyle k} -vertex clique minor. However, the maximum list chromatic number of planar graphs is 5, not 4, so the extension fails already for K 5 {\displaystyle K_{5}} -minor-free graphs. More generally, for every t ≥ 1 {\displaystyle t\geq 1} , there exist graphs whose Hadwiger number is 3 t + 1 {\displaystyle 3t+1} and whose list chromatic number is 4 t + 1 {\displaystyle 4t+1} . Gerards and Seymour conjectured that every graph G {\displaystyle G} with chromatic number k {\displaystyle k} has a complete graph K k {\displaystyle K_{k}} as an odd minor. Such a structure can be represented as a family of k {\displaystyle k} vertex-disjoint subtrees of G {\displaystyle G} , each of which is two-colored, such that each pair of subtrees is connected by a monochromatic edge. 
Although graphs with no odd K k {\displaystyle K_{k}} minor are not necessarily sparse, a similar upper bound holds for them as it does for the standard Hadwiger conjecture: a graph with no odd K k {\displaystyle K_{k}} minor has chromatic number χ ( G ) = O ( k log ⁡ k ) {\textstyle \chi (G)=O{\bigl (}k{\sqrt {\log k}}{\bigr )}} . By imposing extra conditions on G {\displaystyle G} , it may be possible to prove the existence of larger minors than K k {\displaystyle K_{k}} . One example is the snark theorem, that every cubic graph requiring four colors in any edge coloring has the Petersen graph as a minor, conjectured by W. T. Tutte and announced to be proved in 2001 by Robertson, Sanders, Seymour, and Thomas. == Notes == == References ==
Wikipedia/Hadwiger_conjecture_(graph_theory)
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Laplacian matrix of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks. === Using group theory === The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group. 
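The spectral claim above about the Petersen graph can be verified without any linear-algebra library: the Petersen graph is strongly regular, and its adjacency matrix A satisfies A² + A − 2I = J (the all-ones matrix). On the all-ones vector this gives the eigenvalue 3; on any eigenvector orthogonal to it, J vanishes, so λ² + λ − 2 = 0 and λ is 1 or −2, yielding exactly the spectrum (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). A plain-Python sketch, with one standard vertex labelling of the Petersen graph assumed:

```python
# Petersen graph: outer 5-cycle (0..4), inner pentagram (5..9), spokes.
edges = {(i, (i + 1) % 5) for i in range(5)}            # outer cycle
edges |= {(i, i + 5) for i in range(5)}                 # spokes
edges |= {(5 + i, 5 + (i + 2) % 5) for i in range(5)}   # inner pentagram

n = 10
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

# A^2 via plain Python matrix multiplication.
A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]

# Strong regularity: A^2 + A - 2I is the all-ones matrix J, which
# forces every eigenvalue to be 3 or a root of x^2 + x - 2 = (x-1)(x+2).
assert all(A2[i][j] + A[i][j] - 2 * (i == j) == 1
           for i in range(n) for j in range(n))
```

The identity encodes that the graph is 3-regular, adjacent vertices have no common neighbour, and non-adjacent vertices have exactly one.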
This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values (the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to its irreducible characters. === Studying graph invariants === Finally, the third branch of algebraic graph theory concerns algebraic properties of invariants of graphs, and especially the chromatic polynomial, the Tutte polynomial and knot invariants. The chromatic polynomial of a graph, for example, counts the number of its proper vertex colorings. For the Petersen graph, this polynomial is t ( t − 1 ) ( t − 2 ) ( t 7 − 12 t 6 + 67 t 5 − 230 t 4 + 529 t 3 − 814 t 2 + 775 t − 352 ) {\displaystyle t(t-1)(t-2)(t^{7}-12t^{6}+67t^{5}-230t^{4}+529t^{3}-814t^{2}+775t-352)} . In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motivated by attempts to prove the four color theorem. However, there are still many open problems, such as characterizing graphs which have the same chromatic polynomial, and determining which polynomials are chromatic. == See also == Spectral graph theory Algebraic combinatorics Algebraic connectivity Dulmage–Mendelsohn decomposition Graph property Adjacency matrix == References == == External links == Media related to Algebraic graph theory at Wikimedia Commons
Wikipedia/Algebraic_graph_theory
In graph theory, graph coloring is a methodical assignment of labels traditionally called "colors" to elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements have the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring. The convention of using colors originates from coloring the countries in a political map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are. Graph coloring enjoys many practical applications as well as theoretical challenges.
Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research. Note: Many terms used in this article are defined in Glossary of graph theory. == History == The first results about graph coloring deal almost exclusively with planar graphs in the form of map coloring. While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie's brother passed on the question to his mathematics teacher Augustus De Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society. In 1890, Percy John Heawood pointed out that Kempe's argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe. In the following century, a vast amount of work was done and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 by Kenneth Appel and Wolfgang Haken. The proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments. The proof of the four color theorem is noteworthy, aside from its solution of a century-old problem, for being the first major computer-aided proof. 
In 1912, George David Birkhoff introduced the chromatic polynomial to study the coloring problem, which was generalised to the Tutte polynomial by W. T. Tutte, both of which are important invariants in algebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879, and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century. In 1960, Claude Berge formulated another conjecture about graph coloring, the strong perfect graph conjecture, originally motivated by an information-theoretic concept called the zero-error capacity of a graph introduced by Shannon. The conjecture remained unresolved for 40 years, until it was established as the celebrated strong perfect graph theorem by Chudnovsky, Robertson, Seymour, and Thomas in 2002. Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem (see section § Vertex coloring below) is one of Karp's 21 NP-complete problems from 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion-contraction recurrence of Zykov (1949). One of the major applications of graph coloring, register allocation in compilers, was introduced in 1981. == Definition and terminology == === Vertex coloring === When used without any qualification, a coloring of a graph almost always refers to a proper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the same edge have the same color. Since a vertex with a loop (i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless. The terminology of using colors for vertex labels goes back to map coloring. 
Labels like red and blue are only used when the number of colors is small, and normally it is understood that the labels are drawn from the integers {1, 2, 3, ...}. A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors needed to color a graph G is called its chromatic number, and is often denoted χ(G). Sometimes γ(G) is used, since χ(G) is also used to denote the Euler characteristic of a graph. A graph that can be assigned a (proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the same color is called a color class; every such class forms an independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent sets, and the terms k-partite and k-colorable have the same meaning. === Chromatic polynomial === The chromatic polynomial counts the number of ways a graph can be colored using some of a given number of colors. For example, using three colors, the graph in the adjacent image can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4 × 12 = 72 ways: using all four colors, there are 4! = 24 valid colorings (every assignment of four colors to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of valid colorings would start like this: The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, P(G, t) = t(t − 1)²(t − 2), and indeed P(G, 4) = 72. The chromatic polynomial includes more information about the colorability of G than does the chromatic number. Indeed, χ is the smallest positive integer that is not a zero of the chromatic polynomial: χ(G) = min{k : P(G, k) > 0}.
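The example graph from the adjacent image is not reproduced here, but a 4-vertex graph with chromatic polynomial t(t − 1)²(t − 2) is a triangle with one pendant vertex (an assumed stand-in for illustration). A brute-force count over all color assignments reproduces the values quoted above:

```python
from itertools import product

def count_colorings(n, edges, t):
    """Count proper vertex colorings of an n-vertex graph with t colors
    by checking every one of the t**n assignments."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(t), repeat=n))

# Assumed example: triangle {0,1,2} with a pendant vertex 3 on vertex 2;
# its chromatic polynomial is t(t-1)^2(t-2).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
assert count_colorings(4, edges, 2) == 0    # cannot be 2-colored
assert count_colorings(4, edges, 3) == 12   # 12 proper 3-colorings
assert count_colorings(4, edges, 4) == 72   # 24 + 4 * 12 = 72
```

The counts match the polynomial: 3·2²·1 = 12 and 4·3²·2 = 72.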
=== Edge coloring === An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring. === Total coloring === Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G. === Face coloring === For a graph with a strong embedding on a surface, the face coloring is the dual of the vertex coloring problem. === Tutte's flow theory === For a graph G with a strong embedding on an orientable surface, William T. Tutte discovered that if the graph is k-face-colorable then G admits a nowhere-zero k-flow. The equivalence holds if the surface is a sphere. === Unlabeled coloring === An unlabeled coloring of a graph is an orbit of a coloring under the action of the automorphism group of the graph. The colors remain labeled; it is the graph that is unlabeled. There is an analogue of the chromatic polynomial which counts the number of unlabeled colorings of a graph from a given finite color set. If we interpret a coloring of a graph on d vertices as a vector in ⁠ Z d {\displaystyle \mathbb {Z} ^{d}} ⁠, the action of an automorphism is a permutation of the coefficients in the coloring vector.
== Properties == === Upper bounds on the chromatic number === Assigning distinct colors to distinct vertices always yields a proper coloring, so 1 ≤ χ ( G ) ≤ n . {\displaystyle 1\leq \chi (G)\leq n.} The only graphs that can be 1-colored are edgeless graphs. A complete graph K n {\displaystyle K_{n}} of n vertices requires χ ( K n ) = n {\displaystyle \chi (K_{n})=n} colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so χ ( G ) ( χ ( G ) − 1 ) ≤ 2 m . {\displaystyle \chi (G)(\chi (G)-1)\leq 2m.} More generally a family F {\displaystyle {\mathcal {F}}} of graphs is χ-bounded if there is some function c {\displaystyle c} such that the graphs G {\displaystyle G} in F {\displaystyle {\mathcal {F}}} can be colored with at most c ( ω ( G ) ) {\displaystyle c(\omega (G))} colors, where ω ( G ) {\displaystyle \omega (G)} is the clique number of G {\displaystyle G} . For the family of the perfect graphs this function is c ( ω ( G ) ) = ω ( G ) {\displaystyle c(\omega (G))=\omega (G)} . The 2-colorable graphs are exactly the bipartite graphs, including trees and forests. By the four color theorem, every planar graph can be 4-colored. A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree, χ ( G ) ≤ Δ ( G ) + 1. {\displaystyle \chi (G)\leq \Delta (G)+1.} Complete graphs have χ ( G ) = n {\displaystyle \chi (G)=n} and Δ ( G ) = n − 1 {\displaystyle \Delta (G)=n-1} , and odd cycles have χ ( G ) = 3 {\displaystyle \chi (G)=3} and Δ ( G ) = 2 {\displaystyle \Delta (G)=2} , so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks' theorem states that χ ( G ) ≤ Δ ( G ) {\displaystyle \chi (G)\leq \Delta (G)} for a connected, simple graph G, unless G is a complete graph or an odd cycle.
=== Lower bounds on the chromatic number === Several lower bounds for the chromatic bounds have been discovered over the years: If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number: χ ( G ) ≥ ω ( G ) . {\displaystyle \chi (G)\geq \omega (G).} For perfect graphs this bound is tight. Finding cliques is known as the clique problem. Hoffman's bound: Let W {\displaystyle W} be a real symmetric matrix such that W i , j = 0 {\displaystyle W_{i,j}=0} whenever ( i , j ) {\displaystyle (i,j)} is not an edge in G {\displaystyle G} . Define χ W ( G ) = 1 − λ max ( W ) λ min ( W ) {\displaystyle \chi _{W}(G)=1-{\tfrac {\lambda _{\max }(W)}{\lambda _{\min }(W)}}} , where λ max ( W ) , λ min ( W ) {\displaystyle \lambda _{\max }(W),\lambda _{\min }(W)} are the largest and smallest eigenvalues of W {\displaystyle W} . Define χ H ( G ) = max W χ W ( G ) {\textstyle \chi _{H}(G)=\max _{W}\chi _{W}(G)} , with W {\displaystyle W} as above. Then: χ H ( G ) ≤ χ ( G ) . {\displaystyle \chi _{H}(G)\leq \chi (G).} Vector chromatic number: Let W {\displaystyle W} be a positive semi-definite matrix such that W i , j ≤ − 1 k − 1 {\displaystyle W_{i,j}\leq -{\tfrac {1}{k-1}}} whenever ( i , j ) {\displaystyle (i,j)} is an edge in G {\displaystyle G} . Define χ V ( G ) {\displaystyle \chi _{V}(G)} to be the least k for which such a matrix W {\displaystyle W} exists. Then χ V ( G ) ≤ χ ( G ) . {\displaystyle \chi _{V}(G)\leq \chi (G).} Lovász number: The Lovász number of a complementary graph is also a lower bound on the chromatic number: ϑ ( G ¯ ) ≤ χ ( G ) . {\displaystyle \vartheta ({\bar {G}})\leq \chi (G).} Fractional chromatic number: The fractional chromatic number of a graph is a lower bound on the chromatic number as well: χ f ( G ) ≤ χ ( G ) . {\displaystyle \chi _{f}(G)\leq \chi (G).} These bounds are ordered as follows: χ H ( G ) ≤ χ V ( G ) ≤ ϑ ( G ¯ ) ≤ χ f ( G ) ≤ χ ( G ) . 
{\displaystyle \chi _{H}(G)\leq \chi _{V}(G)\leq \vartheta ({\bar {G}})\leq \chi _{f}(G)\leq \chi (G).} === Graphs with high chromatic number === Graphs with large cliques have a high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalized to the Mycielskians. Theorem (William T. Tutte 1947, Alexander Zykov 1949, Jan Mycielski 1955): There exist triangle-free graphs with arbitrarily high chromatic number. To prove this, Mycielski and Zykov each gave a construction of an inductively defined family of triangle-free graphs with arbitrarily large chromatic number. Burling (1965) constructed axis aligned boxes in R 3 {\displaystyle \mathbb {R} ^{3}} whose intersection graph is triangle-free and requires arbitrarily many colors to be properly colored. This family of graphs is called the Burling graphs. The same class of graphs is used for the construction of a family of triangle-free line segments in the plane, given by Pawlik et al. (2014). It shows that the chromatic number of its intersection graph is arbitrarily large as well. Hence, this implies that axis aligned boxes in R 3 {\displaystyle \mathbb {R} ^{3}} as well as line segments in R 2 {\displaystyle \mathbb {R} ^{2}} are not χ-bounded. From Brooks's theorem, graphs with high chromatic number must have high maximum degree. But colorability is not an entirely local phenomenon: A graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2: Theorem (Erdős): There exist graphs of arbitrarily high girth and chromatic number. === Bounds on the chromatic index === An edge coloring of G is a vertex coloring of its line graph L ( G ) {\displaystyle L(G)} , and vice versa. Thus, χ ′ ( G ) = χ ( L ( G ) ) .
{\displaystyle \chi '(G)=\chi (L(G)).} There is a strong relationship between edge colorability and the graph's maximum degree Δ ( G ) {\displaystyle \Delta (G)} . Since all edges incident to the same vertex need their own color, we have χ ′ ( G ) ≥ Δ ( G ) . {\displaystyle \chi '(G)\geq \Delta (G).} Moreover, Kőnig's theorem: χ ′ ( G ) = Δ ( G ) {\displaystyle \chi '(G)=\Delta (G)} if G is bipartite. In general, the relationship is even stronger than what Brooks's theorem gives for vertex coloring: Vizing's Theorem: A graph of maximal degree Δ {\displaystyle \Delta } has edge-chromatic number Δ {\displaystyle \Delta } or Δ + 1 {\displaystyle \Delta +1} . === Other properties === A graph has a k-coloring if and only if it has an acyclic orientation for which the longest path has length at most k; this is the Gallai–Hasse–Roy–Vitaver theorem (Nešetřil & Ossona de Mendez 2012). For planar graphs, vertex colorings are essentially dual to nowhere-zero flows. About infinite graphs, much less is known. The following are two of the few results about infinite graph coloring: If all finite subgraphs of an infinite graph G are k-colorable, then so is G, under the assumption of the axiom of choice. This is the de Bruijn–Erdős theorem of de Bruijn & Erdős (1951). If a graph admits a full n-coloring for every n ≥ n0, it admits an infinite full coloring (Fawcett 1978). === Open problems === As stated above, ω ( G ) ≤ χ ( G ) ≤ Δ ( G ) + 1. {\displaystyle \omega (G)\leq \chi (G)\leq \Delta (G)+1.} A conjecture of Reed from 1998 is that the value is essentially closer to the lower bound, χ ( G ) ≤ ⌈ ω ( G ) + Δ ( G ) + 1 2 ⌉ . {\displaystyle \chi (G)\leq \left\lceil {\frac {\omega (G)+\Delta (G)+1}{2}}\right\rceil .} The chromatic number of the plane, where two points are adjacent if they have unit distance, is unknown, although it is one of 5, 6, or 7. 
Other open problems concerning the chromatic number of graphs include the Hadwiger conjecture stating that every graph with chromatic number k has a complete graph on k vertices as a minor, the Erdős–Faber–Lovász conjecture bounding the chromatic number of unions of complete graphs that have at most one vertex in common to each pair, and the Albertson conjecture that among k-chromatic graphs the complete graphs are the ones with smallest crossing number. When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-color theorem, they conjectured that for planar graphs G, the polynomial P ( G , t ) {\displaystyle P(G,t)} has no zeros in the region [ 4 , ∞ ) {\displaystyle [4,\infty )} . Although it is known that such a chromatic polynomial has no zeros in the region [ 5 , ∞ ) {\displaystyle [5,\infty )} and that P ( G , 4 ) ≠ 0 {\displaystyle P(G,4)\neq 0} , their conjecture is still unresolved. It also remains an unsolved problem to characterize graphs which have the same chromatic polynomial and to determine which polynomials are chromatic. == Algorithms == === Polynomial time === Determining if a graph can be colored with 2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search or depth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for chromatic polynomials are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time. If the graph is planar and has low branch-width (or is nonplanar but with a known branch-decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branch-width. 
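The linear-time 2-colorability test mentioned above can be sketched with a breadth-first search that alternates colors level by level and reports failure on an odd cycle. The dictionary-of-neighbours representation is an assumption for this sketch.

```python
from collections import deque

def two_color(adj):
    """Return a proper 2-coloring of the graph, or None if it is not
    bipartite.  adj: dict mapping each vertex to its neighbours.
    Runs in linear time via breadth-first search."""
    color = {}
    for source in adj:          # handle every connected component
        if source in color:
            continue
        color[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # alternate colors per level
                    queue.append(v)
                elif color[v] == color[u]:
                    return None              # odd cycle found
    return color

# A 4-cycle is bipartite; a triangle is not.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert two_color(square) is not None
assert two_color(triangle) is None
```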
=== Exact algorithms === Brute-force search for a k-coloring considers each of the k n {\displaystyle k^{n}} assignments of k colors to n vertices and checks for each if it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every k = 1 , … , n − 1 {\displaystyle k=1,\ldots ,n-1} , which is impractical for all but the smallest input graphs. Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space O ( 2.4423 n ) {\displaystyle O(2.4423^{n})} . Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time O ( 2 n n ) {\displaystyle O(2^{n}n)} for any k. Faster algorithms are known for 3- and 4-colorability, which can be decided in time O ( 1.3289 n ) {\displaystyle O(1.3289^{n})} and O ( 1.7272 n ) {\displaystyle O(1.7272^{n})} , respectively. Exponentially faster algorithms are also known for 5- and 6-colorability, as well as for restricted families of graphs, including sparse graphs. === Contraction === The contraction G / u v {\displaystyle G/uv} of a graph G is the graph obtained by identifying the vertices u and v, and removing any edges between them. The remaining edges originally incident to u or v are now incident to their identification (i.e., the new fused node uv). This operation plays a major role in the analysis of graph coloring. The chromatic number satisfies the recurrence relation: χ ( G ) = min { χ ( G + u v ) , χ ( G / u v ) } {\displaystyle \chi (G)={\text{min}}\{\chi (G+uv),\chi (G/uv)\}} due to Zykov (1949), where u and v are non-adjacent vertices, and G + u v {\displaystyle G+uv} is the graph with the edge uv added. Several algorithms are based on evaluating this recurrence and the resulting computation tree is sometimes called a Zykov tree. The running time is based on a heuristic for choosing the vertices u and v.
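The Zykov recurrence above can be turned into a tiny exact solver: either u and v get different colors (add the edge uv) or the same color (contract uv), and a complete graph is the base case. A minimal sketch for very small graphs, with vertices assumed to be comparable labels such as integers:

```python
def chromatic_number(vertices, edges):
    """Chromatic number via the Zykov recurrence
    chi(G) = min(chi(G + uv), chi(G / uv)) over a non-adjacent pair u, v.
    vertices: frozenset of labels; edges: frozenset of 2-element
    frozensets.  Exponential time; suitable only for small graphs."""
    for u in sorted(vertices):
        for v in sorted(vertices):
            if u < v and frozenset((u, v)) not in edges:
                # Branch 1: u and v colored differently -> add edge uv.
                with_edge = chromatic_number(
                    vertices, edges | {frozenset((u, v))})
                # Branch 2: u and v colored the same -> contract v into u.
                merged = frozenset(frozenset(u if w == v else w for w in e)
                                   for e in edges)
                merged = frozenset(e for e in merged if len(e) == 2)
                with_merge = chromatic_number(vertices - {v}, merged)
                return min(with_edge, with_merge)
    # No non-adjacent pair left: G is complete, so chi(G) = |V|.
    return len(vertices)

# The 5-cycle needs 3 colors; the complete graph K4 needs 4.
c5 = frozenset(frozenset((i, (i + 1) % 5)) for i in range(5))
assert chromatic_number(frozenset(range(5)), c5) == 3
k4 = frozenset(frozenset(p) for p in [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)])
assert chromatic_number(frozenset(range(4)), k4) == 4
```

Each recursive call either adds an edge or removes a vertex, so the recursion terminates; this is the Zykov tree in miniature.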
The chromatic polynomial satisfies the following recurrence relation P ( G − u v , k ) = P ( G / u v , k ) + P ( G , k ) {\displaystyle P(G-uv,k)=P(G/uv,k)+P(G,k)} where u and v are adjacent vertices, and G − u v {\displaystyle G-uv} is the graph with the edge uv removed. P ( G − u v , k ) {\displaystyle P(G-uv,k)} represents the number of proper colorings of the graph in which u and v may have the same or different colors. Then the proper colorings arise from two different graphs. To explain, if the vertices u and v have different colors, then we might as well consider a graph where u and v are adjacent. If u and v have the same colors, we might as well consider a graph where u and v are contracted. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial. These expressions give rise to a recursive procedure called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case the algorithm runs in time within a polynomial factor of ( 1 + 5 2 ) n + m = O ( 1.6180 n + m ) {\displaystyle \left({\tfrac {1+{\sqrt {5}}}{2}}\right)^{n+m}=O(1.6180^{n+m})} for n vertices and m edges. The analysis can be improved to within a polynomial factor of the number t ( G ) {\displaystyle t(G)} of spanning trees of the input graph. In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls. The running time depends on the heuristic used to pick the vertex pair.
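The deletion–contraction recurrence, rearranged as P(G, k) = P(G − uv, k) − P(G/uv, k), evaluates the chromatic polynomial at any integer k. A minimal sketch for small simple graphs, with vertices assumed to be labelled 0..n−1:

```python
def chromatic_value(n_vertices, edges, k):
    """Evaluate the chromatic polynomial P(G, k) by deletion-contraction:
    P(G, k) = P(G - uv, k) - P(G / uv, k).
    edges: list of distinct pairs over vertices 0..n_vertices-1."""
    if not edges:
        return k ** n_vertices      # edgeless graph: every assignment is proper
    (u, v), rest = edges[0], edges[1:]
    if u == v:
        return 0                    # a loop admits no proper coloring
    # Deletion: drop the edge uv.
    deleted = chromatic_value(n_vertices, rest, k)
    # Contraction: merge v into u, relabel, and drop duplicate edges.
    def relabel(w):
        w = u if w == v else w
        return w - 1 if w > v else w
    contracted_edges = list({tuple(sorted((relabel(a), relabel(b))))
                             for a, b in rest})
    contracted = chromatic_value(n_vertices - 1, contracted_edges, k)
    return deleted - contracted

# Triangle: P(K3, k) = k(k-1)(k-2).
triangle = [(0, 1), (1, 2), (0, 2)]
assert chromatic_value(3, triangle, 2) == 0
assert chromatic_value(3, triangle, 3) == 6
assert chromatic_value(3, triangle, 4) == 24
```

Deduplicating the contracted edge list keeps the graph simple, which is safe because the chromatic polynomial ignores edge multiplicity.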
=== Greedy coloring === The greedy algorithm considers the vertices in a specific order v 1 {\displaystyle v_{1}} , ..., v n {\displaystyle v_{n}} and assigns to v i {\displaystyle v_{i}} the smallest available color not used by v i {\displaystyle v_{i}} 's neighbours among v 1 {\displaystyle v_{1}} , ..., v i − 1 {\displaystyle v_{i-1}} , adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number χ ( G ) {\displaystyle \chi (G)} of colors. On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with n / 2 {\displaystyle n/2} colors. For chordal graphs, and for special cases of chordal graphs such as interval graphs and indifference graphs, the greedy coloring algorithm can be used to find optimal colorings in polynomial time, by choosing the vertex ordering to be the reverse of a perfect elimination ordering for the graph. The perfectly orderable graphs generalize this property, but it is NP-hard to find a perfect ordering of these graphs. If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most max i min { d ( x i ) + 1 , i } {\displaystyle {\text{max}}_{i}{\text{ min}}\{d(x_{i})+1,i\}} colors, at most one more than the graph's maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm. Another heuristic due to Brélaz establishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors. Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices; these algorithms are sometimes called sequential coloring algorithms.
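The greedy algorithm is short to state in code. The sketch below also reproduces the crown-graph behaviour described above (the 6-vertex crown graph and both orderings are our own illustrative choices):

```python
def greedy_coloring(order, adj):
    """Color vertices in the given order; each vertex gets the smallest
    color not already used by its earlier-colored neighbours."""
    color = {}
    for v in order:
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Crown graph on 6 vertices: a_i = 0,1,2 joined to b_j = 3,4,5 for i != j.
adj = {0: {4, 5}, 1: {3, 5}, 2: {3, 4}, 3: {1, 2}, 4: {0, 2}, 5: {0, 1}}
greedy_coloring([0, 1, 2, 3, 4, 5], adj)   # uses 2 colors
greedy_coloring([0, 3, 1, 4, 2, 5], adj)   # bad ordering: uses 3 = n/2 colors
```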
The maximum (worst) number of colors that can be obtained by the greedy algorithm, by using a vertex ordering chosen to maximize this number, is called the Grundy number of a graph. === Heuristic algorithms === Two well-known polynomial-time heuristics for graph colouring are the DSatur and recursive largest first (RLF) algorithms. Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, introducing a previously unused colour when needed. Once a vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of different colours in its neighbourhood (this count is called the degree of saturation of the vertex) and colours this vertex next. The recursive largest first algorithm operates in a different fashion by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph using specialised heuristic rules. It then assigns these vertices to the same color and removes them from the graph. These actions are repeated on the remaining subgraph until no vertices remain. The worst-case complexity of DSatur is O ( n 2 ) {\displaystyle O(n^{2})} , where n {\displaystyle n} is the number of vertices in the graph. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in O ( ( n + m ) log ⁡ n ) {\displaystyle O((n+m)\log n)} where m {\displaystyle m} is the number of edges in the graph. This produces much faster runs with sparse graphs. The overall complexity of RLF is slightly higher than DSatur at O ( m n ) {\displaystyle O(mn)} . DSatur and RLF are exact for bipartite, cycle, and wheel graphs. === Parallel and distributed algorithms === It is known that a χ-chromatic graph can be c-colored in the deterministic LOCAL model, in O ( n 1 / α ) {\displaystyle O(n^{1/\alpha })} .
rounds, with α = ⌊ c − 1 χ − 1 ⌋ {\displaystyle \alpha =\left\lfloor {\frac {c-1}{\chi -1}}\right\rfloor } . A matching lower bound of Ω ( n 1 / α ) {\displaystyle \Omega (n^{1/\alpha })} rounds is also known. This lower bound holds even if quantum computers that can exchange quantum information, possibly with a pre-shared entangled state, are allowed. In the field of distributed algorithms, graph coloring is closely related to the problem of symmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized algorithms employ the multi-trials technique by Schneider and Wattenhofer. In a symmetric graph, a deterministic distributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry. A standard assumption is that initially each node has a unique identifier, for example, from the set {1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g. O(Δ) instead of Δ + 1, the fewer communication rounds are required. A straightforward distributed version of the greedy algorithm for (Δ + 1)-coloring requires Θ(n) communication rounds in the worst case – information may need to be propagated from one side of the network to another side. The simplest interesting case is an n-cycle. Richard Cole and Uzi Vishkin show that there is a distributed algorithm that reduces the number of colors from n to O(log n) in one synchronous communication step. By iterating the same procedure, it is possible to obtain a 3-coloring of an n-cycle in O(log* n) communication steps (assuming that we have unique node identifiers). The function log*, iterated logarithm, is an extremely slowly growing function, "almost constant". 
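One Cole–Vishkin reduction step can be simulated centrally: each node reads only its successor's colour, so one pass over the nodes corresponds to one synchronous round. The sketch below is a sequential simulation with our own naming, not actual distributed code; it reduces the colour values from k to roughly 2·log2(k) per step:

```python
def cv_step(color, succ):
    """One Cole-Vishkin step on a directed cycle: from a proper coloring,
    derive a new proper coloring with exponentially smaller values."""
    new = {}
    for v in color:
        a, b = color[v], color[succ[v]]
        diff = a ^ b                         # nonzero, since the coloring is proper
        i = (diff & -diff).bit_length() - 1  # index of the lowest differing bit
        new[v] = 2 * i + ((a >> i) & 1)      # encode (index, own bit value)
    return new

# n-cycle whose unique node identifiers serve as the initial n-coloring.
n = 20
succ = {v: (v + 1) % n for v in range(n)}
color = {v: v for v in range(n)}
for _ in range(4):                           # O(log* n) iterations suffice
    color = cv_step(color, succ)
```

After a few iterations the colours settle into the range 0–5; reducing further from six colours to three requires an extra colour-rotation step not shown here.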
Hence the result by Cole and Vishkin raised the question of whether there is a constant-time distributed algorithm for 3-coloring an n-cycle. Linial (1992) showed that this is not possible: any deterministic distributed algorithm requires Ω(log* n) communication steps to reduce an n-coloring to a 3-coloring in an n-cycle. The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs as well; the running time is poly(Δ) + O(log* n). The technique was extended to unit disk graphs by Schneider and Wattenhofer. The fastest deterministic algorithms for (Δ + 1)-coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn. The algorithm by Barenboim et al. runs in time O(Δ) + log*(n)/2, which is optimal in terms of n since the constant factor 1/2 cannot be improved due to Linial's lower bound. Panconesi & Srinivasan (1996) use network decompositions to compute a Δ+1 coloring in time 2 O ( log ⁡ n ) {\displaystyle 2^{O\left({\sqrt {\log n}}\right)}} . The problem of edge coloring has also been studied in the distributed model. Panconesi & Rizzi (2001) achieve a (2Δ − 1)-coloring in O(Δ + log* n) time in this model. The lower bound for distributed vertex coloring due to Linial (1992) applies to the distributed edge coloring problem as well. === Decentralized algorithms === Decentralized algorithms are ones where no message passing is allowed (in contrast to distributed algorithms where local message passing takes place), and efficient decentralized algorithms exist that will color a graph with a given number of colors whenever a proper coloring with that many colors exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as the vertex, i.e., whether a local conflict exists. This is a mild assumption in many applications, e.g. in wireless channel allocation it is usually reasonable to assume that a station will be able to detect whether other interfering transmitters are using the same channel (e.g. by measuring the SINR).
This sensing information is sufficient to allow algorithms based on learning automata to find a proper graph coloring with probability one. === Computational complexity === Graph coloring is computationally hard. It is NP-complete to decide if a given graph admits a k-coloring for a given k except for the cases k ∈ {0,1,2}. In particular, it is NP-hard to compute the chromatic number. The 3-coloring problem remains NP-complete even on 4-regular planar graphs. On graphs with maximal degree 3 or less, however, Brooks' theorem implies that the 3-coloring problem can be solved in linear time. Further, for every k > 3, a k-coloring of a planar graph exists by the four color theorem, and it is possible to find such a coloring in polynomial time. However, finding the lexicographically smallest 4-coloring of a planar graph is NP-complete. The best known approximation algorithm computes a coloring of size at most within a factor O(n (log log n)^2 (log n)^−3) of the chromatic number. For all ε > 0, approximating the chromatic number within n^(1−ε) is NP-hard. It is also NP-hard to color a 3-colorable graph with 5 colors, a 4-colorable graph with 7 colors, and a k-colorable graph with ( k ⌊ k / 2 ⌋ ) − 1 {\displaystyle \textstyle {\binom {k}{\lfloor k/2\rfloor }}-1} colors for k ≥ 5. Computing the coefficients of the chromatic polynomial is ♯P-hard. In fact, even computing the value of χ ( G , k ) {\displaystyle \chi (G,k)} is ♯P-hard at any rational point k except for k = 1 and k = 2. There is no FPRAS for evaluating the chromatic polynomial at any rational point k ≥ 1.5 except for k = 2 unless NP = RP. For edge coloring, the proof of Vizing's result gives an algorithm that uses at most Δ+1 colors. However, deciding between the two candidate values for the edge chromatic number is NP-complete.
In terms of approximation algorithms, Vizing's algorithm shows that the edge chromatic number can be approximated to within 4/3, and the hardness result shows that no (4/3 − ε)-algorithm exists for any ε > 0 unless P = NP. These are among the oldest results in the literature of approximation algorithms, even though neither paper makes explicit use of that notion. == Applications == === Scheduling === Vertex coloring models a number of scheduling problems. In the cleanest form, a given set of jobs needs to be assigned to time slots; each job requires one such slot. Jobs can be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the optimal time to finish all jobs without conflicts. Details of the scheduling problem define the structure of the graph. For example, when assigning aircraft to flights, the resulting conflict graph is an interval graph, so the coloring problem can be solved efficiently. In bandwidth allocation to radio stations, the resulting conflict graph is a unit disk graph, so the coloring problem is 3-approximable. === Register allocation === A compiler is a computer program that translates one computer language into another. To improve the execution time of the resulting code, one of the techniques of compiler optimization is register allocation, where the most frequently used values of the compiled program are kept in the fast processor registers. Ideally, values are assigned to registers so that they can all reside in the registers when they are used. The textbook approach to this problem is to model it as a graph coloring problem.
The compiler constructs an interference graph, where vertices are variables and an edge connects two vertices if they are needed at the same time. If the graph can be colored with k colors then any set of variables needed at the same time can be stored in at most k registers. === Other applications === The problem of coloring a graph arises in many practical areas such as sports scheduling, designing seating plans, exam timetabling, the scheduling of taxis, and solving Sudoku puzzles. == Other colorings == === Ramsey theory === An important class of improper coloring problems is studied in Ramsey theory, where the graph's edges are assigned to colors, and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which states that in any coloring of the edges of K 6 {\displaystyle K_{6}} , the complete graph of six vertices, there will be a monochromatic triangle; often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure. === Modular Coloring === Modular coloring is a type of graph coloring in which the color of each vertex is the sum of the colors of its adjacent vertices. Let k ≥ 2 be the number of colors, where Z k {\displaystyle \mathbb {Z} _{k}} is the set of integers modulo k consisting of the elements (or colors) 0,1,2, ..., k-2, k-1. First, we color each vertex in G using the elements of Z k {\displaystyle \mathbb {Z} _{k}} , allowing two adjacent vertices to be assigned the same color. In other words, we want c to be a coloring such that c: V(G) → Z k {\displaystyle \mathbb {Z} _{k}} where adjacent vertices can be assigned the same color. For each vertex v in G, the color sum of v, σ(v), is the sum of the colors of the vertices adjacent to v, mod k.
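The color-sum computation can be sketched in a few lines; the helper names below are our own:

```python
def color_sums(adj, c, k):
    """sigma(v): the sum of the colors of v's neighbours, mod k."""
    return {v: sum(c[u] for u in adj[v]) % k for v in adj}

def is_modular_coloring(adj, c, k):
    """True if sigma differs on every pair of adjacent vertices."""
    sigma = color_sums(adj, c, k)
    return all(sigma[v] != sigma[u] for v in adj for u in adj[v])

# A 4-cycle 0-1-2-3 with initial colors (0, 0, 1, 0) mod 2.
adj_c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
is_modular_coloring(adj_c4, {0: 0, 1: 0, 2: 1, 3: 0}, 2)  # True
```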
The color sum of v is denoted by σ(v) = ∑u ∈ N(v) c(u) where u is an arbitrary vertex in the neighborhood of v, N(v). We then color each vertex with the new coloring determined by the sum of the adjacent vertices. The graph G has a modular k-coloring if, for every pair of adjacent vertices a,b, σ(a) ≠ σ(b). The modular chromatic number of G, mc(G), is the minimum value of k such that there exists a modular k-coloring of G. For example, let there be a vertex v adjacent to vertices with the assigned colors 0, 1, 1, and 3 mod 4 (k=4). The color sum would be σ(v) = (0 + 1 + 1 + 3) mod 4 = 5 mod 4 = 1. This would be the new color of vertex v. We would repeat this process for every vertex in G. If two adjacent vertices have equal color sums, this coloring is not a modular 4-coloring of G. If no two adjacent vertices have equal color sums, it is a modular 4-coloring. === Other colorings === Coloring can also be considered for signed graphs and gain graphs. == See also == Critical graph Graph coloring game Graph homomorphism Hajós construction Mathematics of Sudoku Multipartite graph Uniquely colorable graph == Notes == == References == == External links == GCol An open-source python library for graph coloring. High-Performance Graph Colouring Algorithms Suite of 8 different algorithms (implemented in C++) used in the book A Guide to Graph Colouring: Algorithms and Applications (Springer International Publishers, 2015). CoLoRaTiOn by Jim Andrews and Mike Fellows is a graph coloring puzzle Links to Graph Coloring source codes Archived 2008-07-04 at the Wayback Machine Code for efficiently computing Tutte, Chromatic and Flow Polynomials Archived 2008-04-16 at the Wayback Machine by Gary Haggard, David J. Pearce and Gordon Royle A graph coloring Web App by Jose Antonio Martin H.
Wikipedia/Graph_coloring
In combinatorics, an area of mathematics, graph enumeration describes a class of combinatorial enumeration problems in which one must count undirected or directed graphs of certain types, typically as a function of the number of vertices of the graph. These problems may be solved either exactly (as an algebraic enumeration problem) or asymptotically. The pioneers in this area of mathematics were George Pólya, Arthur Cayley and J. Howard Redfield. == Labeled vs unlabeled problems == In some graphical enumeration problems, the vertices of the graph are considered to be labeled in such a way as to be distinguishable from each other, while in other problems any permutation of the vertices is considered to form the same graph, so the vertices are considered identical or unlabeled. In general, labeled problems tend to be easier. As with combinatorial enumeration more generally, the Pólya enumeration theorem is an important tool for reducing unlabeled problems to labeled ones: each unlabeled class is considered as a symmetry class of labeled objects. The number of unlabelled graphs with n {\displaystyle n} vertices is still not known in a closed-form solution, but as almost all graphs are asymmetric this number is asymptotic to 2 ( n 2 ) n ! . {\displaystyle {\frac {2^{\tbinom {n}{2}}}{n!}}.} == Exact enumeration formulas == Some important results in this area include the following. The number of labeled n-vertex simple undirected graphs is 2^(n(n−1)/2). The number of labeled n-vertex simple directed graphs is 2^(n(n−1)). The number Cn of connected labeled n-vertex undirected graphs satisfies the recurrence relation C n = 2 ( n 2 ) − 1 n ∑ k = 1 n − 1 k ( n k ) 2 ( n − k 2 ) C k .
{\displaystyle C_{n}=2^{n \choose 2}-{\frac {1}{n}}\sum _{k=1}^{n-1}k{n \choose k}2^{n-k \choose 2}C_{k}.} from which one may easily calculate, for n = 1, 2, 3, ..., that the values for Cn are 1, 1, 4, 38, 728, 26704, 1866256, ...(sequence A001187 in the OEIS) The number of labeled n-vertex free trees is n^(n−2) (Cayley's formula). The number of unlabeled n-vertex caterpillars is 2 n − 4 + 2 ⌊ ( n − 4 ) / 2 ⌋ . {\displaystyle 2^{n-4}+2^{\lfloor (n-4)/2\rfloor }.} == Graph database == Various research groups have provided searchable databases that list graphs with certain properties of small sizes. For example: The House of Graphs Small Graph Database == References ==
Wikipedia/Graphical_enumeration
Microsoft Automatic Graph Layout (MSAGL) is a .NET library for automatic graph layout. It was created by Lev Nachmanson at Microsoft Research. Earlier versions carried the name GLEE (Graph Layout Execution Engine). == Contents == The MSAGL software supplies four programming libraries: Microsoft.MSAGL.dll, a device-independent graph layout engine; Microsoft.MSAGL.Drawing.dll, a device-independent implementation of graphs as graphical user interface objects, with all kinds of graphical attributes, and support for interface events such as mouse actions; Microsoft.MSAGL.GraphViewerGDI.dll, a Windows.Forms-based graph viewer control. Microsoft.MSAGL.WpfGraphControl.dll, a WPF (Windows Presentation Foundation) based graph viewer control. A trivial application is supplied to demonstrate the viewer. == Features == MSAGL performs layout based on "principles of the Sugiyama scheme; it produces so called layered, or hierarchical, layouts" (according to the MSAGL home page). A modified Coffman–Graham scheduling algorithm is then used to find a layout that would fit in a given space. A more detailed description of the algorithm can be found in U.S. patent 7,932,907. At one time it did not support a wide range of different layout algorithms, unlike, for instance, GraphViz or GUESS. It does not appear to support incremental layout. == Availability and licensing == MSAGL is distributed under MIT License as open source at GitHub. == See also == graph layout Graph algorithms Graphviz, an open-source graph drawing system from AT&T == References == == External links == MSAGL home page MSAGL in a browser US Patent 7932907
Wikipedia/Microsoft_Automatic_Graph_Layout
In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S {\displaystyle S} of vertices such that for every two vertices in S {\displaystyle S} , there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S {\displaystyle S} . A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening. A maximal independent set is an independent set that is not a proper subset of any other independent set. A maximum independent set is an independent set of largest possible size for a given graph G {\displaystyle G} . This size is called the independence number of G {\displaystyle G} and is usually denoted by α ( G ) {\displaystyle \alpha (G)} . The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph. Every maximum independent set also is maximal, but the converse implication does not necessarily hold. == Properties == === Relationship to other graph parameters === A set is independent if and only if it is a clique in the graph’s complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory. A set is independent if and only if its complement is a vertex cover. Therefore, the sum of the size of the largest independent set α ( G ) {\displaystyle \alpha (G)} and the size of a minimum vertex cover β ( G ) {\displaystyle \beta (G)} is equal to the number of vertices in the graph. 
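For tiny graphs these complementation identities can be checked by exhaustive search. A brute-force sketch on the 5-cycle (all names are ours; this is not a practical algorithm):

```python
from itertools import combinations

def is_independent(S, edges):
    """True if no two vertices of S are joined by an edge."""
    return not any(frozenset(p) in edges for p in combinations(S, 2))

def independence_number(vertices, edges):
    """alpha(G) by exhaustive search over all vertex subsets."""
    return max(len(S)
               for r in range(len(vertices) + 1)
               for S in combinations(sorted(vertices), r)
               if is_independent(S, edges))

# Verify alpha(G) + beta(G) = |V| on the 5-cycle: the complement of a
# maximum independent set is a minimum vertex cover, and vice versa.
verts = set(range(5))
c5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}
alpha = independence_number(verts, c5)                     # 2
beta = min(len(S) for r in range(len(verts) + 1)
           for S in combinations(sorted(verts), r)
           if all(e & set(S) for e in c5))                 # 3
```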
A vertex coloring of a graph G {\displaystyle G} corresponds to a partition of its vertex set into independent subsets. Hence the minimal number of colors needed in a vertex coloring, the chromatic number χ ( G ) {\displaystyle \chi (G)} , is at least the quotient of the number of vertices in G {\displaystyle G} and the independence number α ( G ) {\displaystyle \alpha (G)} . In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering; this is Kőnig's theorem. === Maximal independent set === An independent set that is not a proper subset of another independent set is called maximal. Such sets are dominating sets. Every graph contains at most 3^(n/3) maximal independent sets, but many graphs have far fewer. The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number of maximal independent sets in n-vertex path graphs is given by the Padovan sequence. Therefore, both numbers are proportional to powers of 1.324718..., the plastic ratio. == Finding independent sets == In computer science, several computational problems related to independent sets have been studied. In the maximum independent set problem, the input is an undirected graph, and the output is a maximum independent set in the graph. If there are multiple maximum independent sets, only one need be output. This problem is sometimes referred to as "vertex packing". In the maximum-weight independent set problem, the input is an undirected graph with weights on its vertices and the output is an independent set with maximum total weight. The maximum independent set problem is the special case in which all weights are one. In the maximal independent set listing problem, the input is an undirected graph, and the output is a list of all its maximal independent sets.
The maximum independent set problem may be solved using as a subroutine an algorithm for the maximal independent set listing problem, because the maximum independent set must be included among all the maximal independent sets. In the independent set decision problem, the input is an undirected graph and a number k, and the output is a Boolean value: true if the graph contains an independent set of size k, and false otherwise. The first three of these problems are all important in practical applications; the independent set decision problem is not, but is necessary in order to apply the theory of NP-completeness to problems related to independent sets. === Maximum independent sets and maximum cliques === The independent set problem and the clique problem are complementary: a clique in G is an independent set in the complement graph of G and vice versa. Therefore, many computational results may be applied equally well to either problem. For example, the results related to the clique problem have the following corollaries: The independent set decision problem is NP-complete, and hence it is not believed that there is an efficient algorithm for solving it. The maximum independent set problem is NP-hard and it is also hard to approximate. Despite the close relationship between maximum cliques and maximum independent sets in arbitrary graphs, the independent set and clique problems may be very different when restricted to special classes of graphs. For instance, for sparse graphs (graphs in which the number of edges is at most a constant times the number of vertices in any subgraph), the maximum clique has bounded size and may be found exactly in linear time; however, for the same classes of graphs, or even for the more restricted class of bounded degree graphs, finding the maximum independent set is MAXSNP-complete, implying that, for some constant c (depending on the degree) it is NP-hard to find an approximate solution that comes within a factor of c of the optimum. 
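The complementarity means any maximum-independent-set routine doubles as a maximum-clique routine when run on the complement graph. A brute-force illustration on the 5-cycle (tiny graphs only; names are ours):

```python
from itertools import combinations

def complement(vertices, edges):
    """All vertex pairs that are not edges of G."""
    return {frozenset(p) for p in combinations(sorted(vertices), 2)} - edges

def max_independent_set(vertices, edges):
    """Largest independent set by decreasing-size exhaustive search."""
    for r in range(len(vertices), -1, -1):
        for S in combinations(sorted(vertices), r):
            if not any(frozenset(p) in edges for p in combinations(S, 2)):
                return set(S)
    return set()

# A maximum clique of G is a maximum independent set of its complement.
verts = set(range(5))
c5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}
clique = max_independent_set(verts, complement(verts, c5))  # one edge of C5
```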
=== Exact algorithms === The maximum independent set problem is NP-hard. However, it can be solved more efficiently than the O(n^2 2^n) time that would be given by a naive brute force algorithm that examines every vertex subset and checks whether it is an independent set. As of 2017 it can be solved in time O(1.1996^n) using polynomial space. When restricted to graphs with maximum degree 3, it can be solved in time O(1.0836^n). For many classes of graphs, a maximum weight independent set may be found in polynomial time. Famous examples are claw-free graphs, P5-free graphs and perfect graphs. For chordal graphs, a maximum weight independent set can be found in linear time. Modular decomposition is a good tool for solving the maximum weight independent set problem; the linear time algorithm on cographs is the basic example for that. Another important tool is the use of clique separators as described by Tarjan. Kőnig's theorem implies that in a bipartite graph the maximum independent set can be found in polynomial time using a bipartite matching algorithm. === Approximation algorithms === In general, the maximum independent set problem cannot be approximated to a constant factor in polynomial time (unless P = NP). In fact, Max Independent Set in general is Poly-APX-complete, meaning it is as hard as any problem that can be approximated to a polynomial factor. However, there are efficient approximation algorithms for restricted classes of graphs. ==== In planar graphs ==== In planar graphs, the maximum independent set may be approximated to within any approximation ratio c < 1 in polynomial time; similar polynomial-time approximation schemes exist in any family of graphs closed under taking minors.
==== In bounded degree graphs ==== In bounded degree graphs, effective approximation algorithms are known with approximation ratios that are constant for a fixed value of the maximum degree; for instance, a greedy algorithm that forms a maximal independent set by, at each step, choosing the minimum degree vertex in the graph and removing its neighbors, achieves an approximation ratio of (Δ+2)/3 on graphs with maximum degree Δ. Approximation hardness bounds for such instances were proven in Berman & Karpinski (1999). Indeed, even Max Independent Set on 3-regular 3-edge-colorable graphs is APX-complete. ==== In interval intersection graphs ==== An interval graph is a graph in which the nodes are 1-dimensional intervals (e.g. time intervals) and there is an edge between two intervals if and only if they intersect. An independent set in an interval graph is just a set of non-overlapping intervals. The problem of finding maximum independent sets in interval graphs has been studied, for example, in the context of job scheduling: given a set of jobs that has to be executed on a computer, find a maximum set of jobs that can be executed without interfering with each other. This problem can be solved exactly in polynomial time using earliest deadline first scheduling. ==== In geometric intersection graphs ==== A geometric intersection graph is a graph in which the nodes are geometric shapes and there is an edge between two shapes if and only if they intersect. An independent set in a geometric intersection graph is just a set of disjoint (non-overlapping) shapes. The problem of finding maximum independent sets in geometric intersection graphs has been studied, for example, in the context of Automatic label placement: given a set of locations in a map, find a maximum set of disjoint rectangular labels near these locations. 
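For the interval case just described, the greedy earliest-finishing-time rule (the scheduling form of the earliest-deadline idea mentioned above) yields a maximum independent set directly on the intervals; a sketch with our own example data:

```python
def max_independent_intervals(intervals):
    """Maximum set of pairwise non-overlapping closed intervals:
    repeatedly take the compatible interval that finishes earliest."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start > last_end:        # closed intervals: touching counts as overlap
            chosen.append((start, end))
            last_end = end
    return chosen

jobs = [(1, 3), (2, 5), (4, 7), (6, 8)]
max_independent_intervals(jobs)     # [(1, 3), (4, 7)]
```

This is the exact polynomial-time algorithm for interval graphs; the exchange argument showing optimality is the standard one for interval scheduling.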
Finding a maximum independent set in intersection graphs is still NP-complete, but it is easier to approximate than the general maximum independent set problem. A recent survey can be found in the introduction of Chan & Har-Peled (2012). ==== In d-claw-free graphs ==== A d-claw in a graph is a set of d+1 vertices, one of which (the "center") is connected to the other d vertices, but the other d vertices are not connected to each other. A d-claw-free graph is a graph that does not have a d-claw subgraph. Consider the algorithm that starts with an empty set, and incrementally adds an arbitrary vertex to it as long as it is not adjacent to any existing vertex. In d-claw-free graphs, every added vertex invalidates at most d − 1 vertices from the maximum independent set; therefore, this trivial algorithm attains a (d − 1)-approximation algorithm for the maximum independent set. In fact, it is possible to get much better approximation ratios: Neuwohner presented a polynomial time algorithm that, for any constant ε>0, finds a (d/2 − 1/63,700,992+ε)-approximation for the maximum weight independent set in a d-claw free graph. Cygan presented a quasi-polynomial time algorithm that, for any ε>0, attains a (d+ε)/3 approximation. === Finding maximal independent sets === The problem of finding a maximal independent set can be solved in polynomial time by a trivial parallel greedy algorithm. All maximal independent sets can be found in time O(3^(n/3)) = O(1.4423^n). === Counting independent sets === The counting problem #IS asks, given an undirected graph, how many independent sets it contains. This problem is intractable, namely, it is ♯P-complete, already on graphs with maximal degree three.
It is further known that, assuming that NP is different from RP, the problem cannot be tractably approximated in the sense that it does not have a fully polynomial-time approximation scheme with randomization (FPRAS), even on graphs with maximal degree six; however it does have a fully polynomial-time approximation scheme (FPTAS) in the case where the maximal degree is five. The problem #BIS, of counting independent sets on bipartite graphs, is also ♯P-complete, already on graphs with maximal degree three. It is not known whether #BIS admits an FPRAS. The question of counting maximal independent sets has also been studied. == Applications == The maximum independent set and its complement, the minimum vertex cover problem, is involved in proving the computational complexity of many theoretical problems. They also serve as useful models for real world optimization problems, for example maximum independent set is a useful model for discovering stable genetic components for designing engineered genetic systems. == See also == An independent set of edges is a set of edges of which no two have a vertex in common. It is usually called a matching. A vertex coloring is a partition of the vertex set into independent sets. == Notes == == References == == External links == Weisstein, Eric W. "Maximal Independent Vertex Set". MathWorld. Challenging Benchmarks for Maximum Clique, Maximum Independent Set, Minimum Vertex Cover and Vertex Coloring Archived 2013-05-29 at the Wayback Machine Independent Set and Vertex Cover, Hanan Ayad.
Wikipedia/Independent_set_(graph_theory)
In graph theory, two graphs G {\displaystyle G} and G ′ {\displaystyle G'} are homeomorphic if there is a graph isomorphism from some subdivision of G {\displaystyle G} to some subdivision of G ′ {\displaystyle G'} . If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in diagrams), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if their diagrams are homeomorphic in the topological sense. == Subdivision and smoothing == In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G. The subdivision of some edge e with endpoints {u,v} yields a graph containing one new vertex w, and with an edge set replacing e by two new edges, {u,w} and {w,v}. For directed edges, this operation preserves their direction. For example, the edge e, with endpoints {u,v}: can be subdivided into two edges, e1 and e2, connecting to a new vertex w of degree 2 (indegree 1 and outdegree 1 in the directed case): Determining whether for graphs G and H, H is homeomorphic to a subgraph of G, is an NP-complete problem. === Reversion === The reverse operation, smoothing out or smoothing a vertex w with regard to the pair of edges (e1, e2) incident on w, removes both edges containing w and replaces (e1, e2) with a new edge that connects the other endpoints of the pair. Here, it is emphasized that only degree-2 (i.e., 2-valent) vertices can be smoothed. The limit of this operation is realized by the graph that has no more degree-2 vertices. For example, the simple connected graph with two edges, e1 {u,w} and e2 {w,v}: has a vertex (namely w) that can be smoothed away, resulting in: === Barycentric subdivisions === The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph.
This procedure can be repeated, so that the nth barycentric subdivision is the barycentric subdivision of the (n − 1)st barycentric subdivision of the graph. The second such subdivision is always a simple graph. == Embedding on a surface == It is evident that subdividing a graph preserves planarity. Kuratowski's theorem states that a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (complete graph on five vertices) or K3,3 (complete bipartite graph on six vertices, three of which connect to each of the other three). In fact, a graph homeomorphic to K5 or K3,3 is called a Kuratowski subgraph. A generalization, following from the Robertson–Seymour theorem, asserts that for each integer g, there is a finite obstruction set of graphs L ( g ) = { G i ( g ) } {\displaystyle L(g)=\left\{G_{i}^{(g)}\right\}} such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any of the G i ( g ) {\displaystyle G_{i}^{(g)\!}} . For example, L ( 0 ) = { K 5 , K 3 , 3 } {\displaystyle L(0)=\left\{K_{5},K_{3,3}\right\}} consists of the Kuratowski subgraphs. == Example == In the following example, graph G and graph H are homeomorphic. If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H, then G′ and H′ have a similar graph drawing: Therefore, there exists an isomorphism between G′ and H′, meaning G and H are homeomorphic. === Mixed graph === The following mixed graphs are homeomorphic. The directed edges are shown with an intermediate arrowhead. == See also == Minor (graph theory) Edge contraction == References == == Further reading == Yellen, Jay; Gross, Jonathan L. (2005), Graph Theory and Its Applications, Discrete Mathematics and Its Applications (2nd ed.), Chapman & Hall/CRC, ISBN 978-1-58488-505-4
Wikipedia/Homeomorphism_(graph_theory)
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines); for a directed graph they are also known as arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.). == Operations == The basic operations provided by a graph data structure G usually include: adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y; neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y; add_vertex(G, x): adds the vertex x, if it is not there; remove_vertex(G, x): removes the vertex x, if it is there; add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there; remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there; get_vertex_value(G, x): returns the value associated with the vertex x; set_vertex_value(G, x, v): sets the value associated with the vertex x to v. Structures that associate values to the edges usually also provide: get_edge_value(G, x, y): returns the value associated with the edge (x, y); set_edge_value(G, x, y, v): sets the value associated with the edge (x, y) to v. == Common data structures for graph representation == Adjacency list Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices.
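As a minimal sketch of the operations listed above (our own illustration, not any particular library's API), an undirected graph can be kept as a dict mapping each vertex to the set of its neighbours:

```python
class Graph:
    """Undirected graph; adj maps each vertex to its set of neighbours."""

    def __init__(self):
        self.adj = {}
        self.vertex_value = {}

    def adjacent(self, x, y):
        return y in self.adj.get(x, set())

    def neighbors(self, x):
        return list(self.adj.get(x, set()))

    def add_vertex(self, x):
        self.adj.setdefault(x, set())

    def remove_vertex(self, x):
        # Removing a vertex also removes its incident edges.
        for y in self.adj.pop(x, set()):
            self.adj[y].discard(x)
        self.vertex_value.pop(x, None)

    def add_edge(self, x, y):
        self.add_vertex(x)
        self.add_vertex(y)
        self.adj[x].add(y)
        self.adj[y].add(x)

    def remove_edge(self, x, y):
        self.adj.get(x, set()).discard(y)
        self.adj.get(y, set()).discard(x)

    def get_vertex_value(self, x):
        return self.vertex_value.get(x)

    def set_vertex_value(self, x, v):
        self.vertex_value[x] = v

g = Graph()
g.add_edge('a', 'b')
assert g.adjacent('a', 'b') and g.adjacent('b', 'a')
g.remove_vertex('b')            # also removes the incident edge
assert not g.adjacent('a', 'b') and g.neighbors('a') == []
```

A directed variant would add only `adj[x].add(y)` in `add_edge`; edge values could be kept in a separate dict keyed by vertex pairs.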
This data structure allows the storage of additional data on the vertices. Additional data can be stored if edges are also stored as objects, in which case each vertex stores its incident edges and each edge stores its incident vertices. Adjacency matrix A two-dimensional matrix, in which the rows represent source vertices and columns represent destination vertices. Data on edges and vertices must be stored externally. Only the cost for one edge can be stored between each pair of vertices. Incidence matrix A two-dimensional matrix, in which the rows represent the vertices and columns represent the edges. The entries indicate the incidence relation between the vertex at a row and edge at a column. The following table gives the time complexity cost of performing various operations on graphs, for each of these representations, with |V| the number of vertices and |E| the number of edges. In the matrix representations, the entries encode the cost of following an edge. The cost of edges that are not present is assumed to be ∞. Adjacency lists are generally preferred for the representation of sparse graphs, while an adjacency matrix is preferred if the graph is dense; that is, the number of edges | E | {\displaystyle |E|} is close to the number of vertices squared, | V | 2 {\displaystyle |V|^{2}} , or if one must be able to quickly look up if there is an edge connecting two vertices. === More efficient representation of adjacency sets === The time complexity of operations in the adjacency list representation can be improved by storing the sets of adjacent vertices in more efficient data structures, such as hash tables or balanced binary search trees (the latter representation requires that vertices are identified by elements of a linearly ordered set, such as integers or character strings).
A representation of adjacent vertices via hash tables leads to an amortized average time complexity of O ( 1 ) {\displaystyle O(1)} to test adjacency of two given vertices and to remove an edge and an amortized average time complexity of O ( deg ⁡ ( x ) ) {\displaystyle O(\deg(x))} to remove a given vertex x of degree deg ⁡ ( x ) {\displaystyle \deg(x)} . The time complexity of the other operations and the asymptotic space requirement do not change. == Parallel representations == The parallelization of graph problems faces significant challenges: data-driven computations, unstructured problems, poor locality and a high ratio of data access to computation. The graph representation used for parallel architectures plays a significant role in facing those challenges. Poorly chosen representations may unnecessarily drive up the communication cost of the algorithm, which will decrease its scalability. In the following, shared and distributed memory architectures are considered. === Shared memory === In the case of a shared memory model, the graph representations used for parallel processing are the same as in the sequential case, since parallel read-only access to the graph representation (e.g. an adjacency list) is efficient in shared memory. === Distributed memory === In the distributed memory model, the usual approach is to partition the vertex set V {\displaystyle V} of the graph into p {\displaystyle p} sets V 0 , … , V p − 1 {\displaystyle V_{0},\dots ,V_{p-1}} . Here, p {\displaystyle p} is the number of available processing elements (PEs). The vertex set partitions are then distributed to the PEs with matching index, together with the corresponding edges. Every PE has its own subgraph representation, where edges with an endpoint in another partition require special attention. For standard communication interfaces like MPI, the ID of the PE owning the other endpoint has to be identifiable.
During computation in distributed graph algorithms, passing information along these edges implies communication. Partitioning the graph needs to be done carefully: there is a trade-off between low communication and evenly sized partitions. But partitioning a graph is an NP-hard problem, so it is not feasible to compute optimal partitions. Instead, the following heuristics are used. 1D partitioning: Every processor gets n / p {\displaystyle n/p} vertices and the corresponding outgoing edges. This can be understood as a row-wise or column-wise decomposition of the adjacency matrix. For algorithms operating on this representation, this requires an All-to-All communication step as well as O ( m ) {\displaystyle {\mathcal {O}}(m)} message buffer sizes, as each PE potentially has outgoing edges to every other PE. 2D partitioning: Every processor gets a submatrix of the adjacency matrix. Assume the processors are aligned in a rectangle p = p r × p c {\displaystyle p=p_{r}\times p_{c}} , where p r {\displaystyle p_{r}} and p c {\displaystyle p_{c}} are the number of processing elements in each row and column, respectively. Then each processor gets a submatrix of the adjacency matrix of dimension ( n / p r ) × ( n / p c ) {\displaystyle (n/p_{r})\times (n/p_{c})} . This can be visualized as a checkerboard pattern in a matrix. Therefore, each processing unit can only have outgoing edges to PEs in the same row and column. This bounds the number of communication partners for each PE to p r + p c − 1 {\displaystyle p_{r}+p_{c}-1} out of p = p r × p c {\displaystyle p=p_{r}\times p_{c}} possible ones. == Compressed representations == Graphs with trillions of edges occur in machine learning, social network analysis, and other areas. Compressed graph representations have been developed to reduce I/O and memory requirements. General techniques such as Huffman coding are applicable, but the adjacency list or adjacency matrix can be processed in specific ways to increase efficiency.
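The 1D partitioning heuristic can be sketched as follows (an illustrative block distribution of our own; real systems would distribute the blocks via a framework such as MPI):

```python
def partition_1d(n, p):
    """Split vertices 0..n-1 into p contiguous, nearly equal blocks.

    PE i owns vertices [i*n//p, (i+1)*n//p) together with their outgoing
    edges, i.e. a row-wise slice of the adjacency matrix.
    """
    return [range(i * n // p, (i + 1) * n // p) for i in range(p)]

def owner(v, n, p):
    """Index of the PE owning vertex v under the block distribution above."""
    return (p * (v + 1) - 1) // n

# Every vertex is owned by exactly the PE whose block contains it:
n, p = 10, 3
for i, block in enumerate(partition_1d(n, p)):
    assert all(owner(v, n, p) == i for v in block)
```

A 2D partition would instead assign each PE a `(n/p_r) × (n/p_c)` submatrix, so a vertex's edges are split among the PEs of one row and one column of the processor grid.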
== Graph traversal == === Breadth first search and depth first search === Breadth-first search (BFS) and depth-first search (DFS) are two closely related approaches that are used for exploring all of the nodes in a given connected component. Both start with an arbitrary node, the "root". == See also == Graph traversal for more information on graph walking strategies Graph database for graph (data structure) persistency Graph rewriting for rule-based transformations of graphs (graph data structures) Graph drawing software for software, systems, and providers of systems for drawing graphs == References == == External links == Boost Graph Library: a powerful C++ graph library s.a. Boost (C++ libraries) Networkx: a Python graph library GraphMatcher, a Java program to align directed/undirected graphs. GraphBLAS: a specification for a library interface for operations on graphs, with a particular focus on sparse graphs.
Wikipedia/Graph_(data_structure)
In graph theory, a graph property or graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph. == Definitions == While graph drawing and graph representation are valid topics in graph theory, in order to focus only on the abstract structure of graphs, a graph property is defined to be a property preserved under all possible isomorphisms of a graph. In other words, it is a property of the graph itself, not of a specific drawing or representation of the graph. Informally, the term "graph invariant" is used for properties expressed quantitatively, while "property" usually refers to descriptive characterizations of graphs. For example, the statement "graph does not have vertices of degree 1" is a "property" while "the number of vertices of degree 1 in a graph" is an "invariant". More formally, a graph property is a class of graphs with the property that any two isomorphic graphs either both belong to the class, or both do not belong to it. Equivalently, a graph property may be formalized using the indicator function of the class, a function from graphs to Boolean values that is true for graphs in the class and false otherwise; again, any two isomorphic graphs must have the same function value as each other. A graph invariant or graph parameter may similarly be formalized as a function from graphs to a broader class of values, such as integers, real numbers, sequences of numbers, or polynomials, that again has the same value for any two isomorphic graphs. == Properties of properties == Many graph properties are well-behaved with respect to certain natural partial orders or preorders defined on graphs: A graph property P is hereditary if every induced subgraph of a graph with property P also has property P. For instance, being a perfect graph or being a chordal graph are hereditary properties. 
A graph property P is monotone if every subgraph of a graph with property P also has property P. For instance, being a bipartite graph or being a triangle-free graph is monotone. Every monotone property is hereditary, but not necessarily vice versa; for instance, subgraphs of chordal graphs are not necessarily chordal, so being a chordal graph is not monotone. A graph property is minor-closed if every graph minor of a graph with property P also has property P. For instance, being a planar graph is minor-closed. Every minor-closed property is monotone, but not necessarily vice versa; for instance, minors of triangle-free graphs are not necessarily themselves triangle-free. These definitions may be extended from properties to numerical invariants of graphs: a graph invariant is hereditary, monotone, or minor-closed if the function formalizing the invariant forms a monotonic function from the corresponding partial order on graphs to the real numbers. Additionally, graph invariants have been studied with respect to their behavior with regard to disjoint unions of graphs: A graph invariant is additive if, for any two graphs G and H, the value of the invariant on the disjoint union of G and H is the sum of the values on G and on H. For instance, the number of vertices is additive. A graph invariant is multiplicative if, for any two graphs G and H, the value of the invariant on the disjoint union of G and H is the product of the values on G and on H. For instance, the Hosoya index (number of matchings) is multiplicative. A graph invariant is maxing if, for any two graphs G and H, the value of the invariant on the disjoint union of G and H is the maximum of the values on G and on H. For instance, the chromatic number is maxing. In addition, graph properties can be classified according to the type of graph they describe: whether the graph is undirected or directed, whether the property applies to multigraphs, etc.
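These behaviors can be checked directly on small graphs. The following brute-force sketch (for illustration only; graphs are dicts of neighbour sets) verifies that the number of vertices is additive and the chromatic number is maxing under disjoint union:

```python
from itertools import product

def disjoint_union(g, h):
    """Disjoint union of two graphs; vertices are tagged so the two
    vertex sets cannot collide."""
    u = {(0, x): {(0, y) for y in nbrs} for x, nbrs in g.items()}
    u.update({(1, x): {(1, y) for y in nbrs} for x, nbrs in h.items()})
    return u

def chromatic_number(g):
    """Smallest k admitting a proper k-colouring (exhaustive search)."""
    vs = list(g)
    for k in range(1, len(vs) + 1):
        for cols in product(range(k), repeat=len(vs)):
            c = dict(zip(vs, cols))
            if all(c[x] != c[y] for x in g for y in g[x]):
                return k
    return 0  # empty graph

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
edge = {0: {1}, 1: {0}}
u = disjoint_union(triangle, edge)

assert len(u) == len(triangle) + len(edge)                  # |V| is additive
assert chromatic_number(u) == max(chromatic_number(triangle),
                                  chromatic_number(edge))   # chi is maxing
```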
== Values of invariants == The target set of a function that defines a graph invariant may be one of: A truth-value, true or false, for the indicator function of a graph property. An integer, such as the number of vertices or chromatic number of a graph. A real number, such as the fractional chromatic number of a graph. A sequence of integers, such as the degree sequence of a graph. A polynomial, such as the Tutte polynomial of a graph. == Graph invariants and graph isomorphism == Easily computable graph invariants are instrumental for fast recognition of graph isomorphism, or rather non-isomorphism, since for any invariant at all, two graphs with different values cannot (by definition) be isomorphic. Two graphs with the same invariants may or may not be isomorphic, however. A graph invariant I(G) is called complete if the equality of the invariants I(G) and I(H) implies the isomorphism of the graphs G and H. Finding an efficiently computable complete invariant (the problem of graph canonization) would imply an easy solution to the challenging graph isomorphism problem. However, even polynomial-valued invariants such as the chromatic polynomial are not usually complete. The claw graph and the path graph on 4 vertices both have the same chromatic polynomial, for example.
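The claw/path example can be checked by brute force: both graphs are trees on four vertices, so each has chromatic polynomial k(k − 1)³ even though they are not isomorphic (their degree sequences differ). The sketch below (illustrative only) counts proper k-colourings directly:

```python
from itertools import product

def count_colorings(g, k):
    """Number of proper k-colourings of g (a dict of neighbour sets)."""
    vs = list(g)
    total = 0
    for cols in product(range(k), repeat=len(vs)):
        c = dict(zip(vs, cols))
        if all(c[x] != c[y] for x in g for y in g[x]):
            total += 1
    return total

claw = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}   # K_{1,3}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # P_4

# Same chromatic polynomial k(k-1)^3, despite different degree sequences:
for k in range(1, 6):
    assert count_colorings(claw, k) == count_colorings(path, k) == k * (k - 1) ** 3
```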
== Examples == === Properties === Connected graphs Bipartite graphs Planar graphs Triangle-free graphs Perfect graphs Eulerian graphs Hamiltonian graphs === Integer invariants === Order, the number of vertices Size, the number of edges Number of connected components Circuit rank, a linear combination of the numbers of edges, vertices, and components Diameter, the longest of the shortest path lengths between pairs of vertices Girth, the length of the shortest cycle Vertex connectivity, the smallest number of vertices whose removal disconnects the graph Edge connectivity, the smallest number of edges whose removal disconnects the graph Chromatic number, the smallest number of colors for the vertices in a proper coloring Chromatic index, the smallest number of colors for the edges in a proper edge coloring Choosability (or list chromatic number), the least number k such that G is k-choosable Independence number, the largest size of an independent set of vertices Clique number, the largest order of a complete subgraph Arboricity Graph genus Pagenumber Hosoya index Wiener index Colin de Verdière graph invariant Boxicity === Real number invariants === Clustering coefficient Betweenness centrality Fractional chromatic number Algebraic connectivity Isoperimetric number Estrada index Strength === Sequences and polynomials === Degree sequence Graph spectrum Characteristic polynomial of the adjacency matrix Chromatic polynomial, the number of k {\displaystyle k} -colorings viewed as a function of k {\displaystyle k} Tutte polynomial, a bivariate function that encodes much of the graph's connectivity === Edge partition === (a, b)-decomposition for any natural a,b == See also == Hereditary property Logic of graphs, one of several formal languages used to specify graph properties Topological index, a closely related concept in chemical graph theory == External links == List of integer invariants == References ==
Wikipedia/Graph_properties
Chemical graph theory is the branch of mathematical chemistry that applies graph theory to mathematical modelling of chemical phenomena. The pioneers of chemical graph theory are Alexandru Balaban, Ante Graovac, Iván Gutman, Haruo Hosoya, Milan Randić and Nenad Trinajstić (also Harry Wiener and others). In 1988, it was reported that several hundred researchers worked in this area, producing about 500 articles annually. A number of monographs have been written in the area, including the two-volume comprehensive text by Trinajstić, Chemical Graph Theory, that summarized the field up to the mid-1980s. The adherents of the theory maintain that the properties of a chemical graph (i.e., a graph-theoretical representation of a molecule) give valuable insights into the chemical phenomena. Others contend that graphs play only a fringe role in chemical research. One variant of the theory is the representation of materials as infinite Euclidean graphs, particularly crystals by periodic graphs. == See also == Chemical graph generator Molecule mining MATH/CHEM/COMP Topological index == References ==
Wikipedia/Chemical_graph_theory
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases, that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. 
A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics. == Etymology == According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. 
The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time. References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". == History == === Classical physics === One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. 
Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures.: 35–38  By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and the then newly discovered helium respectively. Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law.: 27–29  However, despite the success of Drude's model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.: 366–368  In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas." === Advent of quantum mechanics === Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. 
Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model, improving its explanation of the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice.: 366–368  The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics. In 1879, Edwin Herbert Hall, working at Johns Hopkins University, discovered that a voltage developed across conductors which was transverse to both an electric current in the conductor and a magnetic field applied perpendicular to the current. This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time because the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for a theoretical explanation of the quantum Hall effect which was discovered half a century later.: 458–460  Magnetism as a property of matter has been known in China since 4000 BC.: 1–2  However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization.
Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.: 9  The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to developing new magnetic materials with applications to magnetic storage devices.: 36–38, 48  === Modern many-body physics === The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Soviet physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases.
Eventually in 1957, John Bardeen, Leon Cooper and Robert Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair. The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory. The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant e 2 / h {\displaystyle e^{2}/h} . The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators.: 69, 74  Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductance was now a rational multiple of the constant e 2 / h {\displaystyle e^{2}/h} . Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J.
Thouless and collaborators was further expanded leading to the discovery of topological insulators. In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, La2-xBaxCuO4, which is superconducting at temperatures as high as 39 kelvin. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic. In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations. == Theoretical == Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries. 
=== Emergence === Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity. === Electronic theory of solids === The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments.: 90–91  This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law.: 101–103  In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms.: 48  In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem.
Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single-particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly.: 330–337  Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT) which gave realistic descriptions for bulk and surface properties of metals. The density functional theory has been widely used since the 1970s for band structure calculations of a variety of solids. === Symmetry breaking === Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry. Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.
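The phonon example can be made quantitative with the textbook one-dimensional monatomic chain. The sketch below (illustrative parameter values, not from the article) evaluates its standard dispersion relation and shows that the branch is gapless and linear at small wavevector, as Goldstone's theorem demands for broken translational symmetry.

```python
import math

def phonon_omega(k, K=1.0, m=1.0, a=1.0):
    """Dispersion of a 1D monatomic harmonic chain (spring constant K,
    atomic mass m, lattice spacing a): omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|."""
    return 2.0 * math.sqrt(K / m) * abs(math.sin(k * a / 2.0))

# Gapless acoustic branch: omega(k) -> v*|k| as k -> 0, with sound speed
# v = a*sqrt(K/m) -- an excitation of arbitrarily low energy (a Goldstone mode).
v = 1.0 * math.sqrt(1.0 / 1.0)
```

The maximum frequency 2*sqrt(K/m) is reached at the Brillouin zone boundary k = pi/a, while omega(0) = 0 exactly: uniformly translating the whole chain costs no energy.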
=== Phase transition === Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system. For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules. In quantum phase transitions, the temperature is set to absolute zero, and the non-thermal control parameter, such as pressure or magnetic field, causes the phase transitions when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transitions is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances. Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties such as correlation length, specific heat, and magnetic susceptibility diverge according to power laws. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.: 75ff  The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation.
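As a concrete sketch of what the mean-field approximation predicts, the snippet below (illustrative, with arbitrary coefficients a and b; not from the article) numerically minimizes a Landau free energy f(m) = a·t·m² + b·m⁴ in the reduced temperature t = (T − Tc)/Tc and recovers the mean-field order-parameter exponent β = 1/2.

```python
import math

def landau_magnetization(t, a=1.0, b=1.0):
    """Order parameter minimizing the Landau free energy
    f(m) = a*t*m**2 + b*m**4, found by scanning m over a grid in [0, 2]
    (t = (T - Tc)/Tc is the reduced temperature)."""
    grid = [i * 1e-4 for i in range(20001)]
    return min(grid, key=lambda m: a * t * m * m + b * m ** 4)

# Below Tc (t < 0) the minimum sits at m = sqrt(-a*t/(2*b)), so m ~ |t|**beta
# with the mean-field critical exponent beta = 1/2; above Tc the minimum is m = 0.
m1, m2 = landau_magnetization(-0.4), landau_magnetization(-0.1)
beta = math.log(m1 / m2) / math.log(0.4 / 0.1)
```

Mean-field exponents such as this β = 1/2 are only approximate in low dimensions, which is one limitation of the Ginzburg–Landau description.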
However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems that involve short-range interactions near the critical point, a better theory is needed.: 8–11  Near the critical point, fluctuations happen over a broad range of size scales, while the system as a whole is scale invariant. Renormalization group methods successively average out the shortest wavelength fluctuations in stages while retaining their effects into the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transition.: 11  == Experimental == Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat; and measurement of transport via thermal conduction. === Scattering === Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index.
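The correspondence between probe energy and probe length scale follows directly from λ = hc/E. A quick sketch (using the standard value hc ≈ 1239.84 eV·nm; the helper name is illustrative):

```python
HC_EV_NM = 1239.84193  # h*c expressed in eV*nm

def probe_wavelength_nm(energy_ev):
    """Photon wavelength in nm for a probe of the given energy in eV,
    from lambda = h*c/E."""
    return HC_EV_NM / energy_ev

optical = probe_wavelength_nm(1.0)     # ~1240 nm: the near-infrared edge of the optical range
xray = probe_wavelength_nm(10_000.0)   # ~0.124 nm = 1.24 angstrom: an atomic length scale
```

An eV-scale photon thus has a wavelength thousands of times larger than an atom, which is why atomic-resolution structural probes require keV-scale X-rays.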
X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure.: 33–34  Neutrons can also probe atomic length scales and are used to study the scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes.: 33–34 : 39–43  Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.: 258–259  === External magnetic fields === In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data.: 69 : 185  The measurement of quantum oscillations is another experimental method in which high magnetic fields are used to study material properties such as the geometry of the Fermi surface.
High magnetic fields will be useful in experimental testing of the various theoretical predictions such as the quantized magnetoelectric effect, image magnetic monopole, and the half-integer quantum Hall effect.: 57  === Magnetic resonance spectroscopy === The local structure, as well as the structure of the nearest neighbour atoms, can be investigated in condensed matter with magnetic resonance methods, such as electron paramagnetic resonance (EPR) and nuclear magnetic resonance (NMR), which are very sensitive to the details of the surrounding of nuclei and electrons by means of the hyperfine coupling. Both localized electrons and specific stable or unstable isotopes of the nuclei become the probes of these hyperfine interactions, which couple the electron or nuclear spin to the local electric and magnetic fields. These methods are suitable for studying defects, diffusion, phase transitions and magnetic order. Common experimental methods include NMR, nuclear quadrupole resonance (NQR), implanted radioactive probes as in the case of muon spin spectroscopy ( μ {\displaystyle \mu } SR), Mössbauer spectroscopy, β {\displaystyle \beta } NMR and perturbed angular correlation (PAC). PAC is especially well suited to the study of phase changes at extreme temperatures above 2000 °C due to the temperature independence of the method. === Cold atomic gases === Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets.
In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering. In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state. == Applications == Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, magnetic storage, liquid crystals, optical fibres and several phenomena studied in the context of nanotechnology.: 111ff  Methods such as scanning tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Molecular machines were developed, for example, by Nobel laureates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines such as the molecular car, molecular windmill and many more. In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches have been proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons from fractional quantum Hall effect states. Condensed matter physics also has important uses for biomedicine. For example, magnetic resonance imaging is widely used in medical imaging of soft tissue and other physiological features which cannot be viewed with traditional X-ray imaging.
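The 170 nK temperature quoted above for the first Bose–Einstein condensate can be connected to the onset of quantum degeneracy through the thermal de Broglie wavelength. A back-of-the-envelope sketch (physical constants are CODATA values; the ideal-gas formula is only an approximation for a trapped, interacting gas):

```python
import math

H = 6.62607015e-34                 # Planck constant, J*s
KB = 1.380649e-23                  # Boltzmann constant, J/K
M_RB87 = 86.909 * 1.66053907e-27   # mass of a rubidium-87 atom, kg

def thermal_de_broglie(T, m=M_RB87):
    """Thermal de Broglie wavelength lambda = h / sqrt(2*pi*m*kB*T)."""
    return H / math.sqrt(2.0 * math.pi * m * KB * T)

lam = thermal_de_broglie(170e-9)   # at 170 nK
```

The wavelength comes out near 0.45 micrometres, comparable to the interparticle spacing in such dilute gases; this is precisely the regime in which atomic wave packets overlap and a macroscopic fraction of the atoms can occupy one quantum state.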
== See also == == Notes == == References == == Further reading == Anderson, Philip W. (2018-03-09). Basic Notions Of Condensed Matter Physics. CRC Press. ISBN 978-0-429-97374-1. Girvin, Steven M.; Yang, Kun (2019-02-28). Modern Condensed Matter Physics. Cambridge University Press. ISBN 978-1-108-57347-4. Coleman, Piers (2015). Introduction to Many-Body Physics, Cambridge University Press, ISBN 0-521-86488-7. P. M. Chaikin and T. C. Lubensky (2000). Principles of Condensed Matter Physics, Cambridge University Press; 1st edition, ISBN 0-521-79450-1 Alexander Altland and Ben Simons (2006). Condensed Matter Field Theory, Cambridge University Press, ISBN 0-521-84508-4. Michael P. Marder (2010). Condensed Matter Physics, second edition, John Wiley and Sons, ISBN 0-470-61798-5. Lillian Hoddeson, Ernest Braun, Jürgen Teichmann and Spencer Weart, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics, Oxford University Press, ISBN 0-19-505329-X. == External links == Media related to Condensed matter physics at Wikimedia Commons
Wikipedia/Condensed_matter_physics
In graph theory, the crossing number cr(G) of a graph G is the lowest number of edge crossings of a plane drawing of the graph G. For instance, a graph is planar if and only if its crossing number is zero. Determining the crossing number continues to be of great importance in graph drawing, as user studies have shown that drawing graphs with few crossings makes it easier for people to understand the drawing. The study of crossing numbers originated in Turán's brick factory problem, in which Pál Turán asked for a factory plan that minimized the number of crossings between tracks connecting brick kilns to storage sites. Mathematically, this problem can be formalized as asking for the crossing number of a complete bipartite graph. The same problem arose independently in sociology at approximately the same time, in connection with the construction of sociograms. Turán's conjectured formula for the crossing numbers of complete bipartite graphs remains unproven, as does an analogous formula for the complete graphs. The crossing number inequality states that, for graphs where the number e of edges is sufficiently larger than the number n of vertices, the crossing number is at least proportional to e3/n2. It has applications in VLSI design and incidence geometry. Without further qualification, the crossing number allows drawings in which the edges may be represented by arbitrary curves. A variation of this concept, the rectilinear crossing number, requires all edges to be straight line segments, and may differ from the crossing number. In particular, the rectilinear crossing number of a complete graph is essentially the same as the minimum number of convex quadrilaterals determined by a set of n points in general position. The problem of determining this number is closely related to the happy ending problem. 
== Definitions == For the purposes of defining the crossing number, a drawing of an undirected graph is a mapping from the vertices of the graph to disjoint points in the plane, and from the edges of the graph to curves connecting their two endpoints. No vertex should be mapped onto an edge that it is not an endpoint of, and whenever two edges have curves that intersect (other than at a shared endpoint) their intersections should form a finite set of proper crossings, where the two curves are transverse. A crossing is counted separately for each of these crossing points, for each pair of edges that cross. The crossing number of a graph is then the minimum, over all such drawings, of the number of crossings in a drawing. Some authors add more constraints to the definition of a drawing, for instance that each pair of edges have at most one intersection point (a shared endpoint or crossing). For the crossing number as defined above, these constraints make no difference, because a crossing-minimal drawing cannot have edges with multiple intersection points. If two edges with a shared endpoint cross, the drawing can be changed locally at the crossing point, leaving the rest of the drawing unchanged, to produce a different drawing with one fewer crossing. And similarly, if two edges cross two or more times, the drawing can be changed locally at two crossing points to make a different drawing with two fewer crossings. However, these constraints are relevant for variant definitions of the crossing number that, for instance, count only the numbers of pairs of edges that cross rather than the number of crossings. == Special cases == As of April 2015, crossing numbers are known for very few graph families. In particular, except for a few initial cases, the crossing number of complete graphs, bipartite complete graphs, and products of cycles all remain unknown, although there has been some progress on lower bounds. 
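The crossing count of one particular straight-line drawing can be computed directly with a standard orientation test. The sketch below (illustrative helper names; it evaluates a given drawing, it does not minimize over drawings) counts proper crossings over all pairs of independent edges, skipping edges with a shared endpoint since straight segments with a common endpoint cannot cross.

```python
from itertools import combinations

def ccw(a, b, c):
    """Twice the signed area of triangle abc (positive if counterclockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def proper_crossing(p, q, r, s):
    """True if segments pq and rs cross transversally at an interior point."""
    return (ccw(p, q, r) * ccw(p, q, s) < 0) and (ccw(r, s, p) * ccw(r, s, q) < 0)

def drawing_crossings(pos, edges):
    """Number of crossings in a straight-line drawing: pos maps each vertex
    to a point in the plane, edges is a list of vertex pairs."""
    return sum(
        1
        for (u, v), (x, y) in combinations(edges, 2)
        if not ({u, v} & {x, y})
        and proper_crossing(pos[u], pos[v], pos[x], pos[y])
    )

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}  # diagonals cross once
nested = {0: (0, 0), 1: (4, 0), 2: (2, 3), 3: (2, 1)}  # vertex 3 inside the triangle
```

The two drawings of K4 illustrate that the count depends on the drawing: the square layout has one crossing, while the nested layout is planar, witnessing cr(K4) = 0.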
=== Complete bipartite graphs === During World War II, Hungarian mathematician Pál Turán was forced to work in a brick factory, pushing wagon loads of bricks from kilns to storage sites. The factory had tracks from each kiln to each storage site, and the wagons were harder to push at the points where tracks crossed each other, from which Turán was led to ask his brick factory problem: how can the kilns, storage sites, and tracks be arranged to minimize the total number of crossings? Mathematically, the kilns and storage sites can be formalized as the vertices of a complete bipartite graph, with the tracks as its edges. A factory layout can be represented as a drawing of this graph, so the problem becomes: what is the minimum possible number of crossings in a drawing of a complete bipartite graph? Kazimierz Zarankiewicz attempted to solve Turán's brick factory problem; his proof contained an error, but he established a valid upper bound of cr ( K m , n ) ≤ ⌊ n 2 ⌋ ⌊ n − 1 2 ⌋ ⌊ m 2 ⌋ ⌊ m − 1 2 ⌋ {\displaystyle {\textrm {cr}}(K_{m,n})\leq \left\lfloor {\frac {n}{2}}\right\rfloor \left\lfloor {\frac {n-1}{2}}\right\rfloor \left\lfloor {\frac {m}{2}}\right\rfloor \left\lfloor {\frac {m-1}{2}}\right\rfloor } for the crossing number of the complete bipartite graph Km,n. This bound has been conjectured to be the optimal number of crossings for all complete bipartite graphs. === Complete graphs and graph coloring === The problem of determining the crossing number of the complete graph was first posed by Anthony Hill, and appeared in print in 1960. Hill and his collaborator John Ernest were two constructionist artists fascinated by mathematics. They not only formulated this problem but also originated a conjectural formula for this crossing number, which Richard K. Guy published in 1960. 
Namely, it is known that there always exists a drawing with cr ( K p ) ≤ 1 4 ⌊ p 2 ⌋ ⌊ p − 1 2 ⌋ ⌊ p − 2 2 ⌋ ⌊ p − 3 2 ⌋ {\displaystyle {\textrm {cr}}(K_{p})\leq {\frac {1}{4}}\left\lfloor {\frac {p}{2}}\right\rfloor \left\lfloor {\frac {p-1}{2}}\right\rfloor \left\lfloor {\frac {p-2}{2}}\right\rfloor \left\lfloor {\frac {p-3}{2}}\right\rfloor } crossings. This formula gives values of 1, 3, 9, 18, 36, 60, 100, 150, 225, 315 for p = 5, ..., 14; see sequence A000241 in the On-line Encyclopedia of Integer Sequences. The conjecture is that there can be no better drawing, so that this formula gives the optimal number of crossings for the complete graphs. An independent formulation of the same conjecture was made by Thomas L. Saaty in 1964. Saaty further verified that this formula gives the optimal number of crossings for p ≤ 10 and Pan and Richter showed that it also is optimal for p = 11, 12. The Albertson conjecture, formulated by Michael O. Albertson in 2007, states that, among all graphs with chromatic number n, the complete graph Kn has the minimum number of crossings. That is, if the conjectured formula for the crossing number of the complete graph is correct, then every n-chromatic graph has crossing number at least equal to the same formula. The Albertson conjecture is now known to hold for n ≤ 16. === Cubic graphs === The smallest cubic graphs with crossing numbers 1–11 are known (sequence A110507 in the OEIS). The smallest 1-crossing cubic graph is the complete bipartite graph K3,3, with 6 vertices. The smallest 2-crossing cubic graph is the Petersen graph, with 10 vertices. The smallest 3-crossing cubic graph is the Heawood graph, with 14 vertices. The smallest 4-crossing cubic graph is the Möbius-Kantor graph, with 16 vertices. The smallest 5-crossing cubic graph is the Pappus graph, with 18 vertices. The smallest 6-crossing cubic graph is the Desargues graph, with 20 vertices. None of the four 7-crossing cubic graphs, with 22 vertices, are well known. 
The smallest 8-crossing cubic graphs include the Nauru graph and the McGee graph or (3,7)-cage graph, with 24 vertices. The smallest 11-crossing cubic graphs include the Coxeter graph with 28 vertices. In 2009, Pegg and Exoo conjectured that the smallest cubic graph with crossing number 13 is the Tutte–Coxeter graph and the smallest cubic graph with crossing number 170 is the Tutte 12-cage. == Connections to the bisection width == The 2/3-bisection width b ( G ) {\displaystyle b(G)} of a simple graph G {\displaystyle G} is the minimum number of edges whose removal results in a partition of the vertex set into two separated sets so that no set has more than 2 n / 3 {\displaystyle 2n/3} vertices. Computing b ( G ) {\displaystyle b(G)} is NP-hard. Leighton proved that cr ⁡ ( G ) + n = Ω ( b 2 ( G ) ) {\displaystyle \operatorname {cr} (G)+n=\Omega (b^{2}(G))} , provided that G {\displaystyle G} has bounded vertex degrees. This fundamental inequality can be used to derive an asymptotic lower bound for cr ⁡ ( G ) {\displaystyle \operatorname {cr} (G)} , when b ( G ) {\displaystyle b(G)} , or an estimate of it, is known. In addition, this inequality has algorithmic applications. Specifically, Bhat and Leighton used it (for the first time) to derive an upper bound on the number of edge crossings in a drawing which is obtained by a divide and conquer approximation algorithm for computing cr ⁡ ( G ) {\displaystyle \operatorname {cr} (G)} .
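The closed-form expressions that appear in this article are all straightforward to evaluate: Zarankiewicz's upper bound for cr(K_{m,n}), Guy's conjectured value for cr(K_p), and the crossing-lemma lower bound e³/(29n²) discussed in the section on the crossing number inequality. A sketch (helper names are illustrative):

```python
def zarankiewicz_bound(m, n):
    """Zarankiewicz's upper bound for cr(K_{m,n}):
    floor(n/2)*floor((n-1)/2)*floor(m/2)*floor((m-1)/2)."""
    return (n // 2) * ((n - 1) // 2) * (m // 2) * ((m - 1) // 2)

def guy_conjecture(p):
    """Conjectured crossing number of the complete graph K_p:
    (1/4)*floor(p/2)*floor((p-1)/2)*floor((p-2)/2)*floor((p-3)/2)
    (the product is always divisible by 4, so // is exact)."""
    return (p // 2) * ((p - 1) // 2) * ((p - 2) // 2) * ((p - 3) // 2) // 4

def crossing_lemma_bound(n, e):
    """Guaranteed lower bound e**3/(29*n**2) on the crossing number of a
    simple graph with n vertices and e edges; valid only when e > 7*n."""
    if e <= 7 * n:
        raise ValueError("the bound with constant 29 requires e > 7n")
    return e ** 3 / (29 * n ** 2)
```

For example, zarankiewicz_bound(3, 3) gives 1, matching cr(K3,3) = 1, and guy_conjecture reproduces the sequence 1, 3, 9, 18, ... quoted above; for K20 (n = 20, e = 190), the crossing lemma guarantees about 591 crossings, consistent with (and well below) the conjectured value guy_conjecture(20) = 1620.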
On the positive side, there are efficient algorithms for determining whether the crossing number is less than a fixed constant k. In other words, the problem is fixed-parameter tractable. It remains difficult for larger k, such as k = |V|/2. There are also efficient approximation algorithms for approximating cr ⁡ ( G ) + n {\displaystyle \operatorname {cr} (G)+n} on graphs of bounded degree which use the general and previously developed framework of Bhat and Leighton. In practice, heuristic algorithms are used, such as the simple algorithm which starts with no edges and continually adds each new edge in a way that produces the fewest additional crossings possible. These algorithms are used in the Rectilinear Crossing Number distributed computing project. == The crossing number inequality == For an undirected simple graph G with n vertices and e edges such that e > 7n, the crossing number is always at least cr ⁡ ( G ) ≥ e 3 29 n 2 . {\displaystyle \operatorname {cr} (G)\geq {\frac {e^{3}}{29n^{2}}}.} This relation between edges, vertices, and the crossing number was discovered independently by Ajtai, Chvátal, Newborn, and Szemerédi, and by Leighton. It is known as the crossing number inequality or crossing lemma. The constant 29 is the best known to date, and is due to Ackerman. The constant 7 can be lowered to 4, but at the expense of replacing 29 with the worse constant of 64. The motivation of Leighton in studying crossing numbers was for applications to VLSI design in theoretical computer science. Later, Székely also realized that this inequality yielded very simple proofs of some important theorems in incidence geometry, such as Beck's theorem and the Szemerédi–Trotter theorem, and Tamal Dey used it to prove upper bounds on geometric k-sets. == Variations == If edges are required to be drawn as straight line segments, rather than arbitrary curves, then some graphs need more crossings.
The rectilinear crossing number is defined to be the minimum number of crossings of a drawing of this type. It is always at least as large as the crossing number, and is larger for some graphs. It is known that, in general, the rectilinear crossing number cannot be bounded by a function of the crossing number. The rectilinear crossing numbers for K5 through K12 are 1, 3, 9, 19, 36, 62, 102, 153, (A014540) and values up to K27 are known, with K28 requiring either 7233 or 7234 crossings. Further values are collected by the Rectilinear Crossing Number project. A graph has local crossing number k if it can be drawn with at most k crossings per edge, but not fewer. The graphs that can be drawn with at most k crossings per edge are also called k-planar. Other variants of the crossing number include the pairwise crossing number (the minimum number of pairs of edges that cross in any drawing) and the odd crossing number (the number of pairs of edges that cross an odd number of times in any drawing). The odd crossing number is at most equal to the pairwise crossing number, which is at most equal to the crossing number. However, by the Hanani–Tutte theorem, whenever one of these numbers is zero, they all are. Schaefer (2014, 2018) surveys many such variants. == See also == Planarization, a planar graph formed by replacing each crossing by a new vertex Three utilities problem, the puzzle that asks whether K3,3 can be drawn with 0 crossings == References ==
Wikipedia/Crossing_number_(graph_theory)
In graph theory, a lattice graph, mesh graph, or grid graph is a graph whose drawing, embedded in some Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , forms a regular tiling. This implies that the group of bijective transformations that send the graph to itself is a lattice in the group-theoretical sense. Typically, no clear distinction is made between such a graph in the more abstract sense of graph theory, and its drawing in space (often the plane or 3D space). This type of graph may be called, more briefly, a lattice, mesh, or grid. Moreover, these terms are also commonly used for a finite section of the infinite graph, as in "an 8 × 8 square grid". The term lattice graph has also been given in the literature to various other kinds of graphs with some regular structure, such as the Cartesian product of a number of complete graphs. == Square grid graph == A common type of lattice graph (known under different names, such as grid graph or square grid graph) is the graph whose vertices correspond to the points in the plane with integer coordinates, x-coordinates being in the range 1, ..., n, y-coordinates being in the range 1, ..., m, and two vertices being connected by an edge whenever the corresponding points are at distance 1. In other words, it is the unit distance graph for the integer points in a rectangle with sides parallel to the axes. === Properties === A square grid graph is a Cartesian product of graphs, namely, of two path graphs with n − 1 and m − 1 edges. Since a path graph is a median graph, the latter fact implies that the square grid graph is also a median graph. All square grid graphs are bipartite, which is easily verified by the fact that one can color the vertices in a checkerboard fashion. A path graph is a grid graph on the 1 × n {\displaystyle 1\times n} grid. A 2 × 2 {\displaystyle 2\times 2} grid graph is a 4-cycle.
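The construction and the checkerboard argument above can be sketched directly in code (illustrative helper names). The m × n grid has m(n − 1) + n(m − 1) edges, as expected for the Cartesian product of two paths, and every edge joins points whose coordinate sums have opposite parity, which is exactly the checkerboard 2-coloring.

```python
def square_grid_edges(m, n):
    """Edges of the m x n square grid graph on vertices (i, j) with
    1 <= i <= m, 1 <= j <= n; points at Euclidean distance 1 are adjacent."""
    edges = []
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if i < m:
                edges.append(((i, j), (i + 1, j)))
            if j < n:
                edges.append(((i, j), (i, j + 1)))
    return edges

def checkerboard_bipartite(edges):
    """Verify the checkerboard 2-coloring: every edge joins vertices
    whose coordinate sums have opposite parity."""
    return all((u[0] + u[1]) % 2 != (v[0] + v[1]) % 2 for u, v in edges)
```

For instance, the 1 × n grid comes out as a path with n − 1 edges, matching the special case noted above.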
Every planar graph H is a minor of the h × h grid, where h = 2 | V ( H ) | + 4 | E ( H ) | {\displaystyle h=2|V(H)|+4|E(H)|} . Grid graphs are fundamental objects in the theory of graph minors because of the grid exclusion theorem. They play a major role in bidimensionality theory. == Other kinds == A triangular grid graph is a graph that corresponds to a triangular grid. A Hanan grid graph for a finite set of points in the plane is produced by the grid obtained by intersections of all vertical and horizontal lines through each point of the set. The rook's graph (the graph that represents all legal moves of the rook chess piece on a chessboard) is also sometimes called the lattice graph, although this graph is different from the lattice graph described here because all points in one row or column are adjacent. The valid moves of the fairy chess piece the wazir form a square lattice graph. == See also == Lattice path Pick's theorem Integer triangles in a 2D lattice Regular graph == References ==
Wikipedia/Lattice_graph
Informally, the reconstruction conjecture in graph theory says that graphs are determined uniquely by their subgraphs. It is due to Kelly and Ulam. == Formal statements == Given a graph G = ( V , E ) {\displaystyle G=(V,E)} , a vertex-deleted subgraph of G {\displaystyle G} is a subgraph formed by deleting exactly one vertex from G {\displaystyle G} . By definition, it is an induced subgraph of G {\displaystyle G} . For a graph G {\displaystyle G} , the deck of G, denoted D ( G ) {\displaystyle D(G)} , is the multiset of isomorphism classes of all vertex-deleted subgraphs of G {\displaystyle G} . Each graph in D ( G ) {\displaystyle D(G)} is called a card. Two graphs that have the same deck are said to be hypomorphic. With these definitions, the conjecture can be stated as: Reconstruction Conjecture: Any two hypomorphic graphs on at least three vertices are isomorphic. (The requirement that the graphs have at least three vertices is necessary because both graphs on two vertices have the same decks.) Harary suggested a stronger version of the conjecture: Set Reconstruction Conjecture: Any two graphs on at least four vertices with the same sets of vertex-deleted subgraphs are isomorphic. Given a graph G = ( V , E ) {\displaystyle G=(V,E)} , an edge-deleted subgraph of G {\displaystyle G} is a subgraph formed by deleting exactly one edge from G {\displaystyle G} . For a graph G {\displaystyle G} , the edge-deck of G, denoted E D ( G ) {\displaystyle ED(G)} , is the multiset of all isomorphism classes of edge-deleted subgraphs of G {\displaystyle G} . Each graph in E D ( G ) {\displaystyle ED(G)} is called an edge-card. Edge Reconstruction Conjecture: (Harary, 1964) Any two graphs with at least four edges and having the same edge-decks are isomorphic. == Recognizable properties == In context of the reconstruction conjecture, a graph property is called recognizable if one can determine the property from the deck of a graph. 
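Recognizability arguments of this kind can be checked computationally. The minimal sketch below (illustrative helper names) computes the edge count q_i of each card of a small graph and then recovers |E(G)| and the degree sequence from those counts alone, using the identities |E(G)| = (Σ q_i)/(n − 2) and deg(v_i) = |E(G)| − q_i derived in this section.

```python
def deck_edge_counts(n, edges):
    """q_i = number of edges of the i-th card (the subgraph of a graph on
    vertices 0..n-1 obtained by deleting vertex i)."""
    return [sum(1 for e in edges if i not in e) for i in range(n)]

def recover_edge_count(qs):
    """|E(G)| = (sum of q_i) / (n - 2), where n = |deck| = |V(G)|."""
    n = len(qs)
    return sum(qs) // (n - 2)

def recover_degree_sequence(qs):
    """deg(v_i) = |E(G)| - q_i for each card i, returned sorted."""
    m = recover_edge_count(qs)
    return sorted(m - q for q in qs)

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # the 5-cycle
qs = deck_edge_counts(5, c5)
```

For the 5-cycle every card is a path with 3 edges, and the formulas correctly return 5 edges and the all-2s degree sequence.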
The following properties of graphs are recognizable: Order of the graph – The order of a graph G {\displaystyle G} , | V ( G ) | {\displaystyle |V(G)|} is recognizable from D ( G ) {\displaystyle D(G)} as the multiset D ( G ) {\displaystyle D(G)} contains each subgraph of G {\displaystyle G} created by deleting one vertex of G {\displaystyle G} . Hence | V ( G ) | = | D ( G ) | {\displaystyle |V(G)|=|D(G)|} . Number of edges of the graph – The number of edges in a graph G {\displaystyle G} with n {\displaystyle n} vertices, | E ( G ) | {\displaystyle |E(G)|} is recognizable. First note that each edge of G {\displaystyle G} occurs in n − 2 {\displaystyle n-2} members of D ( G ) {\displaystyle D(G)} . This is true by the definition of D ( G ) {\displaystyle D(G)} which ensures that each edge is included every time that each of the vertices it is incident with is included in a member of D ( G ) {\displaystyle D(G)} , so an edge will occur in every member of D ( G ) {\displaystyle D(G)} except for the two in which its endpoints are deleted. Hence, | E ( G ) | = ∑ q i n − 2 {\displaystyle |E(G)|=\sum {\frac {q_{i}}{n-2}}} where q i {\displaystyle q_{i}} is the number of edges in the ith member of D ( G ) {\displaystyle D(G)} . Degree sequence – The degree sequence of a graph G {\displaystyle G} is recognizable because the degree of every vertex is recognizable. To find the degree of a vertex v i {\displaystyle v_{i}} —the vertex absent from the ith member of D ( G ) {\displaystyle D(G)} —we examine the graph created by deleting it, G i {\displaystyle G_{i}} . This graph contains all of the edges not incident with v i {\displaystyle v_{i}} , so if q i {\displaystyle q_{i}} is the number of edges in G i {\displaystyle G_{i}} , then | E ( G ) | − q i = deg ⁡ ( v i ) {\displaystyle |E(G)|-q_{i}=\deg(v_{i})} . If we can tell the degree of every vertex in the graph, we can tell the degree sequence of the graph.
(Vertex-)Connectivity – By definition, a graph is n-vertex-connected when deleting any vertex creates an (n − 1)-vertex-connected graph; thus, if every card is an (n − 1)-vertex-connected graph, we know the original graph was n-vertex-connected. We can also determine whether the original graph was connected, as this is equivalent to at least two of the cards G_i being connected. Tutte polynomial Characteristic polynomial Planarity The number of spanning trees in a graph Chromatic polynomial Being a perfect graph or an interval graph, or certain other subclasses of perfect graphs == Verification == Both the reconstruction and set reconstruction conjectures have been verified for all graphs with at most 13 vertices by Brendan McKay. In a probabilistic sense, it has been shown by Béla Bollobás that almost all graphs are reconstructible: the probability that a randomly chosen graph on n vertices is not reconstructible goes to 0 as n goes to infinity. In fact, not only are almost all graphs reconstructible, but the entire deck is not necessary to reconstruct them: almost all graphs have the property that there exist three cards in their deck that uniquely determine the graph. === Reconstructible graph families === The conjecture has been verified for a number of infinite classes of graphs (and, trivially, their complements). Regular graphs – Regular graphs are reconstructible by direct application of some of the facts that can be recognized from the deck of a graph. Given an n-regular graph G and its deck D(G), we can recognize that the deck is of a regular graph by recognizing its degree sequence. Let us now examine one member of the deck D(G), G_i.
This graph contains some number of vertices of degree n and exactly n vertices of degree n − 1 (the former neighbours of the deleted vertex). We can add a new vertex and connect it to the n vertices of degree n − 1 to recover an n-regular graph isomorphic to the one we started with. Therefore, all regular graphs are reconstructible from their decks. A particular type of regular graph of interest is the complete graph. Trees Disconnected graphs Unit interval graphs Separable graphs without end vertices Maximal planar graphs Maximal outerplanar graphs Outerplanar graphs Critical blocks == Reduction == The reconstruction conjecture is true if all 2-connected graphs are reconstructible. == Duality == The vertex reconstruction conjecture obeys a duality: if G can be reconstructed from its vertex deck D(G), then its complement G′ can be reconstructed from D(G′) as follows: start with D(G′), take the complement of every card in it to get D(G), use this to reconstruct G, then take the complement again to get G′. Edge reconstruction does not obey any such duality: indeed, for some classes of edge-reconstructible graphs it is not known if their complements are edge-reconstructible. == Other structures == It has been shown that the following are not in general reconstructible: Digraphs: Infinite families of non-reconstructible digraphs are known, including tournaments (Stockmeyer) and non-tournaments (Stockmeyer). A tournament is reconstructible if it is not strongly connected. A weaker version of the reconstruction conjecture has been conjectured for digraphs; see the new digraph reconstruction conjecture. Hypergraphs (Kocay). Infinite graphs.
If T is the tree in which every vertex has countably infinite degree, then the union of two disjoint copies of T is hypomorphic, but not isomorphic, to T (Fisher). Locally finite graphs, which are graphs where every vertex has finite degree. The question of reconstructibility for locally finite infinite trees (the Harary–Schwenk–Scott conjecture from 1972) was a longstanding open problem until 2017, when a non-reconstructible tree of maximum degree 3 was found by Bowler et al. == See also == New digraph reconstruction conjecture Partial symmetry == Further reading == For further information on this topic, see the survey by Nash-Williams. == References ==
Wikipedia/Reconstruction_conjecture
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges. == Symbols == Square brackets [ ] G[S] is the induced subgraph of a graph G for vertex subset S. Prime symbol ' The prime symbol is often used to modify notation for graph invariants so that it applies to the line graph instead of the given graph. For instance, α(G) is the independence number of a graph; α′(G) is the matching number of the graph, which equals the independence number of its line graph. Similarly, χ(G) is the chromatic number of a graph; χ′(G) is the chromatic index of the graph, which equals the chromatic number of its line graph. == A == absorbing An absorbing set A of a directed graph G is a set of vertices such that for any vertex v ∈ G ∖ A, there is an edge from v towards a vertex of A. achromatic The achromatic number of a graph is the maximum number of colors in a complete coloring. acyclic 1. A graph is acyclic if it has no cycles. An undirected acyclic graph is the same thing as a forest. An acyclic directed graph, which is a digraph without directed cycles, is often called a directed acyclic graph, especially in computer science. 2. An acyclic coloring of an undirected graph is a proper coloring in which every two color classes induce a forest. adjacency matrix The adjacency matrix of a graph is a matrix whose rows and columns are both indexed by vertices of the graph, with a one in the cell for row i and column j when vertices i and j are adjacent, and a zero otherwise. adjacent 1. The relation between two vertices that are both endpoints of the same edge. 2. The relation between two distinct edges that share an end vertex. α For a graph G, α(G) (using the Greek letter alpha) is its independence number (see independent), and α′(G) is its matching number (see matching).
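As an illustrative aside (not part of the glossary), the prime-symbol convention α′(G) = α(L(G)) can be checked by brute force on a tiny graph; all helper names below are hypothetical:

```python
from itertools import combinations

def line_graph(edges):
    """L(G): one vertex per edge of G; two vertices are adjacent when
    the corresponding edges of G share an endpoint."""
    m = len(edges)
    lg_edges = [(i, j) for i in range(m) for j in range(i + 1, m)
                if set(edges[i]) & set(edges[j])]
    return m, lg_edges

def independence_number(n, edges):
    """alpha: largest set of pairwise non-adjacent vertices (brute force)."""
    adj = {frozenset(e) for e in edges}
    for r in range(n, -1, -1):
        for s in combinations(range(n), r):
            if all(frozenset(p) not in adj for p in combinations(s, 2)):
                return r

def matching_number(edges):
    """alpha': largest set of pairwise vertex-disjoint edges (brute force)."""
    for r in range(len(edges), -1, -1):
        for s in combinations(edges, r):
            ends = [v for e in s for v in e]
            if len(ends) == len(set(ends)):
                return r
```

For the path on four vertices, both quantities equal 2: the line graph of P4 is P3, whose independence number matches the matching number of P4.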
alternating In a graph with a matching, an alternating path is a path whose edges alternate between matched and unmatched edges. An alternating cycle is, similarly, a cycle whose edges alternate between matched and unmatched edges. An augmenting path is an alternating path that starts and ends at unsaturated vertices. A larger matching can be found as the symmetric difference of the matching and the augmenting path; a matching is maximum if and only if it has no augmenting path. antichain In a directed acyclic graph, a subset S of vertices that are pairwise incomparable, i.e., such that for any two distinct vertices x and y in S there is no directed path from x to y or from y to x. Inspired by the notion of antichains in partially ordered sets. anti-edge Synonym for non-edge, a pair of non-adjacent vertices. anti-triangle A three-vertex independent set, the complement of a triangle. apex 1. An apex graph is a graph in which one vertex can be removed, leaving a planar subgraph. The removed vertex is called the apex. A k-apex graph is a graph that can be made planar by the removal of k vertices. 2. Synonym for universal vertex, a vertex adjacent to all other vertices. arborescence Synonym for a rooted and directed tree; see tree. arc See edge. arrow An ordered pair of vertices, such as an edge in a directed graph. An arrow (x, y) has a tail x, a head y, and a direction from x to y; y is said to be the direct successor to x and x the direct predecessor to y. The arrow (y, x) is the inverted arrow of the arrow (x, y). articulation point A vertex in a connected graph whose removal would disconnect the graph. More generally, a vertex whose removal increases the number of components. -ary A k-ary tree is a rooted tree in which every internal vertex has no more than k children. A 1-ary tree is just a path.
A 2-ary tree is also called a binary tree, although that term more properly refers to 2-ary trees in which the children of each node are distinguished as being left or right children (with at most one of each type). A k-ary tree is said to be complete if every internal vertex has exactly k children. augmenting A special type of alternating path; see alternating. automorphism A graph automorphism is a symmetry of a graph, an isomorphism from the graph to itself. == B == bag One of the sets of vertices in a tree decomposition. balanced A bipartite or multipartite graph is balanced if each two subsets of its vertex partition have sizes within one of each other. ball A ball (also known as a neighborhood ball or distance ball) is the set of all vertices that are at most distance r from a vertex. More formally, for a given vertex v and radius r, the ball B(v,r) consists of all vertices whose shortest path distance to v is less than or equal to r. bandwidth The bandwidth of a graph G is the minimum, over all orderings of vertices of G, of the length of the longest edge (the number of steps in the ordering between its two endpoints). It is also one less than the size of the maximum clique in a proper interval completion of G, chosen to minimize the clique size. biclique Synonym for complete bipartite graph or complete bipartite subgraph; see complete. biconnected Usually a synonym for 2-vertex-connected, but sometimes includes K2 though it is not 2-connected. See connected; for biconnected components, see component. binding number The smallest possible ratio of the number of neighbors of a proper subset of vertices to the size of the subset. bipartite A bipartite graph is a graph whose vertices can be divided into two disjoint sets such that the vertices in one set are not connected to each other, but may be connected to vertices in the other set. 
Put another way, a bipartite graph is a graph with no odd cycles; equivalently, it is a graph that may be properly colored with two colors. Bipartite graphs are often written G = (U,V,E) where U and V are the subsets of vertices of each color. However, unless the graph is connected, it may not have a unique 2-coloring. biregular A biregular graph is a bipartite graph in which there are only two different vertex degrees, one for each set of the vertex bipartition. block 1. A block of a graph G is a maximal subgraph which is either an isolated vertex, a bridge edge, or a 2-connected subgraph. If a block is 2-connected, every pair of vertices in it belongs to a common cycle. Every edge of a graph belongs to exactly one block. 2. The block graph of a graph G is another graph whose vertices are the blocks of G, with an edge connecting two vertices when the corresponding blocks share an articulation point; that is, it is the intersection graph of the blocks of G. The block graph of any graph is a forest. 3. The block-cut (or block-cutpoint) graph of a graph G is a bipartite graph where one partite set consists of the cut-vertices of G, and the other has a vertex b_i for each block B_i of G. When G is connected, its block-cutpoint graph is a tree. 4. A block graph (also called a clique tree if connected, and sometimes erroneously called a Husimi tree) is a graph all of whose blocks are complete graphs. A forest is a block graph; so in particular the block graph of any graph is a block graph, and every block graph may be constructed as the block graph of a graph. bond A minimal cut-set: a set of edges whose removal disconnects the graph, for which no proper subset has the same property. book 1. A book, book graph, or triangular book is a complete tripartite graph K1,1,n; a collection of n triangles joined at a shared edge. 2.
Another type of graph, also called a book, or a quadrilateral book, is a collection of 4-cycles joined at a shared edge; the Cartesian product of a star with an edge. 3. A book embedding is an embedding of a graph onto a topological book, a space formed by joining a collection of half-planes along a shared line. Usually, the vertices of the embedding are required to be on the line, which is called the spine of the embedding, and the edges of the embedding are required to lie within a single half-plane, one of the pages of the book. boundary 1. In a graph embedding, a boundary walk is the subgraph consisting of all vertices and edges incident to a face. bramble A bramble is a collection of mutually touching connected subgraphs, where two subgraphs touch if they share a vertex or each includes one endpoint of an edge. The order of a bramble is the smallest size of a set of vertices that has a nonempty intersection with all of the subgraphs. The treewidth of a graph is the maximum order of any of its brambles. branch A path of degree-two vertices, ending at vertices whose degree is unequal to two. branch-decomposition A branch-decomposition of G is a hierarchical clustering of the edges of G, represented by an unrooted binary tree with its leaves labeled by the edges of G. The width of a branch-decomposition is the maximum, over edges e of this binary tree, of the number of shared vertices between the subgraphs determined by the edges of G in the two subtrees separated by e. The branchwidth of G is the minimum width of any branch-decomposition of G. branchwidth See branch-decomposition. bridge 1. A bridge, isthmus, or cut edge is an edge whose removal would disconnect the graph. A bridgeless graph is one that has no bridges; equivalently, a 2-edge-connected graph. 2. A bridge of a subgraph H is a maximal connected subgraph separated from the rest of the graph by H.
That is, it is a maximal subgraph that is edge-disjoint from H and in which each two vertices and edges belong to a path that is internally disjoint from H. H may be a set of vertices. A chord is a one-edge bridge. In planarity testing, H is a cycle and a peripheral cycle is a cycle with at most one bridge; it must be a face boundary in any planar embedding of its graph. 3. A bridge of a cycle can also mean a path that connects two vertices of a cycle but is shorter than either of the paths in the cycle connecting the same two vertices. A bridged graph is a graph in which every cycle of four or more vertices has a bridge. bridgeless A bridgeless or isthmus-free graph is a graph that has no bridge edges (i.e., isthmi); that is, each connected component is a 2-edge-connected graph. butterfly 1. The butterfly graph has five vertices and six edges; it is formed by two triangles that share a vertex. 2. The butterfly network is a graph used as a network architecture in distributed computing, closely related to the cube-connected cycles. == C == C Cn is an n-vertex cycle graph; see cycle. cactus A cactus graph, cactus tree, cactus, or Husimi tree is a connected graph in which each edge belongs to at most one cycle. Its blocks are cycles or single edges. If, in addition, each vertex belongs to at most two blocks, then it is called a Christmas cactus. cage A cage is a regular graph with the smallest possible order for its girth. canonical canonization A canonical form of a graph is an invariant such that two graphs have equal invariants if and only if they are isomorphic. Canonical forms may also be called canonical invariants or complete invariants, and are sometimes defined only for the graphs within a particular family of graphs. Graph canonization is the process of computing a canonical form. card A graph formed from a given graph by deleting one vertex, especially in the context of the reconstruction conjecture. See also deck, the multiset of all cards of a graph. 
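The canonical form entry above can be illustrated by an intentionally naive complete invariant for tiny graphs (a sketch; practical canonization algorithms are far more sophisticated):

```python
from itertools import permutations

def canonical_form(n, edges):
    """A complete invariant for tiny graphs: the lexicographically
    smallest sorted edge list over all relabelings of the vertices.
    Two graphs on n vertices are isomorphic iff their forms are equal."""
    best = None
    for p in permutations(range(n)):
        # Relabel every edge by p, normalize, and sort.
        form = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        if best is None or form < best:
            best = form
    return best
```

Two different labelings of the same path on three vertices yield the same canonical form, while the triangle yields a different one; the brute force over all n! relabelings is exactly why real canonization is a hard algorithmic problem.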
carving width Carving width is a notion of graph width analogous to branchwidth, but using hierarchical clusterings of vertices instead of hierarchical clusterings of edges. caterpillar A caterpillar tree or caterpillar is a tree in which the internal nodes induce a path. center The center of a graph is the set of vertices of minimum eccentricity. centroid A centroid of a tree is a vertex v such that if rooted at v, no other vertex has subtree size greater than half the size of the tree. chain 1. Synonym for walk. 2. When applying methods from algebraic topology to graphs, an element of a chain complex, namely a set of vertices or a set of edges. Cheeger constant See expansion. cherry A cherry is a path on three vertices. χ χ(G) (using the Greek letter chi) is the chromatic number of G and χ ′(G) is its chromatic index; see chromatic and coloring. child In a rooted tree, a child of a vertex v is a neighbor of v along an outgoing edge, one that is directed away from the root. chord chordal 1. A chord of a cycle is an edge that does not belong to the cycle, for which both endpoints belong to the cycle. 2. A chordal graph is a graph in which every cycle of four or more vertices has a chord, so the only induced cycles are triangles. 3. A strongly chordal graph is a chordal graph in which every cycle of length six or more has an odd chord. 4. A chordal bipartite graph is not chordal (unless it is a forest); it is a bipartite graph in which every cycle of six or more vertices has a chord, so the only induced cycles are 4-cycles. 5. A chord of a circle is a line segment connecting two points on the circle; the intersection graph of a collection of chords is called a circle graph. chromatic Having to do with coloring; see color. Chromatic graph theory is the theory of graph coloring. The chromatic number χ(G) is the minimum number of colors needed in a proper coloring of G. 
χ′(G) is the chromatic index of G, the minimum number of colors needed in a proper edge coloring of G. choosable choosability A graph is k-choosable if it has a list coloring whenever each vertex has a list of k available colors. The choosability of the graph is the smallest k for which it is k-choosable. circle A circle graph is the intersection graph of chords of a circle. circuit A circuit may refer to a closed trail or an element of the cycle space (an Eulerian spanning subgraph). The circuit rank of a graph is the dimension of its cycle space. circumference The circumference of a graph is the length of its longest simple cycle. The graph is Hamiltonian if and only if its circumference equals its order. class 1. A class of graphs or family of graphs is a (usually infinite) collection of graphs, often defined as the graphs having some specific property. The word "class" is used rather than "set" because, unless special restrictions are made (such as restricting the vertices to be drawn from a particular set, and defining edges to be sets of two vertices) classes of graphs are usually not sets when formalized using set theory. 2. A color class of a colored graph is the set of vertices or edges having one particular color. 3. In the context of Vizing's theorem, on edge coloring simple graphs, a graph is said to be of class one if its chromatic index equals its maximum degree, and class two if its chromatic index equals one plus its maximum degree. According to Vizing's theorem, all simple graphs are either of class one or class two. claw A claw is a tree with one internal vertex and three leaves, or equivalently the complete bipartite graph K1,3. A claw-free graph is a graph that does not have an induced subgraph that is a claw. clique A clique is a set of mutually adjacent vertices (or the complete subgraph induced by that set).
Sometimes a clique is defined as a maximal set of mutually adjacent vertices (or maximal complete subgraph), one that is not part of any larger such set (or subgraph). A k-clique is a clique of order k. The clique number ω(G) of a graph G is the order of its largest clique. The clique graph of a graph G is the intersection graph of the maximal cliques in G. See also biclique, a complete bipartite subgraph. clique tree A synonym for a block graph. clique-width The clique-width of a graph G is the minimum number of distinct labels needed to construct G by operations that create a labeled vertex, form the disjoint union of two labeled graphs, add an edge connecting all pairs of vertices with given labels, or relabel all vertices with a given label. The graphs of clique-width at most 2 are exactly the cographs. closed 1. A closed neighborhood is one that includes its central vertex; see neighbourhood. 2. A closed walk is one that starts and ends at the same vertex; see walk. 3. A graph is transitively closed if it equals its own transitive closure; see transitive. 4. A graph property is closed under some operation on graphs if, whenever the argument or arguments to the operation have the property, then so does the result. For instance, hereditary properties are closed under induced subgraphs; monotone properties are closed under subgraphs; and minor-closed properties are closed under minors. closure 1. For the transitive closure of a directed graph, see transitive. 2. A closure of a directed graph is a set of vertices that have no outgoing edges to vertices outside the closure. For instance, a sink is a one-vertex closure. The closure problem is the problem of finding a closure of minimum or maximum weight. co- This prefix has various meanings usually involving complement graphs. 
For instance, a cograph is a graph produced by operations that include complementation; a cocoloring is a coloring in which each color class induces either an independent set (as in proper coloring) or a clique (as in a coloring of the complement). color coloring 1. A graph coloring is a labeling of the vertices of a graph by elements from a given set of colors, or equivalently a partition of the vertices into subsets, called "color classes", each of which is associated with one of the colors. 2. Some authors use "coloring", without qualification, to mean a proper coloring, one that assigns different colors to the endpoints of each edge. In graph coloring, the goal is to find a proper coloring that uses as few colors as possible; for instance, bipartite graphs are the graphs that have colorings with only two colors, and the four color theorem states that every planar graph can be colored with at most four colors. A graph is said to be k-colored if it has been (properly) colored with k colors, and k-colorable or k-chromatic if this is possible. 3. Many variations of coloring have been studied, including edge coloring (coloring edges so that no two edges with the same endpoint share a color), list coloring (proper coloring with each vertex restricted to a subset of the available colors), acyclic coloring (every 2-colored subgraph is acyclic), co-coloring (every color class induces an independent set or a clique), complete coloring (every two color classes share an edge), and total coloring (both edges and vertices are colored). 4. The coloring number of a graph is one plus the degeneracy. It is so called because applying a greedy coloring algorithm to a degeneracy ordering of the graph uses at most this many colors. comparability An undirected graph is a comparability graph if its vertices are the elements of a partially ordered set and two vertices are adjacent when they are comparable in the partial order.
Equivalently, a comparability graph is a graph that has a transitive orientation. Many other classes of graphs can be defined as the comparability graphs of special types of partial order. complement The complement graph Ḡ of a simple graph G is another graph on the same vertex set as G, with an edge for each two vertices that are not adjacent in G. complete 1. A complete graph is one in which every two vertices are adjacent: all edges that could exist are present. A complete graph with n vertices is often denoted Kn. A complete bipartite graph is one in which every two vertices on opposite sides of the partition of vertices are adjacent. A complete bipartite graph with a vertices on one side of the partition and b vertices on the other side is often denoted Ka,b. The same terminology and notation have also been extended to complete multipartite graphs, graphs in which the vertices are divided into more than two subsets and every pair of vertices in different subsets are adjacent; if the numbers of vertices in the subsets are a, b, c, ... then this graph is denoted Ka, b, c, .... 2. A completion of a given graph is a supergraph that has some desired property. For instance, a chordal completion is a supergraph that is a chordal graph. 3. A complete matching is a synonym for a perfect matching; see matching. 4. A complete coloring is a proper coloring in which each pair of colors is used for the endpoints of at least one edge. Every coloring with a minimum number of colors is complete, but there may exist complete colorings with larger numbers of colors. The achromatic number of a graph is the maximum number of colors in a complete coloring. 5. A complete invariant of a graph is a synonym for a canonical form, an invariant that has different values for non-isomorphic graphs. component A connected component of a graph is a maximal connected subgraph.
The term is also used for maximal subgraphs or subsets of a graph's vertices that have some higher order of connectivity, including biconnected components, triconnected components, and strongly connected components. condensation The condensation of a directed graph G is a directed acyclic graph with one vertex for each strongly connected component of G, and an edge connecting pairs of components that contain the two endpoints of at least one edge in G. cone A graph that contains a universal vertex. connect Cause to be connected. connected A connected graph is one in which each pair of vertices forms the endpoints of a path. Higher forms of connectivity include strong connectivity in directed graphs (for each two vertices there are paths from one to the other in both directions), k-vertex-connected graphs (removing fewer than k vertices cannot disconnect the graph), and k-edge-connected graphs (removing fewer than k edges cannot disconnect the graph). connected component Synonym for component. contraction Edge contraction is an elementary operation that removes an edge from a graph while merging the two vertices that it previously joined. Vertex contraction (sometimes called vertex identification) is similar, but the two vertices are not necessarily connected by an edge. Path contraction occurs upon the set of edges in a path that contract to form a single edge between the endpoints of the path. The inverse of edge contraction is vertex splitting. converse The converse graph is a synonym for the transpose graph; see transpose. core 1. A k-core is the induced subgraph formed by removing all vertices of degree less than k, and all vertices whose degree becomes less than k after earlier removals. See degeneracy. 2. A core is a graph G such that every graph homomorphism from G to itself is an isomorphism. 3. The core of a graph G is a minimal graph H such that there exist homomorphisms from G to H and vice versa. H is unique up to isomorphism. 
It can be represented as an induced subgraph of G, and is a core in the sense that all of its self-homomorphisms are isomorphisms. 4. In the theory of graph matchings, the core of a graph is an aspect of its Dulmage–Mendelsohn decomposition, formed as the union of all maximum matchings. cotree 1. The complement of a spanning tree. 2. A rooted tree structure used to describe a cograph, in which each cograph vertex is a leaf of the tree, each internal node of the tree is labeled with 0 or 1, and two cograph vertices are adjacent if and only if their lowest common ancestor in the tree is labeled 1. cover A vertex cover is a set of vertices incident to every edge in a graph. An edge cover is a set of edges incident to every vertex in a graph. A set of subgraphs of a graph covers that graph if its union – taken vertex-wise and edge-wise – is equal to the graph. critical A critical graph for a given property is a graph that has the property but such that every subgraph formed by deleting a single vertex does not have the property. For instance, a factor-critical graph is one that has a perfect matching (a 1-factor) for every vertex deletion, but (because it has an odd number of vertices) has no perfect matching itself. Compare hypo-, used for graphs which do not have a property but for which every one-vertex deletion does. cube cubic 1. Cube graph, the eight-vertex graph of the vertices and edges of a cube. 2. Hypercube graph, a higher-dimensional generalization of the cube graph. 3. Folded cube graph, formed from a hypercube by adding a matching connecting opposite vertices. 4. Halved cube graph, the half-square of a hypercube graph. 5. Partial cube, a distance-preserving subgraph of a hypercube. 6. The cube of a graph G is the graph power G3. 7. Cubic graph, another name for a 3-regular graph, one in which each vertex has three incident edges. 8. Cube-connected cycles, a cubic graph formed by replacing each vertex of a hypercube by a cycle. 
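To illustrate the cover entry above, here is a brute-force minimum vertex cover for tiny graphs (the helper name is hypothetical); a classical identity of Gallai says the complement of a vertex cover is an independent set:

```python
from itertools import combinations

def min_vertex_cover(n, edges):
    """Smallest set of vertices incident to every edge (brute force)."""
    for r in range(n + 1):
        for s in combinations(range(n), r):
            chosen = set(s)
            # A valid cover touches at least one endpoint of each edge.
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen
```

On the path 0-1-2-3 the minimum cover has size 2 (for example {0, 2}), and the uncovered vertices form an independent set.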
cut cut-set A cut is a partition of the vertices of a graph into two subsets, or the set (also known as a cut-set) of edges that span such a partition, if that set is non-empty. An edge is said to span the partition if it has endpoints in both subsets. Thus, the removal of a cut-set from a connected graph disconnects it. cut point See articulation point. cut space The cut space of a graph is a GF(2)-vector space having the cut-sets of the graph as its elements and symmetric difference of sets as its vector addition operation. cycle 1. A cycle may be either a kind of graph or a kind of walk. As a walk it may either be a closed walk (also called a tour) or, more usually, a closed walk without repeated vertices (and consequently without repeated edges), also called a simple cycle. In the latter case it is usually regarded as a graph, i.e., the choices of first vertex and direction are usually considered unimportant; that is, cyclic permutations and reversals of the walk produce the same cycle. Important special types of cycle include Hamiltonian cycles, induced cycles, peripheral cycles, and the shortest cycle, which defines the girth of a graph. A k-cycle is a cycle of length k; for instance a 2-cycle is a digon and a 3-cycle is a triangle. A cycle graph is a graph that is itself a simple cycle; a cycle graph with n vertices is commonly denoted Cn. 2. The cycle space is a vector space generated by the simple cycles in a graph, often over the field of 2 elements but also over other fields. == D == DAG Abbreviation for directed acyclic graph, a directed graph without any directed cycles. deck The multiset of graphs formed from a single graph G by deleting a single vertex in all possible ways, especially in the context of the reconstruction conjecture. An edge-deck is formed in the same way by deleting a single edge in all possible ways. The graphs in a deck are also called cards.
See also critical (graphs that have a property that is not held by any card) and hypo- (graphs that do not have a property that is held by all cards). decomposition See tree decomposition, path decomposition, or branch-decomposition. degenerate degeneracy A k-degenerate graph is an undirected graph in which every induced subgraph has minimum degree at most k. The degeneracy of a graph is the smallest k for which it is k-degenerate. A degeneracy ordering is an ordering of the vertices such that each vertex has minimum degree in the induced subgraph of it and all later vertices; in a degeneracy ordering of a k-degenerate graph, every vertex has at most k later neighbours. Degeneracy is also known as the k-core number, width, and linkage, and one plus the degeneracy is also called the coloring number or Szekeres–Wilf number. k-degenerate graphs have also been called k-inductive graphs. degree 1. The degree of a vertex in a graph is its number of incident edges. The degree of a graph G (or its maximum degree) is the maximum of the degrees of its vertices, often denoted Δ(G); the minimum degree of G is the minimum of its vertex degrees, often denoted δ(G). Degree is sometimes called valency; the degree of v in G may be denoted dG(v), d(G), or deg(v). The total degree is the sum of the degrees of all vertices; by the handshaking lemma it is an even number. The degree sequence is the collection of degrees of all vertices, in sorted order from largest to smallest. In a directed graph, one may distinguish the in-degree (number of incoming edges) and out-degree (number of outgoing edges). 2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor. Δ, δ Δ(G) (using the Greek letter delta) is the maximum degree of a vertex in G, and δ(G) is the minimum degree; see degree. density In a graph of n nodes, the density is the ratio of the number of edges of the graph to the number of edges in a complete graph on n nodes. 
See dense graph. depth The depth of a node in a rooted tree is the number of edges in the path from the root to the node. For instance, the depth of the root is 0 and the depth of any one of its adjacent nodes is 1. It is the level of a node minus one. Note, however, that some authors instead use depth as a synonym for the level of a node. diameter The diameter of a connected graph is the maximum length of a shortest path. That is, it is the maximum of the distances between pairs of vertices in the graph. If the graph has weights on its edges, then its weighted diameter measures path length by the sum of the edge weights along a path, while the unweighted diameter measures path length by the number of edges. For disconnected graphs, definitions vary: the diameter may be defined as infinite, or as the largest diameter of a connected component, or it may be undefined. diamond The diamond graph is an undirected graph with four vertices and five edges. diconnected Strongly connected. (Not to be confused with disconnected) digon A digon is a simple cycle of length two in a directed graph or a multigraph. Digons cannot occur in simple undirected graphs as they require repeating the same edge twice, which violates the definition of simple. digraph Synonym for directed graph. dipath See directed path. direct predecessor The tail of a directed edge whose head is the given vertex. direct successor The head of a directed edge whose tail is the given vertex. directed A directed graph is one in which the edges have a distinguished direction, from one vertex to another. In a mixed graph, a directed edge is again one that has a distinguished direction; directed edges may also be called arcs or arrows. directed arc See arrow. directed edge See arrow. directed line See arrow. directed path A path in which all the edges have the same direction. If a directed path leads from vertex x to vertex y, x is a predecessor of y, y is a successor of x, and y is said to be reachable from x. 
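The distance and diameter definitions above translate directly into breadth-first search. A minimal Python sketch (graphs as adjacency-list dicts; the helper names `bfs_distances` and `diameter` are my own, and the code assumes a connected graph):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph (BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Maximum over all vertices of the eccentricity, i.e. the longest
    shortest path. Assumes connectivity; for a disconnected graph the
    diameter is often taken to be infinite or left undefined."""
    return max(max(bfs_distances(adj, v).values()) for v in adj)

# A path on four vertices 0-1-2-3 has diameter 3;
# a 4-cycle has diameter 2 (opposite vertices are two steps apart).
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

Running BFS from every vertex costs O(V·(V+E)) overall, which is the standard way to compute the unweighted diameter exactly.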
direction 1. The asymmetric relation between two adjacent vertices in a graph, represented as an arrow. 2. The asymmetric relation between two vertices in a directed path. disconnect Cause to be disconnected. disconnected Not connected. disjoint 1. Two subgraphs are edge disjoint if they share no edges, and vertex disjoint if they share no vertices. 2. The disjoint union of two or more graphs is a graph whose vertex and edge sets are the disjoint unions of the corresponding sets. dissociation number A subset of vertices in a graph G is called a dissociation set if it induces a subgraph with maximum degree 1. The dissociation number of G is the size of a largest dissociation set. distance The distance between any two vertices in a graph is the length of the shortest path having the two vertices as its endpoints. domatic A domatic partition of a graph is a partition of the vertices into dominating sets. The domatic number of the graph is the maximum number of dominating sets in such a partition. dominating A dominating set is a set of vertices that includes or is adjacent to every vertex in the graph; not to be confused with a vertex cover, a vertex set that is incident to all edges in the graph. Important special types of dominating sets include independent dominating sets (dominating sets that are also independent sets) and connected dominating sets (dominating sets that induce connected subgraphs). A single-vertex dominating set may also be called a universal vertex. The domination number of a graph is the number of vertices in the smallest dominating set. dual A dual graph of a plane graph G is a graph that has a vertex for each face of G. == E == E E(G) is the edge set of G; see edge set. ear An ear of a graph is a path whose endpoints may coincide but in which otherwise there are no repetitions of vertices or edges.
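The dominating-set condition ("includes or is adjacent to every vertex") is easy to check, and the domination number can be found by exhaustive search on small graphs. A sketch (helper names are mine; the search is exponential and only meant to illustrate the definition):

```python
from itertools import combinations

def is_dominating(adj, s):
    """True if every vertex is in s or has a neighbour in s."""
    return all(v in s or any(u in s for u in adj[v]) for v in adj)

def domination_number(adj):
    """Size of a smallest dominating set, by brute force over all
    vertex subsets in increasing size (fine only for tiny graphs)."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_dominating(adj, set(cand)):
                return k

# In the star K_{1,3} the centre alone dominates every vertex.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

Note the contrast with a vertex cover mentioned above: in the star, the centre is both a dominating set and a vertex cover, but in general the two notions differ.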
ear decomposition An ear decomposition is a partition of the edges of a graph into a sequence of ears, each of whose endpoints (after the first one) belong to a previous ear and each of whose interior points do not belong to any previous ear. An open ear is a simple path (an ear without repeated vertices), and an open ear decomposition is an ear decomposition in which each ear after the first is open; a graph has an open ear decomposition if and only if it is biconnected. An ear is odd if it has an odd number of edges, and an odd ear decomposition is an ear decomposition in which each ear is odd; a graph has an odd ear decomposition if and only if it is factor-critical. eccentricity The eccentricity of a vertex is the maximum distance from it to any other vertex. edge An edge is (together with vertices) one of the two basic units out of which graphs are constructed. Each edge has two (or in hypergraphs, more) vertices to which it is attached, called its endpoints. Edges may be directed or undirected; undirected edges are also called lines and directed edges are also called arcs or arrows. In an undirected simple graph, an edge may be represented as the set of its vertices, and in a directed simple graph it may be represented as an ordered pair of its vertices. An edge that connects vertices x and y is sometimes written xy. edge cut A set of edges whose removal disconnects the graph. A one-edge cut is called a bridge, isthmus, or cut edge. edge set The set of edges of a given graph G, sometimes denoted by E(G). edgeless graph The edgeless graph or totally disconnected graph on a given set of vertices is the graph that has no edges. It is sometimes called the empty graph, but this term can also refer to a graph with no vertices.
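A one-edge cut (bridge) can be found naively by deleting each edge in turn and testing connectivity, a direct restatement of the definition above. A sketch, with my own helper names; linear-time bridge-finding algorithms based on DFS exist, but this O(E·(V+E)) version keeps the definition visible:

```python
def connected(adj, ignore=None):
    """DFS connectivity test, optionally ignoring one undirected edge
    given as a frozenset of its two endpoints."""
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if ignore and frozenset((u, v)) == ignore:
                continue
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(adj)

def bridges(adj):
    """All bridges of a connected graph: edges whose removal disconnects it."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    return [tuple(sorted(e)) for e in sorted(edges, key=sorted)
            if not connected(adj, ignore=e)]

# A triangle {0,1,2} with a pendant vertex 3: only edge 2-3 is a bridge,
# since every triangle edge lies on a cycle.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```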
embedding A graph embedding is a topological representation of a graph as a subset of a topological space with each vertex represented as a point, each edge represented as a curve having the endpoints of the edge as endpoints of the curve, and no other intersections between vertices or edges. A planar graph is a graph that has such an embedding onto the Euclidean plane, and a toroidal graph is a graph that has such an embedding onto a torus. The genus of a graph is the minimum possible genus of a two-dimensional manifold onto which it can be embedded. empty graph 1. An edgeless graph on a nonempty set of vertices. 2. The order-zero graph, a graph with no vertices and no edges. end An end of an infinite graph is an equivalence class of rays, where two rays are equivalent if there is a third ray that includes infinitely many vertices from both of them. endpoint One of the two vertices joined by a given edge, or one of the first or last vertex of a walk, trail or path. The first endpoint of a given directed edge is called the tail and the second endpoint is called the head. enumeration Graph enumeration is the problem of counting the graphs in a given class of graphs, as a function of their order. More generally, enumeration problems can refer either to problems of counting a certain class of combinatorial objects (such as cliques, independent sets, colorings, or spanning trees), or of algorithmically listing all such objects. Eulerian An Eulerian path is a walk that uses every edge of a graph exactly once. An Eulerian circuit (also called an Eulerian cycle or an Euler tour) is a closed walk that uses every edge exactly once. An Eulerian graph is a graph that has an Eulerian circuit. For an undirected graph, this means that the graph is connected and every vertex has even degree. For a directed graph, this means that the graph is strongly connected and every vertex has in-degree equal to the out-degree. 
In some cases, the connectivity requirement is loosened, and a graph meeting only the degree requirements is called Eulerian. even Divisible by two; for instance, an even cycle is a cycle whose length is even. expander An expander graph is a graph whose edge expansion, vertex expansion, or spectral expansion is bounded away from zero. expansion 1. The edge expansion, isoperimetric number, or Cheeger constant of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of edges leaving S to the number of vertices in S. 2. The vertex expansion, vertex isoperimetric number, or magnification of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside but adjacent to S to the number of vertices in S. 3. The unique neighbor expansion of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside S but adjacent to a unique vertex in S to the number of vertices in S. 4. The spectral expansion of a d-regular graph G is the spectral gap between the largest eigenvalue d of its adjacency matrix and the second-largest eigenvalue. 5. A family of graphs has bounded expansion if all its r-shallow minors have a ratio of edges to vertices bounded by a function of r, and polynomial expansion if the function of r is a polynomial. == F == face In a plane graph or graph embedding, a connected component of the subset of the plane or surface of the embedding that is disjoint from the graph. For an embedding in the plane, all but one face will be bounded; the one exceptional face that extends to infinity is called the outer (or infinite) face. factor A factor of a graph is a spanning subgraph: a subgraph that includes all of the vertices of the graph. The term is primarily used in the context of regular subgraphs: a k-factor is a factor that is k-regular. In particular, a 1-factor is the same thing as a perfect matching.
A factor-critical graph is a graph for which deleting any one vertex produces a graph with a 1-factor. factorization A graph factorization is a partition of the edges of the graph into factors; a k-factorization is a partition into k-factors. For instance, a 1-factorization is an edge coloring with the additional property that each vertex is incident to an edge of each color. family A synonym for class. finite A graph is finite if it has a finite number of vertices and a finite number of edges. Many sources assume that all graphs are finite without explicitly saying so. A graph is locally finite if each vertex has a finite number of incident edges. An infinite graph is a graph that is not finite: it has infinitely many vertices, infinitely many edges, or both. first order The first order logic of graphs is a form of logic in which variables represent vertices of a graph, and there exists a binary predicate to test whether two vertices are adjacent. To be distinguished from second order logic, in which variables can also represent sets of vertices or edges. flap For a set of vertices X, an X-flap is a connected component of the induced subgraph formed by deleting X. The flap terminology is commonly used in the context of havens, functions that map small sets of vertices to their flaps. See also the bridge of a cycle, which is either a flap of the cycle vertices or a chord of the cycle. forbidden A forbidden graph characterization is a characterization of a family of graphs as being the graphs that do not have certain other graphs as subgraphs, induced subgraphs, or minors. If H is one of the graphs that does not occur as a subgraph, induced subgraph, or minor, then H is said to be forbidden. forcing graph A forcing graph is a graph H such that evaluating the subgraph density of H in the graphs of a graph sequence G(n) is sufficient to test whether that sequence is quasi-random.
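The factor-critical condition above can be verified on small graphs by brute force: delete each vertex and test for a perfect matching. A sketch (the backtracking matcher and helper names are my own; this is exponential, unlike Edmonds' polynomial-time matching algorithm):

```python
def has_perfect_matching(vertices, adj):
    """Backtracking test: can the given vertex subset be partitioned
    into pairs of adjacent vertices?"""
    if not vertices:
        return True
    vs = sorted(vertices)
    u, rest = vs[0], set(vs[1:])
    return any(v in rest and has_perfect_matching(rest - {v}, adj)
               for v in adj[u])

def factor_critical(adj):
    """True if deleting any single vertex leaves a graph with a 1-factor.
    Neighbours outside the surviving vertex set are ignored implicitly."""
    return all(has_perfect_matching(set(adj) - {v}, adj) for v in adj)

# The odd cycle C5 is factor-critical: deleting any vertex leaves a
# 4-vertex path, which has a perfect matching. The even cycle C4 is not,
# since deleting a vertex leaves an odd number of vertices.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
c4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
```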
forest A forest is an undirected graph without cycles (a disjoint union of unrooted trees), or a directed graph formed as a disjoint union of rooted trees. free edge An edge which is not in a matching. free vertex 1. A vertex not on a matched edge in a matching. 2. A vertex which has not been matched. Frucht 1. Robert Frucht 2. The Frucht graph, one of the two smallest cubic graphs with no nontrivial symmetries. 3. Frucht's theorem that every finite group is the group of symmetries of a finite graph. full Synonym for induced. functional graph A functional graph is a directed graph where every vertex has out-degree one. Equivalently, a functional graph is a maximal directed pseudoforest. == G == G A variable often used to denote a graph. genus The genus of a graph is the minimum genus of a surface onto which it can be embedded; see embedding. geodesic As a noun, a geodesic is a synonym for a shortest path. When used as an adjective, it means related to shortest paths or shortest path distances. giant In the theory of random graphs, a giant component is a connected component that contains a constant fraction of the vertices of the graph. In standard models of random graphs, there is typically at most one giant component. girth The girth of a graph is the length of its shortest cycle. graph The fundamental object of study in graph theory, a system of vertices connected in pairs by edges. Often subdivided into directed graphs or undirected graphs according to whether the edges have an orientation or not. Mixed graphs include both types of edges. greedy Produced by a greedy algorithm. For instance, a greedy coloring of a graph is a coloring produced by considering the vertices in some sequence and assigning each vertex the first available color. Grötzsch 1. Herbert Grötzsch 2. The Grötzsch graph, the smallest triangle-free graph requiring four colors in any proper coloring. 3.
Grötzsch's theorem that triangle-free planar graphs can always be colored with at most three colors. Grundy number 1. The Grundy number of a graph is the maximum number of colors produced by a greedy coloring, with a badly-chosen vertex ordering. == H == H A variable often used to denote a graph, especially when another graph has already been denoted by G. H-coloring An H-coloring of a graph G (where H is also a graph) is a homomorphism from G to H. H-free A graph is H-free if it does not have an induced subgraph isomorphic to H, that is, if H is a forbidden induced subgraph. The H-free graphs are the family of all graphs (or, often, all finite graphs) that are H-free. For instance, the triangle-free graphs are the graphs that do not have a triangle graph as a subgraph. The property of being H-free is always hereditary. A graph is H-minor-free if it does not have a minor isomorphic to H. Hadwiger 1. Hugo Hadwiger 2. The Hadwiger number of a graph is the order of the largest complete minor of the graph. It is also called the contraction clique number or the homomorphism degree. 3. The Hadwiger conjecture is the conjecture that the Hadwiger number is never less than the chromatic number. Hamiltonian A Hamiltonian path or Hamiltonian cycle is a simple spanning path or simple spanning cycle: it covers all of the vertices in the graph exactly once. A graph is Hamiltonian if it contains a Hamiltonian cycle, and traceable if it contains a Hamiltonian path. haven A k-haven is a function that maps every set X of fewer than k vertices to one of its flaps, often satisfying additional consistency conditions. The order of a haven is the number k. Havens can be used to characterize the treewidth of finite graphs and the ends and Hadwiger numbers of infinite graphs. height 1. The height of a node in a rooted tree is the number of edges in a longest path, going away from the root (i.e. its nodes have strictly increasing depth), that starts at that node and ends at a leaf. 2.
The height of a rooted tree is the height of its root. That is, the height of a tree is the number of edges in a longest possible path, going away from the root, that starts at the root and ends at a leaf. 3. The height of a directed acyclic graph is the maximum length of a directed path in this graph. hereditary A hereditary property of graphs is a property that is closed under induced subgraphs: if G has a hereditary property, then so must every induced subgraph of G. Compare monotone (closed under all subgraphs) or minor-closed (closed under minors). hexagon A simple cycle consisting of exactly six edges and six vertices. hole A hole is an induced cycle of length four or more. An odd hole is a hole of odd length. An anti-hole is an induced subgraph of order four or more whose complement is a cycle; equivalently, it is a hole in the complement graph. This terminology is mainly used in the context of perfect graphs, which are characterized by the strong perfect graph theorem as being the graphs with no odd holes or odd anti-holes. The hole-free graphs are the same as the chordal graphs. homomorphic equivalence Two graphs are homomorphically equivalent if there exist two homomorphisms, one from each graph to the other graph. homomorphism 1. A graph homomorphism is a mapping from the vertex set of one graph to the vertex set of another graph that maps adjacent vertices to adjacent vertices. This type of mapping between graphs is the one that is most commonly used in category-theoretic approaches to graph theory. A proper graph coloring can equivalently be described as a homomorphism to a complete graph. 2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor. hyperarc A directed hyperedge having a source and target set. hyperedge An edge in a hypergraph, having any number of endpoints, in contrast to the requirement that edges of graphs have exactly two endpoints.
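The homomorphism condition above ("maps adjacent vertices to adjacent vertices") is a one-line check in code, and the remark that a proper coloring is a homomorphism to a complete graph can be tested directly. A sketch with my own helper name:

```python
def is_homomorphism(g, h, f):
    """True if f (a dict from V(g) to V(h)) maps every edge of g
    to an edge of h, i.e. preserves adjacency."""
    return all(f[v] in h[f[u]] for u in g for v in g[u])

# A proper 3-coloring of the 5-cycle is exactly a homomorphism to K3:
# the colors are the vertices of K3, and adjacent vertices get
# adjacent (distinct) colors.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
```

A constant map is never a homomorphism into a loopless graph such as K3, which mirrors the fact that a graph with an edge has no proper 1-coloring.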
hypercube A hypercube graph is a graph formed from the vertices and edges of a geometric hypercube. hypergraph A hypergraph is a generalization of a graph in which each edge (called a hyperedge in this context) may have more than two endpoints. hypo- This prefix, in combination with a graph property, indicates a graph that does not have the property but such that every subgraph formed by deleting a single vertex does have the property. For instance, a hypohamiltonian graph is one that does not have a Hamiltonian cycle, but for which every one-vertex deletion produces a Hamiltonian subgraph. Compare critical, used for graphs which have a property but for which every one-vertex deletion does not. == I == in-degree The number of incoming edges in a directed graph; see degree. incidence An incidence in a graph is a vertex-edge pair such that the vertex is an endpoint of the edge. incidence matrix The incidence matrix of a graph is a matrix whose rows are indexed by vertices of the graph, and whose columns are indexed by edges, with a one in the cell for row i and column j when vertex i and edge j are incident, and a zero otherwise. incident The relation between an edge and one of its endpoints. incomparability An incomparability graph is the complement of a comparability graph; see comparability. independent 1. An independent set is a set of vertices that induces an edgeless subgraph. It may also be called a stable set or a coclique. The independence number α(G) is the size of the maximum independent set. 2. In the graphic matroid of a graph, a subset of edges is independent if the corresponding subgraph is a tree or forest. In the bicircular matroid, a subset of edges is independent if the corresponding subgraph is a pseudoforest. indifference An indifference graph is another name for a proper interval graph or unit interval graph; see proper. 
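The incidence matrix entry above has a direct construction; the function name and edge-list representation are mine:

```python
def incidence_matrix(vertices, edges):
    """Rows indexed by vertices, columns by edges; a one in cell (i, j)
    exactly when vertex i is an endpoint of edge j."""
    return [[1 if v in e else 0 for e in edges] for v in vertices]

# A triangle on {0, 1, 2} with its three edges in a fixed order.
verts = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
m = incidence_matrix(verts, edges)
```

Two sanity checks follow from the definitions: each column sums to 2 (every edge of a graph has exactly two endpoints), and each row sums to the degree of its vertex.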
induced An induced subgraph or full subgraph of a graph is a subgraph formed from a subset of vertices and from all of the edges that have both endpoints in the subset. Special cases include induced paths and induced cycles, induced subgraphs that are paths or cycles. inductive Synonym for degenerate. infinite An infinite graph is one that is not finite; see finite. internal A vertex of a path or tree is internal if it is not a leaf; that is, if its degree is greater than one. Two paths are internally disjoint (also called independent) if they do not have any vertex in common, except the first and last ones. intersection 1. The intersection of two graphs is their largest common subgraph, the graph formed by the vertices and edges that belong to both graphs. 2. An intersection graph is a graph whose vertices correspond to sets or geometric objects, with an edge between two vertices exactly when the corresponding two sets or objects have a nonempty intersection. Several classes of graphs may be defined as the intersection graphs of certain types of objects, for instance chordal graphs (intersection graphs of subtrees of a tree), circle graphs (intersection graphs of chords of a circle), interval graphs (intersection graphs of intervals of a line), line graphs (intersection graphs of the edges of a graph), and clique graphs (intersection graphs of the maximal cliques of a graph). Every graph is an intersection graph for some family of sets, and this family is called an intersection representation of the graph. The intersection number of a graph G is the minimum total number of elements in any intersection representation of G. interval 1. An interval graph is an intersection graph of intervals of a line. 2. The interval [u, v] in a graph is the union of all shortest paths from u to v. 3. Interval thickness is a synonym for pathwidth. invariant A synonym of property. inverted arrow An arrow with an opposite direction compared to another arrow.
The arrow (y, x) is the inverted arrow of the arrow (x, y). isolated An isolated vertex of a graph is a vertex whose degree is zero, that is, a vertex with no incident edges. isomorphic Two graphs are isomorphic if there is an isomorphism between them; see isomorphism. isomorphism A graph isomorphism is a one-to-one incidence preserving correspondence of the vertices and edges of one graph to the vertices and edges of another graph. Two graphs related in this way are said to be isomorphic. isoperimetric See expansion. isthmus Synonym for bridge, in the sense of an edge whose removal disconnects the graph. == J == join The join of two graphs is formed from their disjoint union by adding an edge from each vertex of one graph to each vertex of the other. Equivalently, it is the complement of the disjoint union of the complements. == K == K For the notation for complete graphs, complete bipartite graphs, and complete multipartite graphs, see complete. κ κ(G) (using the Greek letter kappa) can refer to the vertex connectivity of G or to the clique number of G. kernel A kernel of a directed graph is a set of vertices which is both stable and absorbing. knot An inescapable section of a directed graph. See knot (mathematics) and knot theory. == L == L L(G) is the line graph of G; see line. label 1. Information associated with a vertex or edge of a graph. A labeled graph is a graph whose vertices or edges have labels. The terms vertex-labeled or edge-labeled may be used to specify which objects of a graph have labels. Graph labeling refers to several different problems of assigning labels to graphs subject to certain constraints. See also graph coloring, in which the labels are interpreted as colors. 2. In the context of graph enumeration, the vertices of a graph are said to be labeled if they are all distinguishable from each other. 
For instance, this can be made to be true by fixing a one-to-one correspondence between the vertices and the integers from 1 to the order of the graph. When vertices are labeled, graphs that are isomorphic to each other (but with different vertex orderings) are counted as separate objects. In contrast, when the vertices are unlabeled, graphs that are isomorphic to each other are not counted separately. leaf 1. A leaf vertex or pendant vertex (especially in a tree) is a vertex whose degree is 1. A leaf edge or pendant edge is the edge connecting a leaf vertex to its single neighbour. 2. A leaf power of a tree is a graph whose vertices are the leaves of the tree and whose edges connect leaves whose distance in the tree is at most a given threshold. length In an unweighted graph, the length of a cycle, path, or walk is the number of edges it uses. In a weighted graph, it may instead be the sum of the weights of the edges that it uses. Length is used to define the shortest path, girth (shortest cycle length), and longest path between two vertices in a graph. level 1. This is the depth of a node plus 1, although some define it instead to be a synonym of depth. A node's level in a rooted tree is the number of nodes in the path from the root to the node. For instance, the root has level 1 and any one of its adjacent nodes has level 2. 2. The set of all nodes having the same level or depth. line A synonym for an undirected edge. The line graph L(G) of a graph G is a graph with a vertex for each edge of G and an edge for each pair of edges that share an endpoint in G. linkage A synonym for degeneracy. list 1. An adjacency list is a computer representation of graphs for use in graph algorithms. 2. List coloring is a variation of graph coloring in which each vertex has a list of available colors. local A local property of a graph is a property that is determined only by the neighbourhoods of the vertices in the graph.
For instance, a graph is locally finite if all of its neighborhoods are finite. loop A loop or self-loop is an edge both of whose endpoints are the same vertex. It forms a cycle of length 1. These are not allowed in simple graphs. == M == magnification Synonym for vertex expansion. matching A matching is a set of edges in which no two share any vertex. A vertex is matched or saturated if it is one of the endpoints of an edge in the matching. A perfect matching or complete matching is a matching that matches every vertex; it may also be called a 1-factor, and can only exist when the order is even. A near-perfect matching, in a graph with odd order, is one that saturates all but one vertex. A maximum matching is a matching that uses as many edges as possible; the matching number α′(G) of a graph G is the number of edges in a maximum matching. A maximal matching is a matching to which no additional edges can be added. maximal 1. A subgraph of a given graph G is maximal for a particular property if it has that property but no other supergraph of it that is also a subgraph of G also has the same property. That is, it is a maximal element of the subgraphs with the property. For instance, a maximal clique is a complete subgraph that cannot be expanded to a larger complete subgraph. The word "maximal" should be distinguished from "maximum": a maximum subgraph is always maximal, but not necessarily vice versa. 2. A simple graph with a given property is maximal for that property if it is not possible to add any more edges to it (keeping the vertex set unchanged) while preserving both the simplicity of the graph and the property. Thus, for instance, a maximal planar graph is a planar graph such that adding any more edges to it would create a non-planar graph. maximum A subgraph of a given graph G is maximum for a particular property if it is the largest subgraph (by order or size) among all subgraphs with that property.
For instance, a maximum clique is any of the largest cliques in a given graph. median 1. A median of a triple of vertices, a vertex that belongs to shortest paths between all pairs of vertices, especially in median graphs and modular graphs. 2. A median graph is a graph in which every three vertices have a unique median. Meyniel 1. Henri Meyniel, French graph theorist. 2. A Meyniel graph is a graph in which every odd cycle of length five or more has at least two chords. minimal A subgraph of a given graph is minimal for a particular property if it has that property but no proper subgraph of it also has the same property. That is, it is a minimal element of the subgraphs with the property. minimum cut A cut whose cut-set has minimum total weight, possibly restricted to cuts that separate a designated pair of vertices; they are characterized by the max-flow min-cut theorem. minor A graph H is a minor of another graph G if H can be obtained by deleting edges or vertices from G and contracting edges in G. It is a shallow minor if it can be formed as a minor in such a way that the subgraphs of G that were contracted to form vertices of H all have small diameter. H is a topological minor of G if G has a subgraph that is a subdivision of H. A graph is H-minor-free if it does not have H as a minor. A family of graphs is minor-closed if it is closed under minors; the Robertson–Seymour theorem characterizes minor-closed families as having a finite set of forbidden minors. mixed A mixed graph is a graph that may include both directed and undirected edges. modular 1. Modular graph, a graph in which each triple of vertices has at least one median vertex that belongs to shortest paths between all pairs of the triple. 2. Modular decomposition, a decomposition of a graph into subgraphs within which all vertices connect to the rest of the graph in the same way. 3. Modularity of a graph clustering, the difference of the number of cross-cluster edges from its expected value.
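The maximal-versus-maximum distinction for matchings can be seen in a few lines of code: a greedy edge scan always yields a maximal matching (no edge can be added), but not necessarily a maximum one. A sketch, with my own helper name:

```python
def greedy_maximal_matching(adj):
    """Scan edges in dict order, keeping each edge whose endpoints are
    both still unmatched. The result is always maximal, but whether it
    is maximum depends on the scan order."""
    matched, matching = set(), []
    for u in adj:
        for v in adj[u]:
            if u < v and u not in matched and v not in matched:
                matching.append((u, v))
                matched |= {u, v}
    return matching

# Path 0-1-2-3: a scan starting at vertex 0 takes 0-1 and then 2-3,
# which happens to be maximum here. Taking the middle edge 1-2 first
# would instead give a maximal matching of size 1, since both outer
# edges would then touch a matched vertex.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

The design point: maximality only requires a local check (no extendable edge remains), so a single greedy pass suffices; maximality does not imply maximality of size, which is why maximum matching needs a genuine algorithm (e.g. augmenting paths).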
monotone A monotone property of graphs is a property that is closed under subgraphs: if G has a monotone property, then so must every subgraph of G. Compare hereditary (closed under induced subgraphs) or minor-closed (closed under minors). Moore graph A Moore graph is a regular graph for which the Moore bound is met exactly. The Moore bound is an inequality relating the degree, diameter, and order of a graph, proved by Edward F. Moore. Every Moore graph is a cage. multigraph A multigraph is a graph that allows multiple adjacencies (and, often, self-loops); a graph that is not required to be simple. multiple adjacency A multiple adjacency or multiple edge is a set of more than one edge that all have the same endpoints (in the same direction, in the case of directed graphs). A graph with multiple edges is often called a multigraph. multiplicity The multiplicity of an edge is the number of edges in a multiple adjacency. The multiplicity of a graph is the maximum multiplicity of any of its edges. == N == N 1. For the notation for open and closed neighborhoods, see neighbourhood. 2. A lower-case n is often used (especially in computer science) to denote the number of vertices in a given graph. neighbor neighbour A vertex that is adjacent to a given vertex. neighborhood neighbourhood The open neighbourhood (or neighborhood) of a vertex v is the subgraph induced by all vertices that are adjacent to v. The closed neighbourhood is defined in the same way but also includes v itself. The open neighborhood of v in G may be denoted NG(v) or N(v), and the closed neighborhood may be denoted NG[v] or N[v]. When the openness or closedness of a neighborhood is not specified, it is assumed to be open. network A graph in which attributes (e.g. names) are associated with the nodes and/or edges. node A synonym for vertex. non-edge A non-edge or anti-edge is a pair of vertices that are not adjacent; the edges of the complement graph. null graph See empty graph. == O == odd 1. 
An odd cycle is a cycle whose length is odd. The odd girth of a non-bipartite graph is the length of its shortest odd cycle. An odd hole is a special case of an odd cycle: one that is induced and has four or more vertices. 2. An odd vertex is a vertex whose degree is odd. By the handshaking lemma every finite undirected graph has an even number of odd vertices. 3. An odd ear is a simple path or simple cycle with an odd number of edges, used in odd ear decompositions of factor-critical graphs; see ear. 4. An odd chord is an edge connecting two vertices that are an odd distance apart in an even cycle. Odd chords are used to define strongly chordal graphs. 5. An odd graph is a special case of a Kneser graph, having one vertex for each (n − 1)-element subset of a (2n − 1)-element set, and an edge connecting two subsets when their corresponding sets are disjoint. open 1. See neighbourhood. 2. See walk. order 1. The order of a graph G is the number of its vertices, |V(G)|. The variable n is often used for this quantity. See also size, the number of edges. 2. A type of logic of graphs; see first order and second order. 3. An order or ordering of a graph is an arrangement of its vertices into a sequence, especially in the context of topological ordering (an order of a directed acyclic graph in which every edge goes from an earlier vertex to a later vertex in the order) and degeneracy ordering (an order in which each vertex has minimum degree in the induced subgraph of it and all later vertices). 4. For the order of a haven or bramble, see haven and bramble. orientation oriented 1. An orientation of an undirected graph is an assignment of directions to its edges, making it into a directed graph. An oriented graph is one that has been assigned an orientation. So, for instance, a polytree is an oriented tree; it differs from a directed tree (an arborescence) in that there is no requirement of consistency in the directions of its edges. 
Other special types of orientation include tournaments, orientations of complete graphs; strong orientations, orientations that are strongly connected; acyclic orientations, orientations that are acyclic; Eulerian orientations, orientations that are Eulerian; and transitive orientations, orientations that are transitively closed. 2. Oriented graph, used by some authors as a synonym for a directed graph. out-degree See degree. outer See face. outerplanar An outerplanar graph is a graph that can be embedded in the plane (without crossings) so that all vertices are on the outer face of the graph. == P == parent In a rooted tree, a parent of a vertex v is a neighbor of v along the incoming edge, the one that is directed toward the root. path Depending on the source, a path may be any walk, or a walk without repeated vertices (and consequently without repeated edges), also called a simple path. Important special cases include induced paths and shortest paths. path decomposition A path decomposition of a graph G is a tree decomposition whose underlying tree is a path. Its width is defined in the same way as for tree decompositions, as one less than the size of the largest bag. The minimum width of any path decomposition of G is the pathwidth of G. pathwidth The pathwidth of a graph G is the minimum width of a path decomposition of G. It may also be defined in terms of the clique number of an interval completion of G. It is always between the bandwidth and the treewidth of G. It is also known as interval thickness, vertex separation number, or node searching number. pendant See leaf. perfect 1. A perfect graph is a graph in which, in every induced subgraph, the chromatic number equals the clique number. The perfect graph theorem and strong perfect graph theorem are two theorems about perfect graphs, the former proving that their complements are also perfect and the latter proving that they are exactly the graphs with no odd holes or anti-holes. 2.
A perfectly orderable graph is a graph whose vertices can be ordered in such a way that a greedy coloring algorithm with this ordering optimally colors every induced subgraph. The perfectly orderable graphs are a subclass of the perfect graphs. 3. A perfect matching is a matching that saturates every vertex; see matching. 4. A perfect 1-factorization is a partition of the edges of a graph into perfect matchings so that each two matchings form a Hamiltonian cycle. peripheral 1. A peripheral cycle or non-separating cycle is a cycle with at most one bridge. 2. A peripheral vertex is a vertex whose eccentricity is maximum. In a tree, this must be a leaf. Petersen 1. Julius Petersen (1839–1910), Danish graph theorist. 2. The Petersen graph, a 10-vertex 15-edge graph frequently used as a counterexample. 3. Petersen's theorem that every bridgeless cubic graph has a perfect matching. planar A planar graph is a graph that has an embedding onto the Euclidean plane. A plane graph is a planar graph for which a particular embedding has already been fixed. A k-planar graph is one that can be drawn in the plane with at most k crossings per edge. polytree A polytree is an oriented tree; equivalently, a directed acyclic graph whose underlying undirected graph is a tree. power 1. A graph power Gk of a graph G is another graph on the same vertex set; two vertices are adjacent in Gk when they are at distance at most k in G. A leaf power is a closely related concept, derived from a power of a tree by taking the subgraph induced by the tree's leaves. 2. Power graph analysis is a method for analyzing complex networks by identifying cliques, bicliques, and stars within the network. 3. Power laws in the degree distributions of scale-free networks are a phenomenon in which the number of vertices of a given degree is proportional to a power of the degree. predecessor A vertex coming before a given vertex in a directed path. prime 1. 
A prime graph is defined from an algebraic group, with a vertex for each prime number that divides the order of the group. 2. In the theory of modular decomposition, a prime graph is a graph without any nontrivial modules. 3. In the theory of splits, cuts whose cut-set is a complete bipartite graph, a prime graph is a graph without any splits. Every quotient graph of a maximal decomposition by splits is a prime graph, a star, or a complete graph. 4. A prime graph for the Cartesian product of graphs is a connected graph that is not itself a product. Every connected graph can be uniquely factored into a Cartesian product of prime graphs. proper 1. A proper subgraph is a subgraph that removes at least one vertex or edge relative to the whole graph; for finite graphs, proper subgraphs are never isomorphic to the whole graph, but for infinite graphs they can be. 2. A proper coloring is an assignment of colors to the vertices of a graph (a coloring) that assigns different colors to the endpoints of each edge; see color. 3. A proper interval graph or proper circular arc graph is an intersection graph of a collection of intervals or circular arcs (respectively) such that no interval or arc contains another interval or arc. Proper interval graphs are also called unit interval graphs (because they can always be represented by unit intervals) or indifference graphs. property A graph property is something that can be true of some graphs and false of others, and that depends only on the graph structure and not on incidental information such as labels. Graph properties may equivalently be described in terms of classes of graphs (the graphs that have a given property). More generally, a graph property may also be a function of graphs that is again independent of incidental information, such as the size, order, or degree sequence of a graph; this more general definition of a property is also called an invariant of the graph. 
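The proper colorings defined in sense 2 above are straightforward to verify mechanically. A minimal Python sketch, assuming an edge-list encoding of the graph (the helper name and encoding are illustrative, not from the glossary):

```python
def is_proper_coloring(edges, coloring):
    """Return True if no edge has both endpoints the same color.

    `edges` is an iterable of (u, v) pairs; `coloring` maps each
    vertex to a color. Illustrative helper, not a standard API.
    """
    return all(coloring[u] != coloring[v] for u, v in edges)

# A 4-cycle a-b-c-d-a is properly 2-colorable by alternating colors.
cycle4 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(is_proper_coloring(cycle4, {"a": 0, "b": 1, "c": 0, "d": 1}))  # True
print(is_proper_coloring(cycle4, {"a": 0, "b": 0, "c": 1, "d": 1}))  # False: edge a-b
```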
pseudoforest A pseudoforest is an undirected graph in which each connected component has at most one cycle, or a directed graph in which each vertex has at most one outgoing edge. pseudograph A pseudograph is a graph or multigraph that allows self-loops. == Q == quasi-line graph A quasi-line graph or locally co-bipartite graph is a graph in which the open neighborhood of every vertex can be partitioned into two cliques. These graphs are always claw-free and they include as a special case the line graphs. They are used in the structure theory of claw-free graphs. quasi-random graph sequence A quasi-random graph sequence is a sequence of graphs that shares several properties with a sequence of random graphs generated according to the Erdős–Rényi random graph model. quiver A quiver is a directed multigraph, as used in category theory. The edges of a quiver are called arrows. == R == radius The radius of a graph is the minimum eccentricity of any vertex. Ramanujan A Ramanujan graph is a graph whose spectral expansion is as large as possible. That is, it is a d-regular graph, such that the second-largest eigenvalue of its adjacency matrix is at most 2√(d − 1). ray A ray, in an infinite graph, is an infinite simple path with exactly one endpoint. The ends of a graph are equivalence classes of rays. reachability The ability to get from one vertex to another within a graph. reachable Has an affirmative reachability. A vertex y is said to be reachable from a vertex x if there exists a path from x to y. recognizable In the context of the reconstruction conjecture, a graph property is recognizable if its truth can be determined from the deck of the graph. Many graph properties are known to be recognizable. If the reconstruction conjecture is true, all graph properties are recognizable.
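The deck referred to in the "recognizable" entry above is produced by deleting each vertex of the graph in turn. A small Python sketch under an assumed edge-set representation (vertex set plus edges as 2-element frozensets; helper names are hypothetical):

```python
def deck(vertices, edges):
    """Return the list of vertex-deleted subgraphs (the "cards") of a graph.

    Each card is a pair (remaining vertices, surviving edges), obtained
    by removing one vertex in all possible ways.
    """
    cards = []
    for v in vertices:
        cards.append((vertices - {v}, {e for e in edges if v not in e}))
    return cards

# The triangle on {1, 2, 3}: each of its three cards is a single edge on two vertices.
triangle = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
cards = deck({1, 2, 3}, triangle)
print(len(cards))  # 3
```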
reconstruction The reconstruction conjecture states that each undirected graph G is uniquely determined by its deck, a multiset of graphs formed by removing one vertex from G in all possible ways. In this context, reconstruction is the formation of a graph from its deck. rectangle A simple cycle consisting of exactly four edges and four vertices. regular A graph is d-regular when all of its vertices have degree d. A regular graph is a graph that is d-regular for some d. regular tournament A regular tournament is a tournament where in-degree equals out-degree for all vertices. reverse See transpose. root 1. A designated vertex in a graph, particularly in directed trees and rooted graphs. 2. The inverse operation to a graph power: a kth root of a graph G is another graph on the same vertex set such that two vertices are adjacent in G if and only if they have distance at most k in the root. == S == saturated See matching. searching number Node searching number is a synonym for pathwidth. second order The second order logic of graphs is a form of logic in which variables may represent vertices, edges, sets of vertices, and (sometimes) sets of edges. This logic includes predicates for testing whether a vertex and edge are incident, as well as whether a vertex or edge belongs to a set. To be distinguished from first order logic, in which variables can only represent vertices. self-loop Synonym for loop. separating vertex See articulation point. separation number Vertex separation number is a synonym for pathwidth. sibling In a rooted tree, a sibling of a vertex v is a vertex which has the same parent vertex as v. simple 1. A simple graph is a graph without loops and without multiple adjacencies. That is, each edge connects two distinct endpoints and no two edges have the same endpoints. A simple edge is an edge that is not part of a multiple adjacency. In many cases, graphs are assumed to be simple unless specified otherwise. 2. 
A simple path or a simple cycle is a path or cycle that has no repeated vertices and consequently no repeated edges. sink A sink, in a directed graph, is a vertex with no outgoing edges (out-degree equals 0). size The size of a graph G is the number of its edges, |E(G)|. The variable m is often used for this quantity. See also order, the number of vertices. small-world network A small-world network is a graph in which most nodes are not neighbors of one another, but most nodes can be reached from every other node by a small number of hops or steps. Specifically, a small-world network is defined to be a graph where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network. snark A snark is a simple, connected, bridgeless cubic graph with chromatic index equal to 4. source A source, in a directed graph, is a vertex with no incoming edges (in-degree equals 0). space In algebraic graph theory, several vector spaces over the binary field may be associated with a graph. Each has sets of edges or vertices for its vectors, and symmetric difference of sets as its vector sum operation. The edge space is the space of all sets of edges, and the vertex space is the space of all sets of vertices. The cut space is a subspace of the edge space that has the cut-sets of the graph as its elements. The cycle space has the Eulerian spanning subgraphs as its elements. spanner A spanner is a (usually sparse) graph whose shortest path distances approximate those in a dense graph or other metric space. Variations include geometric spanners, graphs whose vertices are points in a geometric space; tree spanners, spanning trees of a graph whose distances approximate the graph distances; and graph spanners, sparse subgraphs of a dense graph whose distances approximate the original graph's distances.
A greedy spanner is a graph spanner constructed by a greedy algorithm, generally one that considers all edges from shortest to longest and keeps the ones that are needed to preserve the distance approximation. spanning A subgraph is spanning when it includes all of the vertices of the given graph. Important cases include spanning trees, spanning subgraphs that are trees, and perfect matchings, spanning subgraphs that are matchings. A spanning subgraph may also be called a factor, especially (but not only) when it is regular. sparse A sparse graph is one that has few edges relative to its number of vertices. In some definitions the same property should also be true for all subgraphs of the given graph. spectral spectrum The spectrum of a graph is the collection of eigenvalues of its adjacency matrix. Spectral graph theory is the branch of graph theory that uses spectra to analyze graphs. See also spectral expansion. split 1. A split graph is a graph whose vertices can be partitioned into a clique and an independent set. A related class of graphs, the double split graphs, are used in the proof of the strong perfect graph theorem. 2. A split of an arbitrary graph is a partition of its vertices into two nonempty subsets, such that the edges spanning this cut form a complete bipartite subgraph. The splits of a graph can be represented by a tree structure called its split decomposition. A split is called a strong split when it is not crossed by any other split. A split is called nontrivial when both of its sides have more than one vertex. A graph is called prime when it has no nontrivial splits. 3. Vertex splitting (sometimes called vertex cleaving) is an elementary graph operation that splits a vertex into two, where these two new vertices are adjacent to the vertices that the original vertex was adjacent to. The inverse of vertex splitting is vertex contraction. square 1. The square of a graph G is the graph power G2; in the other direction, G is the square root of G2. 
The half-square of a bipartite graph is the subgraph of its square induced by one side of the bipartition. 2. A squaregraph is a planar graph that can be drawn so that all bounded faces are 4-cycles and all vertices of degree ≤ 3 belong to the outer face. 3. A square grid graph is a lattice graph defined from points in the plane with integer coordinates connected by unit-length edges. stable A stable set is a synonym for an independent set. star A star is a tree with one internal vertex; equivalently, it is a complete bipartite graph K1,n for some n ≥ 2. The special case of a star with three leaves is called a claw. strength The strength of a graph is the minimum ratio of the number of edges removed from the graph to components created, over all possible removals; it is analogous to toughness, based on vertex removals. strong 1. For strong connectivity and strongly connected components of directed graphs, see connected and component. A strong orientation is an orientation that is strongly connected; see orientation. 2. For the strong perfect graph theorem, see perfect. 3. A strongly regular graph is a regular graph in which every two adjacent vertices have the same number of shared neighbours and every two non-adjacent vertices have the same number of shared neighbours. 4. A strongly chordal graph is a chordal graph in which every even cycle of length six or more has an odd chord. 5. A strongly perfect graph is a graph in which every induced subgraph has an independent set meeting all maximal cliques. The Meyniel graphs are also called "very strongly perfect graphs" because in them, every vertex belongs to such an independent set. subforest A subgraph of a forest. subgraph A subgraph of a graph G is another graph formed from a subset of the vertices and edges of G. The vertex subset must include all endpoints of the edge subset, but may also include additional vertices. 
A spanning subgraph is one that includes all vertices of the graph; an induced subgraph is one that includes all the edges whose endpoints belong to the vertex subset. subtree A subtree is a connected subgraph of a tree. Sometimes, for rooted trees, subtrees are defined to be a special type of connected subgraph, formed by all vertices and edges reachable from a chosen vertex. successor A vertex coming after a given vertex in a directed path. superconcentrator A superconcentrator is a graph with two designated and equal-sized subsets of vertices I and O, such that for every two equal-sized subsets S of I and T of O there exists a family of disjoint paths connecting every vertex in S to a vertex in T. Some sources require in addition that a superconcentrator be a directed acyclic graph, with I as its sources and O as its sinks. supergraph A graph formed by adding vertices, edges, or both to a given graph. If H is a subgraph of G, then G is a supergraph of H. == T == theta 1. A theta graph is the union of three internally disjoint (simple) paths that have the same two distinct end vertices. 2. The theta graph of a collection of points in the Euclidean plane is formed by constructing a system of cones surrounding each point and adding one edge per cone, to the point whose projection onto a central ray of the cone is smallest. 3. The Lovász number or Lovász theta function of a graph is a graph invariant related to the clique number and chromatic number that can be computed in polynomial time by semidefinite programming. Thomsen graph The Thomsen graph is a name for the complete bipartite graph K3,3. topological 1. A topological graph is a representation of the vertices and edges of a graph by points and curves in the plane (not necessarily avoiding crossings). 2. Topological graph theory is the study of graph embeddings. 3.
Topological sorting is the algorithmic problem of arranging a directed acyclic graph into a topological order, a vertex sequence such that each edge goes from an earlier vertex to a later vertex in the sequence. totally disconnected Synonym for edgeless. tour A closed trail, a walk that starts and ends at the same vertex and has no repeated edges. Euler tours are tours that use all of the graph edges; see Eulerian. tournament A tournament is an orientation of a complete graph; that is, it is a directed graph such that every two vertices are connected by exactly one directed edge (going in only one of the two directions between the two vertices). traceable A traceable graph is a graph that contains a Hamiltonian path. trail A walk without repeated edges. transitive Having to do with the transitive property. The transitive closure of a given directed graph is a graph on the same vertex set that has an edge from one vertex to another whenever the original graph has a path connecting the same two vertices. A transitive reduction of a graph is a minimal graph having the same transitive closure; directed acyclic graphs have a unique transitive reduction. A transitive orientation is an orientation of a graph that is its own transitive closure; it exists only for comparability graphs. transpose The transpose graph of a given directed graph is a graph on the same vertices, with each edge reversed in direction. It may also be called the converse or reverse of the graph. tree 1. A tree is an undirected graph that is both connected and acyclic, or a directed graph in which there exists a unique walk from one vertex (the root of the tree) to all remaining vertices. 2. A k-tree is a graph formed by gluing (k + 1)-cliques together on shared k-cliques. A tree in the ordinary sense is a 1-tree according to this definition. tree decomposition A tree decomposition of a graph G is a tree whose nodes are labeled with sets of vertices of G; these sets are called bags. 
For each vertex v, the bags that contain v must induce a subtree of the tree, and for each edge uv there must exist a bag that contains both u and v. The width of a tree decomposition is one less than the maximum number of vertices in any of its bags; the treewidth of G is the minimum width of any tree decomposition of G. treewidth The treewidth of a graph G is the minimum width of a tree decomposition of G. It can also be defined in terms of the clique number of a chordal completion of G, the order of a haven of G, or the order of a bramble of G. triangle A cycle of length three in a graph. A triangle-free graph is an undirected graph that does not have any triangle subgraphs. trivial A trivial graph is a graph with 0 or 1 vertices. A graph with 0 vertices is also called a null graph. Turán 1. Pál Turán 2. A Turán graph is a balanced complete multipartite graph. 3. Turán's theorem states that Turán graphs have the maximum number of edges among all clique-free graphs of a given order. 4. Turán's brick factory problem asks for the minimum number of crossings in a drawing of a complete bipartite graph. twin Two vertices u, v are true twins if they have the same closed neighborhood: NG[u] = NG[v] (this implies u and v are neighbors), and they are false twins if they have the same open neighborhood: NG(u) = NG(v) (this implies u and v are not neighbors). == U == unary vertex In a rooted tree, a unary vertex is a vertex which has exactly one child vertex. undirected An undirected graph is a graph in which the two endpoints of each edge are not distinguished from each other. See also directed and mixed. In a mixed graph, an undirected edge is again one in which the endpoints are not distinguished from each other. uniform A hypergraph is k-uniform when all its edges have k endpoints, and uniform when it is k-uniform for some k. For instance, ordinary graphs are the same as 2-uniform hypergraphs. universal 1.
A universal graph is a graph that contains as subgraphs all graphs in a given family of graphs, or all graphs of a given size or order within a given family of graphs. 2. A universal vertex (also called an apex or dominating vertex) is a vertex that is adjacent to every other vertex in the graph. For instance, wheel graphs and connected threshold graphs always have a universal vertex. 3. In the logic of graphs, a vertex that is universally quantified in a formula may be called a universal vertex for that formula. unweighted graph A graph whose vertices and edges have not been assigned weights; the opposite of a weighted graph. utility graph The utility graph is a name for the complete bipartite graph K3,3. == V == V See vertex set. valency Synonym for degree. vertex A vertex (plural vertices) is (together with edges) one of the two basic units out of which graphs are constructed. Vertices of graphs are often considered to be atomic objects, with no internal structure. vertex cut separating set A set of vertices whose removal disconnects the graph. A one-vertex cut is called an articulation point or cut vertex. vertex set The set of vertices of a given graph G, sometimes denoted by V(G). vertices See vertex. Vizing 1. Vadim G. Vizing 2. Vizing's theorem that the chromatic index is at most one more than the maximum degree. 3. Vizing's conjecture on the domination number of Cartesian products of graphs. volume The sum of the degrees of a set of vertices. == W == W The letter W is used in notation for wheel graphs and windmill graphs. The notation is not standardized. Wagner 1. Klaus Wagner 2. The Wagner graph, an eight-vertex Möbius ladder. 3. Wagner's theorem characterizing planar graphs by their forbidden minors. 4. Wagner's theorem characterizing the K5-minor-free graphs. walk A walk is a finite or infinite sequence of edges which joins a sequence of vertices. Walks are also sometimes called chains.
A walk is open if its first and last vertices are distinct, and closed if they are repeated. weakly connected A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. weight A numerical value, assigned as a label to a vertex or edge of a graph. The weight of a subgraph is the sum of the weights of the vertices or edges within that subgraph. weighted graph A graph whose vertices or edges have been assigned weights. A vertex-weighted graph has weights on its vertices and an edge-weighted graph has weights on its edges. well-colored A well-colored graph is a graph all of whose greedy colorings use the same number of colors. well-covered A well-covered graph is a graph all of whose maximal independent sets are the same size. wheel A wheel graph is a graph formed by adding a universal vertex to a simple cycle. width 1. A synonym for degeneracy. 2. For other graph invariants known as width, see bandwidth, branchwidth, clique-width, pathwidth, and treewidth. 3. The width of a tree decomposition or path decomposition is one less than the maximum size of one of its bags, and may be used to define treewidth and pathwidth. 4. The width of a directed acyclic graph is the maximum cardinality of an antichain. windmill A windmill graph is the union of a collection of cliques, all of the same order as each other, with one shared vertex belonging to all the cliques and all other vertices and edges distinct. == See also == List of graph theory topics Gallery of named graphs Graph algorithms Glossary of areas of mathematics == References ==
Wikipedia/Weighted_graph
In graph theory, an induced subgraph of a graph is another graph, formed from a subset of the vertices of the graph and all of the edges of the original graph connecting pairs of vertices in that subset. == Definition == Formally, let G = (V, E) be any graph, and let S ⊆ V be any subset of vertices of G. Then the induced subgraph G[S] is the graph whose vertex set is S and whose edge set consists of all of the edges in E that have both endpoints in S. That is, for any two vertices u, v ∈ S, u and v are adjacent in G[S] if and only if they are adjacent in G. The same definition works for undirected graphs, directed graphs, and even multigraphs. The induced subgraph G[S] may also be called the subgraph induced in G by S, or (if context makes the choice of G unambiguous) the induced subgraph of S. == Examples == Important types of induced subgraphs include the following. Induced paths are induced subgraphs that are paths. The shortest path between any two vertices in an unweighted graph is always an induced path, because any additional edges between pairs of vertices that could cause it to be not induced would also cause it to be not shortest. Conversely, in distance-hereditary graphs, every induced path is a shortest path. Induced cycles are induced subgraphs that are cycles. The girth of a graph is defined as the length of its shortest cycle, which is always an induced cycle. According to the strong perfect graph theorem, induced cycles and their complements play a critical role in the characterization of perfect graphs. Cliques and independent sets are induced subgraphs that are respectively complete graphs or edgeless graphs.
Induced matchings are induced subgraphs that are matchings. The neighborhood of a vertex is the induced subgraph of all vertices adjacent to it. == Computation == The induced subgraph isomorphism problem is a form of the subgraph isomorphism problem in which the goal is to test whether one graph can be found as an induced subgraph of another. Because it includes the clique problem as a special case, it is NP-complete. == References ==
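The definition of G[S] above translates directly into code. A hedged Python sketch, assuming edges are stored as 2-element frozensets (the encoding is an illustration, not part of the article):

```python
def induced_subgraph(edges, s):
    """Return the edge set of G[S]: exactly the edges with both endpoints in s."""
    return {e for e in edges if e <= s}

# 5-cycle 0-1-2-3-4-0; S = {0, 1, 2} induces the path 0-1-2, because the
# edges {2,3}, {3,4}, {4,0} each have an endpoint outside S.
cycle5 = {frozenset({i, (i + 1) % 5}) for i in range(5)}
assert induced_subgraph(cycle5, {0, 1, 2}) == {frozenset({0, 1}), frozenset({1, 2})}
```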
Wikipedia/Induced_subgraph
In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge. There are two distinct notions of multiple edges: Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes. Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges. A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two. For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph that is permitted to have loops. == Undirected multigraph (edges without own identity) == A multigraph G is an ordered pair G := (V, E) with V a set of vertices or nodes, E a multiset of unordered pairs of vertices, called edges or lines. == Undirected multigraph (edges with own identity) == A multigraph G is an ordered triple G := (V, E, r) with V a set of vertices or nodes, E a set of edges or lines, r : E → {{x,y} : x, y ∈ V}, assigning to each edge an unordered pair of endpoint nodes. Some authors allow multigraphs to have loops, that is, an edge that connects a vertex to itself, while others call these pseudographs, reserving the term multigraph for the case with no loops. == Directed multigraph (edges without own identity) == A multidigraph is a directed graph which is permitted to have multiple arcs, i.e., arcs with the same source and target nodes. A multidigraph G is an ordered pair G := (V, A) with V a set of vertices or nodes, A a multiset of ordered pairs of vertices called directed edges, arcs or arrows.
A mixed multigraph G := (V, E, A) may be defined in the same way as a mixed graph. == Directed multigraph (edges with own identity) == A multidigraph or quiver G is an ordered 4-tuple G := (V, A, s, t) with V a set of vertices or nodes, A a set of edges or lines, s : A → V, assigning to each edge its source node, t : A → V, assigning to each edge its target node. This notion might be used to model the possible flight connections offered by an airline. In this case the multigraph would be a directed graph with pairs of directed parallel edges connecting cities to show that it is possible to fly both to and from these locations. In category theory a small category can be defined as a multidigraph (with edges having their own identity) equipped with an associative composition law and a distinguished self-loop at each vertex serving as the left and right identity for composition. For this reason, in category theory the term graph is standardly taken to mean "multidigraph", and the underlying multidigraph of a category is called its underlying digraph. == Labeling == Multigraphs and multidigraphs also support the notion of graph labeling, in a similar way. However, there is no unity in terminology in this case. The definitions of labeled multigraphs and labeled multidigraphs are similar, and we define only the latter ones here. Definition 1: A labeled multidigraph is a labeled graph with labeled arcs. Formally, a labeled multidigraph G is a multigraph with labeled vertices and arcs: an 8-tuple G = (ΣV, ΣA, V, A, s, t, ℓV, ℓA) where V is a set of vertices and A is a set of arcs.
ΣV and ΣA are finite alphabets of the available vertex and arc labels, s : A → V and t : A → V are two maps indicating the source and target vertex of an arc, ℓV : V → ΣV and ℓA : A → ΣA are two maps describing the labeling of the vertices and arcs. Definition 2: A labeled multidigraph is a labeled graph with multiple labeled arcs, i.e. arcs with the same end vertices and the same arc label (note that this notion of a labeled graph is different from the notion given by the article graph labeling). == See also == Multidimensional network Glossary of graph theory terms Graph theory == Notes == == References == Balakrishnan, V. K. (1997). Graph Theory. McGraw-Hill. ISBN 0-07-005489-4. Bollobás, Béla (2002). Modern Graph Theory. Graduate Texts in Mathematics. Vol. 184. Springer. ISBN 0-387-98488-7. Chartrand, Gary; Zhang, Ping (2012). A First Course in Graph Theory. Dover. ISBN 978-0-486-48368-9. Diestel, Reinhard (2010). Graph Theory. Graduate Texts in Mathematics. Vol. 173 (4th ed.). Springer. ISBN 978-3-642-14278-9. Gross, Jonathan L.; Yellen, Jay (1998). Graph Theory and Its Applications. CRC Press. ISBN 0-8493-3982-0. Gross, Jonathan L.; Yellen, Jay, eds. (2003). Handbook of Graph Theory. CRC. ISBN 1-58488-090-2. Harary, Frank (1995). Graph Theory. Addison Wesley. ISBN 0-201-41033-8. Janson, Svante; Knuth, Donald E.; Luczak, Tomasz; Pittel, Boris (1993). "The birth of the giant component". Random Structures and Algorithms. 4 (3): 231–358. arXiv:math/9310236. Bibcode:1993math.....10236J. doi:10.1002/rsa.3240040303. ISSN 1042-9832. MR 1220220. S2CID 206454812. Wilson, Robert A. (2002). Graphs, Colourings and the Four-Colour Theorem. Oxford Science Publ. ISBN 0-19-851062-4. Zwillinger, Daniel (2002).
CRC Standard Mathematical Tables and Formulae (31st ed.). Chapman & Hall/CRC. ISBN 1-58488-291-3. == External links == This article incorporates public domain material from Paul E. Black. "Multigraph". Dictionary of Algorithms and Data Structures. NIST.
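The 4-tuple definition of a multidigraph G := (V, A, s, t) above can be sketched in code. This is a minimal illustration only; the class, method names, and flight identifiers are invented for the example and are not a standard library:

```python
# A multidigraph (quiver): arcs have their own identity, so parallel arcs
# and self-loops are allowed.

class MultiDigraph:
    def __init__(self):
        self.V = set()   # vertices
        self.s = {}      # s : A -> V, source of each arc
        self.t = {}      # t : A -> V, target of each arc

    def add_arc(self, arc_id, source, target):
        # Each arc is its own object (arc_id), so two flights between the
        # same pair of cities remain distinct.
        self.V.update((source, target))
        self.s[arc_id] = source
        self.t[arc_id] = target

    def arcs_between(self, u, v):
        return [a for a in self.s if self.s[a] == u and self.t[a] == v]

# Airline example from the text: parallel directed arcs between two cities.
g = MultiDigraph()
g.add_arc("BA117", "LHR", "JFK")
g.add_arc("VS003", "LHR", "JFK")
g.add_arc("BA112", "JFK", "LHR")
print(len(g.arcs_between("LHR", "JFK")))  # 2 parallel arcs
```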
Wikipedia/Pseudograph
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. The problem is not known to be solvable in polynomial time nor to be NP-complete, and therefore may be in the computational complexity class NP-intermediate. It is known that the graph isomorphism problem is in the low hierarchy of class NP, which implies that it is not NP-complete unless the polynomial time hierarchy collapses to its second level. At the same time, isomorphism for many special classes of graphs can be solved in polynomial time, and in practice graph isomorphism can often be solved efficiently. This problem is a special case of the subgraph isomorphism problem, which asks whether a given graph G contains a subgraph that is isomorphic to another given graph H; this problem is known to be NP-complete. It is also known to be a special case of the non-abelian hidden subgroup problem over the symmetric group. In the area of image recognition it is known as the exact graph matching problem. == State of the art == In November 2015, László Babai announced a quasi-polynomial time algorithm for all graphs, that is, one with running time 2^(O((log n)^c)) for some fixed c > 0. On January 4, 2017, Babai retracted the quasi-polynomial claim and stated a sub-exponential time bound instead after Harald Helfgott discovered a flaw in the proof. On January 9, 2017, Babai announced a correction (published in full on January 19) and restored the quasi-polynomial claim, with Helfgott confirming the fix. Helfgott further claims that one can take c = 3, so the running time is 2^(O((log n)^3)). Babai published a "preliminary report" on related work at the 2019 Symposium on Theory of Computing, describing a quasipolynomial algorithm for graph canonization, but as of 2025 the full version of these algorithms remains unpublished.
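As the article notes, graph isomorphism can often be decided quickly in practice. A short sketch using the NetworkX library's built-in (VF2-based) tester; the example graphs are chosen only for illustration:

```python
import networkx as nx

G = nx.petersen_graph()
# A relabeled copy is isomorphic by construction.
H = nx.relabel_nodes(G, {v: f"n{v}" for v in G.nodes})
print(nx.is_isomorphic(G, H))   # True

# A graph with a different degree sequence cannot be isomorphic:
# the Petersen graph is 3-regular, the 10-cycle is 2-regular.
K = nx.cycle_graph(10)
print(nx.is_isomorphic(G, K))   # False
```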
Prior to this, the best accepted theoretical algorithm was due to Babai & Luks (1983), and was based on the earlier work by Luks (1982) combined with a subfactorial algorithm of V. N. Zemlyachenko (Zemlyachenko, Korneenko & Tyshkevich 1985). The algorithm has run time 2^(O(√(n log n))) for graphs with n vertices and relies on the classification of finite simple groups. Without this classification theorem, a slightly weaker bound 2^(O(√n log² n)) was obtained first for strongly regular graphs by László Babai (1980), and then extended to general graphs by Babai & Luks (1983). Improvement of the exponent √n for strongly regular graphs was done by Spielman (1996). For hypergraphs of bounded rank, a subexponential upper bound matching the case of graphs was obtained by Babai & Codenotti (2008). There are several competing practical algorithms for graph isomorphism, such as those due to McKay (1981), Schmidt & Druffel (1976), Ullman (1976), and Stoichev (2019). While they seem to perform well on random graphs, a major drawback of these algorithms is their exponential time performance in the worst case. The graph isomorphism problem is computationally equivalent to the problem of computing the automorphism group of a graph, and is weaker than the permutation group isomorphism problem and the permutation group intersection problem. For the latter two problems, Babai, Kantor & Luks (1983) obtained complexity bounds similar to that for graph isomorphism. == Solved special cases == A number of important special cases of the graph isomorphism problem have efficient, polynomial-time solutions: Trees Planar graphs (In fact, planar graph isomorphism is in log space, a class contained in P) Interval graphs Permutation graphs Circulant graphs Bounded-parameter graphs Graphs of bounded treewidth Graphs of bounded genus (Planar graphs are graphs of genus 0.)
Graphs of bounded degree Graphs with bounded eigenvalue multiplicity k-Contractible graphs (a generalization of bounded degree and bounded genus) Color-preserving isomorphism of colored graphs with bounded color multiplicity (i.e., at most k vertices have the same color for a fixed k) is in class NC, which is a subclass of P. == Complexity class GI == Since the graph isomorphism problem is neither known to be NP-complete nor known to be tractable, researchers have sought to gain insight into the problem by defining a new class GI, the set of problems with a polynomial-time Turing reduction to the graph isomorphism problem. If in fact the graph isomorphism problem is solvable in polynomial time, GI would equal P. On the other hand, if the problem is NP-complete, GI would equal NP and all problems in NP would be solvable in quasi-polynomial time. As is common for complexity classes within the polynomial time hierarchy, a problem is called GI-hard if there is a polynomial-time Turing reduction from any problem in GI to that problem, i.e., a polynomial-time solution to a GI-hard problem would yield a polynomial-time solution to the graph isomorphism problem (and so all problems in GI). A problem X {\displaystyle X} is called complete for GI, or GI-complete, if it is both GI-hard and a polynomial-time solution to the GI problem would yield a polynomial-time solution to X {\displaystyle X} . The graph isomorphism problem is contained in both NP and co-AM. GI is contained in and low for Parity P, as well as contained in the potentially much smaller class SPP. That it lies in Parity P means that the graph isomorphism problem is no harder than determining whether a polynomial-time nondeterministic Turing machine has an even or odd number of accepting paths. GI is also contained in and low for ZPPNP. 
This essentially means that an efficient Las Vegas algorithm with access to an NP oracle can solve graph isomorphism so easily that it gains no power from being given the ability to do so in constant time. === GI-complete and GI-hard problems === ==== Isomorphism of other objects ==== There are a number of classes of mathematical objects for which the problem of isomorphism is a GI-complete problem. A number of them are graphs endowed with additional properties or restrictions: digraphs; labelled graphs, with the proviso that an isomorphism is not required to preserve the labels, but only the equivalence relation consisting of pairs of vertices with the same label; "polarized graphs" (made of a complete graph Km and an empty graph Kn plus some edges connecting the two; their isomorphism must preserve the partition); 2-colored graphs; explicitly given finite structures; multigraphs; hypergraphs; finite automata; Markov decision processes; commutative class 3 nilpotent (i.e., xyz = 0 for all elements x, y, z) semigroups; finite-rank associative algebras over a fixed algebraically closed field with zero squared radical and commutative factor over the radical; context-free grammars; normal-form games; balanced incomplete block designs; and recognizing combinatorial isomorphism of convex polytopes represented by vertex-facet incidences. ==== GI-complete classes of graphs ==== A class of graphs is called GI-complete if recognition of isomorphism for graphs from this subclass is a GI-complete problem. The following classes are GI-complete: connected graphs; graphs of diameter 2 and radius 1; directed acyclic graphs; regular graphs; bipartite graphs without non-trivial strongly regular subgraphs; bipartite Eulerian graphs; bipartite regular graphs; line graphs; split graphs; chordal graphs; regular self-complementary graphs; and polytopal graphs of general, simple, and simplicial convex polytopes in arbitrary dimensions. Many classes of digraphs are also GI-complete.
==== Other GI-complete problems ==== There are other nontrivial GI-complete problems in addition to isomorphism problems. Finding a graph's automorphism group. Counting automorphisms of a graph. The recognition of self-complementarity of a graph or digraph. A clique problem for a class of so-called M-graphs. It is shown that finding an isomorphism for n-vertex graphs is equivalent to finding an n-clique in an M-graph of size n². This fact is interesting because the problem of finding a clique of order (1 − ε)n in an M-graph of size n² is NP-complete for arbitrarily small positive ε. The problem of homeomorphism of 2-complexes. The definability problem for first-order logic. The input of this problem is a relational database instance I and a relation R, and the question to answer is whether there exists a first-order query Q (without constants) such that Q evaluated on I gives R as the answer. ==== GI-hard problems ==== The problem of counting the number of isomorphisms between two graphs is polynomial-time equivalent to the problem of telling whether even one exists. The problem of deciding whether two convex polytopes given by either the V-description or H-description are projectively or affinely isomorphic. The latter means existence of a projective or affine map between the spaces that contain the two polytopes (not necessarily of the same dimension) which induces a bijection between the polytopes. == Program checking == Manuel Blum and Sampath Kannan (1995) have shown a probabilistic checker for programs for graph isomorphism. Suppose P is a claimed polynomial-time procedure that checks whether two graphs are isomorphic, but P is not trusted. To check if graphs G and H are isomorphic: Ask P whether G and H are isomorphic. If the answer is "yes": Attempt to construct an isomorphism using P as a subroutine. Mark a vertex u in G and v in H, and modify the graphs to make them distinctive (with a small local change). Ask P if the modified graphs are isomorphic.
If no, change v to a different vertex. Continue searching. Either the isomorphism will be found (and can be verified), or P will contradict itself. If the answer is "no": Perform the following 100 times. Choose randomly G or H, and randomly permute its vertices. Ask P if the graph is isomorphic to G and H. (As in the AM protocol for graph nonisomorphism). If any of the tests fail, judge P to be an invalid program. Otherwise, answer "no". This procedure is polynomial-time and gives the correct answer if P is a correct program for graph isomorphism. If P is not a correct program, but answers correctly on G and H, the checker will either give the correct answer, or detect invalid behaviour of P. If P is not a correct program, and answers incorrectly on G and H, the checker will detect invalid behaviour of P with high probability, or answer wrong with probability 2^(−100). Notably, P is used only as a black box. == Applications == Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching, i.e., identification of similarities between graphs, is an important tool in these areas. In these areas the graph isomorphism problem is known as exact graph matching. In cheminformatics and in mathematical chemistry, graph isomorphism testing is used to identify a chemical compound within a chemical database. Also, in organic mathematical chemistry graph isomorphism testing is useful for the generation of molecular graphs and for computer synthesis. Chemical database search is an example of graphical data mining, where the graph canonization approach is often used.
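The "no"-branch self-test from the Program checking section above can be sketched in code. This is a simplified illustration, not Blum and Kannan's exact protocol: P is treated purely as a black box, and the function name and trial count are invented for the example:

```python
import random
import networkx as nx

def check_no_answer(P, G, H, trials=100):
    """If P claims G and H are non-isomorphic, probe P with graphs that
    are isomorphic copies of G or H by construction; a correct P must
    answer 'yes' on every probe."""
    for _ in range(trials):
        X = random.choice([G, H])
        perm = list(X.nodes)
        random.shuffle(perm)
        # Y is a randomly permuted copy of X, hence isomorphic to it;
        # the checker knows which of G, H it came from, P does not.
        Y = nx.relabel_nodes(X, dict(zip(X.nodes, perm)))
        if not P(X, Y):
            return "P is an invalid program"
    return "no"

# With a correct tester the self-test passes; the two graphs have the
# same size but different degree sequences, so "no" is correct.
print(check_no_answer(nx.is_isomorphic, nx.path_graph(4), nx.star_graph(3)))
```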
In particular, a number of identifiers for chemical substances, such as SMILES and InChI, designed to provide a standard and human-readable way to encode molecular information and to facilitate the search for such information in databases and on the web, use a canonization step in their computation, which is essentially the canonization of the graph which represents the molecule. In electronic design automation graph isomorphism is the basis of the Layout Versus Schematic (LVS) circuit design step, which verifies that the electric circuits represented by a circuit schematic and an integrated circuit layout are the same. == See also == Graph automorphism problem Graph canonization == Notes == == References ==
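One hedged illustration of the canonization idea: the Weisfeiler–Lehman graph hash in NetworkX is a practical isomorphism invariant (isomorphic graphs always receive equal hashes, though equal hashes do not by themselves prove isomorphism). It is shown here only as an analogy; it is not the algorithm used by SMILES or InChI:

```python
import networkx as nx

G = nx.petersen_graph()
# An isomorphic copy under an injective relabeling (7v mod 31 is
# injective on the vertex set 0..9).
H = nx.relabel_nodes(G, {v: v * 7 % 31 for v in G.nodes})

# The hash depends only on the isomorphism class, not on vertex names.
print(nx.weisfeiler_lehman_graph_hash(G) == nx.weisfeiler_lehman_graph_hash(H))  # True
```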
Wikipedia/Graph_isomorphism_problem
In chemical graph theory and in mathematical chemistry, a molecular graph or chemical graph is a representation of the structural formula of a chemical compound in terms of graph theory. A chemical graph is a labeled graph whose vertices correspond to the atoms of the compound and edges correspond to chemical bonds. Its vertices are labeled with the kinds of the corresponding atoms and edges are labeled with the types of bonds. For particular purposes any of the labelings may be ignored. A hydrogen-depleted molecular graph or hydrogen-suppressed molecular graph is the molecular graph with hydrogen vertices deleted. In some important cases (topological index calculation etc.) the following classical definition is sufficient: a molecular graph is a connected, undirected graph which admits a one-to-one correspondence with the structural formula of a chemical compound in which the vertices of the graph correspond to atoms of the molecule and edges of the graph correspond to chemical bonds between these atoms. One variant is to represent materials as infinite Euclidean graphs, in particular, crystals as periodic graphs. == History == Arthur Cayley was probably the first to publish results that consider molecular graphs, as early as 1874, even before the introduction of the term "graph". For the purposes of enumeration of isomers, Cayley considered "diagrams" made of points labelled by atoms and connected by links into an assemblage. He further introduced the terms plerogram and kenogram, which are the molecular graph and the hydrogen-suppressed molecular graph respectively. Continuing to delete atoms connected by a single link, one eventually arrives at a mere kenogram, possibly empty. Danail Bonchev in his Chemical Graph Theory traces the origins of representation of chemical forces by diagrams which may be called "chemical graphs" to as early as the mid-18th century.
In the early 18th century, Isaac Newton's notion of gravity had led to speculative ideas that atoms are held together by some kind of "gravitational force". In particular, since 1758 Scottish chemist William Cullen in his lectures used what he called "affinity diagrams" to represent forces supposedly existing between pairs of molecules in a chemical reaction. In a 1789 book by William Higgins similar diagrams were used to represent forces within molecules. These and some other contemporary diagrams had no relation to chemical bonds: the latter notion was introduced only in the following century. == See also == Chemical graph generator Simplified molecular-input line-entry system Substructure search == References ==
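The definitions above (a labeled graph of atoms and bonds, and its hydrogen-suppressed version) can be sketched for a small molecule. A minimal illustration using NetworkX, with the vertex numbering invented for the example:

```python
import networkx as nx

# Molecular graph of ethanol, CH3-CH2-OH: 2 carbons, 1 oxygen, 6 hydrogens.
mol = nx.Graph()
atoms = {0: "C", 1: "C", 2: "O", 3: "H", 4: "H", 5: "H", 6: "H", 7: "H", 8: "H"}
for v, element in atoms.items():
    mol.add_node(v, element=element)
# Heavy-atom skeleton C-C-O plus the six hydrogens (bond labels omitted).
mol.add_edges_from([(0, 1), (1, 2),
                    (0, 3), (0, 4), (0, 5), (1, 6), (1, 7), (2, 8)])

# Hydrogen-suppressed molecular graph: delete all hydrogen vertices.
heavy = mol.subgraph(v for v, d in mol.nodes(data=True) if d["element"] != "H")
print(sorted(heavy.nodes))  # [0, 1, 2] -- just the C-C-O skeleton
```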
Wikipedia/Molecular_graph
In graph theory, a branch of mathematics, many important families of graphs can be described by a finite set of individual graphs that do not belong to the family: the family consists precisely of those graphs that do not contain any of these forbidden graphs as an (induced) subgraph or minor. A prototypical example of this phenomenon is Kuratowski's theorem, which states that a graph is planar (can be drawn without crossings in the plane) if and only if it does not contain either of two forbidden graphs, the complete graph K5 and the complete bipartite graph K3,3. For Kuratowski's theorem, the notion of containment is that of graph homeomorphism, in which a subdivision of one graph appears as a subgraph of the other. Thus, every graph either has a planar drawing (in which case it belongs to the family of planar graphs) or it has a subdivision of at least one of these two graphs as a subgraph (in which case it does not belong to the planar graphs). == Definition == More generally, a forbidden graph characterization is a method of specifying a family of graph, or hypergraph, structures by specifying substructures that are forbidden to exist within any graph in the family. Different families vary in the nature of what is forbidden. In general, a structure G is a member of a family F if and only if no forbidden substructure is contained in G. The forbidden substructure might be one of: subgraphs, smaller graphs obtained from subsets of the vertices and edges of a larger graph; induced subgraphs, smaller graphs obtained by selecting a subset of the vertices and using all edges with both endpoints in that subset; homeomorphic subgraphs (also called topological minors), smaller graphs obtained from subgraphs by collapsing paths of degree-two vertices to single edges; or graph minors, smaller graphs obtained from subgraphs by arbitrary edge contractions.
The set of structures that are forbidden from belonging to a given graph family can also be called an obstruction set for that family. Forbidden graph characterizations may be used in algorithms for testing whether a graph belongs to a given family. In many cases, it is possible to test in polynomial time whether a given graph contains any of the members of the obstruction set, and therefore whether it belongs to the family defined by that obstruction set. In order for a family to have a forbidden graph characterization, with a particular type of substructure, the family must be closed under substructures. That is, every substructure (of a given type) of a graph in the family must be another graph in the family. Equivalently, if a graph is not part of the family, all larger graphs containing it as a substructure must also be excluded from the family. When this is true, there always exists an obstruction set (the set of graphs that are not in the family but whose smaller substructures all belong to the family). However, for some notions of what a substructure is, this obstruction set could be infinite. The Robertson–Seymour theorem proves that, for the particular case of graph minors, a family that is closed under minors always has a finite obstruction set. == List of forbidden characterizations for graphs and hypergraphs == == See also == Erdős–Hajnal conjecture Forbidden subgraph problem Matroid minor Zarankiewicz problem == References ==
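Kuratowski's characterization mentioned above can be observed in practice: NetworkX ships a linear-time planarity test, which rejects exactly the graphs containing a subdivision of K5 or K3,3. A short sketch for illustration:

```python
import networkx as nx

# The two forbidden graphs of Kuratowski's theorem, plus a planar graph.
for name, G in [("K5", nx.complete_graph(5)),
                ("K3,3", nx.complete_bipartite_graph(3, 3)),
                ("K4", nx.complete_graph(4))]:
    is_planar, _ = nx.check_planarity(G)
    print(name, is_planar)
# K5 False, K3,3 False, K4 True
```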
Wikipedia/Forbidden_graph_characterization
This partial list of graphs contains definitions of graphs and graph families. For collected definitions of graph theory terms that do not refer to individual graph types, such as vertex and path, see Glossary of graph theory. For links to existing articles about particular kinds of graphs, see Category:Graphs. Some of the finite structures considered in graph theory have names, sometimes inspired by the graph's topology, and sometimes after their discoverer. A famous example is the Petersen graph, a concrete graph on 10 vertices that appears as a minimal example or counterexample in many different contexts. == Individual graphs == == Highly symmetric graphs == === Strongly regular graphs === The strongly regular graph on v vertices and degree k is usually denoted srg(v,k,λ,μ). === Symmetric graphs === A symmetric graph is one in which there is a symmetry (graph automorphism) taking any ordered pair of adjacent vertices to any other ordered pair; the Foster census lists all small symmetric 3-regular graphs. Every strongly regular graph is symmetric, but not vice versa. === Semi-symmetric graphs === == Graph families == === Complete graphs === The complete graph on n vertices is often called the n-clique and is usually denoted K_n, from German komplett. === Complete bipartite graphs === The complete bipartite graph is usually denoted K_{n,m}. For n = 1 see the section on star graphs. The graph K_{2,2} equals the 4-cycle C_4 (the square) introduced below. === Cycles === The cycle graph on n vertices is called the n-cycle and is usually denoted C_n. It is also called a cyclic graph, a polygon or the n-gon.
Special cases are the triangle C_3, the square C_4, and then several with Greek names: the pentagon C_5, the hexagon C_6, etc. === Friendship graphs === The friendship graph Fn can be constructed by joining n copies of the cycle graph C3 with a common vertex. === Fullerene graphs === In graph theory, a fullerene is any polyhedral graph with all faces of size 5 or 6 (including the external face). It follows from Euler's polyhedron formula, V − E + F = 2 (where V, E, F indicate the number of vertices, edges, and faces), that there are exactly 12 pentagons in a fullerene and h = V/2 − 10 hexagons. Therefore V = 20 + 2h and E = 30 + 3h. Fullerene graphs are the Schlegel representations of the corresponding fullerene compounds. An algorithm to generate all the non-isomorphic fullerenes with a given number of hexagonal faces has been developed by G. Brinkmann and A. Dress. G. Brinkmann also provided a freely available implementation, called fullgen. === Platonic solids === The complete graph on four vertices forms the skeleton of the tetrahedron, and more generally the complete graphs form skeletons of simplices. The hypercube graphs are also skeletons of higher-dimensional regular polytopes. === Truncated solids === === Snarks === A snark is a bridgeless cubic graph that requires four colors in any proper edge coloring. The smallest snark is the Petersen graph, already listed above. === Star === A star Sk is the complete bipartite graph K1,k. The star S3 is called the claw graph. === Wheel graphs === The wheel graph Wn is a graph on n vertices constructed by connecting a single vertex to every vertex in an (n − 1)-cycle. == Other graphs == This partial list contains definitions of graphs and graph families which are known by particular names, but do not have a Wikipedia article of their own.
=== Gear === A gear graph, denoted Gn, is a graph obtained by inserting an extra vertex between each pair of adjacent vertices on the perimeter of a wheel graph Wn. Thus, Gn has 2n+1 vertices and 3n edges. Gear graphs are examples of squaregraphs, and play a key role in the forbidden graph characterization of squaregraphs. Gear graphs are also known as cogwheels and bipartite wheels. === Helm === A helm graph, denoted Hn, is a graph obtained by attaching a single edge and node to each node of the outer circuit of a wheel graph Wn. === Lobster === A lobster graph is a tree in which all the vertices are within distance 2 of a central path. Compare caterpillar. === Web === The web graph Wn,r is a graph consisting of r concentric copies of the cycle graph Cn, with corresponding vertices connected by "spokes". Thus Wn,1 is the same graph as Cn, and Wn,2 is a prism. A web graph has also been defined as a prism graph Yn+1, 3, with the edges of the outer cycle removed. == References ==
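Many of the families listed above have standard generators in the NetworkX library. A quick sketch checking a few of the counts stated in the text (the variable names are mine):

```python
import networkx as nx

K5 = nx.complete_graph(5)        # the 5-clique: C(5,2) = 10 edges
C6 = nx.cycle_graph(6)           # the hexagon
S3 = nx.star_graph(3)            # the claw K_{1,3}
W7 = nx.wheel_graph(7)           # hub joined to a 6-cycle: 7 vertices
F3 = nx.windmill_graph(3, 3)     # friendship graph F_3: 3 triangles sharing a vertex

print(K5.number_of_edges())                         # 10
print(W7.number_of_nodes(), W7.number_of_edges())   # 7 12
print(F3.number_of_nodes())                         # 7 = 3*2 + 1
```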
Wikipedia/Gallery_of_named_graphs
An electric power system is a network of electrical components deployed to supply, transfer, and use electric power. An example of a power system is the electrical grid that provides power to homes and industries within an extended area. The electrical grid can be broadly divided into the generators that supply the power, the transmission system that carries the power from the generating centers to the load centers, and the distribution system that feeds the power to nearby homes and industries. Smaller power systems are also found in industry, hospitals, commercial buildings, and homes. A single-line diagram is often used to represent such a system compactly. The majority of these systems rely upon three-phase AC power, the standard for large-scale power transmission and distribution across the modern world. Specialized power systems that do not always rely upon three-phase AC power are found in aircraft, electric rail systems, ocean liners, submarines, and automobiles. == History == In 1881, two electricians built the world's first power system at Godalming in England. It was powered by two water wheels and produced an alternating current that in turn supplied seven Siemens arc lamps at 250 volts and 34 incandescent lamps at 40 volts. However, supply to the lamps was intermittent and in 1882 Thomas Edison and his company, Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station initially powered around 3,000 lamps for 59 customers. The power station generated direct current and operated at a single voltage. Direct current power could not be transformed easily or efficiently to the higher voltages necessary to minimize power loss during long-distance transmission, so the maximum economic distance between the generators and load was limited to around half a mile (800 m).
That same year in London, Lucien Gaulard and John Dixon Gibbs demonstrated the "secondary generator"—the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin where the transformer was used to light up 40 kilometers (25 miles) of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series so that active lamps would affect the brightness of other lamps further down the line. In 1885, Ottó Titusz Bláthy working with Károly Zipernowsky and Miksa Déri perfected the secondary generator of Gaulard and Gibbs, providing it with a closed iron core and its present name: the "transformer". The three engineers went on to present a power system at the National General Exhibition of Budapest that implemented the parallel AC distribution system proposed by a British scientist in which several power transformers have their primary windings fed in parallel from a high-voltage distribution line. The system lit more than 1000 carbon filament lamps and operated successfully from May until November of that year. Also in 1885 George Westinghouse, an American entrepreneur, obtained the patent rights to the Gaulard-Gibbs transformer and imported a number of them along with a Siemens generator, and set his engineers to experimenting with them in hopes of improving them for use in a commercial power system. In 1886, one of Westinghouse's engineers, William Stanley, independently recognized the problem with connecting transformers in series as opposed to parallel and also realized that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. 
Using this knowledge he built a multi-voltage transformer-based alternating-current power system serving multiple homes and businesses at Great Barrington, Massachusetts in 1886. The system was unreliable and short-lived, though, due primarily to generation issues. However, based on that system, Westinghouse would begin installing AC transformer systems in competition with the Edison Company later that year. In 1888, Westinghouse licensed Nikola Tesla's patents for a polyphase AC induction motor and transformer designs. Tesla consulted for a year at the Westinghouse Electric & Manufacturing Company but it took a further four years for Westinghouse engineers to develop a workable polyphase motor and transmission system. By 1889, the electric power industry was flourishing, and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe. These networks were effectively dedicated to providing electric lighting. During this time the rivalry between Thomas Edison and George Westinghouse's companies had grown into a propaganda campaign over which form of transmission (direct or alternating current) was superior, a series of events known as the "war of the currents". In 1891, Westinghouse installed the first major power system that was designed to drive a 100 horsepower (75 kW) synchronous electric motor, as well as provide electric lighting, at Telluride, Colorado. On the other side of the Atlantic, Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown, built the first long-distance (175 kilometers (109 miles)) high-voltage (15 kV, then a record) three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt, where power was used to light lamps and run a water pump. 
In the United States the AC/DC competition came to an end when Edison General Electric was taken over by their chief AC rival, the Thomson-Houston Electric Company, forming General Electric. In 1895, after a protracted decision-making process, alternating current was chosen as the transmission standard with Westinghouse building the Adams No. 1 generating station at Niagara Falls and General Electric building the three-phase alternating current power system to supply Buffalo at 11 kV. Developments in power systems continued beyond the nineteenth century. In 1936 the first experimental high voltage direct current (HVDC) line using mercury arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by series-connected direct current generators and motors (the Thury system), although this suffered from serious reliability issues. The first solid-state metal diode suitable for general power uses was developed by Ernst Presser at TeKaDe in 1928. It consisted of a layer of selenium applied on an aluminum plate. In 1957, a General Electric research group developed the first thyristor suitable for use in power applications, starting a revolution in power electronics. In that same year, Siemens demonstrated a solid-state rectifier, but it was not until the early 1970s that solid-state devices became the standard in HVDC, when GE emerged as one of the top suppliers of thyristor-based HVDC. In 1979, a European consortium including Siemens, Brown Boveri & Cie and AEG completed the record HVDC link from Cabora Bassa to Johannesburg, extending more than 1,420 kilometers (880 miles) and carrying 1.9 GW at 533 kV. In recent times, many important developments have come from extending innovations in the information and communications technology (ICT) field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently, allowing for much better planning of power systems.
Advances in information technology and telecommunication also allowed for effective remote control of a power system's switchgear and generators. == Basics of electric power == Electric power is the product of two quantities: current and voltage. These two quantities can vary with respect to time (AC power) or can be kept at constant levels (DC power). Most refrigerators, air conditioners, pumps and industrial machinery use AC power, whereas most computers and digital equipment use DC power (digital devices plugged into the mains typically have an internal or external power adapter to convert from AC to DC power). AC power has the advantage of being easy to transform between voltages and is able to be generated and utilised by brushless machinery. DC power remains the only practical choice in digital systems and can be more economical to transmit over long distances at very high voltages (see HVDC). The ability to easily transform the voltage of AC power is important for two reasons: firstly, power can be transmitted over long distances with less loss at higher voltages. So in power systems where generation is distant from the load, it is desirable to step-up (increase) the voltage of power at the generation point and then step-down (decrease) the voltage near the load. Secondly, it is often more economical to install turbines that produce higher voltages than would be used by most appliances, so the ability to easily transform voltages means this mismatch between voltages can be easily managed. Solid-state devices, which are products of the semiconductor revolution, make it possible to transform DC power to different voltages, build brushless DC machines and convert between AC and DC power. Nevertheless, devices utilising solid-state technology are often more expensive than their traditional counterparts, so AC power remains in widespread use. == Components of power systems == === Supplies === All power systems have one or more sources of power. 
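The voltage step-up argument from the Basics section above can be made concrete with a small calculation (all numbers are invented for illustration): for a fixed delivered power, current falls as voltage rises, and resistive line loss scales with the square of the current.

```python
# Resistive transmission loss: I = P / V, loss = I^2 * R.

def line_loss(power_w, voltage_v, resistance_ohm):
    current = power_w / voltage_v          # I = P / V
    return current ** 2 * resistance_ohm   # loss = I^2 R

P = 10e6   # 10 MW delivered
R = 5.0    # ohms of line resistance
print(line_loss(P, 11e3, R))    # ~4.13 MW lost at 11 kV
print(line_loss(P, 132e3, R))   # ~28.7 kW lost at 132 kV
```

Raising the voltage by a factor of 12 cuts the resistive loss by a factor of 144, which is why generation voltage is stepped up before long-distance transmission and stepped down near the load.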
For some power systems, the source of power is external to the system but for others, it is part of the system itself—it is these internal power sources that are discussed in the remainder of this section. Direct current power can be supplied by batteries, fuel cells or photovoltaic cells. Alternating current power is typically supplied by a rotor that spins in a magnetic field in a device known as a turbo generator. There have been a wide range of techniques used to spin a turbine's rotor, from steam heated using fossil fuel (including coal, gas and oil) or nuclear energy to falling water (hydroelectric power) and wind (wind power). The speed at which the rotor spins in combination with the number of generator poles determines the frequency of the alternating current produced by the generator. All generators on a single synchronous system, for example, the national grid, rotate at sub-multiples of the same speed and so generate electric current at the same frequency. If the load on the system increases, the generators will require more torque to spin at that speed and, in a steam power station, more steam must be supplied to the turbines driving them. Thus the steam used and the fuel expended directly relate to the quantity of electrical energy supplied. An exception exists for generators incorporating power electronics such as gearless wind turbines or linked to a grid through an asynchronous tie such as a HVDC link — these can operate at frequencies independent of the power system frequency. Depending on how the poles are fed, alternating current generators can produce a variable number of phases of power. A higher number of phases leads to more efficient power system operation but also increases the infrastructure requirements of the system. Electricity grid systems connect multiple generators operating at the same frequency: the most common being three-phase at 50 or 60 Hz. There are a range of design considerations for power supplies. 
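The relationship described above between rotor speed, pole count and frequency can be sketched with the standard synchronous-machine formula (the specific machines below are hypothetical examples):

```python
def generator_frequency(poles, rpm):
    """Electrical frequency of a synchronous generator in Hz.
    Each pole pair produces one electrical cycle per revolution:
    f = poles * rpm / 120."""
    return poles * rpm / 120

# A 2-pole turbo generator at 3000 rpm and a 4-pole machine at 1500 rpm
# both produce 50 Hz, so both could serve the same synchronous grid.
print(generator_frequency(2, 3000))  # 50.0
print(generator_frequency(4, 1500))  # 50.0
print(generator_frequency(2, 3600))  # 60.0
```

This is why all generators on a synchronous grid rotate at sub-multiples of the same speed: their mechanical speeds differ, but their electrical frequencies must agree.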
These range from the obvious: How much power should the generator be able to supply? What is an acceptable length of time for starting the generator (some generators can take hours to start)? Is the availability of the power source acceptable (some renewables are only available when the sun is shining or the wind is blowing)? To the more technical: How should the generator start (some turbines act like a motor to bring themselves up to speed in which case they need an appropriate starting circuit)? What is the mechanical speed of operation for the turbine and consequently what is the number of poles required? What type of generator is suitable (synchronous or asynchronous) and what type of rotor (squirrel-cage rotor, wound rotor, salient pole rotor or cylindrical rotor)? === Loads === Power systems deliver energy to loads that perform a function. These loads range from household appliances to industrial machinery. Most loads expect a certain voltage and, for alternating current devices, a certain frequency and number of phases. The appliances found in residential settings, for example, will typically be single-phase operating at 50 or 60 Hz with a voltage between 110 and 260 volts (depending on national standards). An exception exists for larger centralized air conditioning systems as these are now often three-phase because this allows them to operate more efficiently. All electrical appliances also have a wattage rating, which specifies the amount of power the device consumes. At any one time, the net amount of power consumed by the loads on a power system must equal the net amount of power produced by the supplies less the power lost in transmission. Making sure that the voltage, frequency and amount of power supplied to the loads is in line with expectations is one of the great challenges of power system engineering. 
However, it is not the only challenge: in addition to the power used by a load to do useful work (termed real power), many alternating current devices also use an additional amount of power because they cause the alternating voltage and alternating current to become slightly out-of-sync (termed reactive power). The reactive power, like the real power, must balance (that is, the reactive power produced on a system must equal the reactive power consumed) and can be supplied from the generators; however, it is often more economical to supply such power from capacitors (see "Capacitors and reactors" below for more details). A final consideration with loads has to do with power quality. In addition to sustained overvoltages and undervoltages (voltage regulation issues) as well as sustained deviations from the system frequency (frequency regulation issues), power system loads can be adversely affected by a range of temporal issues. These include voltage sags, dips and swells, transient overvoltages, flicker, high-frequency noise, phase imbalance and poor power factor. Power quality issues occur when the power supply to a load deviates from the ideal. Power quality issues can be especially important when it comes to specialist industrial machinery or hospital equipment. === Conductors === Conductors carry power from the generators to the load. In a grid, conductors may be classified as belonging to the transmission system, which carries large amounts of power at high voltages (typically more than 69 kV) from the generating centres to the load centres, or the distribution system, which feeds smaller amounts of power at lower voltages (typically less than 69 kV) from the load centres to nearby homes and industry. Choice of conductors is based on considerations such as cost, transmission losses and other desirable characteristics of the metal like tensile strength. Copper, with lower resistivity than aluminum, was once the conductor of choice for most power systems. 
However, aluminum has a lower cost for the same current carrying capacity and is now often the conductor of choice. Overhead line conductors may be reinforced with steel or aluminium alloys. Conductors in exterior power systems may be placed overhead or underground. Overhead conductors are usually air insulated and supported on porcelain, glass or polymer insulators. Cables used for underground transmission or building wiring are insulated with cross-linked polyethylene or other flexible insulation. Conductors are often stranded to make them more flexible and therefore easier to install. Conductors are typically rated for the maximum current that they can carry at a given temperature rise over ambient conditions. As current flow increases through a conductor it heats up. For insulated conductors, the rating is determined by the insulation. For bare conductors, the rating is determined by the point at which the sag of the conductors would become unacceptable. === Capacitors and reactors === The majority of the load in a typical AC power system is inductive; the current lags behind the voltage. Since the voltage and current are out-of-phase, this leads to the emergence of an "imaginary" form of power known as reactive power. Reactive power does no measurable work but is transmitted back and forth between the reactive power source and load every cycle. This reactive power can be provided by the generators themselves but it is often cheaper to provide it through capacitors, hence capacitors are often placed near inductive loads (if not on-site then at the nearest substation) to reduce current demand on the power system (i.e. increase the power factor). Reactors consume reactive power and are used to regulate voltage on long transmission lines. In light load conditions, where the loading on transmission lines is well below the surge impedance loading, the efficiency of the power system may actually be improved by switching in reactors. 
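The economics of supplying reactive power from capacitors leads to a standard sizing calculation: the capacitor bank must supply the difference in reactive power between the load's existing and target power factors. The load values below are assumptions for illustration:

```python
import math

def correction_var(p_watts, pf_initial, pf_target):
    """Reactive power (VAr) a capacitor bank must supply to raise a load's
    power factor from pf_initial to pf_target (textbook formula):
    Qc = P * (tan(acos(pf_initial)) - tan(acos(pf_target)))."""
    theta1 = math.acos(pf_initial)   # phase angle before correction
    theta2 = math.acos(pf_target)    # phase angle after correction
    return p_watts * (math.tan(theta1) - math.tan(theta2))

# Hypothetical 500 kW inductive load corrected from 0.70 to 0.95 power factor.
qc = correction_var(500e3, 0.70, 0.95)
print(f"capacitor rating = {qc/1e3:.0f} kVAr")  # roughly 346 kVAr
```

Raising the power factor this way reduces the current the system must carry for the same real power delivered.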
Reactors installed in series in a power system also limit rushes of current flow; small reactors are therefore almost always installed in series with capacitors to limit the current rush associated with switching in a capacitor. Series reactors can also be used to limit fault currents. Capacitors and reactors are switched by circuit breakers, which results in sizeable step changes of reactive power. A solution to this comes in the form of synchronous condensers, static VAR compensators and static synchronous compensators. Briefly, synchronous condensers are synchronous motors that spin freely to generate or absorb reactive power. Static VAR compensators work by switching in capacitors using thyristors as opposed to circuit breakers, allowing capacitors to be switched-in and switched-out within a single cycle. This provides a far more refined response than circuit-breaker-switched capacitors. Static synchronous compensators take this a step further by achieving reactive power adjustments using only power electronics. === Power electronics === Power electronics are semiconductor-based devices that are able to switch quantities of power ranging from a few hundred watts to several hundred megawatts. Despite their relatively simple function, their speed of operation (typically in the order of nanoseconds) means they are capable of a wide range of tasks that would be difficult or impossible with conventional technology. The classic function of power electronics is rectification, or the conversion of AC-to-DC power; power electronics are therefore found in almost every digital device that is supplied from an AC source, either as an adapter that plugs into the wall (see photo) or as a component internal to the device. High-powered power electronics can also be used to convert AC power to DC power for long distance transmission in a system known as HVDC. 
HVDC is used because it proves to be more economical than similar high voltage AC systems for very long distances (hundreds to thousands of kilometres). HVDC is also desirable for interconnects because it allows frequency independence, thus improving system stability. Power electronics are also essential for any power source that is required to produce an AC output but that by its nature produces a DC output. They are therefore used by photovoltaic installations. Power electronics also feature in a wide range of more exotic uses. They are at the heart of all modern electric and hybrid vehicles—where they are used for both motor control and as part of the brushless DC motor. Power electronics are also found in practically all modern petrol-powered vehicles; this is because the power provided by the car's batteries alone is insufficient to provide ignition, air-conditioning, internal lighting, radio and dashboard displays for the life of the car. So the batteries must be recharged while driving—a feat that is typically accomplished using power electronics. Some electric railway systems also use DC power and thus make use of power electronics to feed grid power to the locomotives and often for speed control of the locomotive's motor. In the mid-twentieth century, rectifier locomotives were popular; these used power electronics to convert AC power from the railway network for use by a DC motor. Today most electric locomotives are supplied with AC power and run using AC motors, but still use power electronics to provide suitable motor control. The use of power electronics to assist with motor control and with starter circuits, in addition to rectification, is responsible for power electronics appearing in a wide range of industrial machinery. Power electronics even appear in modern residential air conditioners and are at the heart of the variable-speed wind turbine. 
=== Protective devices === Power systems contain protective devices to prevent injury or damage during failures. The quintessential protective device is the fuse. When the current through a fuse exceeds a certain threshold, the fuse element melts, producing an arc across the resulting gap that is then extinguished, interrupting the circuit. Given that fuses can be built as the weak point of a system, they are ideal for protecting circuitry from damage. Fuses however have two problems: First, after they have functioned, fuses must be replaced as they cannot be reset. This can prove inconvenient if the fuse is at a remote site or a spare fuse is not on hand. And second, fuses are typically inadequate as the sole safety device in most power systems as they allow current flows well in excess of those that would prove lethal to a human or animal. The first problem is resolved by the use of circuit breakers—devices that can be reset after they have broken current flow. In modern systems that use less than about 10 kW, miniature circuit breakers are typically used. These devices combine the mechanism that initiates the trip (by sensing excess current) as well as the mechanism that breaks the current flow in a single unit. Some miniature circuit breakers operate solely on the basis of electromagnetism. In these miniature circuit breakers, the current is run through a solenoid, and, in the event of excess current flow, the magnetic pull of the solenoid is sufficient to force open the circuit breaker's contacts (often indirectly through a tripping mechanism). In higher powered applications, the protective relays that detect a fault and initiate a trip are separate from the circuit breaker. Early relays worked based upon electromagnetic principles similar to those mentioned in the previous paragraph; modern relays are application-specific computers that determine whether to trip based upon readings from the power system. 
Different relays will initiate trips depending upon different protection schemes. For example, an overcurrent relay might initiate a trip if the current on any phase exceeds a certain threshold whereas a set of differential relays might initiate a trip if the sum of currents between them indicates there may be current leaking to earth. The circuit breakers in higher powered applications are different too. Air is typically no longer sufficient to quench the arc that forms when the contacts are forced open so a variety of techniques are used. One of the most popular techniques is to keep the chamber enclosing the contacts flooded with sulfur hexafluoride (SF6)—a non-toxic gas with sound arc-quenching properties. Other techniques are discussed in the reference. The second problem, the inadequacy of fuses to act as the sole safety device in most power systems, is probably best resolved by the use of residual-current devices (RCDs). In any properly functioning electrical appliance, the current flowing into the appliance on the active line should equal the current flowing out of the appliance on the neutral line. A residual current device works by monitoring the active and neutral lines and tripping the active line if it notices a difference. Residual current devices require a separate neutral line for each phase and must be able to trip within a time frame short enough to prevent harm. This is typically not a problem in most residential applications, where standard wiring provides an active and neutral line for each appliance (that is why power plugs always have at least two prongs) and the voltages are relatively low; however, these issues limit the effectiveness of RCDs in other applications such as industry. Even with the installation of an RCD, exposure to electricity can still prove fatal. 
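The residual-current principle described above reduces to a simple comparison, sketched here with a 30 mA threshold (a common rating for personal protection, used as an assumption):

```python
def rcd_trips(active_amps, neutral_amps, threshold_amps=0.030):
    """A residual-current device trips when the current entering on the
    active line and returning on the neutral line differ by more than the
    residual threshold, indicating current leaking to earth."""
    return abs(active_amps - neutral_amps) > threshold_amps

print(rcd_trips(10.000, 10.000))  # False: healthy appliance, currents balance
print(rcd_trips(10.050, 10.000))  # True: 50 mA of current leaking to earth
```

Note that the load current itself (10 A here) plays no role: only the imbalance matters, which is why an RCD can detect a leakage far smaller than the current a fuse would allow.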
=== SCADA systems === In large electric power systems, supervisory control and data acquisition (SCADA) is used for tasks such as switching on generators, controlling generator output and switching in or out system elements for maintenance. The first supervisory control systems implemented consisted of a panel of lamps and switches at a central console near the controlled plant. The lamps provided feedback on the state of the plant (the data acquisition function) and the switches allowed adjustments to the plant to be made (the supervisory control function). Today, SCADA systems are much more sophisticated and, due to advances in communication systems, the consoles controlling the plant no longer need to be near the plant itself. Instead, it is now common for plants to be controlled with equipment similar (if not identical) to a desktop computer. The ability to control such plants through computers has increased the need for security—there have already been reports of cyber-attacks on such systems causing significant disruptions to power systems. == Power systems in practice == Despite their common components, power systems vary widely both with respect to their design and how they operate. This section introduces some common power system types and briefly explains their operation. === Residential power systems === Residential dwellings almost always take supply from the low voltage distribution lines or cables that run past the dwelling. These operate at voltages of between 110 and 260 volts (phase-to-earth) depending upon national standards. A few decades ago small dwellings would be fed a single phase using a dedicated two-core service cable (one core for the active phase and one core for the neutral return). The active line would then be run through a main isolating switch in the fuse box and then split into one or more circuits to feed lighting and appliances inside the house. 
By convention, the lighting and appliance circuits are kept separate so the failure of an appliance does not leave the dwelling's occupants in the dark. All circuits would be fused with an appropriate fuse based upon the wire size used for that circuit. Circuits would have both an active and neutral wire with both the lighting and power sockets being connected in parallel. Sockets would also be provided with a protective earth. This would be made available to appliances to connect to any metallic casing. If this casing were to become live, the theory is the connection to earth would cause an RCD or fuse to trip—thus preventing the future electrocution of an occupant handling the appliance. Earthing systems vary between regions, but in countries such as the United Kingdom and Australia both the protective earth and neutral line would be earthed together near the fuse box before the main isolating switch and the neutral earthed once again back at the distribution transformer. There have been a number of minor changes over the years to the practice of residential wiring. Some of the most significant ways modern residential power systems in developed countries tend to vary from older ones include: For convenience, miniature circuit breakers are now almost always used in the fuse box instead of fuses as these can easily be reset by occupants and, if of the thermomagnetic type, can respond more quickly to some types of fault. For safety reasons, RCDs are now often installed on appliance circuits and, increasingly, even on lighting circuits. Whereas residential air conditioners of the past might have been fed from a dedicated circuit attached to a single phase, larger centralised air conditioners that require three-phase power are now becoming common in some countries. Protective earths are now run with lighting circuits to allow for metallic lamp holders to be earthed. Increasingly residential power systems are incorporating microgenerators, most notably, photovoltaic cells. 
=== Commercial power systems === Commercial power systems, such as those serving shopping centers or high-rise buildings, are larger in scale than residential systems. Electrical designs for larger commercial systems are usually studied for load flow, short-circuit fault levels and voltage drop. The objectives of the studies are to assure proper equipment and conductor sizing, and to coordinate protective devices so that minimal disruption is caused when a fault is cleared. Large commercial installations will have an orderly system of sub-panels, separate from the main distribution board, to allow for better system protection and more efficient electrical installation. Typically one of the largest appliances connected to a commercial power system in hot climates is the HVAC unit, and ensuring this unit is adequately supplied is an important consideration in commercial power systems. Regulations for commercial establishments place other requirements on commercial systems that are not placed on residential systems. For example, in Australia, commercial systems must comply with AS 2293, the standard for emergency lighting, which requires emergency lighting be maintained for at least 90 minutes in the event of loss of mains supply. In the United States, the National Electrical Code requires commercial systems to be built with at least one 20 A sign outlet in order to light outdoor signage. Building code regulations may place special requirements on the electrical system for emergency lighting, evacuation, emergency power, smoke control and fire protection. == Power system management == Power system management varies depending upon the power system. Residential power systems and even automotive electrical systems are often run-to-fail. In aviation, the power system uses redundancy to ensure availability. On the Boeing 747-400 any of the four engines can provide power and circuit breakers are checked as part of power-up (a tripped circuit breaker indicating a fault). 
Larger power systems require active management. In industrial plants or mining sites a single team might be responsible for fault management, augmentation and maintenance, whereas for the electric grid, management is divided amongst several specialised teams. === Fault management === Fault management involves monitoring the behaviour of the power system so as to identify and correct issues that affect the system's reliability. Fault management can be specific and reactive: for example, dispatching a team to restring conductor that has been brought down during a storm. Or, alternatively, it can focus on systemic improvements: such as the installation of reclosers on sections of the system that are subject to frequent temporary disruptions (as might be caused by vegetation, lightning or wildlife). === Maintenance and augmentation === In addition to fault management, power systems may require maintenance or augmentation. As it is often neither economical nor practical for large parts of the system to be offline during this work, power systems are built with many switches. These switches allow the part of the system being worked on to be isolated while the rest of the system remains live. At high voltages, there are two switches of note: isolators and circuit breakers. Circuit breakers are load-breaking switches, whereas operating isolators under load would lead to unacceptable and dangerous arcing. In a typical planned outage, several circuit breakers are tripped to allow the isolators to be switched before the circuit breakers are again closed to reroute power around the isolated area. This allows work to be completed on the isolated area. === Frequency and voltage management === Beyond fault management and maintenance, one of the main difficulties in power systems is that the active power consumed plus losses must equal the active power produced. 
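The active-power balance just stated drives the system frequency. A toy model shows the direction of the effect (the gain constant is invented for illustration; real studies use swing equations with machine inertias):

```python
def frequency_step(f_hz, generation_mw, load_mw, gain=0.001, dt=1.0):
    """One time step of a toy frequency model: frequency rises when
    generation exceeds load plus losses, and falls when it lags.
    The gain is an arbitrary illustrative constant."""
    return f_hz + gain * (generation_mw - load_mw) * dt

f = 50.0
f = frequency_step(f, generation_mw=1000, load_mw=950)   # surplus: f rises
print(round(f, 3))  # 50.05
f = frequency_step(f, generation_mw=1000, load_mw=1100)  # deficit: f falls
print(round(f, 3))  # 49.95
```

Keeping the frequency at its nominal value therefore amounts to continuously matching generation to load, which is the system operator's task described below.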
If load is reduced while generation inputs remain constant, the synchronous generators will spin faster and the system frequency will rise. The opposite occurs if load is increased. As such, the system frequency must be actively managed, primarily through switching on and off dispatchable loads and generation. Making sure the frequency is constant is usually the task of a system operator. Even with frequency maintained, the system operator can be kept occupied ensuring that voltages across the system remain within acceptable limits. == Notes == == See also == Power system simulation == References == == External links == IEEE Power Engineering Society Power Engineering International Magazine Articles Archived 16 November 2009 at the Wayback Machine Power Engineering Magazine Articles Archived 19 February 2009 at the Wayback Machine American Society of Power Engineers, Inc. National Institute for the Uniform Licensing of Power Engineer Inc.
Geometric Algebra is a book written by Emil Artin and published by Interscience Publishers, New York, in 1957. It was republished in 1988 in the Wiley Classics series (ISBN 0-471-60839-4). In 1962 Algèbre Géométrique, a translation into French by Michel Lazard, was published by Gauthier-Villars, and reprinted in 1996 (ISBN 2-87647-089-6). In 1968 a translation into Italian was published in Milan by Feltrinelli. In 1969 a translation into Russian was published in Moscow by Nauka. Long anticipated as the sequel to Moderne Algebra (1930), which Bartel van der Waerden published as his version of notes taken in a course with Artin, Geometric Algebra is a research monograph suitable for graduate students studying mathematics. From the Preface: Linear algebra, topology, differential and algebraic geometry are the indispensable tools of the mathematician of our time. It is frequently desirable to devise a course of geometric nature which is distinct from these great lines of thought and which can be presented to beginning graduate students or even to advanced undergraduates. The present book has grown out of lecture notes for a course of this nature given at New York University in 1955. This course centered around the foundations of affine geometry, the geometry of quadratic forms and the structure of the general linear group. I felt it necessary to enlarge the content of these notes by including projective and symplectic geometry and also the structure of the symplectic and orthogonal groups. The book is illustrated with six geometric configurations in chapter 2, which retraces the path from geometric to field axioms previously explored by Karl von Staudt and David Hilbert. == Contents == Chapter one is titled "Preliminary Notions". The ten sections explicate notions of set theory, vector spaces, homomorphisms, duality, linear equations, group theory, field theory, ordered fields and valuations. 
On page vii Artin says "Chapter I should be used mainly as a reference chapter for the proofs of certain isolated theorems." Chapter two is titled "Affine and Projective Geometry". Artin posits this challenge to generate algebra (a field k) from geometric axioms: Given a plane geometry whose objects are the elements of two sets, the set of points and the set of lines; assume that certain axioms of a geometric nature are true. Is it possible to find a field k such that the points of our geometry can be described by coordinates from k and the lines by linear equations? The reflexive variant of parallelism is invoked: parallel lines have either all or none of their points in common. Thus a line is parallel to itself. Axiom 1 requires a unique line for each pair of distinct points, and a unique point of intersection of non-parallel lines. Axiom 2 depends on a line and a point; it requires a unique parallel to the line and through the point. Axiom 3 requires three non-collinear points. Axiom 4a requires a translation to move any point to any other. Axiom 4b requires a dilation at P to move Q to R when the three points are collinear. Artin writes the line through P and Q as P + Q. To define a dilation he writes, "Let two distinct points P and Q and their images P′ and Q′ be given." To suggest the role of incidence in geometry, a dilation is specified by this property: "If l′ is the line parallel to P + Q which passes through P′, then Q′ lies on l′." Of course, if P′ ≠ Q′, then this condition implies P + Q is parallel to P′ + Q′, so that the dilation is an affine transformation. The dilations with no fixed points are translations, and the group of translations T is shown to be an invariant subgroup of the group of dilations. For a dilation σ and a point P, the trace is P + σP. The mappings T → T that are trace-preserving homomorphisms are the elements of k. First k is shown to be an associative ring with 1, then a skew field. 
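Artin's dilations can be made concrete in the familiar coordinate model. The sketch below works over the real numbers rather than reconstructing the field from the axioms, so it illustrates the objects involved, not Artin's proof:

```python
# Illustrative coordinate model (not Artin's axiomatic construction):
# over a known field such as the reals, a dilation of the plane is the map
# x -> r*x + t with r != 0; the translations are exactly the dilations
# with r = 1. Composing two dilations yields another, as a group requires.
class Dilation:
    def __init__(self, r, t):
        assert r != 0, "a dilation must have a nonzero ratio"
        self.r, self.t = r, tuple(t)

    def __call__(self, p):
        return (self.r * p[0] + self.t[0], self.r * p[1] + self.t[1])

    def compose(self, other):
        # (self . other)(x) = r1*(r2*x + t2) + t1 = (r1*r2)*x + (r1*t2 + t1)
        return Dilation(self.r * other.r,
                        (self.r * other.t[0] + self.t[0],
                         self.r * other.t[1] + self.t[1]))

sigma = Dilation(2, (1, 0))   # a dilation with ratio 2
tau = Dilation(1, (0, 3))     # a translation (ratio 1)
print(sigma.compose(tau)((1, 1)))  # (3, 8)
```

In this model a dilation with r ≠ 1 has exactly one fixed point (x = t/(1 − r)), while a translation with t ≠ 0 has none, matching the fixed-point distinction Artin draws between dilations and translations.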
Conversely, there is an affine geometry based on any given skew field k. Axioms 4a and 4b are equivalent to Desargues' theorem. When Pappus's hexagon theorem holds in the affine geometry, k is commutative and hence a field. Chapter three is titled "Symplectic and Orthogonal Geometry". It begins with metric structures on vector spaces before defining symplectic and orthogonal geometry and describing their common and special features. There are sections on geometry over finite fields and over ordered fields. Chapter four is on general linear groups. First there is Jean Dieudonné's theory of determinants over "non-commutative fields" (division rings). Artin describes the GL(n, k) group structure. More details are given about vector spaces over finite fields. Chapter five is "The Structure of Symplectic and Orthogonal Groups". It includes sections on elliptic spaces, Clifford algebra, and spinorial norm. == Reviews == Alice T. Schafer wrote "Mathematicians will find on many pages ample evidence of the author's ability to penetrate a subject and to present material in a particularly elegant manner." She notes the overlap between Artin's text and Baer's Linear Algebra and Projective Geometry or Dieudonné's La Géométrie des Groupes Classiques. Jean Dieudonné reviewed the book for Mathematical Reviews and placed it on a level with Hilbert's Grundlagen der Geometrie. == References == Geometric Algebra at Internet Archive
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. This excludes bodies that display fluid, highly elastic, and plastic behavior. The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems. == Planar rigid body dynamics == If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, Pi, i = 1, ..., N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain

\mathbf{F} = \sum_{i=1}^{N} m_i \mathbf{A}_i, \quad \mathbf{T} = \sum_{i=1}^{N} (\mathbf{r}_i - \mathbf{R}) \times m_i \mathbf{A}_i,

where r_i denotes the planar trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as

\mathbf{A}_i = \boldsymbol{\alpha} \times (\mathbf{r}_i - \mathbf{R}) + \boldsymbol{\omega} \times \left(\boldsymbol{\omega} \times (\mathbf{r}_i - \mathbf{R})\right) + \mathbf{A}.

For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors e_i from the reference point R to a point r_i and the unit vectors \mathbf{t}_i = \mathbf{k} \times \mathbf{e}_i, so

\mathbf{A}_i = \alpha (\Delta r_i \mathbf{t}_i) - \omega^2 (\Delta r_i \mathbf{e}_i) + \mathbf{A}.

This yields the resultant force on the system as

\mathbf{F} = \alpha \sum_{i=1}^{N} m_i (\Delta r_i \mathbf{t}_i) - \omega^2 \sum_{i=1}^{N} m_i (\Delta r_i \mathbf{e}_i) + \left(\sum_{i=1}^{N} m_i\right) \mathbf{A},

and torque as

\mathbf{T} = \sum_{i=1}^{N} (m_i \Delta r_i \mathbf{e}_i) \times \left(\alpha (\Delta r_i \mathbf{t}_i) - \omega^2 (\Delta r_i \mathbf{e}_i) + \mathbf{A}\right) = \left(\sum_{i=1}^{N} m_i \Delta r_i^2\right) \alpha \mathbf{k} + \left(\sum_{i=1}^{N} m_i \Delta r_i \mathbf{e}_i\right) \times \mathbf{A},

where \mathbf{e}_i \times \mathbf{e}_i = 0 and \mathbf{e}_i \times \mathbf{t}_i = \mathbf{k} is the unit vector perpendicular to the plane for all of the particles Pi. 
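The planar equations above can be checked numerically. The sketch below builds an arbitrary rigid set of point masses, computes each particle's acceleration, and verifies that when the center of mass is the reference point, the resultant force and torque reduce to the simple forms F = MA and T = I_C α k (the masses, positions, and motion values are invented for the check):

```python
import numpy as np

# Hypothetical rigid planar system of point masses (values invented).
masses = np.array([1.0, 2.0, 1.5, 0.5])
points = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5], [0.5, -1.0]])

M = masses.sum()
C = (masses[:, None] * points).sum(axis=0) / M   # center of mass

omega, alpha = 0.7, 1.3          # angular velocity / acceleration about k
A_ref = np.array([0.2, -0.1])    # acceleration of the reference point C

def cross_k(v):
    """k x v for a planar vector v: a 90-degree counterclockwise rotation."""
    return np.array([-v[1], v[0]])

# A_i = alpha k x (r_i - C) - omega^2 (r_i - C) + A
rel = points - C
acc = np.array([alpha * cross_k(d) - omega**2 * d + A_ref for d in rel])

# Resultant force, and torque (z-component) about C
F = (masses[:, None] * acc).sum(axis=0)
T = sum(m * (d[0] * a[1] - d[1] * a[0]) for m, d, a in zip(masses, rel, acc))

I_C = (masses * (rel**2).sum(axis=1)).sum()      # planar moment of inertia
print(np.allclose(F, M * A_ref))   # True: F = M A
print(np.isclose(T, I_C * alpha))  # True: T = I_C alpha
```

The angular-velocity and angular-acceleration terms cancel out of the resultant force because the mass-weighted offsets from the center of mass sum to zero, which is exactly what makes the center of mass the natural reference point.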
Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become F = M A , T = I C α k , {\displaystyle \mathbf {F} =M\mathbf {A} ,\quad \mathbf {T} =I_{\textbf {C}}\alpha \mathbf {k} ,} where M is the total mass and IC is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass. == Rigid body in three dimensions == === Orientation or attitude descriptions === Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections. ==== Euler angles ==== The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Commonly, ψ {\displaystyle \psi } is used to denote precession, θ {\displaystyle \theta } nutation, and ϕ {\displaystyle \phi } intrinsic rotation. ==== Tait–Bryan angles ==== These are three angles, also known as yaw, pitch and roll, Navigation angles and Cardan angles. Mathematically they constitute a set of six possibilities inside the twelve possible sets of Euler angles, the ordering being the one best used for describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles. ==== Orientation vector ==== Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed. 
Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and module equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector. A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure). ==== Orientation matrix ==== With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix. The above-mentioned Euler vector is the eigenvector of a rotation matrix (a rotation matrix has a unique real eigenvalue). The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe. The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation. ==== Orientation quaternion ==== Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions. 
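The equivalence of these representations can be illustrated with a short numerical sketch (using NumPy; the axes and angles are arbitrary illustrative choices, not from the article). Rodrigues' formula builds a rotation matrix from an axis-angle pair, and Euler's rotation theorem shows up in the fact that the product of two rotation matrices is again a single rotation, whose axis is recoverable as the eigenvector with eigenvalue 1 and whose angle follows from tr(R) = 1 + 2 cos θ:

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix from a unit axis and an angle."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Compose a rotation about z with a rotation about x
R1 = axis_angle_to_matrix([0, 0, 1], 0.7)
R2 = axis_angle_to_matrix([1, 0, 0], 0.3)
R = R2 @ R1                                   # composition is one rotation (Euler)

# Recover the single equivalent angle and axis
angle = np.arccos((np.trace(R) - 1.0) / 2.0)  # tr(R) = 1 + 2 cos(angle)
w, v = np.linalg.eig(R)
axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])  # eigenvector for eigenvalue 1
```

The recovered axis is left invariant by R, which is exactly the statement of the rotation theorem.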
=== Newton's second law in three dimensions === To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it. Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed." Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as F = m a , {\displaystyle \mathbf {F} =m\mathbf {a} ,} where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles. === Rigid system of particles === If a system of N particles, Pi, i=1,...,N, are assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. If Fi is the external force applied to particle Pi with mass mi, then F i + ∑ j = 1 N F i j = m i a i , i = 1 , … , N , {\displaystyle \mathbf {F} _{i}+\sum _{j=1}^{N}\mathbf {F} _{ij}=m_{i}\mathbf {a} _{i},\quad i=1,\ldots ,N,} where Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles. An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces are applied with the addition of an associated torque. 
The resultant force F and torque T are given by the formulas, F = ∑ i = 1 N F i , T = ∑ i = 1 N ( R i − R ) × F i , {\displaystyle \mathbf {F} =\sum _{i=1}^{N}\mathbf {F} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i},} where Ri is the vector that defines the position of particle Pi. Newton's second law for a particle combines with these formulas for the resultant force and torque to yield, F = ∑ i = 1 N m i a i , T = ∑ i = 1 N ( R i − R ) × ( m i a i ) , {\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {a} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times (m_{i}\mathbf {a} _{i}),} where the internal forces Fij cancel in pairs. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as, a i = α × ( R i − R ) + ω × ( ω × ( R i − R ) ) + a . {\displaystyle \mathbf {a} _{i}=\alpha \times (\mathbf {R} _{i}-\mathbf {R} )+\omega \times (\omega \times (\mathbf {R} _{i}-\mathbf {R} ))+\mathbf {a} .} === Mass properties === The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition ∑ i = 1 N m i ( R i − R ) = 0 , {\displaystyle \sum _{i=1}^{N}m_{i}(\mathbf {R} _{i}-\mathbf {R} )=0,} then it is known as the center of mass of the system. 
The inertia matrix [IR] of the system relative to the reference point R is defined by [ I R ] = ∑ i = 1 N m i ( I ( S i T S i ) − S i S i T ) , {\displaystyle [I_{R}]=\sum _{i=1}^{N}m_{i}\left(\mathbf {I} \left(\mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}\right)-\mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}\right),} where S i {\displaystyle \mathbf {S} _{i}} is the column vector Ri − R; S i T {\displaystyle \mathbf {S} _{i}^{\textsf {T}}} is its transpose, and I {\displaystyle \mathbf {I} } is the 3 by 3 identity matrix. S i T S i {\displaystyle \mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}} is the scalar product of S i {\displaystyle \mathbf {S} _{i}} with itself, while S i S i T {\displaystyle \mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}} is the tensor product of S i {\displaystyle \mathbf {S} _{i}} with itself. === Force-torque equations === Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form F = m a , T = [ I R ] α + ω × [ I R ] ω , {\displaystyle \mathbf {F} =m\mathbf {a} ,\quad \mathbf {T} =[I_{R}]\alpha +\omega \times [I_{R}]\omega ,} and are known as Newton's second law of motion for a rigid body. The dynamics of an interconnected system of rigid bodies, Bj, j = 1, ..., M, is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body yields the force-torque equations F j = m j a j , T j = [ I R ] j α j + ω j × [ I R ] j ω j , j = 1 , … , M . {\displaystyle \mathbf {F} _{j}=m_{j}\mathbf {a} _{j},\quad \mathbf {T} _{j}=[I_{R}]_{j}\alpha _{j}+\omega _{j}\times [I_{R}]_{j}\omega _{j},\quad j=1,\ldots ,M.} Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies. === Rotation in three dimensions === A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation.
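A small numerical check of these mass properties (assumed masses and positions, using NumPy; not part of the article): the center of mass makes the first mass moment vanish, and the inertia matrix built from the formula above reproduces Σ mi Si × (ω × Si) for any angular velocity ω.

```python
import numpy as np

# Hypothetical rigid set of point masses
masses = np.array([1.0, 2.0, 3.0, 0.5])
Ri = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0],
               [-1.0, 2.0, 0.0],
               [0.0, -1.0, 2.0]])

# Center of mass: the reference point R with sum m_i (R_i - R) = 0
R = (masses[:, None] * Ri).sum(axis=0) / masses.sum()

# Inertia matrix [I_R] = sum m_i ( (S_i^T S_i) I - S_i S_i^T ),  S_i = R_i - R
I_R = np.zeros((3, 3))
for m, ri in zip(masses, Ri):
    S = ri - R
    I_R += m * (np.dot(S, S) * np.eye(3) - np.outer(S, S))

# Consistency check: [I_R] w must equal the angular momentum
# sum m_i S_i x (w x S_i) for any angular velocity w
w = np.array([0.3, -1.1, 0.7])
L = sum(m * np.cross(ri - R, np.cross(w, ri - R)) for m, ri in zip(masses, Ri))
```

The matrix also comes out symmetric, as an inertia matrix must be.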
The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion: τ = D L D t = d L d t + ω × L = d ( I ω ) d t + ω × I ω = I α + ω × I ω {\displaystyle {\boldsymbol {\tau }}={\frac {D\mathbf {L} }{Dt}}={\frac {d\mathbf {L} }{dt}}+{\boldsymbol {\omega }}\times \mathbf {L} ={\frac {d(I{\boldsymbol {\omega }})}{dt}}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}=I{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}} where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body. The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid. It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product: τ = Ω P × L . {\displaystyle {\boldsymbol {\tau }}={\boldsymbol {\Omega }}_{\mathrm {P} }\times \mathbf {L} .} Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling, as might be expected, the top appears to defy gravity by remaining with its axis horizontal even though that end of the axis is unsupported; the free end of the axis slowly describes a circle in a horizontal plane, tracing out the precession. This effect is explained by the above equations.
The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected, causing the device to fall, but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), i.e., about a vertical axis, causing the device to rotate slowly about the supporting point. Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of its angular momentum: τ = Ω P L sin θ , {\displaystyle \tau ={\mathit {\Omega }}_{\mathrm {P} }L\sin \theta ,} where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight; it then stops precessing and falls off its support, largely because friction against the precession causes another precession that acts to bring about the fall. By convention, these three vectors – torque, spin, and precession – are all oriented with respect to each other according to the right-hand rule. == Virtual work of forces acting on a rigid body == An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body. The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and the resultant force and torque. To see this, let the forces F1, F2 ... Fn act on the points R1, R2 ... Rn in a rigid body. The trajectories of Ri, i = 1, ..., n are defined by the movement of the rigid body.
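The relation τ = ΩP L sin θ can be put to numbers for a toy gyroscope (all values below are assumed for illustration; they do not come from the article):

```python
import math

# Illustrative numbers for a toy gyroscope (assumed values)
m = 0.5          # mass of the top, kg
g = 9.81         # gravitational acceleration, m/s^2
r = 0.05         # distance from the support point to the centre of mass, m
I = 2.0e-4       # moment of inertia about the spin axis, kg m^2
spin = 200.0     # spin rate, rad/s

L = I * spin                 # magnitude of the spin angular momentum
tau = m * g * r              # magnitude of the gravitational torque about the support
theta = math.pi / 2          # axis horizontal: angle between Omega_P and L

# tau = Omega_P * L * sin(theta)  ->  precession rate
Omega_P = tau / (L * math.sin(theta))

# Halving the spin halves L, so the precession rate doubles
Omega_P_slow = tau / ((I * spin / 2) * math.sin(theta))
```

The doubling of the precession rate at half the spin matches the observation in the text that precession speeds up as the top slows down.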
The velocities of the points Ri along their trajectories are V i = ω × ( R i − R ) + V , {\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} ,} where ω is the angular velocity vector of the body. === Virtual work === Work is computed from the dot product of each force with the displacement of its point of contact δ W = ∑ i = 1 n F i ⋅ δ r i . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}.} If the trajectory of a rigid body is defined by a set of generalized coordinates qj, j = 1, ..., m, then the virtual displacements δri are given by δ r i = ∑ j = 1 m ∂ r i ∂ q j δ q j = ∑ j = 1 m ∂ V i ∂ q ˙ j δ q j . {\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j}.} The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes δ W = F 1 ⋅ ( ∑ j = 1 m ∂ V 1 ∂ q ˙ j δ q j ) + ⋯ + F n ⋅ ( ∑ j = 1 m ∂ V n ∂ q ˙ j δ q j ) {\displaystyle \delta W=\mathbf {F} _{1}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{1}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)+\dots +\mathbf {F} _{n}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{n}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)} or collecting the coefficients of δqj δ W = ( ∑ i = 1 n F i ⋅ ∂ V i ∂ q ˙ 1 ) δ q 1 + ⋯ + ( ∑ i = 1 n F i ⋅ ∂ V i ∂ q ˙ m ) δ q m .
{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{1}}}\right)\delta q_{1}+\dots +\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{m}}}\right)\delta q_{m}.} === Generalized forces === For simplicity, consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle; then the formula becomes δ W = ( ∑ i = 1 n F i ⋅ ∂ V i ∂ q ˙ ) δ q = ( ∑ i = 1 n F i ⋅ ∂ ( ω × ( R i − R ) + V ) ∂ q ˙ ) δ q . {\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}\right)\delta q=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial ({\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} )}{\partial {\dot {q}}}}\right)\delta q.} Introduce the resultant force F and torque T so this equation takes the form δ W = ( F ⋅ ∂ V ∂ q ˙ + T ⋅ ∂ ω ∂ q ˙ ) δ q . {\displaystyle \delta W=\left(\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}\right)\delta q.} The quantity Q defined by Q = F ⋅ ∂ V ∂ q ˙ + T ⋅ ∂ ω ∂ q ˙ , {\displaystyle Q=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}},} is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is δ W = ∑ j = 1 m Q j δ q j , {\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},} where Q j = F ⋅ ∂ V ∂ q ˙ j + T ⋅ ∂ ω ∂ q ˙ j , j = 1 , … , m .
{\displaystyle Q_{j}=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}_{j}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.} It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function V(q1, ..., qn), known as a potential energy. In this case the generalized forces are given by Q j = − ∂ V ∂ q j , j = 1 , … , m . {\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.} == D'Alembert's form of the principle of virtual work == The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. The principle of virtual work is used to study the static equilibrium of a system of rigid bodies; however, by introducing acceleration terms in Newton's laws this approach is generalized to define dynamic equilibrium. === Static equilibrium === The static equilibrium of a mechanical system of rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work. This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is, Qj = 0. Let a mechanical system be constructed from n rigid bodies, Bi, i = 1, ..., n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, i = 1, ..., n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, i = 1, ..., n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.
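For a concrete one-degree-of-freedom case, the generalized force Q = F·∂V/∂q̇ + T·∂ω/∂q̇ can be evaluated for a planar pendulum and checked against Q = −∂V/∂q from the potential energy (a hand-rolled sketch; the pendulum and its parameter values are assumed for illustration, not taken from the article):

```python
import math

# Planar pendulum: one generalized coordinate q (angle from the downward
# vertical), bob of mass m on a massless rod of length l
m, g, l = 1.2, 9.81, 0.8
q, qdot = 0.6, 2.0

# Bob position: (l sin q, -l cos q)  ->  velocity V = (l qdot cos q, l qdot sin q),
# so the partial derivative of V with respect to qdot is:
dV_dqdot = (l * math.cos(q), l * math.sin(q))

# The only applied force is gravity on the bob
F = (0.0, -m * g)

# Generalized force Q = F . dV/dqdot
Q = F[0] * dV_dqdot[0] + F[1] * dV_dqdot[1]

# The same result from the potential energy V(q) = -m g l cos q:
# Q = -dV/dq = -m g l sin q
Q_from_potential = -m * g * l * math.sin(q)
```

Both routes give the same restoring generalized force, negative for a positive displacement angle.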
The virtual work of the forces and torques, Fi and Ti, applied to this one degree of freedom system is given by δ W = ∑ i = 1 n ( F i ⋅ ∂ V i ∂ q ˙ + T i ⋅ ∂ ω i ∂ q ˙ ) δ q = Q δ q , {\displaystyle \delta W=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right)\delta q=Q\delta q,} where Q = ∑ i = 1 n ( F i ⋅ ∂ V i ∂ q ˙ + T i ⋅ ∂ ω i ∂ q ˙ ) , {\displaystyle Q=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right),} is the generalized force acting on this one degree of freedom system. If the mechanical system is defined by m generalized coordinates, qj, j = 1, ..., m, then the system has m degrees of freedom and the virtual work is given by, δ W = ∑ j = 1 m Q j δ q j , {\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},} where Q j = ∑ i = 1 n ( F i ⋅ ∂ V i ∂ q ˙ j + T i ⋅ ∂ ω i ∂ q ˙ j ) , j = 1 , … , m . {\displaystyle Q_{j}=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}_{j}}}\right),\quad j=1,\ldots ,m.} is the generalized force associated with the generalized coordinate qj. The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is Q j = 0 , j = 1 , … , m . {\displaystyle Q_{j}=0,\quad j=1,\ldots ,m.} These m equations define the static equilibrium of the system of rigid bodies. === Generalized inertia forces === Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. 
Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by Q ∗ = − ( M A ) ⋅ ∂ V ∂ q ˙ − ( [ I R ] α + ω × [ I R ] ω ) ⋅ ∂ ω ∂ q ˙ . {\displaystyle Q^{*}=-(M\mathbf {A} )\cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}-\left([I_{R}]{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times [I_{R}]{\boldsymbol {\omega }}\right)\cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}.} This inertia force can be computed from the kinetic energy of the rigid body, T = 1 2 M V ⋅ V + 1 2 ω ⋅ [ I R ] ω , {\displaystyle T={\tfrac {1}{2}}M\mathbf {V} \cdot \mathbf {V} +{\tfrac {1}{2}}{\boldsymbol {\omega }}\cdot [I_{R}]{\boldsymbol {\omega }},} by using the formula Q ∗ = − ( d d t ∂ T ∂ q ˙ − ∂ T ∂ q ) . {\displaystyle Q^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}}}-{\frac {\partial T}{\partial q}}\right).} A system of n rigid bodies with m generalized coordinates has the kinetic energy T = ∑ i = 1 n ( 1 2 M V i ⋅ V i + 1 2 ω i ⋅ [ I R ] ω i ) , {\displaystyle T=\sum _{i=1}^{n}\left({\tfrac {1}{2}}M\mathbf {V} _{i}\cdot \mathbf {V} _{i}+{\tfrac {1}{2}}{\boldsymbol {\omega }}_{i}\cdot [I_{R}]{\boldsymbol {\omega }}_{i}\right),} which can be used to calculate the m generalized inertia forces Q j ∗ = − ( d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j ) , j = 1 , … , m . {\displaystyle Q_{j}^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right),\quad j=1,\ldots ,m.} === Dynamic equilibrium === D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. 
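The kinetic-energy route to the generalized inertia force can be carried out symbolically for a planar pendulum (a sketch using SymPy; the model and its symbols are illustrative, not from the article). Setting Q + Q* = 0 then recovers the familiar pendulum equation of motion:

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
q = sp.Function('q')(t)            # single generalized coordinate

T = sp.Rational(1, 2) * m * l**2 * sp.diff(q, t)**2   # kinetic energy of the bob
V = -m * g * l * sp.cos(q)                            # potential energy

# Generalized inertia force Q* = -(d/dt dT/dqdot - dT/dq)
Qstar = -(sp.diff(sp.diff(T, sp.diff(q, t)), t) - sp.diff(T, q))
# Generalized applied force from the potential, Q = -dV/dq
Q = -sp.diff(V, q)

# Dynamic equilibrium Q + Q* = 0 is the equation of motion,
# equivalent to m l^2 q'' + m g l sin(q) = 0
eom = sp.simplify(Q + Qstar)
```

This is exactly the Lagrange-equation result for L = T − V, obtained here from the virtual-work side.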
Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that δ W = ( Q 1 + Q 1 ∗ ) δ q 1 + ⋯ + ( Q m + Q m ∗ ) δ q m = 0 , {\displaystyle \delta W=\left(Q_{1}+Q_{1}^{*}\right)\delta q_{1}+\dots +\left(Q_{m}+Q_{m}^{*}\right)\delta q_{m}=0,} for any set of virtual displacements δqj. This condition yields m equations, Q j + Q j ∗ = 0 , j = 1 , … , m , {\displaystyle Q_{j}+Q_{j}^{*}=0,\quad j=1,\ldots ,m,} which can also be written as d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j = Q j , j = 1 , … , m . {\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=Q_{j},\quad j=1,\ldots ,m.} The result is a set of m equations of motion that define the dynamics of the rigid body system. === Lagrange's equations === If the generalized forces Qj are derivable from a potential energy V(q1, ..., qm), then these equations of motion take the form d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j = − ∂ V ∂ q j , j = 1 , … , m . {\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.} In this case, introduce the Lagrangian, L = T − V, so these equations of motion become d d t ∂ L ∂ q ˙ j − ∂ L ∂ q j = 0 j = 1 , … , m . {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}-{\frac {\partial L}{\partial q_{j}}}=0\quad j=1,\ldots ,m.} These are known as Lagrange's equations of motion. == Linear and angular momentum == === System of particles === The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n be located at the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors, r i = ( r i − R ) + R , v i = d d t ( r i − R ) + V . 
{\displaystyle \mathbf {r} _{i}=\left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {R} ,\quad \mathbf {v} _{i}={\frac {d}{dt}}(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} .} The total linear and angular momentum vectors relative to the reference point R are p = d d t ( ∑ i = 1 n m i ( r i − R ) ) + ( ∑ i = 1 n m i ) V , {\displaystyle \mathbf {p} ={\frac {d}{dt}}\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,} and L = ∑ i = 1 n m i ( r i − R ) × d d t ( r i − R ) + ( ∑ i = 1 n m i ( r i − R ) ) × V . {\displaystyle \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right)+\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)\times \mathbf {V} .} If R is chosen as the center of mass these equations simplify to p = M V , L = ∑ i = 1 n m i ( r i − R ) × d d t ( r i − R ) . {\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right).} === Rigid system of particles === To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so Pi, i=1,...,n are located by the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors, r i = ( r i − R ) + R , v i = ω × ( r i − R ) + V , {\displaystyle \mathbf {r} _{i}=(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {R} ,\quad \mathbf {v} _{i}=\omega \times (\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} ,} where ω is the angular velocity of the system. The linear momentum and angular momentum of this rigid system measured relative to the center of mass R is p = ( ∑ i = 1 n m i ) V , L = ∑ i = 1 n m i ( r i − R ) × v i = ∑ i = 1 n m i ( r i − R ) × ( ω × ( r i − R ) ) . 
{\displaystyle \mathbf {p} =\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times \mathbf {v} _{i}=\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times (\omega \times (\mathbf {r} _{i}-\mathbf {R} )).} These equations simplify to become p = M V , L = [ I R ] ω , {\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =[I_{R}]\omega ,} where M is the total mass of the system and [IR] is the moment of inertia matrix defined by [ I R ] = − ∑ i = 1 n m i [ r i − R ] [ r i − R ] , {\displaystyle [I_{R}]=-\sum _{i=1}^{n}m_{i}[r_{i}-R][r_{i}-R],} where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R. == Applications == For the analysis of robotic systems For the biomechanical analysis of animals, humans or humanoid systems For the analysis of space objects For the understanding of strange motions of rigid bodies. For the design and development of dynamics-based sensors, such as gyroscopic sensors. For the design and development of various stability enhancement applications in automobiles. For improving the graphics of video games that involve rigid bodies == See also == == References == == Further reading == E. Leimanis (1965). The General Problem of the Motion of Coupled Rigid Bodies about a Fixed Point. (Springer, New York). W. B. Heard (2006). Rigid Body Mechanics: Mathematics, Physics and Applications. (Wiley-VCH). == External links == Chris Hecker's Rigid Body Dynamics Information Archived 12 March 2007 at the Wayback Machine Physically Based Modeling: Principles and Practice DigitalRune Knowledge Base Archived 20 November 2008 at the Wayback Machine contains a master's thesis and a collection of resources about rigid body dynamics. F. Klein, "Note on the connection between line geometry and the mechanics of rigid bodies" (English translation) F. Klein, "On Sir Robert Ball's theory of screws" (English translation) E.
Cotton, "Application of Cayley geometry to the geometric study of the displacement of a solid around a fixed point" (English translation)
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests. CFD is applied to a range of research and engineering problems in multiple fields of study and industries, including aerodynamics and aerospace analysis, hypersonics, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, engine and combustion analysis, and visual effects for film and games. == Background and history == The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define a number of single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations. 
Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. One of the earliest types of calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space in cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. The available computer power paced the development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method, the fluid-in-cell method, the vorticity-stream-function method, and the marker-and-cell method. Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world. The first paper with a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages.
The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher order codes, using higher order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of a number of submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high speed trains and racing yachts. The NASA PMARC code derives from an early version of VSAERO, and a derivative of PMARC, named CMARC, is also commercially available. In the two-dimensional realm, a number of Panel Codes have been developed for airfoil analysis and design. The codes typically have a boundary layer analysis included, so that viscous effects can be modeled. Richard Eppler developed the PROFILE code, partly with NASA funding, which became available in the early 1980s. This was soon followed by Mark Drela's XFOIL code. Both PROFILE and XFOIL incorporate two-dimensional panel codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses a conformal transformation method for inverse airfoil design, while XFOIL has both a conformal transformation and an inverse panel method for airfoil design. An intermediate step between Panel Codes and Full Potential codes were codes that used the Transonic Small Disturbance equations.
In particular, the three-dimensional WIBCO code, developed by Charlie Boppe of Grumman Aircraft in the early 1980s, has seen heavy use. Developers turned to Full Potential codes, as panel methods could not calculate the non-linear flow present at transonic speeds. The first description of a means of using the Full Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970. Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were widely used, the most important being named Program H. A further development of Program H was produced by Bob Melnik and his group at Grumman Aerospace as Grumfoil. Antony Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with David Caughey to develop the important three-dimensional Full Potential code FLO22 in 1975. A number of Full Potential codes emerged after this, culminating in Boeing's Tranair (A633) code, which still sees heavy use. The next step was the Euler equations, which promised to provide more accurate solutions of transonic flows. The methodology used by Jameson in his three-dimensional FLO57 code (1981) was used by others to produce such programs as Lockheed's TEAM program and IAI/Analytical Methods' MGAERO program. MGAERO is unique in being a structured Cartesian mesh code, while most other such codes use structured body-fitted grids (with the exception of NASA's highly successful CART3D code, Lockheed's SPLITFLOW code and Georgia Tech's NASCART-GT). Antony Jameson also developed the three-dimensional AIRPLANE code, which made use of unstructured tetrahedral grids. In the two-dimensional realm, Mark Drela and Michael Giles, then graduate students at MIT, developed the ISES Euler program (actually a suite of programs) for airfoil design and analysis.
This code first became available in 1986 and has been further developed to design, analyze and optimize single or multi-element airfoils, as the MSES program. MSES sees wide use throughout the world. A derivative of MSES, for the design and analysis of airfoils in a cascade, is MISES, developed by Harold Youngren while he was a graduate student at MIT. The Navier–Stokes equations were the ultimate target of development. Two-dimensional codes, such as NASA Ames' ARC2D code, first emerged. A number of three-dimensional codes were developed (ARC3D, OVERFLOW, CFL3D are three successful NASA contributions), leading to numerous commercial packages. Recently, CFD methods have gained traction for modeling the flow behavior of granular materials within various chemical processes in engineering. This approach has emerged as a cost-effective alternative, offering a nuanced understanding of complex flow phenomena while minimizing expenses associated with traditional experimental methods. == Hierarchy of fluid flow equations == CFD can be seen as a group of computational methodologies (discussed below) used to solve equations governing fluid flow. In the application of CFD, a critical step is to decide which set of physical assumptions and related equations need to be used for the problem at hand. To illustrate this step, the following summarizes the physical assumptions/simplifications made in the equations of a flow that is single-phase (see multiphase flow and two-phase flow), single-species (i.e., it consists of one chemical species), non-reacting, and (unless said otherwise) compressible. Thermal radiation is neglected, and body forces due to gravity are considered (unless said otherwise). In addition, for this type of flow, the next discussion highlights the hierarchy of flow equations solved with CFD. Note that some of the following equations could be derived in more than one way.
Conservation laws (CL): These are the most fundamental equations considered in CFD in the sense that, for example, all the following equations can be derived from them. For a single-phase, single-species, compressible flow one considers the conservation of mass, conservation of linear momentum, and conservation of energy. Continuum conservation laws (CCL): Start with the CL. Assume that mass, momentum and energy are locally conserved: these quantities are conserved and cannot "teleport" from one place to another but can only move by a continuous flow (see continuity equation). Another interpretation is that one starts with the CL and assumes a continuum medium (see continuum mechanics). The resulting system of equations is unclosed since to solve it one needs further relationships/equations: (a) constitutive relationships for the viscous stress tensor; (b) constitutive relationships for the diffusive heat flux; (c) an equation of state (EOS), such as the ideal gas law; and (d) a caloric equation of state relating temperature with quantities such as enthalpy or internal energy. Compressible Navier-Stokes equations (C-NS): Start with the CCL. Assume a Newtonian viscous stress tensor (see Newtonian fluid) and a Fourier heat flux (see heat flux). The C-NS need to be augmented with an EOS and a caloric EOS to have a closed system of equations. Incompressible Navier-Stokes equations (I-NS): Start with the C-NS. Assume that density is always and everywhere constant. Another way to obtain the I-NS is to assume that the Mach number is very small and that temperature differences in the fluid are very small as well. As a result, the mass-conservation and momentum-conservation equations are decoupled from the energy-conservation equation, so one needs to solve only the first two equations. Compressible Euler equations (EE): Start with the C-NS. Assume a frictionless flow with no diffusive heat flux.
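For concreteness, one common textbook way (a standard convention, not a formula quoted from this article's sources) to write the compressible Euler equations (EE) in conservation form, retaining gravity g as the only body force per the assumptions stated above, is:

```latex
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) &= 0,\\
\frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u}) + \nabla p &= \rho\mathbf{g},\\
\frac{\partial (\rho E)}{\partial t} + \nabla\cdot\big((\rho E + p)\mathbf{u}\big) &= \rho\mathbf{g}\cdot\mathbf{u},
\end{aligned}
```

where ρ is density, u is velocity, p is pressure, and E is the total energy per unit mass; as noted above, an EOS is still needed to close the system.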
Weakly compressible Navier-Stokes equations (WC-NS): Start with the C-NS. Assume that density variations depend only on temperature and not on pressure. For example, for an ideal gas, use ρ = p0/(RT), where p0 is a conveniently-defined reference pressure that is always and everywhere constant, ρ is density, R is the specific gas constant, and T is temperature. As a result, the WC-NS do not capture acoustic waves. It is also common in the WC-NS to neglect the pressure-work and viscous-heating terms in the energy-conservation equation. The WC-NS are also called the C-NS with the low-Mach-number approximation. Boussinesq equations: Start with the C-NS. Assume that density variations are always and everywhere negligible except in the gravity term of the momentum-conservation equation (where density multiplies the gravitational acceleration). Also assume that various fluid properties such as viscosity, thermal conductivity, and heat capacity are always and everywhere constant. The Boussinesq equations are widely used in microscale meteorology. Compressible Reynolds-averaged Navier–Stokes equations and compressible Favre-averaged Navier-Stokes equations (C-RANS and C-FANS): Start with the C-NS. Assume that any flow variable f, such as density, velocity and pressure, can be represented as f = F + f″, where F is the ensemble-average of any flow variable, and f″ is a perturbation or fluctuation from this average. f″ is not necessarily small. If F is a classic ensemble-average (see Reynolds decomposition) one obtains the Reynolds-averaged Navier–Stokes equations. And if F is a density-weighted ensemble-average one obtains the Favre-averaged Navier-Stokes equations.
As a result, and depending on the Reynolds number, the range of scales of motion is greatly reduced, something which leads to much faster solutions in comparison to solving the C-NS. However, information is lost, and the resulting system of equations requires the closure of various unclosed terms, notably the Reynolds stress. Ideal flow or potential flow equations: Start with the EE. Assume zero fluid-particle rotation (zero vorticity) and zero flow expansion (zero divergence). The resulting flowfield is entirely determined by the geometrical boundaries. Ideal flows can be useful in modern CFD to initialize simulations. Linearized compressible Euler equations (LEE): Start with the EE. Assume that any flow variable f, such as density, velocity and pressure, can be represented as f = f0 + f′, where f0 is the value of the flow variable at some reference or base state, and f′ is a perturbation or fluctuation from this state. Furthermore, assume that this perturbation f′ is very small in comparison with some reference value. Finally, assume that f0 satisfies "its own" equation, such as the EE. The LEE and its multiple variations are widely used in computational aeroacoustics. Sound wave or acoustic wave equation: Start with the LEE. Neglect all gradients of f0 and f′, and assume that the Mach number at the reference or base state is very small. The resulting equations for density, momentum and energy can be manipulated into a pressure equation, giving the well-known sound wave equation. Shallow water equations (SW): Consider a flow near a wall where the wall-parallel length-scale of interest is much larger than the wall-normal length-scale of interest. Start with the EE.
Assume that density is always and everywhere constant, neglect the velocity component perpendicular to the wall, and consider the velocity parallel to the wall to be spatially-constant. Boundary layer equations (BL): Start with the C-NS (I-NS) for compressible (incompressible) boundary layers. Assume that there are thin regions next to walls where spatial gradients perpendicular to the wall are much larger than those parallel to the wall. Bernoulli equation: Start with the EE. Assume that density variations depend only on pressure variations. See Bernoulli's principle. Steady Bernoulli equation: Start with the Bernoulli equation and assume a steady flow. Or start with the EE and assume that the flow is steady and integrate the resulting equation along a streamline. Stokes flow or creeping flow equations: Start with the C-NS or I-NS. Neglect the inertia of the flow. Such an assumption can be justified when the Reynolds number is very low. As a result, the set of equations is linear, which greatly simplifies their solution. Two-dimensional channel flow equation: Consider the flow between two infinite parallel plates. Start with the C-NS. Assume that the flow is steady, two-dimensional, and fully developed (i.e., the velocity profile does not change along the streamwise direction). Note that this widely-used fully-developed assumption can be inadequate in some instances, such as some compressible, microchannel flows, in which case it can be supplanted by a locally fully-developed assumption. One-dimensional Euler equations or one-dimensional gas-dynamic equations (1D-EE): Start with the EE. Assume that all flow quantities depend only on one spatial dimension. Fanno flow equation: Consider the flow inside a duct with constant area and adiabatic walls. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the momentum-conservation equation an empirical term to recover the effect of wall friction (neglected in the EE).
To close the Fanno flow equation, a model for this friction term is needed. Such a closure involves problem-dependent assumptions. Rayleigh flow equation: Consider the flow inside a duct with constant area and either non-adiabatic walls without volumetric heat sources or adiabatic walls with volumetric heat sources. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the energy-conservation equation an empirical term to recover the effect of wall heat transfer or the effect of the heat sources (neglected in the EE). == Methodology == In all of these approaches the same basic procedure is followed. During preprocessing: The geometry and physical bounds of the problem can be defined using computer aided design (CAD). From there, data can be suitably processed (cleaned-up) and the fluid volume (or fluid domain) is extracted. The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform, structured or unstructured, consisting of a combination of hexahedral, tetrahedral, prismatic, pyramidal or polyhedral elements. The physical modeling is defined – for example, the equations of fluid motion plus enthalpy, radiation, and species conservation. Boundary conditions are defined. This involves specifying the fluid behaviour and properties at all bounding surfaces of the fluid domain. For transient problems, the initial conditions are also defined. The simulation is started and the equations are solved iteratively, either as steady-state or transient. Finally a postprocessor is used for the analysis and visualization of the resulting solution. === Discretization methods === The stability of the selected discretisation is generally established numerically rather than analytically as with simple linear problems. Special care must also be taken to ensure that the discretisation handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.
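The need to handle discontinuities gracefully can be seen in a toy sketch (illustrative only, not code from any CFD package): advecting a step profile with a naive central-in-space, forward-in-time scheme produces spurious overshoots at the discontinuity, while a first-order upwind scheme keeps the solution bounded, at the price of smearing.

```python
# Toy illustration: linear advection u_t + c u_x = 0 on a periodic grid.
# "central" is the unstable FTCS scheme; "upwind" is bounded for 0 <= cfl <= 1.
def advect(u, cfl, steps, scheme):
    n = len(u)
    for _ in range(steps):
        v = u[:]
        for i in range(n):
            if scheme == "central":   # FTCS: oscillates at discontinuities
                u[i] = v[i] - 0.5 * cfl * (v[(i + 1) % n] - v[i - 1])
            else:                     # upwind: convex combination, stays in [0, 1]
                u[i] = v[i] - cfl * (v[i] - v[i - 1])
    return u

step = [1.0] * 25 + [0.0] * 25        # step profile (periodic domain)
central = advect(step[:], 0.5, 5, "central")
upwind = advect(step[:], 0.5, 5, "upwind")
# central overshoots above the initial maximum of 1; upwind remains in [0, 1]
```

After even a single step the central scheme exceeds the initial maximum near the jump, which is exactly the kind of spurious oscillation the high-resolution schemes discussed below are designed to suppress.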
Some of the discretization methods being used are: ==== Finite volume method ==== The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion). In the finite volume method, the governing partial differential equations (typically the Navier-Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume method yields governing equations in the form ∂/∂t ∭ Q dV + ∬ F · dA = 0, where Q is the vector of conserved variables, F is the vector of fluxes (see Euler equations or Navier–Stokes equations), V is the volume of the control volume element, and A is the surface area of the control volume element. ==== Finite element method ==== The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. FEM also provides more accurate solutions for smooth problems compared to FVM. Another advantage of FEM is that it can handle complex geometries and boundary conditions. However, FEM can require more memory and has slower solution times than the FVM.
In this method, a weighted residual equation is formed: Ri = ∭ Wi Q dV^e, where Ri is the equation residual at an element vertex i, Q is the conservation equation expressed on an element basis, Wi is the weight factor, and V^e is the volume of the element. ==== Finite difference method ==== The finite difference method (FDM) has historical importance and is simple to program. It is currently only used in a few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid). The governing equations are solved in the form ∂Q/∂t + ∂F/∂x + ∂G/∂y + ∂H/∂z = 0, where Q is the vector of conserved variables, and F, G, and H are the fluxes in the x, y, and z directions respectively. ==== Spectral element method ==== Spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary - they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh; this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions.
In a standard, low order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form v(x, y) = ax + by + cxy + d. In a spectral element method, however, the interpolating and test functions are chosen to be polynomials of a very high order (typically e.g. of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is large. Thus, high order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations to be carried out. At present there are some academic CFD codes based on the spectral element method, and more are under development as new time-stepping schemes arise in the scientific world. ==== Lattice Boltzmann method ==== The lattice Boltzmann method (LBM) with its simplified kinetic picture on a lattice provides a computationally efficient description of hydrodynamics. Unlike the traditional CFD methods, which solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid as consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice mesh. In this method, one works with the discrete-in-space-and-time version of the kinetic evolution equation in the Boltzmann Bhatnagar-Gross-Krook (BGK) form. ==== Vortex method ==== The vortex method, also the Lagrangian Vortex Particle Method, is a meshfree technique for the simulation of incompressible turbulent flows. In it, vorticity is discretized onto Lagrangian particles, these computational elements being called vortices, vortons, or vortex particles.
Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). This breakthrough came in the 1980s with the development of the Barnes-Hut and fast multipole method (FMM) algorithms. These paved the way to practical computation of the velocities from the vortex elements. Software based on the vortex method offers a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of problem geometry and setting of boundary and initial conditions. Among the significant advantages of this modern technology: It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES. All problems are treated identically. No modeling or calibration inputs are required. Time-series simulations, which are crucial for correct analysis of acoustics, are possible. The small scales and large scales are accurately simulated at the same time. ==== Boundary element method ==== In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh. ==== High-resolution discretization schemes ==== High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
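A minimal sketch (illustrative, not code from any cited package) of how a flux limiter works is the classic minmod limiter: when the left-sided and right-sided slopes of a cell disagree in sign, as they do at an extremum or a discontinuity, the limited slope drops to zero, so the reconstruction creates no new extrema and the scheme remains total variation diminishing.

```python
# minmod slope limiter: picks the smaller-magnitude of two one-sided slopes,
# and returns zero when they disagree in sign (extremum or discontinuity).
def minmod(a, b):
    if a * b <= 0.0:          # opposite signs or flat: limit slope to zero
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    # limited slope in each interior cell from one-sided differences
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]

u = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]   # a ramp with a turning point
slopes = limited_slopes(u)            # [1.0, 1.0, 0.0, -1.0]
```

In smooth monotone regions the limiter passes a second-order slope through unchanged; only near the extremum does it fall back to the oscillation-free first-order behavior.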
=== Turbulence models === In computational modeling of turbulent flows, one common objective is to obtain a model that can predict quantities of interest, such as fluid velocity, for use in engineering designs of the system being modeled. For turbulent flows, the range of length scales and complexity of phenomena involved in turbulence make most modeling approaches prohibitively expensive; the resolution required to resolve all scales involved in turbulence is beyond what is computationally possible. The primary approach in such cases is to create numerical models to approximate unresolved phenomena. This section lists some commonly used computational models for turbulent flows. Turbulence models can be classified based on computational expense, which corresponds to the range of scales that are modeled versus resolved (the more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost). If a majority or all of the turbulent scales are modeled, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy. In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions. ==== Reynolds-averaged Navier–Stokes ==== Reynolds-averaged Navier–Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second-order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'.
In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy. RANS models can be divided into two broad approaches: Boussinesq hypothesis This method involves using an algebraic equation for the Reynolds stresses which includes determining the turbulent viscosity and, depending on the level of sophistication of the model, solving transport equations for determining the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), Mixing Length Model (Prandtl), and Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the k-ε model is a "Two Equation" model because two transport equations (one for k and one for ε) are solved. Reynolds stress model (RSM) This approach attempts to actually solve transport equations for the Reynolds stresses. This means introduction of several transport equations for all the Reynolds stresses and hence this approach is much more costly in CPU effort. ==== Large eddy simulation ==== Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales.
This method requires greater computational resources than RANS methods, but is far cheaper than DNS. ==== Detached eddy simulation ==== Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Philippe R. Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001), by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart–Allmaras model based DES acts as LES with a wall model, DES based on other models (like two equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solutions. ==== Direct numerical simulation ==== Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to Re³. DNS is intractable for flows with complex geometries or flow configurations. ==== Coherent vortex simulation ==== The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and the incoherent part, which is the random background flow. This decomposition is done using wavelet filtering.
The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the −40/39 energy spectrum exhibited by the total flow, and corresponded to coherent structures (vortex tubes), while the incoherent parts of the flow composed homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the SFS flow field's coherent portion. ==== PDF methods ==== Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity, f_V(v; x, t) dv, which gives the probability of the velocity at point x being between v and v + dv. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF.
PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution. ==== Vorticity confinement method ==== The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave-like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved, as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed. ==== Linear eddy model ==== The linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions. === Two-phase flow === The modeling of two-phase flow is still under development. Different methods have been proposed, including the volume of fluid method, the level-set method and front tracking. These methods often involve a tradeoff between maintaining a sharp interface or conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface.
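The conservation side of that tradeoff can be shown with a toy one-dimensional sketch of the volume-of-fluid idea (illustrative only, not the algorithm of any particular code): a volume fraction in [0, 1] marks how much of each cell one phase occupies, and advecting it with a conservative upwind flux preserves the total phase volume exactly, even as the interface smears over a few cells.

```python
# Toy 1D volume-of-fluid-style advection: alpha is the phase volume fraction,
# transported at constant velocity on a periodic grid with an upwind face flux.
# The flux-difference form makes the update exactly conservative.
def advect_alpha(alpha, cfl, steps):
    n = len(alpha)
    for _ in range(steps):
        flux = [cfl * alpha[i] for i in range(n)]                 # upwind face flux
        alpha = [alpha[i] - flux[i] + flux[i - 1] for i in range(n)]
    return alpha

alpha0 = [1.0] * 10 + [0.0] * 10      # sharp interface at cell 10
alpha1 = advect_alpha(alpha0, 0.4, 15)
# total phase volume is conserved; alpha stays bounded in [0, 1]
```

Keeping the interface sharp (rather than letting it smear like this) is exactly where interface-reconstruction steps in real VOF, level-set and front-tracking methods come in.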
=== Solution algorithms === Discretization in space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator. Multigrid has the advantage of asymptotically optimal performance on a number of problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require many iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations. For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms, which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.
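As a concrete instance of the stationary iterative methods mentioned above, here is a minimal sketch (illustrative, not code from any cited solver) of successive overrelaxation (SOR) applied to the 1D Laplace problem u'' = 0 with Dirichlet ends u(0) = 0 and u(1) = 1, whose exact solution is the linear profile u(x) = x.

```python
# Successive overrelaxation (SOR) for the 1D Laplace equation on a uniform
# grid: each sweep applies the Gauss-Seidel update (average of neighbors)
# relaxed by a factor omega; 1 < omega < 2 accelerates convergence.
def sor_laplace(n, omega, sweeps):
    u = [0.0] * n
    u[-1] = 1.0                       # boundary condition u(1) = 1
    for _ in range(sweeps):
        for i in range(1, n - 1):     # interior points, in-place sweep
            gs = 0.5 * (u[i - 1] + u[i + 1])
            u[i] += omega * (gs - u[i])
    return u

u = sor_laplace(21, 1.5, 500)
# interior values converge to the linear profile u_i = i / 20
```

Plain Gauss-Seidel (omega = 1) converges far more slowly here, which illustrates the remark above that traditional stationary solvers damp high-frequency error quickly but need many iterations for the smooth, low-frequency components; multigrid attacks exactly that weakness.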
=== Unsteady aerodynamics === CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code developed by Ballhaus and associates to model oscillating airfoils based on transonic small perturbation theory. It uses a Murman–Cole switch algorithm for modeling the moving shock waves. It was later extended to 3-D by AFWAL/Boeing using a rotated difference scheme, resulting in LTRAN3. === Biomedical engineering === CFD investigations are used to clarify the characteristics of aortic flow in detail beyond the capabilities of experimental measurements. To analyze these conditions, CAD models of the human vascular system are extracted employing modern imaging techniques such as MRI or computed tomography. A 3D model is reconstructed from this data and the fluid flow can be computed. Blood properties such as density and viscosity, and realistic boundary conditions (e.g. systemic pressure), have to be taken into consideration. This makes it possible to analyze and optimize the flow in the cardiovascular system for different applications. === CPU versus GPU === Traditionally, CFD simulations are performed on CPUs. In a more recent trend, simulations are also performed on GPUs. These typically contain a larger number of slower processors. For CFD algorithms that feature good parallel performance (i.e. good speed-up from adding more cores), this can greatly reduce simulation times. Fluid-implicit particle and lattice-Boltzmann methods are typical examples of codes that scale well on GPUs. == See also == == References == == Notes == Anderson, John D. (1995). Computational Fluid Dynamics: The Basics With Applications. Science/Engineering/Math. McGraw-Hill Science. ISBN 978-0-07-001685-9. Patankar, Suhas (1980). Numerical Heat Transfer and Fluid Flow. Hemisphere Series on Computational Methods in Mechanics and Thermal Science. Taylor & Francis. ISBN 978-0-89116-522-4.
== External links == Course: Computational Fluid Dynamics – Suman Chakraborty (Indian Institute of Technology Kharagpur) Course: Numerical PDE Techniques for Scientists and Engineers, Open access Lectures and Codes for Numerical PDEs, including a modern view of Compressible CFD
Wikipedia/Computational_fluid_dynamics
In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz. The most common form of the transformation, parametrized by the real constant {\displaystyle v,} representing a velocity confined to the x-direction, is expressed as {\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {vx}{c^{2}}}\right)\\x'&=\gamma \left(x-vt\right)\\y'&=y\\z'&=z\end{aligned}}} where (t, x, y, z) and (t′, x′, y′, z′) are the coordinates of an event in two frames with the spatial origins coinciding at t = t′ = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and {\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}} is the Lorentz factor. When speed v is much smaller than c, the Lorentz factor is negligibly different from 1, but as v approaches c, {\displaystyle \gamma } grows without bound. The value of v must be smaller than c for the transformation to make sense. Expressing the speed as a fraction of the speed of light, {\textstyle \beta =v/c,} an equivalent form of the transformation is {\displaystyle {\begin{aligned}ct'&=\gamma \left(ct-\beta x\right)\\x'&=\gamma \left(x-\beta ct\right)\\y'&=y\\z'&=z.\end{aligned}}} Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity, etc.).
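The boost formulas above translate directly into code. A minimal Python sketch (with arbitrary illustrative event coordinates) that checks the invariance of the quantity c²t² − x² − y² − z², the spacetime interval discussed later in the article:

```python
import math

def boost_x(t, x, y, z, v, c=299_792_458.0):
    """Lorentz boost along x for a frame moving with speed v (|v| < c)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma * (t - v * x / c**2), gamma * (x - v * t), y, z)

def interval(t, x, y, z, c=299_792_458.0):
    """Spacetime interval c^2 t^2 - x^2 - y^2 - z^2."""
    return (c * t) ** 2 - x**2 - y**2 - z**2

c = 299_792_458.0
event = (1.0, 4.0e8, 2.0e8, 1.0e8)   # t in seconds, coordinates in meters
boosted = boost_x(*event, v=0.6 * c)

# The interval is the same in both frames.
print(interval(*event), interval(*boosted))
```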
The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity. In each reference frame, an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime. The transformations connect the space and time coordinates of an event as measured by an observer in each frame. They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events, but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity. Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The transformations later became a cornerstone for special relativity. The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost. In Minkowski space—the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed. 
They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group. == History == Many physicists—including Woldemar Voigt, George FitzGerald, Joseph Larmor, and Hendrik Lorentz himself—had been discussing the physics implied by these equations since 1887. Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether. FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called the FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905. Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c, the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. Larmor is credited with being the first to understand the crucial time dilation property inherent in his equations.
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and he named it after Lorentz. Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanistic aether as unnecessary. == Derivation of the group of Lorentz transformations == An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates x, y, z to specify position in space in that frame. Subscripts label individual events. From Einstein's second postulate of relativity (invariance of c) it follows that: in all inertial frames for events connected by light signals. The quantity on the left is called the spacetime interval between events a1 = (t1, x1, y1, z1) and a2 = (t2, x2, y2, z2). The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space. The transformation sought after thus must possess the property that: where (t, x, y, z) are the spacetime coordinates used to define events in one frame, and (t′, x′, y′, z′) are the coordinates in another frame. First one observes that (D2) is satisfied if an arbitrary 4-tuple b of numbers are added to events a1 and a2. Such transformations are called spacetime translations and are not dealt with further here. Then one observes that a linear solution preserving the origin of the simpler problem solves the general problem too: (a solution satisfying the first formula automatically satisfies the second one as well; see polarization identity). 
Finding the solution to the simpler problem is just a matter of looking it up in the theory of classical groups that preserve bilinear forms of various signature. The first equation in (D3) can be written more compactly as: where (·, ·) refers to the bilinear form of signature (1, 3) on R4 exposed by the right-hand side formula in (D3). The alternative notation defined on the right is referred to as the relativistic dot product. Spacetime mathematically viewed as R4 endowed with this bilinear form is known as Minkowski space M. The Lorentz transformation is thus an element of the group O(1, 3), the Lorentz group or, for those who prefer the other metric signature, O(3, 1) (also called the Lorentz group). One has: which is precisely preservation of the bilinear form (D3), which implies (by linearity of Λ and bilinearity of the form) that (D2) is satisfied. The elements of the Lorentz group are rotations, boosts, and combinations thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group. == Generalities == The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations; each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations. Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts, and the relative velocity between the frames is the parameter of the transformation.
The other basic type of Lorentz transformation is rotation in the spatial coordinates only, these like boosts are inertial transformations since there is no relative motion, the frames are simply tilted (and not continuously rotating), and in this case quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation, or Euler angles, etc.). A combination of a rotation and boost is a homogeneous transformation, which transforms the origin back to the origin. The full Lorentz group O(3, 1) also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin. Two of these can be singled out; spatial inversion in which the spatial coordinates of all events are reversed in sign and temporal inversion in which the time coordinate for each event gets its sign reversed. Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation, an element of the Poincaré group, which is also called the inhomogeneous Lorentz group. == Physical formulation of Lorentz boosts == === Coordinate transformation === A "stationary" observer in frame F defines events with coordinates t, x, y, z. Another frame F′ moves with velocity v relative to F, and an observer in this "moving" frame F′ defines events using the coordinates t′, x′, y′, z′. The coordinate axes in each frame are parallel (the x and x′ axes are parallel, the y and y′ axes are parallel, and the z and z′ axes are parallel), remain mutually perpendicular, and relative motion is along the coincident xx′ axes. 
At t = t′ = 0, the origins of both coordinate systems are the same, (x, y, z) = (x′, y′, z′) = (0, 0, 0). In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration, or synchronized. If an observer in F records an event t, x, y, z, then an observer in F′ records the same event with coordinates where v is the relative velocity between frames in the x-direction, c is the speed of light, and {\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} (lowercase gamma) is the Lorentz factor. Here, v is the parameter of the transformation; for a given boost it is a constant number, but it can take a continuous range of values. In the setup used here, positive relative velocity v > 0 is motion along the positive directions of the xx′ axes, zero relative velocity v = 0 is no relative motion, while negative relative velocity v < 0 is relative motion along the negative directions of the xx′ axes. The magnitude of relative velocity v cannot equal or exceed c, so only subluminal speeds −c < v < c are allowed. The corresponding range of γ is 1 ≤ γ < ∞. The transformations are not defined if v is outside these limits. At the speed of light (v = c), γ is infinite, and faster than light (v > c), γ is a complex number, each of which makes the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers. As an active transformation, an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the xx′ axes, because of the −v in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the xx′ axes, while the event does not change and is simply represented in another coordinate system, a passive transformation.
The inverse relations (t, x, y, z in terms of t′, x′, y′, z′) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here F′ is the "stationary" frame while F is the "moving" frame. According to the principle of relativity, there is no privileged frame of reference, so the transformations from F′ to F must take exactly the same form as the transformations from F to F′. The only difference is F moves with velocity −v relative to F′ (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in F′ notes an event t′, x′, y′, z′, then an observer in F notes the same event with coordinates and the value of γ remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction. Sometimes it is more convenient to use β = v/c (lowercase beta) instead of v, so that {\displaystyle {\begin{aligned}ct'&=\gamma \left(ct-\beta x\right)\,,\\x'&=\gamma \left(x-\beta ct\right)\,,\\\end{aligned}}} which shows much more clearly the symmetry in the transformation. From the allowed ranges of v and the definition of β, it follows that −1 < β < 1. The use of β and γ is standard throughout the literature. In the case of three spatial dimensions [ct, x, y, z], where the boost {\displaystyle \beta } is in the x direction, the eigenstates of the transformation are [1, 1, 0, 0] with eigenvalue {\displaystyle {\sqrt {(1-\beta )/(1+\beta )}}}, [1, −1, 0, 0] with eigenvalue {\displaystyle {\sqrt {(1+\beta )/(1-\beta )}}}, and [0, 0, 1, 0] and [0, 0, 0, 1], the latter two with eigenvalue 1.
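Both the velocity-reversal rule for the inverse and the quoted eigenvalues can be verified numerically. A sketch in units with c = 1, using arbitrary sample values:

```python
import math

# Work in units where c = 1 for brevity (an illustrative convention).
def boost_x(t, x, v):
    """1+1-dimensional Lorentz boost along x with speed v (|v| < 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

v = 0.8
t, x = 3.0, -1.5

# Applying the boost and then the boost with reversed velocity
# recovers the original coordinates: the inverse transformation.
t2, x2 = boost_x(*boost_x(t, x, v), -v)
print(t2, x2)

# The eigenvector [1, 1] (a light ray) is scaled by the Doppler factor
# sqrt((1 - v)/(1 + v)), matching the eigenvalue quoted in the text.
tp, xp = boost_x(1.0, 1.0, v)
k = math.sqrt((1 - v) / (1 + v))
print(tp, xp, k)
```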
When the boost velocity {\displaystyle {\boldsymbol {v}}} is in an arbitrary vector direction with the boost vector {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {v}}/c}, then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by {\displaystyle {\begin{bmatrix}ct'\\x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\gamma &-\gamma \beta _{x}&-\gamma \beta _{y}&-\gamma \beta _{z}\\-\gamma \beta _{x}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}^{2}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{z}\\-\gamma \beta _{y}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}^{2}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}\\-\gamma \beta _{z}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{z}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{z}^{2}\\\end{bmatrix}}{\begin{bmatrix}ct\\x\\y\\z\end{bmatrix}},} where the Lorentz factor is {\displaystyle \gamma =1/{\sqrt {1-{\boldsymbol {\beta }}^{2}}}}.
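The matrix above can be assembled directly from γ and the components of β. A sketch with an arbitrary sample β, checking the determinant, the trace, and the interval-preserving property:

```python
import numpy as np

def boost_matrix(beta):
    """4x4 Lorentz boost for velocity v = c*beta (beta a 3-vector, |beta| < 1),
    acting on coordinates (ct, x, y, z), built as in the formula above."""
    beta = np.asarray(beta, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    L[1:, 1:] += (gamma**2 / (1.0 + gamma)) * np.outer(beta, beta)
    return L

beta = np.array([0.3, -0.4, 0.5])      # arbitrary subluminal boost vector
L = boost_matrix(beta)
gamma = 1.0 / np.sqrt(1.0 - beta @ beta)

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # metric, (+,-,-,-) convention here
print(np.linalg.det(L))                     # determinant is +1
print(np.trace(L), 2 * (1 + gamma))         # trace is 2(1 + gamma)
print(np.max(np.abs(L.T @ eta @ L - eta)))  # the interval is preserved
```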
The determinant of the transformation matrix is +1 and its trace is {\displaystyle 2(1+\gamma )}. The inverse of the transformation is given by reversing the sign of {\displaystyle {\boldsymbol {\beta }}}. The quantity {\displaystyle c^{2}t^{2}-x^{2}-y^{2}-z^{2}} is invariant under the transformation: namely {\displaystyle c^{2}t'^{2}-x'^{2}-y'^{2}-z'^{2}=c^{2}t^{2}-x^{2}-y^{2}-z^{2}}. The Lorentz transformations can also be derived in a way that resembles circular rotations in 3-dimensional space using the hyperbolic functions. For the boost in the x direction, the results are where ζ (lowercase zeta) is a parameter called rapidity (many other symbols are used, including θ, ϕ, φ, η, ψ, ξ). Given the strong resemblance to rotations of spatial coordinates in 3-dimensional space in the Cartesian xy, yz, and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4-dimensional Minkowski space. The parameter ζ is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram. The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking x = 0 or ct = 0 in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying ζ, which parametrizes the curves according to the identity {\displaystyle \cosh ^{2}\zeta -\sinh ^{2}\zeta =1\,.} Conversely the ct and x axes can be constructed for varying coordinates but constant ζ.
The definition {\displaystyle \tanh \zeta ={\frac {\sinh \zeta }{\cosh \zeta }}\,,} provides the link between a constant value of rapidity, and the slope of the ct axis in spacetime. A consequence of these two hyperbolic formulae is an identity that matches the Lorentz factor {\displaystyle \cosh \zeta ={\frac {1}{\sqrt {1-\tanh ^{2}\zeta }}}\,.} Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between β, γ, and ζ are {\displaystyle {\begin{aligned}\beta &=\tanh \zeta \,,\\\gamma &=\cosh \zeta \,,\\\beta \gamma &=\sinh \zeta \,.\end{aligned}}} Taking the inverse hyperbolic tangent gives the rapidity {\displaystyle \zeta =\tanh ^{-1}\beta \,.} Since −1 < β < 1, it follows that −∞ < ζ < ∞. From the relation between ζ and β, positive rapidity ζ > 0 is motion along the positive directions of the xx′ axes, zero rapidity ζ = 0 is no relative motion, while negative rapidity ζ < 0 is relative motion along the negative directions of the xx′ axes. The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity ζ → −ζ since this is equivalent to negating the relative velocity. Therefore, The inverse transformations can be similarly visualized by considering the cases when x′ = 0 and ct′ = 0. So far the Lorentz transformations have been applied to one event. If there are two events, there is a spatial separation and time interval between them.
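The relations β = tanh ζ, γ = cosh ζ, and βγ = sinh ζ can be checked numerically, along with the additivity of rapidity under composition of collinear boosts (a standard fact equivalent to relativistic velocity addition; the sample speeds are arbitrary):

```python
import math

b = 0.6
z = math.atanh(b)                  # rapidity zeta = artanh(beta)
g = 1.0 / math.sqrt(1.0 - b * b)   # Lorentz factor

# beta = tanh(zeta), gamma = cosh(zeta), beta*gamma = sinh(zeta)
print(math.tanh(z), math.cosh(z), math.sinh(z))

# Composing two collinear boosts adds their rapidities, which reproduces
# the relativistic velocity-addition formula.
z1, z2 = math.atanh(0.5), math.atanh(0.7)
beta_combined = math.tanh(z1 + z2)
beta_addition = (0.5 + 0.7) / (1 + 0.5 * 0.7)
print(beta_combined, beta_addition)
```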
It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, then subtracted to get the Lorentz transformations of the differences; {\displaystyle {\begin{aligned}\Delta t'&=\gamma \left(\Delta t-{\frac {v\,\Delta x}{c^{2}}}\right)\,,\\\Delta x'&=\gamma \left(\Delta x-v\,\Delta t\right)\,,\end{aligned}}} with inverse relations {\displaystyle {\begin{aligned}\Delta t&=\gamma \left(\Delta t'+{\frac {v\,\Delta x'}{c^{2}}}\right)\,,\\\Delta x&=\gamma \left(\Delta x'+v\,\Delta t'\right)\,.\end{aligned}}} where Δ (uppercase delta) indicates a difference of quantities; e.g., Δx = x2 − x1 for two values of x coordinates, and so on. These transformations on differences rather than spatial points or instants of time are useful for a number of reasons: in calculations and experiments, it is lengths between two points or time intervals that are measured or of interest (e.g., the length of a moving vehicle, or the time duration it takes to travel from one place to another); the transformations of velocity can be readily derived by making the difference infinitesimally small and dividing the equations, with the process repeated for the transformation of acceleration; and if the coordinate systems are never coincident (i.e., not in standard configuration), and if both observers can agree on an event t0, x0, y0, z0 in F and t0′, x0′, y0′, z0′ in F′, then they can use that event as the origin, and the spacetime coordinate differences are the differences between their coordinates and this origin, e.g., Δx = x − x0, Δx′ = x′ − x0′, etc. === Physical implications === A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves.
If in F the equation for a pulse of light along the x direction is x = ct, then in F′ the Lorentz transformations give x′ = ct′, and vice versa, for any −c < v < c. For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation: {\displaystyle {\begin{aligned}t'&\approx t\\x'&\approx x-vt\end{aligned}}} in accordance with the correspondence principle. It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance". Three counterintuitive, but correct, predictions of the transformations are: Relativity of simultaneity Suppose two events occur along the x axis simultaneously (Δt = 0) in F, but separated by a nonzero displacement Δx. Then in F′, we find that {\displaystyle \Delta t'=\gamma {\frac {-v\,\Delta x}{c^{2}}}}, so the events are no longer simultaneous according to a moving observer. Time dilation Suppose there is a clock at rest in F. If a time interval is measured at the same point in that frame, so that Δx = 0, then the transformations give this interval in F′ by Δt′ = γΔt. Conversely, suppose there is a clock at rest in F′. If an interval is measured at the same point in that frame, so that Δx′ = 0, then the transformations give this interval in F by Δt = γΔt′. Either way, each observer measures the time interval between ticks of a moving clock to be longer by a factor γ than the time interval between ticks of his own clock. Length contraction Suppose there is a rod at rest in F aligned along the x axis, with length Δx. In F′, the rod moves with velocity −v, so its length must be measured by taking two simultaneous (Δt′ = 0) measurements at opposite ends. Under these conditions, the inverse Lorentz transform shows that Δx = γΔx′. In F the two measurements are no longer simultaneous, but this does not matter because the rod is at rest in F.
So each observer measures the distance between the end points of a moving rod to be shorter by a factor 1/γ than the end points of an identical rod at rest in his own frame. Length contraction affects any geometric quantity related to lengths, so from the perspective of a moving observer, areas and volumes will also appear to shrink along the direction of motion. === Vector transformations === The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector v with a magnitude |v| = v that cannot equal or exceed c, so that 0 ≤ v < c. Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. With this in mind, split the spatial position vector r as measured in F, and r′ as measured in F′, each into components perpendicular (⊥) and parallel (‖) to v, {\displaystyle \mathbf {r} =\mathbf {r} _{\perp }+\mathbf {r} _{\|}\,,\quad \mathbf {r} '=\mathbf {r} _{\perp }'+\mathbf {r} _{\|}'\,,} then the transformations are {\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {\mathbf {r} _{\parallel }\cdot \mathbf {v} }{c^{2}}}\right)\\\mathbf {r} _{\|}'&=\gamma (\mathbf {r} _{\|}-\mathbf {v} t)\\\mathbf {r} _{\perp }'&=\mathbf {r} _{\perp }\end{aligned}}} where · is the dot product. The Lorentz factor γ retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition β = v/c with magnitude 0 ≤ β < 1 is also used by some authors.
Introducing a unit vector n = v/v = β/β in the direction of relative motion, the relative velocity is v = vn with magnitude v and direction n, and vector projection and rejection give respectively {\displaystyle \mathbf {r} _{\parallel }=(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {r} _{\perp }=\mathbf {r} -(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} } Accumulating the results gives the full transformations. The projection and rejection also apply to r′. For the inverse transformations, exchange r and r′ to switch observed coordinates, and negate the relative velocity v → −v (or simply the unit vector n → −n, since the magnitude v is always positive) to obtain them. The unit vector has the advantage of simplifying equations for a single boost, allows either v or β to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing β and βγ. It is not convenient for multiple boosts. The vectorial relation between relative velocity and rapidity is {\displaystyle {\boldsymbol {\beta }}=\beta \mathbf {n} =\mathbf {n} \tanh \zeta \,,} and the "rapidity vector" can be defined as {\displaystyle {\boldsymbol {\zeta }}=\zeta \mathbf {n} =\mathbf {n} \tanh ^{-1}\beta \,,} each of which serves as a useful abbreviation in some contexts. The magnitude of ζ is the absolute value of the rapidity scalar confined to 0 ≤ ζ < ∞, which agrees with the range 0 ≤ β < 1.
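The projection/rejection form of the boost translates directly into code. A sketch in units with c = 1, using an arbitrary event and a boost along y, checking that the perpendicular components are unchanged and that the interval is preserved:

```python
import numpy as np

def boost(t, r, v, c=1.0):
    """Boost of an event (t, r) by a nonzero velocity vector v (|v| < c),
    using the parallel/perpendicular decomposition; c = 1 by default."""
    v = np.asarray(v, float)
    r = np.asarray(r, float)
    speed = np.linalg.norm(v)
    n = v / speed                        # unit vector along the boost
    gamma = 1.0 / np.sqrt(1.0 - (speed / c) ** 2)
    r_par = (r @ n) * n                  # vector projection onto n
    r_perp = r - r_par                   # vector rejection
    t_new = gamma * (t - (r_par @ v) / c**2)
    return t_new, gamma * (r_par - v * t) + r_perp

t, r = 2.0, np.array([1.0, -2.0, 0.5])
v = np.array([0.0, 0.6, 0.0])            # boost along y only

t2, r2 = boost(t, r, v)
# The x and z components (perpendicular to v) are unchanged,
# and the interval t^2 - r.r is the same in both frames.
print(r2[0], r2[2])
print(t2**2 - r2 @ r2, t**2 - r @ r)
```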
=== Transformation of velocities === Defining the coordinate velocities and Lorentz factor by {\displaystyle \mathbf {u} ={\frac {d\mathbf {r} }{dt}}\,,\quad \mathbf {u} '={\frac {d\mathbf {r} '}{dt'}}\,,\quad \gamma _{\mathbf {v} }={\frac {1}{\sqrt {1-{\dfrac {\mathbf {v} \cdot \mathbf {v} }{c^{2}}}}}}} taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to {\displaystyle \mathbf {u} '={\frac {1}{1-{\frac {\mathbf {v} \cdot \mathbf {u} }{c^{2}}}}}\left[{\frac {\mathbf {u} }{\gamma _{\mathbf {v} }}}-\mathbf {v} +{\frac {1}{c^{2}}}{\frac {\gamma _{\mathbf {v} }}{\gamma _{\mathbf {v} }+1}}\left(\mathbf {u} \cdot \mathbf {v} \right)\mathbf {v} \right]} The velocities u and u′ are the velocity of some massive object. They can also be the velocity of a third inertial frame (say F′′), in which case they must be constant. Denote either entity by X. Then X moves with velocity u relative to F, or equivalently with velocity u′ relative to F′, while F′ in turn moves with velocity v relative to F. The inverse transformations can be obtained in a similar way, or, as with position coordinates, by exchanging u and u′ and changing v to −v. The transformation of velocity is useful in stellar aberration, the Fizeau experiment, and the relativistic Doppler effect. The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential.
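The velocity-transformation formula above can be implemented directly; in the collinear case it reduces to the familiar one-dimensional formula (u − v)/(1 − uv/c²), and the transformed speed stays below c for subluminal inputs. Sample velocities are arbitrary, in units with c = 1:

```python
import numpy as np

def velocity_transform(u, v, c=1.0):
    """Velocity u' measured in a frame moving with velocity v, for an object
    moving with velocity u in the original frame (formula above; c = 1)."""
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    g = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
    pref = 1.0 / (1.0 - (v @ u) / c**2)
    return pref * (u / g - v + (g / (c**2 * (g + 1.0))) * (u @ v) * v)

# Collinear case reduces to (u - v)/(1 - u v / c^2):
u = np.array([0.9, 0.0, 0.0])
v = np.array([0.5, 0.0, 0.0])
up = velocity_transform(u, v)
print(up, (0.9 - 0.5) / (1 - 0.9 * 0.5))

# For subluminal inputs the transformed speed remains below c:
w = velocity_transform([0.6, 0.5, 0.3], [-0.4, 0.2, 0.7])
print(np.linalg.norm(w))
```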
=== Transformation of other quantities === In general, given four quantities A and Z = (Zx, Zy, Zz) and their Lorentz-boosted counterparts A′ and Z′ = (Z′x, Z′y, Z′z), a relation of the form {\displaystyle A^{2}-\mathbf {Z} \cdot \mathbf {Z} ={A'}^{2}-\mathbf {Z} '\cdot \mathbf {Z} '} implies that the quantities transform under Lorentz transformations similarly to the transformation of spacetime coordinates; {\displaystyle {\begin{aligned}A'&=\gamma \left(A-{\frac {v\mathbf {n} \cdot \mathbf {Z} }{c}}\right)\,,\\\mathbf {Z} '&=\mathbf {Z} +(\gamma -1)(\mathbf {Z} \cdot \mathbf {n} )\mathbf {n} -{\frac {\gamma Av\mathbf {n} }{c}}\,.\end{aligned}}} The decomposition of Z (and Z′) into components perpendicular and parallel to v is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange (A, Z) and (A′, Z′) to switch observed quantities, and reverse the direction of relative motion by the substitution n ↦ −n). The quantities (A, Z) collectively make up a four-vector, where A is the "timelike component", and Z the "spacelike component". Examples of A and Z are the following: For a given object (e.g., particle, fluid, field, material), if A or Z correspond to properties specific to the object like its charge density, mass density, spin, etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. For example, the energy E of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics, because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum.
In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin s depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity st; however, a boosted observer will perceive a nonzero timelike component and an altered spin. Not all quantities are invariant in the form shown above; for example, orbital angular momentum L does not have a timelike quantity, and neither does the electric field E nor the magnetic field B. The definition of angular momentum is L = r × p, and in a boosted frame the altered angular momentum is L′ = r′ × p′. Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out L transforms with another vector quantity N = (E/c2)r − tp related to boosts, see relativistic angular momentum for details. For the case of the E and B fields, the transformations cannot be obtained quite so directly using vector algebra. The Lorentz force is the definition of these fields, and in F it is F = q(E + v × B) while in F′ it is F′ = q(E′ + v′ × B′). A method of deriving the EM field transformations in an efficient way which also illustrates the unity of the electromagnetic field uses tensor algebra, given below. == Mathematical formulation == Throughout, italic non-bold capital letters are 4 × 4 matrices, while non-italic bold letters are 3 × 3 matrices.
=== Homogeneous Lorentz group === Writing the coordinates in column vectors and the Minkowski metric η as a square matrix X ′ = [ c t ′ x ′ y ′ z ′ ] , η = [ − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] , X = [ c t x y z ] {\displaystyle X'={\begin{bmatrix}c\,t'\\x'\\y'\\z'\end{bmatrix}}\,,\quad \eta ={\begin{bmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}\,,\quad X={\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}}} the spacetime interval takes the form (superscript T denotes transpose) X ⋅ X = X T η X = X ′ T η X ′ {\displaystyle X\cdot X=X^{\mathrm {T} }\eta X={X'}^{\mathrm {T} }\eta {X'}} and is invariant under a Lorentz transformation X ′ = Λ X {\displaystyle X'=\Lambda X} where Λ is a square matrix which can depend on parameters. The set of all Lorentz transformations Λ {\displaystyle \Lambda } in this article is denoted L {\displaystyle {\mathcal {L}}} . This set together with matrix multiplication forms a group, in this context known as the Lorentz group. Also, the above expression X·X is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group. In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups. In this context the operation of composition amounts to matrix multiplication. From the invariance of the spacetime interval it follows η = Λ T η Λ {\displaystyle \eta =\Lambda ^{\mathrm {T} }\eta \Lambda } and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. 
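The defining condition η = ΛᵀηΛ is easy to verify numerically for a concrete boost. A minimal sketch, assuming units with c = 1 and an illustrative rapidity parametrization:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, signature (-,+,+,+)

def boost_x(zeta):
    """Standard boost along x with rapidity zeta (c = 1)."""
    ch, sh = np.cosh(zeta), np.sinh(zeta)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = ch
    B[0, 1] = B[1, 0] = -sh
    return B

L = boost_x(0.7)
# The defining condition eta = Lambda^T eta Lambda:
print(np.allclose(L.T @ eta @ L, eta))  # True
# ... and hence the spacetime interval X.X is preserved:
X = np.array([2.0, 1.0, -0.5, 3.0])
Xp = L @ X
print(np.isclose(X @ eta @ X, Xp @ eta @ Xp))  # True
```

The second check is just the invariance of XᵀηX restated for one sample event.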
Taking the determinant of the equation using the product rule gives immediately [ det ( Λ ) ] 2 = 1 ⇒ det ( Λ ) = ± 1 {\displaystyle \left[\det(\Lambda )\right]^{2}=1\quad \Rightarrow \quad \det(\Lambda )=\pm 1} Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form, η = [ − 1 0 0 I ] , Λ = [ Γ − a T − b M ] , {\displaystyle \eta ={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}\,,\quad \Lambda ={\begin{bmatrix}\Gamma &-\mathbf {a} ^{\mathrm {T} }\\-\mathbf {b} &\mathbf {M} \end{bmatrix}}\,,} carrying out the block matrix multiplications yields general conditions on Γ, a, b, M to ensure relativistic invariance. Not much information can be directly extracted from all the conditions; however, one of the results Γ 2 = 1 + b T b {\displaystyle \Gamma ^{2}=1+\mathbf {b} ^{\mathrm {T} }\mathbf {b} } is useful; bTb ≥ 0 always, so it follows that Γ 2 ≥ 1 ⇒ Γ ≤ − 1 , Γ ≥ 1 {\displaystyle \Gamma ^{2}\geq 1\quad \Rightarrow \quad \Gamma \leq -1\,,\quad \Gamma \geq 1} The negative inequality may be unexpected, because Γ multiplies the time coordinate and this has an effect on time symmetry. If the positive inequality holds, then Γ is the Lorentz factor. The determinant and inequality provide four ways to classify Lorentz transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol meaning "and") of these classifying sets, denoted L+↑ (proper orthochronous), L−↑ (improper orthochronous), L+↓ (proper antichronous), and L−↓ (improper antichronous), where "+" and "−" indicate the determinant sign, while "↑" for Γ ≥ 1 and "↓" for Γ ≤ −1 denote the inequalities.
The full Lorentz group splits into the union ("u"-shaped symbol meaning "or") of four disjoint sets L = L + ↑ ∪ L − ↑ ∪ L + ↓ ∪ L − ↓ {\displaystyle {\mathcal {L}}={\mathcal {L}}_{+}^{\uparrow }\cup {\mathcal {L}}_{-}^{\uparrow }\cup {\mathcal {L}}_{+}^{\downarrow }\cup {\mathcal {L}}_{-}^{\downarrow }} A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations Λ and L from a particular subgroup, the composite Lorentz transformations ΛL and LΛ must be in the same subgroup as Λ and L. This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets L + ↑ {\displaystyle {\mathcal {L}}_{+}^{\uparrow }} , L + {\displaystyle {\mathcal {L}}_{+}} , L ↑ {\displaystyle {\mathcal {L}}^{\uparrow }} , and L 0 = L + ↑ ∪ L − ↓ {\displaystyle {\mathcal {L}}_{0}={\mathcal {L}}_{+}^{\uparrow }\cup {\mathcal {L}}_{-}^{\downarrow }} all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. L + ↓ {\displaystyle {\mathcal {L}}_{+}^{\downarrow }} , L − ↓ {\displaystyle {\mathcal {L}}_{-}^{\downarrow }} , L − ↑ {\displaystyle {\mathcal {L}}_{-}^{\uparrow }} ) do not form subgroups. 
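The classification of a given LT into one of the four components only needs the sign of det Λ and of Γ = Λ⁰₀. A minimal sketch (the `classify` helper is illustrative, not from the article):

```python
import numpy as np

def classify(L):
    """Classify a Lorentz transformation by the sign of its determinant
    (proper/improper) and of its 00 component (orthochronous/antichronous)."""
    proper = np.linalg.det(L) > 0        # det = +1 or -1
    orthochronous = L[0, 0] >= 1         # Gamma >= 1 or Gamma <= -1
    return ("proper" if proper else "improper",
            "orthochronous" if orthochronous else "antichronous")

I = np.eye(4)
P = np.diag([1.0, -1.0, -1.0, -1.0])   # parity inversion
T = np.diag([-1.0, 1.0, 1.0, 1.0])     # time reversal

print(classify(I))      # ('proper', 'orthochronous')
print(classify(P))      # ('improper', 'orthochronous')
print(classify(T))      # ('improper', 'antichronous')
print(classify(P @ T))  # ('proper', 'antichronous')
# Composing two antichronous transformations gives an orthochronous one,
# which is why the antichronous sets alone cannot form subgroups:
print(classify(T @ (P @ T)))  # ('improper', 'orthochronous')
```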
=== Proper transformations === If a Lorentz covariant 4-vector is measured in one inertial frame with result X {\displaystyle X} , and the same measurement made in another inertial frame (with the same orientation and origin) gives result X ′ {\displaystyle X'} , the two results will be related by X ′ = B ( v ) X {\displaystyle X'=B(\mathbf {v} )X} where the boost matrix B ( v ) {\displaystyle B(\mathbf {v} )} represents the rotation-free Lorentz transformation between the unprimed and primed frames and v {\displaystyle \mathbf {v} } is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by B ( v ) = [ γ − γ v x / c − γ v y / c − γ v z / c − γ v x / c 1 + ( γ − 1 ) v x 2 v 2 ( γ − 1 ) v x v y v 2 ( γ − 1 ) v x v z v 2 − γ v y / c ( γ − 1 ) v y v x v 2 1 + ( γ − 1 ) v y 2 v 2 ( γ − 1 ) v y v z v 2 − γ v z / c ( γ − 1 ) v z v x v 2 ( γ − 1 ) v z v y v 2 1 + ( γ − 1 ) v z 2 v 2 ] = [ γ − γ β → T − γ β → I + ( γ − 1 ) β → β → T β 2 ] , {\displaystyle B(\mathbf {v} )={\begin{bmatrix}\gamma &-\gamma v_{x}/c&-\gamma v_{y}/c&-\gamma v_{z}/c\\-\gamma v_{x}/c&1+(\gamma -1){\dfrac {v_{x}^{2}}{v^{2}}}&(\gamma -1){\dfrac {v_{x}v_{y}}{v^{2}}}&(\gamma -1){\dfrac {v_{x}v_{z}}{v^{2}}}\\-\gamma v_{y}/c&(\gamma -1){\dfrac {v_{y}v_{x}}{v^{2}}}&1+(\gamma -1){\dfrac {v_{y}^{2}}{v^{2}}}&(\gamma -1){\dfrac {v_{y}v_{z}}{v^{2}}}\\-\gamma v_{z}/c&(\gamma -1){\dfrac {v_{z}v_{x}}{v^{2}}}&(\gamma -1){\dfrac {v_{z}v_{y}}{v^{2}}}&1+(\gamma -1){\dfrac {v_{z}^{2}}{v^{2}}}\end{bmatrix}}={\begin{bmatrix}\gamma &-\gamma {\vec {\beta }}^{T}\\-\gamma {\vec {\beta }}&I+(\gamma -1){\dfrac {{\vec {\beta }}{\vec {\beta }}^{T}}{\beta ^{2}}}\end{bmatrix}},} where v = v x 2 + v y 2 + v z 2 {\textstyle v={\sqrt {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}}} is the magnitude of the velocity and γ = 1 1 − v 2 c 2 {\textstyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} is the Lorentz factor. 
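The block form of B(v) on the right of the display above translates directly into code. A minimal sketch, assuming units with c = 1:

```python
import numpy as np

def boost(v):
    """4x4 boost matrix B(v), c = 1, built from the block form
    [[gamma, -gamma b^T], [-gamma b, I + (gamma - 1) b b^T / b^2]]."""
    v = np.asarray(v, dtype=float)
    b2 = v @ v                           # beta^2 (must be < 1)
    g = 1.0 / np.sqrt(1.0 - b2)          # Lorentz factor
    B = np.empty((4, 4))
    B[0, 0] = g
    B[0, 1:] = B[1:, 0] = -g * v
    B[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(v, v) / b2
    return B

B = boost([0.3, -0.2, 0.1])
print(np.allclose(B, B.T))  # True: boost matrices are symmetric
# Inverse boost is the boost with reversed velocity, B(v)^-1 = B(-v):
print(np.allclose(B @ boost([-0.3, 0.2, -0.1]), np.eye(4)))  # True
```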
This formula represents a passive transformation, as it describes how the coordinates of the measured quantity change from the unprimed frame to the primed frame. The active transformation is given by B ( − v ) {\displaystyle B(-\mathbf {v} )} . If a frame F′ is boosted with velocity u relative to frame F, and another frame F′′ is boosted with velocity v relative to F′, the separate boosts are X ″ = B ( v ) X ′ , X ′ = B ( u ) X {\displaystyle X''=B(\mathbf {v} )X'\,,\quad X'=B(\mathbf {u} )X} and the composition of the two boosts connects the coordinates in F′′ and F, X ″ = B ( v ) B ( u ) X . {\displaystyle X''=B(\mathbf {v} )B(\mathbf {u} )X\,.} Successive transformations act on the left. If u and v are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute: B(v)B(u) = B(u)B(v). This composite transformation happens to be another boost, B(w), where w is collinear with u and v. If u and v are not collinear, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: B(v)B(u) and B(u)B(v) are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of R(ρ)B(w) or B(w′)R(ρ). The composite velocities w and w′ differ in general (one is the rotated image of the other), while ρ is a rotation parameter (e.g. axis-angle variables, Euler angles, etc.). The rotation in block matrix form is simply R ( ρ ) = [ 1 0 0 R ( ρ ) ] , {\displaystyle \quad R({\boldsymbol {\rho }})={\begin{bmatrix}1&0\\0&\mathbf {R} ({\boldsymbol {\rho }})\end{bmatrix}}\,,} where R(ρ) is a 3 × 3 rotation matrix, which rotates any 3-dimensional vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation).
It is not simple to connect w and ρ (or w′ and ρ) to the original boost parameters u and v. In a composition of boosts, the R matrix is named the Wigner rotation, and gives rise to the Thomas precession. These articles give the explicit formulae for the composite transformation matrices, including expressions for w, w′, and ρ. In this article the axis-angle representation is used for ρ. The rotation is about an axis in the direction of a unit vector e, through angle θ (positive anticlockwise, negative clockwise, according to the right-hand rule). The "axis-angle vector" θ = θ e {\displaystyle {\boldsymbol {\theta }}=\theta \mathbf {e} } will serve as a useful abbreviation. Spatial rotations alone are also Lorentz transformations, since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. Some other similarities and differences between the boost and rotation matrices include:
inverses: B(v)−1 = B(−v) (relative motion in the opposite direction), and R(θ)−1 = R(−θ) (rotation in the opposite sense about the same axis);
identity transformation for no relative motion/rotation: B(0) = R(0) = I;
unit determinant: det(B) = det(R) = +1, the property that makes them proper transformations;
matrix symmetry: B is symmetric (equals its transpose), while R is nonsymmetric but orthogonal (transpose equals inverse, RT = R−1).
The most general proper Lorentz transformation Λ(v, θ) includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, Λ(0, θ) = R(θ) and Λ(v, 0) = B(v). An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed form expressions for the transformation matrices will be given below using group theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes Λ(ζ, θ) and B(ζ).
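The non-commutativity of boosts and the resulting Wigner rotation can be exhibited numerically: compose two perpendicular boosts, read the composite velocity w off the first column, and peel the boost off to expose the residual rotation. A minimal sketch, assuming c = 1 (the `boost` helper repeats the block form of B(v) and is illustrative):

```python
import numpy as np

def boost(v):
    """4x4 boost matrix B(v) in units c = 1 (block form of B(v))."""
    v = np.asarray(v, dtype=float)
    b2 = v @ v
    g = 1.0 / np.sqrt(1.0 - b2)
    B = np.empty((4, 4))
    B[0, 0] = g
    B[0, 1:] = B[1:, 0] = -g * v
    B[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(v, v) / b2
    return B

u, v = np.array([0.6, 0, 0]), np.array([0, 0.6, 0])
M = boost(v) @ boost(u)        # composite of two perpendicular boosts
print(np.allclose(M, M.T))     # False: not symmetric, so not a pure boost

# Factor M = B(w) R: since R fixes e0, the first column of M equals
# (gamma_w, -gamma_w w), so w can be read off and B(w) peeled away.
w = -M[1:, 0] / M[0, 0]
R = boost(-w) @ M
print(np.allclose(R[1:, 1:] @ R[1:, 1:].T, np.eye(3)))  # spatial block is a rotation
print(np.allclose(R, np.eye(4)))  # False: the Wigner rotation is nontrivial
```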
==== The Lie group SO+(3,1) ==== The set of transformations { B ( ζ ) , R ( θ ) , Λ ( ζ , θ ) } {\displaystyle \{B({\boldsymbol {\zeta }}),R({\boldsymbol {\theta }}),\Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})\}} with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO+(3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension). For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about ζ = 0, B x = I + ζ ∂ B x ∂ ζ | ζ = 0 + ⋯ {\displaystyle B_{x}=I+\zeta \left.{\frac {\partial B_{x}}{\partial \zeta }}\right|_{\zeta =0}+\cdots } where the higher order terms not shown are negligible because ζ is small, and Bx is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first then evaluated at ζ = 0, ∂ B x ∂ ζ | ζ = 0 = − K x . {\displaystyle \left.{\frac {\partial B_{x}}{\partial \zeta }}\right|_{\zeta =0}=-K_{x}\,.} For now, Kx is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained B x = lim N → ∞ ( I − ζ N K x ) N = e − ζ K x {\displaystyle B_{x}=\lim _{N\to \infty }\left(I-{\frac {\zeta }{N}}K_{x}\right)^{N}=e^{-\zeta K_{x}}} where the limit definition of the exponential has been used (see also characterizations of the exponential function). More generally B ( ζ ) = e − ζ ⋅ K , R ( θ ) = e θ ⋅ J . 
{\displaystyle B({\boldsymbol {\zeta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} }\,,\quad R({\boldsymbol {\theta }})=e^{{\boldsymbol {\theta }}\cdot \mathbf {J} }\,.} The axis-angle vector θ and rapidity vector ζ are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are K = (Kx, Ky, Kz) and J = (Jx, Jy, Jz), each vectors of matrices with the explicit forms K x = [ 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 ] , K y = [ 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 ] , K z = [ 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 ] J x = [ 0 0 0 0 0 0 0 0 0 0 0 − 1 0 0 1 0 ] , J y = [ 0 0 0 0 0 0 0 1 0 0 0 0 0 − 1 0 0 ] , J z = [ 0 0 0 0 0 0 − 1 0 0 1 0 0 0 0 0 0 ] {\displaystyle {\begin{alignedat}{3}K_{x}&={\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\\\end{bmatrix}}\,,\quad &K_{y}&={\begin{bmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{bmatrix}}\,,\quad &K_{z}&={\begin{bmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{bmatrix}}\\[10mu]J_{x}&={\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\\\end{bmatrix}}\,,\quad &J_{y}&={\begin{bmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&-1&0&0\end{bmatrix}}\,,\quad &J_{z}&={\begin{bmatrix}0&0&0&0\\0&0&-1&0\\0&1&0&0\\0&0&0&0\end{bmatrix}}\end{alignedat}}} These are all defined in an analogous way to Kx above, although the minus signs in the boost generators are conventional. Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: J are the rotation generators which correspond to angular momentum, and K are the boost generators which correspond to the motion of the system in spacetime. The derivative of any smooth curve C(t) with C(0) = I in the group depending on some group parameter t with respect to that group parameter, evaluated at t = 0, serves as a definition of a corresponding group generator G, and this reflects an infinitesimal transformation away from the identity. 
The smooth curve can always be taken as an exponential as the exponential will always map G smoothly back into the group via t → exp(tG) for all t; this curve will yield G again when differentiated at t = 0. Expanding the exponentials in their Taylor series obtains B ( ζ ) = I − sinh ⁡ ζ ( n ⋅ K ) + ( cosh ⁡ ζ − 1 ) ( n ⋅ K ) 2 {\displaystyle B({\boldsymbol {\zeta }})=I-\sinh \zeta (\mathbf {n} \cdot \mathbf {K} )+(\cosh \zeta -1)(\mathbf {n} \cdot \mathbf {K} )^{2}} R ( θ ) = I + sin ⁡ θ ( e ⋅ J ) + ( 1 − cos ⁡ θ ) ( e ⋅ J ) 2 . {\displaystyle R({\boldsymbol {\theta }})=I+\sin \theta (\mathbf {e} \cdot \mathbf {J} )+(1-\cos \theta )(\mathbf {e} \cdot \mathbf {J} )^{2}\,.} which compactly reproduce the boost and rotation matrices as given in the previous section. It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product Λ = ( I − ζ ⋅ K + ⋯ ) ( I + θ ⋅ J + ⋯ ) = ( I + θ ⋅ J + ⋯ ) ( I − ζ ⋅ K + ⋯ ) = I − ζ ⋅ K + θ ⋅ J + ⋯ {\displaystyle {\begin{aligned}\Lambda &=(I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +\cdots )(I+{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots )\\&=(I+{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots )(I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +\cdots )\\&=I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots \end{aligned}}} is commutative because only linear terms are required (products like (θ·J)(ζ·K) and (ζ·K)(θ·J) count as higher order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential Λ ( ζ , θ ) = e − ζ ⋅ K + θ ⋅ J . {\displaystyle \Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} }.} The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. 
In particular, e − ζ ⋅ K + θ ⋅ J ≠ e − ζ ⋅ K e θ ⋅ J , {\displaystyle e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} }\neq e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} }e^{{\boldsymbol {\theta }}\cdot \mathbf {J} },} because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators J and K), see Wigner rotation. If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies. ==== The Lie algebra so(3,1) ==== Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators. In other words, the set of all Lorentz generators V = { ζ ⋅ K + θ ⋅ J } {\displaystyle V=\{{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \}} together with the operations of ordinary matrix addition and multiplication of a matrix by a number, forms a vector space over the real numbers. The generators Jx, Jy, Jz, Kx, Ky, Kz form a basis set of V, and the components of the axis-angle and rapidity vectors, θx, θy, θz, ζx, ζy, ζz, are the coordinates of a Lorentz generator with respect to this basis. Three of the commutation relations of the Lorentz generators are [ J x , J y ] = J z , [ K x , K y ] = − J z , [ J x , K y ] = K z , {\displaystyle [J_{x},J_{y}]=J_{z}\,,\quad [K_{x},K_{y}]=-J_{z}\,,\quad [J_{x},K_{y}]=K_{z}\,,} where the bracket [A, B] = AB − BA is known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, repeat). These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra s o ( 3 , 1 ) {\displaystyle {\mathfrak {so}}(3,1)} . 
In summary, a Lie algebra is defined as a vector space V over a field of numbers, and with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity, alternativity, and the Jacobi identity. Here the operation [ , ] is the commutator, which satisfies all of these axioms, the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers. Linking terminology used in mathematics and physics: A group generator is any element of the Lie algebra. A group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis. A basis, then, is a set of generators forming a basis of the Lie algebra in the usual vector space sense. The exponential map from the Lie algebra to the Lie group, exp : s o ( 3 , 1 ) → S O ( 3 , 1 ) , {\displaystyle \exp \,:\,{\mathfrak {so}}(3,1)\to \mathrm {SO} (3,1),} provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential. Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra. === Improper transformations === Lorentz transformations also include parity inversion P = [ 1 0 0 − I ] {\displaystyle P={\begin{bmatrix}1&0\\0&-\mathbf {I} \end{bmatrix}}} which negates all the spatial coordinates only, and time reversal T = [ − 1 0 0 I ] {\displaystyle T={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}} which negates the time coordinate only; these are Lorentz transformations because they leave the spacetime interval invariant. Here I is the 3 × 3 identity matrix.
These are both symmetric, they are their own inverses (see involution (mathematics)), and each has determinant −1. This latter property makes them improper transformations. If Λ is a proper orthochronous Lorentz transformation, then TΛ is improper antichronous, PΛ is improper orthochronous, and TPΛ = PTΛ is proper antichronous. === Inhomogeneous Lorentz group === Two other spacetime symmetries have not been accounted for. In order for the spacetime interval to be invariant, it can be shown that it is necessary and sufficient for the coordinate transformation to be of the form X ′ = Λ X + C {\displaystyle X'=\Lambda X+C} where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation. If C = 0, this is a homogeneous Lorentz transformation. Poincaré transformations are not dealt with further in this article. == Tensor formulation == === Contravariant vectors === Writing the general matrix transformation of coordinates as the matrix equation [ x ′ 0 x ′ 1 x ′ 2 x ′ 3 ] = [ Λ 0 0 Λ 0 1 Λ 0 2 Λ 0 3 Λ 1 0 Λ 1 1 Λ 1 2 Λ 1 3 Λ 2 0 Λ 2 1 Λ 2 2 Λ 2 3 Λ 3 0 Λ 3 1 Λ 3 2 Λ 3 3 ] [ x 0 x 1 x 2 x 3 ] {\displaystyle {\begin{bmatrix}{x'}^{0}\\{x'}^{1}\\{x'}^{2}\\{x'}^{3}\end{bmatrix}}={\begin{bmatrix}{\Lambda ^{0}}_{0}&{\Lambda ^{0}}_{1}&{\Lambda ^{0}}_{2}&{\Lambda ^{0}}_{3}\\{\Lambda ^{1}}_{0}&{\Lambda ^{1}}_{1}&{\Lambda ^{1}}_{2}&{\Lambda ^{1}}_{3}\\{\Lambda ^{2}}_{0}&{\Lambda ^{2}}_{1}&{\Lambda ^{2}}_{2}&{\Lambda ^{2}}_{3}\\{\Lambda ^{3}}_{0}&{\Lambda ^{3}}_{1}&{\Lambda ^{3}}_{2}&{\Lambda ^{3}}_{3}\end{bmatrix}}{\begin{bmatrix}x^{0}\\x^{1}\\x^{2}\\x^{3}\end{bmatrix}}} allows the transformation of other physical quantities that cannot be expressed as four-vectors; e.g.,
tensors or spinors of any order in 4-dimensional spacetime, to be defined. In the corresponding tensor index notation, the above matrix expression is x ′ ν = Λ ν μ x μ , {\displaystyle {x'}^{\nu }={\Lambda ^{\nu }}_{\mu }x^{\mu },} where lower and upper indices label covariant and contravariant components respectively, and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index. The second index corresponds to the column index. The transformation matrix is universal for all four-vectors, not just 4-dimensional spacetime coordinates. If A is any four-vector, then in tensor index notation A ′ ν = Λ ν μ A μ . {\displaystyle {A'}^{\nu }={\Lambda ^{\nu }}_{\mu }A^{\mu }\,.} Alternatively, one writes A ν ′ = Λ ν ′ μ A μ . {\displaystyle A^{\nu '}={\Lambda ^{\nu '}}_{\mu }A^{\mu }\,.} in which the primed indices denote the indices of A in the primed frame. For a general n-component object one may write X ′ α = Π ( Λ ) α β X β , {\displaystyle {X'}^{\alpha }={\Pi (\Lambda )^{\alpha }}_{\beta }X^{\beta }\,,} where Π is the appropriate representation of the Lorentz group, an n × n matrix for every Λ. In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from 1 to n. E.g., if X is a bispinor, then the indices are called Dirac indices. === Covariant vectors === There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index; e.g., x ν = η μ ν x μ , {\displaystyle x_{\nu }=\eta _{\mu \nu }x^{\mu },} where η is the metric tensor. 
(The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by x μ = η μ ν x ν , {\displaystyle x^{\mu }=\eta ^{\mu \nu }x_{\nu },} where, when viewed as matrices, the matrix ημν with raised indices is the inverse of the matrix ημν with lowered indices; for the Minkowski metric the two happen to be numerically equal. This is referred to as raising an index. To transform a covariant vector Aμ, first raise its index, then transform it according to the same rule as for contravariant 4-vectors, then finally lower the index; A ′ ν = η ρ ν Λ ρ σ η μ σ A μ . {\displaystyle {A'}_{\nu }=\eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }A_{\mu }.} But η ρ ν Λ ρ σ η μ σ = ( Λ − 1 ) μ ν , {\displaystyle \eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },} that is, the left-hand side is the (μ, ν)-component of the inverse Lorentz transformation. One defines (as a matter of notation), Λ ν μ ≡ ( Λ − 1 ) μ ν , {\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },} and may in this notation write A ′ ν = Λ ν μ A μ . {\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.} Now for a subtlety. The implied summation on the right hand side of A ′ ν = Λ ν μ A μ = ( Λ − 1 ) μ ν A μ {\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }} is running over a row index of the matrix representing Λ−1. Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of Λ acting on the column vector Aμ. That is, in pure matrix notation, A ′ = ( Λ − 1 ) T A . {\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.} This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations, simply replace Λ with Π(Λ).
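The inverse-transpose rule for lower-index components, and the frame independence of the contraction AᵘAᵤ, can be checked numerically. A minimal sketch, assuming c = 1 and a concrete x-boost with β = 0.6:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
g, b = 1.25, 0.6                       # gamma = 1.25 for beta = 0.6
L = np.array([[g, -g*b, 0, 0],
              [-g*b, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])         # boost along x

A_up = np.array([1.0, 2.0, 3.0, 4.0])  # contravariant components A^mu
A_dn = eta @ A_up                      # lowered components A_mu

# Lower-index components transform with the inverse transpose of Lambda:
A_dn_prime = np.linalg.inv(L).T @ A_dn
# Consistency: lowering the transformed contravariant vector gives the same:
print(np.allclose(A_dn_prime, eta @ (L @ A_up)))  # True
# The contraction A^mu A_mu is the same in both frames:
print(np.isclose(A_up @ A_dn, (L @ A_up) @ A_dn_prime))  # True
```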
=== Tensors === If A and B are linear operators on vector spaces U and V, then a linear operator A ⊗ B may be defined on the tensor product of U and V, denoted U ⊗ V, according to ( A ⊗ B ) ( u ⊗ v ) = A u ⊗ B v . {\displaystyle (A\otimes B)(u\otimes v)=Au\otimes Bv\,.} (T1) From this it is immediately clear that if u and v are four-vectors in V, then u ⊗ v ∈ T2V ≡ V ⊗ V transforms as u ′ ⊗ v ′ = Λ u ⊗ Λ v = Λ μ ′ μ u μ ⊗ Λ ν ′ ν v ν = Λ μ ′ μ Λ ν ′ ν ( u μ ⊗ v ν ) ≡ Λ μ ′ μ Λ ν ′ ν T μ ν . {\displaystyle u'\otimes v'=\Lambda u\otimes \Lambda v={\Lambda ^{\mu '}}_{\mu }u^{\mu }\otimes {\Lambda ^{\nu '}}_{\nu }v^{\nu }={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }\left(u^{\mu }\otimes v^{\nu }\right)\equiv {\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }T^{\mu \nu }\,.} (T2) The second step uses the bilinearity of the tensor product and the last step defines a 2-tensor on component form, or rather, it just renames the tensor u ⊗ v. These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space V can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity T. It is given by T θ ′ ι ′ ⋯ κ ′ α ′ β ′ ⋯ ζ ′ = Λ α ′ α Λ β ′ β ⋯ Λ ζ ′ ζ Λ θ ′ θ Λ ι ′ ι ⋯ Λ κ ′ κ T θ ι ⋯ κ α β ⋯ ζ , {\displaystyle T_{\theta '\iota '\cdots \kappa '}^{\alpha '\beta '\cdots \zeta '}={\Lambda ^{\alpha '}}_{\alpha }{\Lambda ^{\beta '}}_{\beta }\cdots {\Lambda ^{\zeta '}}_{\zeta }{\Lambda _{\theta '}}^{\theta }{\Lambda _{\iota '}}^{\iota }\cdots {\Lambda _{\kappa '}}^{\kappa }T_{\theta \iota \cdots \kappa }^{\alpha \beta \cdots \zeta }\,,} (T3) where Λχ′ψ is defined above. This form can generally be reduced to the form for general n-component objects given above with a single matrix (Π(Λ)) operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor. ==== Transformation of the electromagnetic field ==== Lorentz transformations can also be used to illustrate that the magnetic field B and electric field E are simply different aspects of the same force, the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment. An observer measures a charge at rest in frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer does not observe any magnetic field. The other observer in frame F′ moves at velocity v relative to F and the charge. This observer sees a different electric field because the charge moves at velocity −v in their rest frame.
The motion of the charge corresponds to an electric current, and thus the observer in frame F′ also sees a magnetic field. The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector. The electromagnetic field strength tensor is given by F μ ν = [ 0 − 1 c E x − 1 c E y − 1 c E z 1 c E x 0 − B z B y 1 c E y B z 0 − B x 1 c E z − B y B x 0 ] (SI units, signature ( + , − , − , − ) ) . {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-{\frac {1}{c}}E_{x}&-{\frac {1}{c}}E_{y}&-{\frac {1}{c}}E_{z}\\{\frac {1}{c}}E_{x}&0&-B_{z}&B_{y}\\{\frac {1}{c}}E_{y}&B_{z}&0&-B_{x}\\{\frac {1}{c}}E_{z}&-B_{y}&B_{x}&0\end{bmatrix}}{\text{(SI units, signature }}(+,-,-,-){\text{)}}.} in SI units. In relativity, the Gaussian system of units is often preferred over SI units, even in texts whose main choice of units is SI units, because in it the electric field E and the magnetic induction B have the same units making the appearance of the electromagnetic field tensor more natural. Consider a Lorentz boost in the x-direction. It is given by Λ μ ν = [ γ − γ β 0 0 − γ β γ 0 0 0 0 1 0 0 0 0 1 ] , F μ ν = [ 0 E x E y E z − E x 0 B z − B y − E y − B z 0 B x − E z B y − B x 0 ] (Gaussian units, signature ( − , + , + , + ) ) , {\displaystyle {\Lambda ^{\mu }}_{\nu }={\begin{bmatrix}\gamma &-\gamma \beta &0&0\\-\gamma \beta &\gamma &0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}},\qquad F^{\mu \nu }={\begin{bmatrix}0&E_{x}&E_{y}&E_{z}\\-E_{x}&0&B_{z}&-B_{y}\\-E_{y}&-B_{z}&0&B_{x}\\-E_{z}&B_{y}&-B_{x}&0\end{bmatrix}}{\text{(Gaussian units, signature }}(-,+,+,+){\text{)}},} where the field tensor is displayed side by side for easiest possible reference in the manipulations below. The general transformation law (T3) becomes F μ ′ ν ′ = Λ μ ′ μ Λ ν ′ ν F μ ν . 
{\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }.} For the magnetic field one obtains B x ′ = F 2 ′ 3 ′ = Λ 2 μ Λ 3 ν F μ ν = Λ 2 2 Λ 3 3 F 23 = 1 × 1 × B x = B x , B y ′ = F 3 ′ 1 ′ = Λ 3 μ Λ 1 ν F μ ν = Λ 3 3 Λ 1 ν F 3 ν = Λ 3 3 Λ 1 0 F 30 + Λ 3 3 Λ 1 1 F 31 = 1 × ( − β γ ) ( − E z ) + 1 × γ B y = γ B y + β γ E z = γ ( B − β × E ) y B z ′ = F 1 ′ 2 ′ = Λ 1 μ Λ 2 ν F μ ν = Λ 1 μ Λ 2 2 F μ 2 = Λ 1 0 Λ 2 2 F 02 + Λ 1 1 Λ 2 2 F 12 = ( − γ β ) × 1 × E y + γ × 1 × B z = γ B z − β γ E y = γ ( B − β × E ) z {\displaystyle {\begin{aligned}B_{x'}&=F^{2'3'}={\Lambda ^{2}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{2}}_{2}{\Lambda ^{3}}_{3}F^{23}=1\times 1\times B_{x}\\&=B_{x},\\B_{y'}&=F^{3'1'}={\Lambda ^{3}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{\nu }F^{3\nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{0}F^{30}+{\Lambda ^{3}}_{3}{\Lambda ^{1}}_{1}F^{31}\\&=1\times (-\beta \gamma )(-E_{z})+1\times \gamma B_{y}=\gamma B_{y}+\beta \gamma E_{z}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{y}\\B_{z'}&=F^{1'2'}={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{1}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{1}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=(-\gamma \beta )\times 1\times E_{y}+\gamma \times 1\times B_{z}=\gamma B_{z}-\beta \gamma E_{y}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{z}\end{aligned}}} For the electric field results E x ′ = F 0 ′ 1 ′ = Λ 0 μ Λ 1 ν F μ ν = Λ 0 1 Λ 1 0 F 10 + Λ 0 0 Λ 1 1 F 01 = ( − γ β ) ( − γ β ) ( − E x ) + γ γ E x = − γ 2 β 2 ( E x ) + γ 2 E x = E x ( 1 − β 2 ) γ 2 = E x , E y ′ = F 0 ′ 2 ′ = Λ 0 μ Λ 2 ν F μ ν = Λ 0 μ Λ 2 2 F μ 2 = Λ 0 0 Λ 2 2 F 02 + Λ 0 1 Λ 2 2 F 12 = γ × 1 × E y + ( − β γ ) × 1 × B z = γ E y − β γ B z = γ ( E + β × B ) y E z ′ = F 0 ′ 3 ′ = Λ 0 μ Λ 3 ν F μ ν = Λ 0 μ Λ 3 3 F μ 3 = Λ 0 0 Λ 3 3 F 03 + Λ 0 1 Λ 3 3 F 13 = γ × 1 × E 
z − β γ × 1 × ( − B y ) = γ E z + β γ B y = γ ( E + β × B ) z . {\displaystyle {\begin{aligned}E_{x'}&=F^{0'1'}={\Lambda ^{0}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{1}{\Lambda ^{1}}_{0}F^{10}+{\Lambda ^{0}}_{0}{\Lambda ^{1}}_{1}F^{01}\\&=(-\gamma \beta )(-\gamma \beta )(-E_{x})+\gamma \gamma E_{x}=-\gamma ^{2}\beta ^{2}(E_{x})+\gamma ^{2}E_{x}=E_{x}(1-\beta ^{2})\gamma ^{2}\\&=E_{x},\\E_{y'}&=F^{0'2'}={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{0}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{0}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=\gamma \times 1\times E_{y}+(-\beta \gamma )\times 1\times B_{z}=\gamma E_{y}-\beta \gamma B_{z}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{y}\\E_{z'}&=F^{0'3'}={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{3}F^{\mu 3}={\Lambda ^{0}}_{0}{\Lambda ^{3}}_{3}F^{03}+{\Lambda ^{0}}_{1}{\Lambda ^{3}}_{3}F^{13}\\&=\gamma \times 1\times E_{z}-\beta \gamma \times 1\times (-B_{y})=\gamma E_{z}+\beta \gamma B_{y}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{z}.\end{aligned}}} Here, β = (β, 0, 0) is used. These results can be summarized by E ∥ ′ = E ∥ B ∥ ′ = B ∥ E ⊥ ′ = γ ( E ⊥ + β × B ⊥ ) = γ ( E + β × B ) ⊥ , B ⊥ ′ = γ ( B ⊥ − β × E ⊥ ) = γ ( B − β × E ) ⊥ , {\displaystyle {\begin{aligned}\mathbf {E} _{\parallel '}&=\mathbf {E} _{\parallel }\\\mathbf {B} _{\parallel '}&=\mathbf {B} _{\parallel }\\\mathbf {E} _{\bot '}&=\gamma \left(\mathbf {E} _{\bot }+{\boldsymbol {\beta }}\times \mathbf {B} _{\bot }\right)=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{\bot },\\\mathbf {B} _{\bot '}&=\gamma \left(\mathbf {B} _{\bot }-{\boldsymbol {\beta }}\times \mathbf {E} _{\bot }\right)=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{\bot },\end{aligned}}} and are independent of the metric signature. 
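These component-by-component results can be spot-checked numerically. The following sketch (an illustrative check, assuming Python with NumPy; the field values and boost speed are arbitrary) assembles F^μν in the convention used above (F^01 = E_x, F^23 = B_x, and so on), applies the boost as a matrix congruence, and compares against the summarized formulas:

```python
import numpy as np

beta = 0.6
gamma_ = 1.0 / np.sqrt(1.0 - beta**2)
E = np.array([1.0, 2.0, 3.0])
B = np.array([-0.5, 4.0, 1.5])

# Field-strength tensor in the article's convention: F^{0i} = E_i,
# F^{23} = B_x, F^{31} = B_y, F^{12} = B_z (antisymmetric).
F = np.zeros((4, 4))
F[0, 1:] = E
F[1:, 0] = -E
F[1, 2], F[2, 3], F[3, 1] = B[2], B[0], B[1]
F[2, 1], F[3, 2], F[1, 3] = -B[2], -B[0], -B[1]

# Boost along x: Lambda^0_0 = Lambda^1_1 = gamma, Lambda^0_1 = Lambda^1_0 = -beta*gamma
L = np.diag([gamma_, gamma_, 1.0, 1.0])
L[0, 1] = L[1, 0] = -beta * gamma_

Fp = L @ F @ L.T          # F'^{mu'nu'} = Lambda^{mu'}_mu Lambda^{nu'}_nu F^{mu nu}

Ep = Fp[0, 1:]
Bp = np.array([Fp[2, 3], Fp[3, 1], Fp[1, 2]])

b = np.array([beta, 0.0, 0.0])
# parallel components unchanged, perpendicular ones pick up the factor gamma
assert np.allclose(Ep, [E[0], *(gamma_ * (E + np.cross(b, B)))[1:]])
assert np.allclose(Bp, [B[0], *(gamma_ * (B - np.cross(b, E)))[1:]])
```

The x (parallel) components pass through unchanged, while the perpendicular components acquire the factor γ, exactly as in the summarized transformation.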
For SI units, substitute E → E⁄c. Misner, Thorne & Wheeler (1973) refer to this last form as the 3 + 1 view as opposed to the geometric view represented by the tensor expression F μ ′ ν ′ = Λ μ ′ μ Λ ν ′ ν F μ ν , {\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu },} and make a strong point of the ease with which results that are difficult to achieve using the 3 + 1 view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime as opposed to two interdependent, but separate, 3-vector fields in space and time. The fields E (alone) and B (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are equations (T1) and (T2) that immediately yield (T3). One should note that the primed and unprimed tensors refer to the same event in spacetime. Thus the complete equation with spacetime dependence is F μ ′ ν ′ ( x ′ ) = Λ μ ′ μ Λ ν ′ ν F μ ν ( Λ − 1 x ′ ) = Λ μ ′ μ Λ ν ′ ν F μ ν ( x ) . {\displaystyle F^{\mu '\nu '}\left(x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }\left(\Lambda ^{-1}x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }(x).} Length contraction has an effect on charge density ρ and current density J, and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. 
It turns out they transform exactly like the space-time and energy-momentum four-vectors, j ′ = j − γ ρ v n + ( γ − 1 ) ( j ⋅ n ) n ρ ′ = γ ( ρ − j ⋅ v n c 2 ) , {\displaystyle {\begin{aligned}\mathbf {j} '&=\mathbf {j} -\gamma \rho v\mathbf {n} +\left(\gamma -1\right)(\mathbf {j} \cdot \mathbf {n} )\mathbf {n} \\\rho '&=\gamma \left(\rho -\mathbf {j} \cdot {\frac {v\mathbf {n} }{c^{2}}}\right),\end{aligned}}} or, in the simpler geometric view, j μ ′ = Λ μ ′ μ j μ . {\displaystyle j^{\mu '}={\Lambda ^{\mu '}}_{\mu }j^{\mu }.} Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector. The Maxwell equations are invariant under Lorentz transformations. === Spinors === Equation (T1) holds unmodified for any representation of the Lorentz group, including the bispinor representation. In (T2) one simply replaces all occurrences of Λ by the bispinor representation Π(Λ). The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons. ==== Transformation of general fields ==== A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to the rule where W(Λ, p) is Wigner's little group and D(j) is the (2j + 1)-dimensional representation of SO(3). == See also == == Footnotes == == Notes == == References == === Websites === O'Connor, John J.; Robertson, Edmund F. (1996), A History of Special Relativity Brown, Harvey R. (2003), Michelson, FitzGerald and Lorentz: the Origins of Relativity Revisited === Papers === === Books === == Further reading == Ernst, A.; Hsu, J.-P. (2001), "First proposal of the universal speed of light by Voigt 1887" (PDF), Chinese Journal of Physics, 39 (3): 211–230, Bibcode:2001ChJPh..39..211E, archived from the original (PDF) on 2011-07-16 Thornton, Stephen T.; Marion, Jerry B. (2004), Classical dynamics of particles and systems (5th ed.), Belmont, [CA.]: Brooks/Cole, pp.
546–579, ISBN 978-0-534-40896-1 Voigt, Woldemar (1887), "Über das Doppler'sche princip", Nachrichten von der Königlicher Gesellschaft der Wissenschaften zu Göttingen, 2: 41–51 == External links == Derivation of the Lorentz transformations. This web page contains a more detailed derivation of the Lorentz transformation with special emphasis on group properties. The Paradox of Special Relativity. This webpage poses a problem, the solution of which is the Lorentz transformation, which is presented graphically in its next page. Relativity Archived 2011-08-29 at the Wayback Machine – a chapter from an online textbook Warp Special Relativity Simulator. A computer program demonstrating the Lorentz transformations on everyday objects. Animation clip on YouTube visualizing the Lorentz transformation. MinutePhysics video on YouTube explaining and visualizing the Lorentz transformation with a mechanical Minkowski diagram Interactive graph on Desmos (graphing) showing Lorentz transformations with a virtual Minkowski diagram Interactive graph on Desmos showing Lorentz transformations with points and hyperbolas Lorentz Frames Animated from John de Pillis. Online Flash animations of Galilean and Lorentz frames, various paradoxes, EM wave phenomena, etc.
Wikipedia/Lorentz_transformation
The Navier–Stokes equations ( nav-YAY STOHKS) are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes). The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that the Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic equations and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable). The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics. The Navier–Stokes equations are also of great interest in a purely mathematical sense.
Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample. == Flow velocity == The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time. 
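The characterization of streamlines as integral curves of the velocity field can be illustrated with a short numerical integration. The sketch below (assuming Python with NumPy; the rigid-rotation field u = (−y, x) and the step size are hypothetical choices for illustration) integrates dx/ds = u(x) with a Runge–Kutta step and confirms that the curve stays on a circle and closes on itself:

```python
import numpy as np

def u(x):
    # steady rigid-rotation velocity field: u = (-y, x)
    return np.array([-x[1], x[0]])

# integrate the streamline dx/ds = u(x) with classical 4th-order Runge-Kutta
x = np.array([1.0, 0.0])
ds = 1e-3
for _ in range(int(2 * np.pi / ds)):
    k1 = u(x)
    k2 = u(x + 0.5 * ds * k1)
    k3 = u(x + 0.5 * ds * k2)
    k4 = u(x + ds * k3)
    x = x + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# the integral curve stays on the unit circle and returns near its start
assert abs(np.linalg.norm(x) - 1.0) < 1e-6
assert np.allclose(x, [1.0, 0.0], atol=1e-2)
```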
== General continuum equations == The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is: D u D t = 1 ρ ∇ ⋅ σ + f . {\displaystyle {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} .} By setting the Cauchy stress tensor σ {\textstyle {\boldsymbol {\sigma }}} to be the sum of a viscosity term τ {\textstyle {\boldsymbol {\tau }}} (the deviatoric stress) and a pressure term − p I {\textstyle -p\mathbf {I} } (volumetric stress), we arrive at: where D D t {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative, defined as ∂ ∂ t + u ⋅ ∇ {\textstyle {\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla } , ρ {\textstyle \rho } is the (mass) density, u {\textstyle \mathbf {u} } is the flow velocity, ∇ ⋅ {\textstyle \nabla \cdot \,} is the divergence, p {\textstyle p} is the pressure, t {\textstyle t} is time, τ {\textstyle {\boldsymbol {\tau }}} is the deviatoric stress tensor, which has order 2, f {\textstyle \mathbf {f} } represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on. In this form, it is apparent that in the assumption of an inviscid fluid – no deviatoric stress – Cauchy equations reduce to the Euler equations.
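As a small worked example of the material derivative D/Dt = ∂/∂t + u · ∇ used above, the sketch below (assuming Python with NumPy; the velocity field is a hypothetical example) evaluates Du/Dt by central differences for the field u = (x/(1 + t), 0, 0), in which each fluid particle keeps its initial velocity, so the material acceleration vanishes:

```python
import numpy as np

# Velocity field u(x, t) = (x/(1+t), 0, 0): a particle at x0 at t = 0 moves
# along x(t) = x0 (1 + t) with the constant velocity x0, so Du/Dt = 0.
def u(x, t):
    return np.array([x[0] / (1.0 + t), 0.0, 0.0])

x, t, h = np.array([2.0, 0.5, -1.0]), 0.3, 1e-6

# Du/Dt = du/dt + (u . grad) u, derivatives taken by central differences
dudt = (u(x, t + h) - u(x, t - h)) / (2 * h)
grad_u = np.array([(u(x + h * e, t) - u(x - h * e, t)) / (2 * h)
                   for e in np.eye(3)]).T          # grad_u[i, j] = d u_i / d x_j
Dudt = dudt + grad_u @ u(x, t)

assert np.allclose(Dudt, 0.0, atol=1e-6)
```

Here the local term ∂u/∂t = −x/(1 + t)² and the convective term (u · ∇)u = x/(1 + t)² cancel exactly.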
Assuming conservation of mass, with the known properties of divergence and gradient we can use the mass continuity equation, which represents the mass per unit volume of a homogeneous fluid with respect to space and time (i.e., material derivative D D t {\displaystyle {\frac {\mathrm {D} }{\mathrm {D} t}}} ) of any finite volume (V) to represent the change of velocity in fluid media: D m D t = ∭ V ( D ρ D t + ρ ( ∇ ⋅ u ) ) d V D ρ D t + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ( ∇ ρ ) ⋅ u + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\begin{aligned}&{\frac {\mathrm {D} m}{\mathrm {D} t}}=\iiint \limits _{V}\left({\frac {\mathrm {D} \rho }{\mathrm {D} t}}+\rho (\nabla \cdot \mathbf {u} )\right)\,dV\\[5pt]&{\frac {\mathrm {D} \rho }{\mathrm {D} t}}+\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+(\nabla \rho )\cdot \mathbf {u} +\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0\end{aligned}}} where D m D t {\textstyle {\frac {\mathrm {D} m}{\mathrm {D} t}}} is the material derivative of mass per unit volume (density, ρ {\displaystyle \rho } ), ∭ V ( F ( x 1 , x 2 , x 3 , t ) ) d V {\textstyle \iiint \limits _{V}(F(x_{1},x_{2},x_{3},t))\,dV} is the mathematical operation for the integration throughout the volume (V), ∂ ∂ t {\textstyle {\frac {\partial }{\partial t}}} is the partial derivative mathematical operator, ∇ ⋅ u {\textstyle \nabla \cdot \mathbf {u} \,} is the divergence of the flow velocity ( u {\displaystyle \mathbf {u} } ), which is a scalar field, Note 1 ∇ ρ {\textstyle \nabla \rho \,} is the gradient of density ( ρ {\displaystyle \rho } ), which is the vector derivative of a scalar field, Note 1 Note 1 – Refer to the mathematical operator del represented by the nabla ( ∇ {\displaystyle \nabla } ) symbol. to arrive at the conservation form of the equations of motion.
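The continuity equation ∂ρ/∂t + ∇ · (ρu) = 0 can be checked on an exact solution. The sketch below (assuming Python with NumPy; the uniformly expanding flow with spatially uniform density is a hypothetical example) verifies the identity by finite differences:

```python
import numpy as np

# Exact solution of the continuity equation: the uniformly expanding flow
# u = (x/(1+t), 0, 0) with spatially uniform density rho = rho0/(1+t).
rho0 = 2.0
u   = lambda x, t: np.array([x[0] / (1.0 + t), 0.0, 0.0])
rho = lambda x, t: rho0 / (1.0 + t)

x, t, h = np.array([1.5, -0.7, 0.4]), 0.2, 1e-6

drho_dt = (rho(x, t + h) - rho(x, t - h)) / (2 * h)
# div(rho u) = sum_i d(rho u_i)/dx_i, by central differences
div_rho_u = sum(
    (rho(x + h * e, t) * u(x + h * e, t)[i]
     - rho(x - h * e, t) * u(x - h * e, t)[i]) / (2 * h)
    for i, e in enumerate(np.eye(3))
)
assert abs(drho_dt + div_rho_u) < 1e-6
```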
This is often written: where ⊗ {\textstyle \otimes } is the outer product of the flow velocity ( u {\displaystyle \mathbf {u} } ): u ⊗ u = u u T {\displaystyle \mathbf {u} \otimes \mathbf {u} =\mathbf {u} \mathbf {u} ^{\mathrm {T} }} The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity). All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below. === Convective acceleration === A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle. == Compressible flow == Remark: here, the deviatoric stress tensor is denoted τ {\textstyle {\boldsymbol {\tau }}} as it was in the general continuum equations and in the incompressible flow section. The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. 
So the stress variable is the tensor gradient ∇ u {\textstyle \nabla \mathbf {u} } , or more simply the rate-of-strain tensor: ε ( ∇ u ) ≡ 1 2 ∇ u + 1 2 ( ∇ u ) T {\textstyle {\boldsymbol {\varepsilon }}\left(\nabla \mathbf {u} \right)\equiv {\frac {1}{2}}\nabla \mathbf {u} +{\frac {1}{2}}\left(\nabla \mathbf {u} \right)^{T}} the deviatoric stress is linear in this variable: σ ( ε ) = − p I + C : ε {\textstyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +\mathbf {C} :{\boldsymbol {\varepsilon }}} , where p {\textstyle p} is independent of the strain rate tensor, C {\textstyle \mathbf {C} } is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product. the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently C {\textstyle \mathbf {C} } is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity λ {\textstyle \lambda } and the dynamic viscosity μ {\textstyle \mu } , as is usual in linear elasticity: where I {\textstyle \mathbf {I} } is the identity tensor, and tr ⁡ ( ε ) {\textstyle \operatorname {tr} ({\boldsymbol {\varepsilon }})} is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as: σ = − p I + λ ( ∇ ⋅ u ) I + μ ( ∇ u + ( ∇ u ) T ) . {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\lambda (\nabla \cdot \mathbf {u} )\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).} Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow: tr ⁡ ( ε ) = ∇ ⋅ u . {\displaystyle \operatorname {tr} ({\boldsymbol {\varepsilon }})=\nabla \cdot \mathbf {u} .} Given this relation, and since the trace of the identity tensor in three dimensions is three: tr ⁡ ( I ) = 3.
{\displaystyle \operatorname {tr} ({\boldsymbol {I}})=3.} the trace of the stress tensor in three dimensions becomes: tr ⁡ ( σ ) = − 3 p + ( 3 λ + 2 μ ) ∇ ⋅ u . {\displaystyle \operatorname {tr} ({\boldsymbol {\sigma }})=-3p+(3\lambda +2\mu )\nabla \cdot \mathbf {u} .} So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics: σ = − [ p − ( λ + 2 3 μ ) ( ∇ ⋅ u ) ] I + μ ( ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ) {\displaystyle {\boldsymbol {\sigma }}=-\left[p-\left(\lambda +{\tfrac {2}{3}}\mu \right)\left(\nabla \cdot \mathbf {u} \right)\right]\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }-{\tfrac {2}{3}}\left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right)} Introducing the bulk viscosity ζ {\textstyle \zeta } , ζ ≡ λ + 2 3 μ , {\displaystyle \zeta \equiv \lambda +{\tfrac {2}{3}}\mu ,} we arrive to the linear constitutive equation in the form usually employed in thermal hydraulics: which can also be arranged in the other usual form: σ = − p I + μ ( ∇ u + ( ∇ u ) T ) + ( ζ − 2 3 μ ) ( ∇ ⋅ u ) I . {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)+\left(\zeta -{\frac {2}{3}}\mu \right)(\nabla \cdot \mathbf {u} )\mathbf {I} .} Note that in the compressible case the pressure is no more proportional to the isotropic stress term, since there is the additional bulk viscosity term: p = − 1 3 tr ⁡ ( σ ) + ζ ( ∇ ⋅ u ) {\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta (\nabla \cdot \mathbf {u} )} and the deviatoric stress tensor σ ′ {\displaystyle {\boldsymbol {\sigma }}'} is still coincident with the shear stress tensor τ {\displaystyle {\boldsymbol {\tau }}} (i.e. 
the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity: σ ′ = τ = μ [ ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ] {\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]} Both bulk viscosity ζ {\textstyle \zeta } and dynamic viscosity μ {\textstyle \mu } need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficients in the conservation variables is called an equation of state. The most general of the Navier–Stokes equations become in index notation, the equation can be written as The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to: ρ D u D t = ∂ ∂ t ( ρ u ) + ∇ ⋅ ( ρ u ⊗ u ) {\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} )} to give finally: Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process, that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion.
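The trace and deviatoric-stress identities above lend themselves to a direct numerical check. The following sketch (assuming Python with NumPy; the velocity gradient and material constants are arbitrary) builds σ from the linear constitutive relation and verifies tr(σ) = −3p + (3λ + 2μ)∇ · u, the mean-pressure relation, and the tracelessness of τ:

```python
import numpy as np

rng = np.random.default_rng(0)
grad_u = rng.normal(size=(3, 3))        # arbitrary velocity gradient
p, lam, mu = 1.7, 0.9, 0.4              # pressure and the two Lame parameters
zeta = lam + 2.0 / 3.0 * mu             # bulk viscosity

eps = 0.5 * (grad_u + grad_u.T)         # rate-of-strain tensor
div_u = np.trace(eps)                   # tr(eps) = div u
I = np.eye(3)

# linear constitutive relation: sigma = -p I + lam (div u) I + mu (grad u + grad u^T)
sigma = -p * I + lam * div_u * I + mu * (grad_u + grad_u.T)

# trace identity: tr(sigma) = -3p + (3 lam + 2 mu) div u
assert np.isclose(np.trace(sigma), -3 * p + (3 * lam + 2 * mu) * div_u)

# mean (mechanical) pressure differs from p by the bulk-viscosity term
p_bar = -np.trace(sigma) / 3.0
assert np.isclose(p_bar, p - zeta * div_u)

# the deviatoric (shear) part is traceless, and sigma splits into the two parts
tau = mu * (grad_u + grad_u.T - 2.0 / 3.0 * div_u * I)
assert np.isclose(np.trace(tau), 0.0)
assert np.allclose(sigma, -(p - zeta * div_u) * I + tau)
```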
In some cases, the second viscosity ζ {\textstyle \zeta } can be assumed to be constant, in which case the effect of the volume viscosity ζ {\textstyle \zeta } is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below. ∇ ⋅ ( ∇ ⋅ u ) I = ∇ ( ∇ ⋅ u ) , {\displaystyle \nabla \cdot (\nabla \cdot \mathbf {u} )\mathbf {I} =\nabla (\nabla \cdot \mathbf {u} ),} p ¯ ≡ p − ζ ∇ ⋅ u , {\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} ,} However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0 {\textstyle \zeta =0} . The assumption of setting ζ = 0 {\textstyle \zeta =0} is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become If the dynamic μ and bulk ζ {\displaystyle \zeta } viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of tensor ∇ u {\textstyle \nabla \mathbf {u} } is ∇ 2 u {\textstyle \nabla ^{2}\mathbf {u} } and the divergence of tensor ( ∇ u ) T {\textstyle \left(\nabla \mathbf {u} \right)^{\mathrm {T} }} is ∇ ( ∇ ⋅ u ) {\textstyle \nabla \left(\nabla \cdot \mathbf {u} \right)} , one finally arrives at the compressible Navier–Stokes momentum equation: where D D t {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative. ν = μ ρ {\displaystyle \nu ={\frac {\mu }{\rho }}} is the shear kinematic viscosity and ξ = ζ ρ {\displaystyle \xi ={\frac {\zeta }{\rho }}} is the bulk kinematic viscosity.
The left-hand side changes in the conservation form of the Navier–Stokes momentum equation. By bringing the operator on the flow velocity to the left side, one also has: The convective acceleration term can also be written as u ⋅ ∇ u = ( ∇ × u ) × u + 1 2 ∇ u 2 , {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =(\nabla \times \mathbf {u} )\times \mathbf {u} +{\tfrac {1}{2}}\nabla \mathbf {u} ^{2},} where the vector ( ∇ × u ) × u {\textstyle (\nabla \times \mathbf {u} )\times \mathbf {u} } is known as the Lamb vector. For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with ∇ ⋅ u = 0 {\textstyle \nabla \cdot \mathbf {u} =0} . == Incompressible flow == The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient ∇ u {\textstyle \nabla \mathbf {u} } . the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently τ {\textstyle {\boldsymbol {\tau }}} is an isotropic tensor; furthermore, since the deviatoric stress tensor can be expressed in terms of the dynamic viscosity μ {\textstyle \mu } : where ε = 1 2 ( ∇ u + ∇ u T ) {\displaystyle {\boldsymbol {\varepsilon }}={\tfrac {1}{2}}\left(\mathbf {\nabla u} +\mathbf {\nabla u} ^{\mathrm {T} }\right)} is the rate-of-strain tensor. So this decomposition can be made explicit as: This constitutive equation is also called the Newtonian law of viscosity. Dynamic viscosity μ need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of these transport coefficients in the conservative variables is called an equation of state.
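The Lamb-vector identity for the convective acceleration, u · ∇u = (∇ × u) × u + ½∇u², can be verified at a point with analytic derivatives. A minimal sketch (assuming Python with NumPy; the rigid-rotation field is a hypothetical example):

```python
import numpy as np

# Rigid-rotation field u = (-y, x, 0): its Jacobian, curl, and |u|^2 gradient
# are all available in closed form, so both sides can be evaluated exactly.
x, y, z = 0.7, -1.3, 0.4
u = np.array([-y, x, 0.0])
J = np.array([[0.0, -1.0, 0.0],      # J[i, j] = d u_i / d x_j
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

lhs = J @ u                           # (u . grad) u
curl_u = np.array([0.0, 0.0, 2.0])    # curl of (-y, x, 0)
grad_half_u2 = np.array([x, y, 0.0])  # grad(|u|^2 / 2) = grad((x^2 + y^2)/2)

assert np.allclose(lhs, np.cross(curl_u, u) + grad_half_u2)
```

Here (u · ∇)u = (−x, −y, 0) is the centripetal acceleration, reproduced by (−2x, −2y, 0) + (x, y, 0) on the right-hand side.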
The divergence of the deviatoric stress in case of uniform viscosity is given by: ∇ ⋅ τ = 2 μ ∇ ⋅ ε = μ ∇ ⋅ ( ∇ u + ∇ u T ) = μ ∇ 2 u {\displaystyle \nabla \cdot {\boldsymbol {\tau }}=2\mu \nabla \cdot {\boldsymbol {\varepsilon }}=\mu \nabla \cdot \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{\mathrm {T} }\right)=\mu \,\nabla ^{2}\mathbf {u} } because ∇ ⋅ u = 0 {\textstyle \nabla \cdot \mathbf {u} =0} for an incompressible fluid. Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well with all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing by the density: where ν = μ ρ {\textstyle \nu ={\frac {\mu }{\rho }}} is called the kinematic viscosity. By isolating the fluid velocity, one can also state: If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density, ρ {\textstyle \rho } , then we have where p / ρ {\textstyle p/\rho } is called the unit pressure head. In incompressible flows, the pressure field satisfies the Poisson equation, ∇ 2 p = − ρ ∂ u i ∂ x k ∂ u k ∂ x i = − ρ ∂ 2 u i u k ∂ x k ∂ x i , {\displaystyle \nabla ^{2}p=-\rho {\frac {\partial u_{i}}{\partial x_{k}}}{\frac {\partial u_{k}}{\partial x_{i}}}=-\rho {\frac {\partial ^{2}u_{i}u_{k}}{\partial x_{k}\,\partial x_{i}}},} which is obtained by taking the divergence of the momentum equations. It is well worth observing the meaning of each term (compare to the Cauchy momentum equation): ∂ u ∂ t ⏟ Variation + ( u ⋅ ∇ ) u ⏟ Convective acceleration ⏞ Inertia (per volume) = − ∇ w ⏟ Internal source + ν ∇ 2 u ⏟ Diffusion ⏞ Divergence of stress + g ⏟ External source .
{\displaystyle \overbrace {{\vphantom {\frac {}{}}}\underbrace {\frac {\partial \mathbf {u} }{\partial t}} _{\text{Variation}}+\underbrace {{\vphantom {\frac {}{}}}(\mathbf {u} \cdot \nabla )\mathbf {u} } _{\begin{smallmatrix}{\text{Convective}}\\{\text{acceleration}}\end{smallmatrix}}} ^{\text{Inertia (per volume)}}=\overbrace {{\vphantom {\frac {\partial }{\partial }}}\underbrace {{\vphantom {\frac {}{}}}-\nabla w} _{\begin{smallmatrix}{\text{Internal}}\\{\text{source}}\end{smallmatrix}}+\underbrace {{\vphantom {\frac {}{}}}\nu \nabla ^{2}\mathbf {u} } _{\text{Diffusion}}} ^{\text{Divergence of stress}}+\underbrace {{\vphantom {\frac {}{}}}\mathbf {g} } _{\begin{smallmatrix}{\text{External}}\\{\text{source}}\end{smallmatrix}}.} The higher-order term, namely the shear stress divergence ∇ ⋅ τ {\textstyle \nabla \cdot {\boldsymbol {\tau }}} , has simply reduced to the vector Laplacian term μ ∇ 2 u {\textstyle \mu \nabla ^{2}\mathbf {u} } . This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as the heat conduction. In fact neglecting the convection term, incompressible Navier–Stokes equations lead to a vector diffusion equation (namely Stokes equations), but in general the convection term is present, so incompressible Navier–Stokes equations belong to the class of convection–diffusion equations. In the usual case of an external field being a conservative field: g = − ∇ φ {\displaystyle \mathbf {g} =-\nabla \varphi } by defining the hydraulic head: h ≡ w + φ {\displaystyle h\equiv w+\varphi } one can finally condense the whole source in one term, arriving to the incompressible Navier–Stokes equation with conservative external field: ∂ u ∂ t + ( u ⋅ ∇ ) u − ν ∇ 2 u = − ∇ h . 
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-\nabla h.} The incompressible Navier–Stokes equation with uniform density and viscosity and a conservative external field is the fundamental equation of hydraulics. The domain for these equations is commonly a Euclidean space of three or fewer dimensions, for which an orthogonal coordinate reference frame is usually set to make explicit the system of scalar partial differential equations to be solved. The three-dimensional orthogonal coordinate systems are three: Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed; this is also the case for the first-order terms (such as the variation and convection terms) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) some tensor calculus is required to deduce an expression in non-Cartesian orthogonal coordinate systems. A special case of the fundamental equation of hydraulics is Bernoulli's equation.
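The pressure Poisson equation quoted earlier, ∇²p = −ρ ∂u_i/∂x_k ∂u_k/∂x_i, can be checked against a classical exact solution, the two-dimensional Taylor–Green vortex at the initial time. A minimal sketch (assuming Python with NumPy; the sample points are arbitrary):

```python
import numpy as np

# Taylor-Green vortex at t = 0: u = (cos x sin y, -sin x cos y) is
# divergence-free, and p = -(rho/4)(cos 2x + cos 2y) solves the
# pressure Poisson equation  lap p = -rho du_i/dx_k du_k/dx_i.
rho = 1.2
pts = np.array([[0.3, 1.1], [2.0, -0.7], [-1.4, 0.5]])

for x, y in pts:
    # analytic velocity gradient of (u, v)
    dudx, dudy = -np.sin(x) * np.sin(y),  np.cos(x) * np.cos(y)
    dvdx, dvdy = -np.cos(x) * np.cos(y),  np.sin(x) * np.sin(y)
    assert np.isclose(dudx + dvdy, 0.0)                  # divergence-free
    rhs = -rho * (dudx * dudx + 2 * dudy * dvdx + dvdy * dvdy)
    lap_p = rho * (np.cos(2 * x) + np.cos(2 * y))        # laplacian of exact p
    assert np.isclose(lap_p, rhs)
```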
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations, ∂ u ∂ t = Π S ( − ( u ⋅ ∇ ) u + ν ∇ 2 u ) + f S ρ − 1 ∇ p = Π I ( − ( u ⋅ ∇ ) u + ν ∇ 2 u ) + f I {\displaystyle {\begin{aligned}{\frac {\partial \mathbf {u} }{\partial t}}&=\Pi ^{S}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{S}\\\rho ^{-1}\,\nabla p&=\Pi ^{I}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{I}\end{aligned}}} where Π S {\textstyle \Pi ^{S}} and Π I {\textstyle \Pi ^{I}} are solenoidal and irrotational projection operators satisfying Π S + Π I = 1 {\textstyle \Pi ^{S}+\Pi ^{I}=1} , and f S {\textstyle \mathbf {f} ^{S}} and f I {\textstyle \mathbf {f} ^{I}} are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation. The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem: Π S F ( r ) = 1 4 π ∇ × ∫ ∇ ′ × F ( r ′ ) | r − r ′ | d V ′ , Π I = 1 − Π S {\displaystyle \Pi ^{S}\,\mathbf {F} (\mathbf {r} )={\frac {1}{4\pi }}\nabla \times \int {\frac {\nabla ^{\prime }\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V',\quad \Pi ^{I}=1-\Pi ^{S}} with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb's and Biot–Savart's law, not convenient for numerical computation. 
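On a periodic domain, however, the solenoidal projection Π^S reduces to an algebraic operation per Fourier mode: subtract k(k · û)/|k|² from each velocity coefficient. A minimal spectral sketch (assuming Python with NumPy; the test field, a hypothetical mixture of a solenoidal vortex and an irrotational gradient, is chosen for illustration):

```python
import numpy as np

n = 32
k1d = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers on [0, 2*pi)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                              # avoid division by zero at k = 0

xg = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xg, xg, indexing="ij")
# mixed field: solenoidal vortex plus the irrotational gradient of sin(x + y)
u = np.cos(X) * np.sin(Y) + np.cos(X + Y)
v = -np.sin(X) * np.cos(Y) + np.cos(X + Y)

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
div_h = kx * uh + ky * vh                   # the factors of i cancel in the projector
uh_s = uh - kx * div_h / k2                 # (I - k k^T / |k|^2) applied per mode
vh_s = vh - ky * div_h / k2

us = np.real(np.fft.ifft2(uh_s))
vs = np.real(np.fft.ifft2(vh_s))

# spectral divergence of the projected field vanishes ...
div_s = np.real(np.fft.ifft2(1j * (kx * np.fft.fft2(us) + ky * np.fft.fft2(vs))))
assert np.max(np.abs(div_s)) < 1e-10
# ... and the projection recovered exactly the solenoidal vortex part
assert np.allclose(us, np.cos(X) * np.sin(Y), atol=1e-10)
```

This per-mode form is the Fourier counterpart of the Biot–Savart-type integral above and is the basis of standard pseudo-spectral incompressible solvers.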
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by, ( w , ∂ u ∂ t ) = − ( w , ( u ⋅ ∇ ) u ) − ν ( ∇ w : ∇ u ) + ( w , f S ) {\displaystyle \left(\mathbf {w} ,{\frac {\partial \mathbf {u} }{\partial t}}\right)=-{\bigl (}\mathbf {w} ,\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} {\bigr )}-\nu \left(\nabla \mathbf {w} :\nabla \mathbf {u} \right)+\left(\mathbf {w} ,\mathbf {f} ^{S}\right)} for divergence-free test functions w {\textstyle \mathbf {w} } satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There, one will be able to address the question, "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?". The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition. 
=== Weak form of the incompressible Navier–Stokes equations === ==== Strong form ==== Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density ρ {\textstyle \rho } in a domain Ω ⊂ R d ( d = 2 , 3 ) {\displaystyle \Omega \subset \mathbb {R} ^{d}\quad (d=2,3)} with boundary ∂ Ω = Γ D ∪ Γ N , {\displaystyle \partial \Omega =\Gamma _{D}\cup \Gamma _{N},} being Γ D {\textstyle \Gamma _{D}} and Γ N {\textstyle \Gamma _{N}} portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ( Γ D ∩ Γ N = ∅ {\textstyle \Gamma _{D}\cap \Gamma _{N}=\emptyset } ): { ρ ∂ u ∂ t + ρ ( u ⋅ ∇ ) u − ∇ ⋅ σ ( u , p ) = f in Ω × ( 0 , T ) ∇ ⋅ u = 0 in Ω × ( 0 , T ) u = g on Γ D × ( 0 , T ) σ ( u , p ) n ^ = h on Γ N × ( 0 , T ) u ( 0 ) = u 0 in Ω × { 0 } {\displaystyle {\begin{cases}\rho {\dfrac {\partial \mathbf {u} }{\partial t}}+\rho (\mathbf {u} \cdot \nabla )\mathbf {u} -\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)=\mathbf {f} &{\text{ in }}\Omega \times (0,T)\\\nabla \cdot \mathbf {u} =0&{\text{ in }}\Omega \times (0,T)\\\mathbf {u} =\mathbf {g} &{\text{ on }}\Gamma _{D}\times (0,T)\\{\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\mathbf {h} &{\text{ on }}\Gamma _{N}\times (0,T)\\\mathbf {u} (0)=\mathbf {u} _{0}&{\text{ in }}\Omega \times \{0\}\end{cases}}} u {\textstyle \mathbf {u} } is the fluid velocity, p {\textstyle p} the fluid pressure, f {\textstyle \mathbf {f} } a given forcing term, n ^ {\displaystyle {\hat {\mathbf {n} }}} the outward directed unit normal vector to Γ N {\textstyle \Gamma _{N}} , and σ ( u , p ) {\textstyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)} the viscous stress tensor defined as: σ ( u , p ) = − p I + 2 μ ε ( u ) . 
{\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)=-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} ).} Let μ {\textstyle \mu } be the dynamic viscosity of the fluid, I {\textstyle \mathbf {I} } the second-order identity tensor and ε ( u ) {\textstyle {\boldsymbol {\varepsilon }}(\mathbf {u} )} the strain-rate tensor defined as: ε ( u ) = 1 2 ( ( ∇ u ) + ( ∇ u ) T ) . {\displaystyle {\boldsymbol {\varepsilon }}(\mathbf {u} )={\frac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right).} The functions g {\textstyle \mathbf {g} } and h {\textstyle \mathbf {h} } are given Dirichlet and Neumann boundary data, while u 0 {\textstyle \mathbf {u} _{0}} is the initial condition. The first equation is the momentum balance equation, while the second represents the mass conservation, namely the continuity equation. Assuming constant dynamic viscosity, using the vectorial identity ∇ ⋅ ( ∇ f ) T = ∇ ( ∇ ⋅ f ) {\displaystyle \nabla \cdot \left(\nabla \mathbf {f} \right)^{\mathrm {T} }=\nabla (\nabla \cdot \mathbf {f} )} and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as: ∇ ⋅ σ ( u , p ) = ∇ ⋅ ( − p I + 2 μ ε ( u ) ) = − ∇ p + 2 μ ∇ ⋅ ε ( u ) = − ∇ p + 2 μ ∇ ⋅ [ 1 2 ( ( ∇ u ) + ( ∇ u ) T ) ] = − ∇ p + μ ( Δ u + ∇ ⋅ ( ∇ u ) T ) = − ∇ p + μ ( Δ u + ∇ ( ∇ ⋅ u ) ⏟ = 0 ) = − ∇ p + μ Δ u . 
{\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)&=\nabla \cdot \left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right)\\&=-\nabla p+2\mu \nabla \cdot {\boldsymbol {\varepsilon }}(\mathbf {u} )\\&=-\nabla p+2\mu \nabla \cdot \left[{\tfrac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\right]\\&=-\nabla p+\mu \left(\Delta \mathbf {u} +\nabla \cdot \left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\\&=-\nabla p+\mu {\bigl (}\Delta \mathbf {u} +\nabla \underbrace {(\nabla \cdot \mathbf {u} )} _{=0}{\bigr )}=-\nabla p+\mu \,\Delta \mathbf {u} .\end{aligned}}} Moreover, note that the Neumann boundary conditions can be rearranged as: σ ( u , p ) n ^ = ( − p I + 2 μ ε ( u ) ) n ^ = − p n ^ + μ ∂ u ∂ n ^ . {\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right){\hat {\mathbf {n} }}=-p{\hat {\mathbf {n} }}+\mu {\frac {\partial {\boldsymbol {u}}}{\partial {\hat {\mathbf {n} }}}}.} ==== Weak form ==== In order to find the weak form of the Navier–Stokes equations, first consider the momentum equation ρ ∂ u ∂ t − μ Δ u + ρ ( u ⋅ ∇ ) u + ∇ p = f {\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}-\mu \Delta \mathbf {u} +\rho (\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p=\mathbf {f} } then multiply it by a test function v {\textstyle \mathbf {v} } , defined in a suitable space V {\textstyle V} , and integrate both sides over the domain Ω {\textstyle \Omega } : ∫ Ω ρ ∂ u ∂ t ⋅ v − ∫ Ω μ Δ u ⋅ v + ∫ Ω ρ ( u ⋅ ∇ ) u ⋅ v + ∫ Ω ∇ p ⋅ v = ∫ Ω f ⋅ v {\displaystyle \int \limits _{\Omega }\rho {\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} -\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\nabla p\cdot \mathbf {v} =\int
\limits _{\Omega }\mathbf {f} \cdot \mathbf {v} } Integrating the diffusive and pressure terms by parts and using Gauss's theorem: − ∫ Ω μ Δ u ⋅ v = ∫ Ω μ ∇ u ⋅ ∇ v − ∫ ∂ Ω μ ∂ u ∂ n ^ ⋅ v ∫ Ω ∇ p ⋅ v = − ∫ Ω p ∇ ⋅ v + ∫ ∂ Ω p v ⋅ n ^ {\displaystyle {\begin{aligned}-\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} &=\int _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} -\int \limits _{\partial \Omega }\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}\cdot \mathbf {v} \\\int \limits _{\Omega }\nabla p\cdot \mathbf {v} &=-\int \limits _{\Omega }p\nabla \cdot \mathbf {v} +\int \limits _{\partial \Omega }p\mathbf {v} \cdot {\hat {\mathbf {n} }}\end{aligned}}} Using these relations, one gets: ∫ Ω ρ ∂ u ∂ t ⋅ v + ∫ Ω μ ∇ u ⋅ ∇ v + ∫ Ω ρ ( u ⋅ ∇ ) u ⋅ v − ∫ Ω p ∇ ⋅ v = ∫ Ω f ⋅ v + ∫ ∂ Ω ( μ ∂ u ∂ n ^ − p n ^ ) ⋅ v ∀ v ∈ V . {\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} \quad \forall \mathbf {v} \in V.} In the same fashion, the continuity equation is multiplied by a test function q belonging to a space Q {\textstyle Q} and integrated over the domain Ω {\textstyle \Omega } : ∫ Ω q ∇ ⋅ u = 0. ∀ q ∈ Q . 
{\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0.\quad \forall q\in Q.} The function spaces are chosen as follows: V = [ H 0 1 ( Ω ) ] d = { v ∈ [ H 1 ( Ω ) ] d : v = 0 on Γ D } , Q = L 2 ( Ω ) {\displaystyle {\begin{aligned}V=\left[H_{0}^{1}(\Omega )\right]^{d}&=\left\{\mathbf {v} \in \left[H^{1}(\Omega )\right]^{d}:\quad \mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\right\},\\Q&=L^{2}(\Omega )\end{aligned}}} Since the test function v vanishes on the Dirichlet boundary, and taking into account the Neumann condition, the integral on the boundary can be rearranged as: ∫ ∂ Ω ( μ ∂ u ∂ n ^ − p n ^ ) ⋅ v = ∫ Γ D ( μ ∂ u ∂ n ^ − p n ^ ) ⋅ v ⏟ v = 0 on Γ D + ∫ Γ N ∫ Γ N ( μ ∂ u ∂ n ^ − p n ^ ) ⏟ = h on Γ N ⋅ v = ∫ Γ N h ⋅ v . {\displaystyle \int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} =\underbrace {\int \limits _{\Gamma _{D}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} } _{\mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\ }+\int \limits _{\Gamma _{N}}\underbrace {{\vphantom {\int \limits _{\Gamma _{N}}}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)} _{=\mathbf {h} {\text{ on }}\Gamma _{N}}\cdot \mathbf {v} =\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} .} With this in mind, the weak formulation of the Navier–Stokes equations is expressed as: find u ∈ L 2 ( R + [ H 1 ( Ω ) ] d ) ∩ C 0 ( R + [ L 2 ( Ω ) ] d ) such that: { ∫ Ω ρ ∂ u ∂ t ⋅ v + ∫ Ω μ ∇ u ⋅ ∇ v + ∫ Ω ρ ( u ⋅ ∇ ) u ⋅ v − ∫ Ω p ∇ ⋅ v = ∫ Ω f ⋅ v + ∫ Γ N h ⋅ v ∀ v ∈ V , ∫ Ω q ∇ ⋅ u = 0 ∀ q ∈ Q . 
{\displaystyle {\begin{aligned}&{\text{find }}\mathbf {u} \in L^{2}\left(\mathbb {R} ^{+}\;\left[H^{1}(\Omega )\right]^{d}\right)\cap C^{0}\left(\mathbb {R} ^{+}\;\left[L^{2}(\Omega )\right]^{d}\right){\text{ such that: }}\\[5pt]&\quad {\begin{cases}\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \quad \forall \mathbf {v} \in V,\\\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.\end{cases}}\end{aligned}}} === Discrete velocity === With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is ( w i , ∂ u j ∂ t ) = − ( w i , ( u ⋅ ∇ ) u j ) − ν ( ∇ w i : ∇ u j ) + ( w i , f S ) . {\displaystyle \left(\mathbf {w} _{i},{\frac {\partial \mathbf {u} _{j}}{\partial t}}\right)=-{\bigl (}\mathbf {w} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}{\bigr )}-\nu \left(\nabla \mathbf {w} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {w} _{i},\mathbf {f} ^{S}\right).} It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. 
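For readers less familiar with Galerkin assembly, the discrete inner products above can be made concrete on a much simpler model problem. The following sketch (an illustrative analogy only, not the divergence-free stream-function elements discussed here) assembles the weak form of −ν u″ = 1 with homogeneous Dirichlet data using piecewise-linear "hat" basis and test functions:

```python
import numpy as np

# 1D Galerkin sketch: solve -nu * u'' = 1 on (0, 1) with u(0) = u(1) = 0
# using piecewise-linear "hat" basis/test functions (a model problem only).
nu = 1.0
n = 16                        # number of elements
h = 1.0 / n                   # uniform mesh spacing
# Stiffness matrix from nu * (w_i', u_j'): tridiagonal with 2/h and -1/h entries
A = (nu / h) * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
b = np.full(n - 1, h)         # load vector from (w_i, 1): each hat integrates to h
u = np.linalg.solve(A, b)     # interior nodal values

# For this 1D problem linear elements are nodally exact: u(y) = (y - y^2)/(2 nu)
y = np.linspace(h, 1 - h, n - 1)
err = np.max(np.abs(u - (y - y**2) / (2 * nu)))
```

The inner products (w_i′, u_j′) and (w_i, f) become a sparse matrix and a load vector, exactly as in the vector-valued discrete equation above, only with far simpler basis functions.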
Discussion will be restricted to 2D in the following. We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions, ∇ φ = ( ∂ φ ∂ x , ∂ φ ∂ y ) T , ∇ × φ = ( ∂ φ ∂ y , − ∂ φ ∂ x ) T . {\displaystyle {\begin{aligned}\nabla \varphi &=\left({\frac {\partial \varphi }{\partial x}},\,{\frac {\partial \varphi }{\partial y}}\right)^{\mathrm {T} },\\[5pt]\nabla \times \varphi &=\left({\frac {\partial \varphi }{\partial y}},\,-{\frac {\partial \varphi }{\partial x}}\right)^{\mathrm {T} }.\end{aligned}}} Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements. Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces. Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces. Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions. The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations. 
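The two facts used above — that the 2D gradient and curl of a scalar are pointwise orthogonal, and that a velocity taken as the curl of a stream function is automatically divergence-free — can be checked symbolically. A small sketch (an arbitrary smooth φ is assumed):

```python
import sympy as sp

x, y = sp.symbols("x y")
phi = sp.Function("phi")(x, y)    # arbitrary smooth scalar (e.g. a stream function)

grad = sp.Matrix([phi.diff(x), phi.diff(y)])    # 2D gradient
curl = sp.Matrix([phi.diff(y), -phi.diff(x)])   # 2D curl: the gradient rotated 90 degrees

orthogonal = sp.simplify(grad.dot(curl))        # pointwise dot product, expect 0

# A velocity field built from the curl of a stream function is divergence-free:
u = curl
div_u = sp.simplify(u[0].diff(x) + u[1].diff(y))    # expect 0 (equality of mixed partials)
```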
Similar considerations apply to three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D. === Pressure recovery === Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is, ( g i , ∇ p ) = − ( g i , ( u ⋅ ∇ ) u j ) − ν ( ∇ g i : ∇ u j ) + ( g i , f I ) {\displaystyle (\mathbf {g} _{i},\nabla p)=-\left(\mathbf {g} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}\right)-\nu \left(\nabla \mathbf {g} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {g} _{i},\mathbf {f} ^{I}\right)} where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions g i {\textstyle \mathbf {g} _{i}} one would choose the irrotational vector elements obtained from the gradient of the pressure element. == Non-inertial frame of reference == The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference K {\textstyle K} , and a non-inertial frame of reference K ′ {\textstyle K'} , which is translating with velocity U ( t ) {\textstyle \mathbf {U} (t)} and rotating with angular velocity Ω ( t ) {\textstyle \Omega (t)} with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes ∂ u ∂ t + ( u ⋅ ∇ ) u = − 1 ρ ∇ p + ν ∇ 2 u + g − ( 2 Ω × u + Ω × ( Ω × x ) + d U d t + d Ω d t × x ) {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\nu \,\nabla ^{2}\mathbf {u} +\mathbf {g} -\left(2\mathbf {\Omega } \times \mathbf {u} +\mathbf {\Omega } \times (\mathbf {\Omega } \times \mathbf {x} )+{\frac {\mathrm {d} \mathbf {U} }{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {\Omega } }{\mathrm {d} t}}\times \mathbf {x} \right).} Here x {\textstyle \mathbf {x} } and u {\textstyle \mathbf {u} } are measured in the non-inertial frame. 
The first term in the parentheses represents the Coriolis acceleration, the second term is due to the centrifugal acceleration, the third is due to the linear acceleration of K ′ {\textstyle K'} with respect to K {\textstyle K} and the fourth term is due to the angular acceleration of K ′ {\textstyle K'} with respect to K {\textstyle K} . == Other equations == The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed; how much depends on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state. === Continuity equation for incompressible fluid === Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation, as discussed above in the "General continuum equations" section of this article: D m D t = ∭ V ( D ρ D t + ρ ( ∇ ⋅ u ) ) d V D ρ D t + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ( ∇ ρ ) ⋅ u + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\begin{aligned}{\frac {\mathbf {D} m}{\mathbf {Dt} }}&={\iiint \limits _{V}}({{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )})dV\\{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot {\mathbf {u} })&={\frac {\partial \rho }{\partial t}}+({\nabla \rho })\cdot {\mathbf {u} }+{\rho }(\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0\end{aligned}}} A fluid medium for which the density ( ρ {\displaystyle \rho } ) is constant is called incompressible. Therefore, the rate of change of density with respect to time ( ∂ ρ ∂ t ) {\displaystyle ({\frac {\partial \rho }{\partial t}})} and the gradient of density ( ∇ ρ ) {\displaystyle (\nabla \rho )} are equal to zero. 
In this case the general equation of continuity, ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0} , reduces to: ρ ( ∇ ⋅ u ) = 0 {\displaystyle \rho (\nabla {\cdot }{\mathbf {u} })=0} . Furthermore, since the density ( ρ {\displaystyle \rho } ) is a non-zero constant ( ρ ≠ 0 ) {\displaystyle (\rho \neq 0)} , the equation may be divided by it. Therefore, the continuity equation for an incompressible fluid reduces further to: ( ∇ ⋅ u ) = 0 {\displaystyle (\nabla {\cdot {\mathbf {u} }})=0} This relationship, ( ∇ ⋅ u ) = 0 {\textstyle (\nabla {\cdot {\mathbf {u} }})=0} , states that the divergence of the flow velocity vector ( u {\displaystyle \mathbf {u} } ) is zero, which means that for an incompressible fluid the flow velocity field is a solenoidal, or divergence-free, vector field. Combining this relationship with the identity for the vector Laplace operator ( ∇ 2 u = ∇ ( ∇ ⋅ u ) − ∇ × ( ∇ × u ) ) {\displaystyle (\nabla ^{2}\mathbf {u} =\nabla (\nabla \cdot \mathbf {u} )-\nabla \times (\nabla \times \mathbf {u} ))} and the definition of vorticity ( ω → = ∇ × u ) {\displaystyle ({\vec {\omega }}=\nabla \times \mathbf {u} )} , the vector Laplacian for an incompressible fluid reduces to: ∇ 2 u = − ( ∇ × ( ∇ × u ) ) = − ( ∇ × ω → ) {\displaystyle \nabla ^{2}\mathbf {u} =-(\nabla \times (\nabla \times \mathbf {u} ))=-(\nabla \times {\vec {\omega }})} == Stream function for incompressible 2D fluid == Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. 
This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with u z = 0 {\textstyle u_{z}=0} and no dependence of anything on z {\textstyle z} ), where the equations reduce to: ρ ( ∂ u x ∂ t + u x ∂ u x ∂ x + u y ∂ u x ∂ y ) = − ∂ p ∂ x + μ ( ∂ 2 u x ∂ x 2 + ∂ 2 u x ∂ y 2 ) + ρ g x ρ ( ∂ u y ∂ t + u x ∂ u y ∂ x + u y ∂ u y ∂ y ) = − ∂ p ∂ y + μ ( ∂ 2 u y ∂ x 2 + ∂ 2 u y ∂ y 2 ) + ρ g y . {\displaystyle {\begin{aligned}\rho \left({\frac {\partial u_{x}}{\partial t}}+u_{x}{\frac {\partial u_{x}}{\partial x}}+u_{y}{\frac {\partial u_{x}}{\partial y}}\right)&=-{\frac {\partial p}{\partial x}}+\mu \left({\frac {\partial ^{2}u_{x}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{x}}{\partial y^{2}}}\right)+\rho g_{x}\\\rho \left({\frac {\partial u_{y}}{\partial t}}+u_{x}{\frac {\partial u_{y}}{\partial x}}+u_{y}{\frac {\partial u_{y}}{\partial y}}\right)&=-{\frac {\partial p}{\partial y}}+\mu \left({\frac {\partial ^{2}u_{y}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{y}}{\partial y^{2}}}\right)+\rho g_{y}.\end{aligned}}} Differentiating the first with respect to y {\textstyle y} , the second with respect to x {\textstyle x} and subtracting the resulting equations will eliminate pressure and any conservative force. 
For incompressible flow, defining the stream function ψ {\textstyle \psi } through u x = ∂ ψ ∂ y ; u y = − ∂ ψ ∂ x {\displaystyle u_{x}={\frac {\partial \psi }{\partial y}};\quad u_{y}=-{\frac {\partial \psi }{\partial x}}} results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation: ∂ ∂ t ( ∇ 2 ψ ) + ∂ ψ ∂ y ∂ ∂ x ( ∇ 2 ψ ) − ∂ ψ ∂ x ∂ ∂ y ( ∇ 2 ψ ) = ν ∇ 4 ψ {\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \psi }{\partial y}}{\frac {\partial }{\partial x}}\left(\nabla ^{2}\psi \right)-{\frac {\partial \psi }{\partial x}}{\frac {\partial }{\partial y}}\left(\nabla ^{2}\psi \right)=\nu \nabla ^{4}\psi } where ∇ 4 {\textstyle \nabla ^{4}} is the 2D biharmonic operator and ν {\textstyle \nu } is the kinematic viscosity, ν = μ ρ {\textstyle \nu ={\frac {\mu }{\rho }}} . We can also express this compactly using the Jacobian determinant: ∂ ∂ t ( ∇ 2 ψ ) + ∂ ( ψ , ∇ 2 ψ ) ∂ ( y , x ) = ν ∇ 4 ψ . {\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \left(\psi ,\nabla ^{2}\psi \right)}{\partial (y,x)}}=\nu \nabla ^{4}\psi .} This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero. In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function. The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. 
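As a concrete check of the stream-function equation above, one can substitute the decaying Taylor–Green vortex ψ = e^(−2νt) sin x sin y (a classical exact 2D solution) symbolically; the advective Jacobian vanishes because ∇²ψ is proportional to ψ, and the remaining terms balance:

```python
import sympy as sp

x, y, t, nu = sp.symbols("x y t nu", positive=True)
# Decaying Taylor-Green vortex stream function, a classical exact 2D solution:
psi = sp.exp(-2 * nu * t) * sp.sin(x) * sp.sin(y)

lap = lambda f: f.diff(x, 2) + f.diff(y, 2)
w = lap(psi)                                              # nabla^2 psi (= -2 psi here)
jac = psi.diff(y) * w.diff(x) - psi.diff(x) * w.diff(y)   # advective Jacobian term
# Residual of d/dt(nabla^2 psi) + Jacobian - nu * nabla^4 psi, expected to vanish:
residual = sp.simplify(w.diff(t) + jac - nu * lap(lap(psi)))
```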
The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest. == Properties == === Nonlinearity === The Navier–Stokes equations are nonlinear partial differential equations in the general case and remain so in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model. The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood. === Turbulence === Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed to be due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly. The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult: because of the significantly different mixing-length scales involved in turbulent flow, a stable solution requires such a fine mesh resolution that the computational time becomes infeasible for calculation or direct numerical simulation. 
Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ω, k–ε, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive—in time and in computer memory—than RANS, but produces better results because it explicitly resolves the larger turbulent scales. === Applicability === Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations. The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement. Failing that, one may have to resort to molecular dynamics or various hybrid methods. Another limitation is simply the complicated nature of the equations. 
Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist. == Application to specific problems == The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension. Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed by scale analysis to further simplify the problem. === Parallel flow === Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates; the resulting scaled (dimensionless) boundary value problem is: d 2 u d y 2 = − 1 ; u ( 0 ) = u ( 1 ) = 0. {\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-1;\quad u(0)=u(1)=0.} The boundary condition is the no-slip condition. This problem is easily solved for the flow field: u ( y ) = y − y 2 2 . {\displaystyle u(y)={\frac {y-y^{2}}{2}}.} From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate. === Radial flow === Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. 
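The parallel-flow solution above is simple enough to verify symbolically. A quick sympy sketch confirms the ODE, the no-slip boundary conditions, and the resulting dimensionless net flow rate:

```python
import sympy as sp

y = sp.symbols("y")
u = (y - y**2) / 2      # proposed solution of d^2 u / dy^2 = -1, u(0) = u(1) = 0

ode_residual = sp.simplify(u.diff(y, 2) + 1)   # expect 0
bc = (u.subs(y, 0), u.subs(y, 1))              # expect (0, 0)
flow_rate = sp.integrate(u, (y, 0, 1))         # net flow rate, expect 1/12
```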
The velocity field may be represented by a function f(z) that must satisfy: d 2 f d z 2 + R f 2 = − 1 ; f ( − 1 ) = f ( 1 ) = 0. {\displaystyle {\frac {\mathrm {d} ^{2}f}{\mathrm {d} z^{2}}}+Rf^{2}=-1;\quad f(-1)=f(1)=0.} This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for R > 1.41 {\textstyle R>1.41} (approximately; this is not √2), the parameter R {\textstyle R} being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows. === Convection === A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility. == Exact solutions of the Navier–Stokes equations == Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases—with the non-linear terms in the Navier–Stokes equations equal to zero—are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also, more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and Taylor–Green vortex. Time-dependent self-similar solutions of the three-dimensional non-compressible Navier–Stokes equations in Cartesian coordinate can be given with the help of the Kummer's functions with quadratic arguments. 
For the compressible Navier–Stokes equations, the time-dependent self-similar solutions are, however, the Whittaker functions, again with quadratic arguments, when the polytropic equation of state is used as a closing condition. Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers. Under additional assumptions, the component parts can be separated. === A three-dimensional steady-state vortex solution === A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let r {\textstyle r} be a constant radius of the inner coil. One set of solutions is given by: ρ ( x , y , z ) = 3 B r 2 + x 2 + y 2 + z 2 p ( x , y , z ) = − A 2 B ( r 2 + x 2 + y 2 + z 2 ) 3 u ( x , y , z ) = A ( r 2 + x 2 + y 2 + z 2 ) 2 ( 2 ( − r y + x z ) 2 ( r x + y z ) r 2 − x 2 − y 2 + z 2 ) g = 0 μ = 0 {\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {3B}{r^{2}+x^{2}+y^{2}+z^{2}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\\mathbf {u} (x,y,z)&={\frac {A}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}{\begin{pmatrix}2(-ry+xz)\\2(rx+yz)\\r^{2}-x^{2}-y^{2}+z^{2}\end{pmatrix}}\\g&=0\\\mu &=0\end{aligned}}} for arbitrary constants A {\textstyle A} and B {\textstyle B} . This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids where ρ {\textstyle \rho } is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. 
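A small symbolic check of the solution above: the squared norm of the unscaled velocity components collapses to (r² + x² + y² + z²)², which is the Pythagorean-quadruple identity just mentioned and shows that the speed of u decays as A/(r² + x² + y² + z²):

```python
import sympy as sp

x, y, z, r, A = sp.symbols("x y z r A", positive=True)
s = r**2 + x**2 + y**2 + z**2
# Unscaled velocity components of the Hopf-fibration solution (u = A/s**2 times v):
v = sp.Matrix([2 * (-r * y + x * z), 2 * (r * x + y * z), r**2 - x**2 - y**2 + z**2])

# |v|^2 - s^2 expands to zero, i.e. (v1, v2, v3, s) is a Pythagorean quadruple,
# so |u| = A/s and velocity decays away from the origin like density and pressure.
norm_gap = sp.expand(v.dot(v) - s**2)
```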
Other choices of density and pressure are possible with the same velocity field: === Viscous three-dimensional periodic solutions === Two examples of periodic fully-three-dimensional viscous solutions are described in. These solutions are defined on a three-dimensional torus T 3 = [ 0 , L ] 3 {\displaystyle \mathbb {T} ^{3}=[0,L]^{3}} and are characterized by positive and negative helicity respectively. The solution with positive helicity is given by: u x = 4 2 3 3 U 0 [ sin ⁡ ( k x − π / 3 ) cos ⁡ ( k y + π / 3 ) sin ⁡ ( k z + π / 2 ) − cos ⁡ ( k z − π / 3 ) sin ⁡ ( k x + π / 3 ) sin ⁡ ( k y + π / 2 ) ] e − 3 ν k 2 t u y = 4 2 3 3 U 0 [ sin ⁡ ( k y − π / 3 ) cos ⁡ ( k z + π / 3 ) sin ⁡ ( k x + π / 2 ) − cos ⁡ ( k x − π / 3 ) sin ⁡ ( k y + π / 3 ) sin ⁡ ( k z + π / 2 ) ] e − 3 ν k 2 t u z = 4 2 3 3 U 0 [ sin ⁡ ( k z − π / 3 ) cos ⁡ ( k x + π / 3 ) sin ⁡ ( k y + π / 2 ) − cos ⁡ ( k y − π / 3 ) sin ⁡ ( k z + π / 3 ) sin ⁡ ( k x + π / 2 ) ] e − 3 ν k 2 t {\displaystyle {\begin{aligned}u_{x}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kx-\pi /3)\cos(ky+\pi /3)\sin(kz+\pi /2)-\cos(kz-\pi /3)\sin(kx+\pi /3)\sin(ky+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{y}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(ky-\pi /3)\cos(kz+\pi /3)\sin(kx+\pi /2)-\cos(kx-\pi /3)\sin(ky+\pi /3)\sin(kz+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{z}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kz-\pi /3)\cos(kx+\pi /3)\sin(ky+\pi /2)-\cos(ky-\pi /3)\sin(kz+\pi /3)\sin(kx+\pi /2)\,\right]e^{-3\nu k^{2}t}\end{aligned}}} where k = 2 π / L {\displaystyle k=2\pi /L} is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is U 0 2 / 2 {\displaystyle U_{0}^{2}/2} at t = 0 {\displaystyle t=0} . 
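The positive-helicity field above is a Beltrami flow whose vorticity satisfies ∇×u = √3 k u. That property can be spot-checked numerically with central finite differences; the sample point and parameter values below are assumptions chosen only for illustration:

```python
import numpy as np

# Illustrative parameter choices: L = 2*pi box and t = 0.
L = 2 * np.pi
k = 2 * np.pi / L
U0, nu, t = 1.0, 0.1, 0.0
c = 4 * np.sqrt(2) / (3 * np.sqrt(3)) * U0 * np.exp(-3 * nu * k**2 * t)
s, co, pi = np.sin, np.cos, np.pi

def u(p):
    """Positive-helicity velocity field from the formulas above."""
    x, y, z = p
    return c * np.array([
        s(k*x - pi/3)*co(k*y + pi/3)*s(k*z + pi/2) - co(k*z - pi/3)*s(k*x + pi/3)*s(k*y + pi/2),
        s(k*y - pi/3)*co(k*z + pi/3)*s(k*x + pi/2) - co(k*x - pi/3)*s(k*y + pi/3)*s(k*z + pi/2),
        s(k*z - pi/3)*co(k*x + pi/3)*s(k*y + pi/2) - co(k*y - pi/3)*s(k*z + pi/3)*s(k*x + pi/2),
    ])

def curl_fd(p, h=1e-5):
    """Central-difference curl of u at point p."""
    J = np.zeros((3, 3))                  # J[i, j] = d u_i / d x_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (u(p + e) - u(p - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

p = np.array([0.3, 1.1, 2.4])             # arbitrary sample point
beltrami_err = np.max(np.abs(curl_fd(p) - np.sqrt(3) * k * u(p)))
```

The Beltrami factor √3 k is consistent with the decay rate in the exponential: each trigonometric product is a Fourier mode with |k|² = 3k², so viscous decay goes as e^(−3νk²t).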
The pressure field is obtained from the velocity field as p = p 0 − ρ 0 ‖ u ‖ 2 / 2 {\displaystyle p=p_{0}-\rho _{0}\|{\boldsymbol {u}}\|^{2}/2} (where p 0 {\displaystyle p_{0}} and ρ 0 {\displaystyle \rho _{0}} are reference values for the pressure and density fields respectively). Since both solutions belong to the class of Beltrami flows, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by ω = 3 k u {\displaystyle \omega ={\sqrt {3}}\,k\,{\boldsymbol {u}}} . These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor–Green vortex. == Wyld diagrams == Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Leonid Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated with pseudo-random functions in probability distributions. == Representations in 3D == Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g. ∂ x u {\textstyle \partial _{x}u} means the partial derivative of u {\textstyle u} with respect to x {\textstyle x} , and ∂ y 2 f θ {\textstyle \partial _{y}^{2}f_{\theta }} means the second-order partial derivative of f θ {\textstyle f_{\theta }} with respect to y {\textstyle y} . A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier–Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.
=== Cartesian coordinates === From the general form of the Navier–Stokes, with the velocity vector expanded as u = ( u x , u y , u z ) {\textstyle \mathbf {u} =(u_{x},u_{y},u_{z})} , sometimes respectively named u {\textstyle u} , v {\textstyle v} , w {\textstyle w} , we may write the vector equation explicitly, x : ρ ( ∂ t u x + u x ∂ x u x + u y ∂ y u x + u z ∂ z u x ) = − ∂ x p + μ ( ∂ x 2 u x + ∂ y 2 u x + ∂ z 2 u x ) + 1 3 μ ∂ x ( ∂ x u x + ∂ y u y + ∂ z u z ) + ρ g x {\displaystyle {\begin{aligned}x:\ &\rho \left({\partial _{t}u_{x}}+u_{x}\,{\partial _{x}u_{x}}+u_{y}\,{\partial _{y}u_{x}}+u_{z}\,{\partial _{z}u_{x}}\right)\\&\quad =-\partial _{x}p+\mu \left({\partial _{x}^{2}u_{x}}+{\partial _{y}^{2}u_{x}}+{\partial _{z}^{2}u_{x}}\right)+{\frac {1}{3}}\mu \ \partial _{x}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{x}\\\end{aligned}}} y : ρ ( ∂ t u y + u x ∂ x u y + u y ∂ y u y + u z ∂ z u y ) = − ∂ y p + μ ( ∂ x 2 u y + ∂ y 2 u y + ∂ z 2 u y ) + 1 3 μ ∂ y ( ∂ x u x + ∂ y u y + ∂ z u z ) + ρ g y {\displaystyle {\begin{aligned}y:\ &\rho \left({\partial _{t}u_{y}}+u_{x}{\partial _{x}u_{y}}+u_{y}{\partial _{y}u_{y}}+u_{z}{\partial _{z}u_{y}}\right)\\&\quad =-{\partial _{y}p}+\mu \left({\partial _{x}^{2}u_{y}}+{\partial _{y}^{2}u_{y}}+{\partial _{z}^{2}u_{y}}\right)+{\frac {1}{3}}\mu \ \partial _{y}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{y}\\\end{aligned}}} z : ρ ( ∂ t u z + u x ∂ x u z + u y ∂ y u z + u z ∂ z u z ) = − ∂ z p + μ ( ∂ x 2 u z + ∂ y 2 u z + ∂ z 2 u z ) + 1 3 μ ∂ z ( ∂ x u x + ∂ y u y + ∂ z u z ) + ρ g z . 
{\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{x}{\partial _{x}u_{z}}+u_{y}{\partial _{y}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}+\mu \left({\partial _{x}^{2}u_{z}}+{\partial _{y}^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)+{\frac {1}{3}}\mu \ \partial _{z}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{z}.\end{aligned}}} Note that gravity has been accounted for as a body force, and the values of g x {\textstyle g_{x}} , g y {\textstyle g_{y}} , g z {\textstyle g_{z}} will depend on the orientation of gravity with respect to the chosen set of coordinates. The continuity equation reads: ∂ t ρ + ∂ x ( ρ u x ) + ∂ y ( ρ u y ) + ∂ z ( ρ u z ) = 0. {\displaystyle \partial _{t}\rho +\partial _{x}(\rho u_{x})+\partial _{y}(\rho u_{y})+\partial _{z}(\rho u_{z})=0.} When the flow is incompressible, ρ {\textstyle \rho } does not change for any fluid particle, and its material derivative vanishes: D ρ D t = 0 {\textstyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0} . The continuity equation is reduced to: ∂ x u x + ∂ y u y + ∂ z u z = 0. {\displaystyle \partial _{x}u_{x}+\partial _{y}u_{y}+\partial _{z}u_{z}=0.} Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms fall away (see Incompressible flow). This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain. 
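As a concrete check of these component equations, the sketch below verifies symbolically that the classical two-dimensional Taylor–Green vortex satisfies the incompressible continuity and momentum equations. The sign convention for the velocity and pressure is one common choice (conventions vary between references), with μ = ρν and no body force:

```python
import sympy as sp

x, y, t, rho, nu = sp.symbols('x y t rho nu', positive=True)
mu = rho*nu

# 2D Taylor-Green vortex (one common sign convention)
E = sp.exp(-2*nu*t)
u = sp.sin(x)*sp.cos(y)*E
v = -sp.cos(x)*sp.sin(y)*E
p = rho/4*(sp.cos(2*x) + sp.cos(2*y))*E**2

# continuity: d(u_x)/dx + d(u_y)/dy = 0
assert sp.simplify(sp.diff(u, x) + sp.diff(v, y)) == 0

# residuals of the incompressible x- and y-momentum equations
rx = rho*(sp.diff(u, t) + u*sp.diff(u, x) + v*sp.diff(u, y)) \
     + sp.diff(p, x) - mu*(sp.diff(u, x, 2) + sp.diff(u, y, 2))
ry = rho*(sp.diff(v, t) + u*sp.diff(v, x) + v*sp.diff(v, y)) \
     + sp.diff(p, y) - mu*(sp.diff(v, x, 2) + sp.diff(v, y, 2))
assert sp.simplify(sp.expand_trig(rx)) == 0
assert sp.simplify(sp.expand_trig(ry)) == 0
```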
=== Cylindrical coordinates === A change of variables on the Cartesian equations will yield the following momentum equations for r {\textstyle r} , ϕ {\textstyle \phi } , and z {\textstyle z} r : ρ ( ∂ t u r + u r ∂ r u r + u φ r ∂ φ u r + u z ∂ z u r − u φ 2 r ) = − ∂ r p + μ ( 1 r ∂ r ( r ∂ r u r ) + 1 r 2 ∂ φ 2 u r + ∂ z 2 u r − u r r 2 − 2 r 2 ∂ φ u φ ) + 1 3 μ ∂ r ( 1 r ∂ r ( r u r ) + 1 r ∂ φ u φ + ∂ z u z ) + ρ g r {\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{r}}+u_{z}{\partial _{z}u_{r}}-{\frac {u_{\varphi }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{r}}+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}-{\frac {2}{r^{2}}}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}} φ : ρ ( ∂ t u φ + u r ∂ r u φ + u φ r ∂ φ u φ + u z ∂ z u φ + u r u φ r ) = − 1 r ∂ φ p + μ ( 1 r ∂ r ( r ∂ r u φ ) + 1 r 2 ∂ φ 2 u φ + ∂ z 2 u φ − u φ r 2 + 2 r 2 ∂ φ u r ) + 1 3 μ 1 r ∂ φ ( 1 r ∂ r ( r u r ) + 1 r ∂ φ u φ + ∂ z u z ) + ρ g φ {\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{\varphi }}+u_{z}{\partial _{z}u_{\varphi }}+{\frac {u_{r}u_{\varphi }}{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r}}\ \partial _{r}\left(r{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{\varphi }}+{\partial _{z}^{2}u_{\varphi }}-{\frac {u_{\varphi }}{r^{2}}}+{\frac {2}{r^{2}}}{\partial _{\varphi }u_{r}}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\varphi }\left({\frac 
{1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}} z : ρ ( ∂ t u z + u r ∂ r u z + u φ r ∂ φ u z + u z ∂ z u z ) = − ∂ z p + μ ( 1 r ∂ r ( r ∂ r u z ) + 1 r 2 ∂ φ 2 u z + ∂ z 2 u z ) + 1 3 μ ∂ z ( 1 r ∂ r ( r u r ) + 1 r ∂ φ u φ + ∂ z u z ) + ρ g z . {\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{z}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{z}.\end{aligned}}} The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is: ∂ t ρ + 1 r ∂ r ( ρ r u r ) + 1 r ∂ φ ( ρ u φ ) + ∂ z ( ρ u z ) = 0. {\displaystyle {\partial _{t}\rho }+{\frac {1}{r}}\partial _{r}\left(\rho ru_{r}\right)+{\frac {1}{r}}{\partial _{\varphi }\left(\rho u_{\varphi }\right)}+{\partial _{z}\left(\rho u_{z}\right)}=0.} This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. 
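As a concrete check of the axial momentum equation above, the sketch below verifies symbolically that steady Hagen–Poiseuille pipe flow, u_z(r) = G(R² − r²)/(4μ) with a uniform pressure gradient ∂p/∂z = −G, balances the viscous term in the cylindrical z-momentum equation when u_r = u_φ = 0 and gravity is neglected (G and R are an assumed pressure-gradient magnitude and pipe radius, introduced only for this example):

```python
import sympy as sp

r, z, mu, G, R = sp.symbols('r z mu G R', positive=True)

u_z = G*(R**2 - r**2)/(4*mu)   # parabolic axial velocity profile
p = -G*z                       # pressure falls linearly along the pipe

# z-momentum residual: 0 = -dp/dz + mu * (1/r) d/dr ( r d(u_z)/dr )
residual = -sp.diff(p, z) + mu/r*sp.diff(r*sp.diff(u_z, r), r)
assert sp.simplify(residual) == 0
```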
A very common case is axisymmetric flow with the assumption of no tangential velocity ( u ϕ = 0 {\textstyle u_{\phi }=0} ), and the remaining quantities are independent of ϕ {\textstyle \phi } : ρ ( ∂ t u r + u r ∂ r u r + u z ∂ z u r ) = − ∂ r p + μ ( 1 r ∂ r ( r ∂ r u r ) + ∂ z 2 u r − u r r 2 ) + ρ g r ρ ( ∂ t u z + u r ∂ r u z + u z ∂ z u z ) = − ∂ z p + μ ( 1 r ∂ r ( r ∂ r u z ) + ∂ z 2 u z ) + ρ g z 1 r ∂ r ( r u r ) + ∂ z u z = 0. {\displaystyle {\begin{aligned}\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+u_{z}{\partial _{z}u_{r}}\right)&=-{\partial _{r}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}\right)+\rho g_{r}\\\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)&=-{\partial _{z}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\partial _{z}^{2}u_{z}}\right)+\rho g_{z}\\{\frac {1}{r}}\partial _{r}\left(ru_{r}\right)+{\partial _{z}u_{z}}&=0.\end{aligned}}} === Spherical coordinates === In spherical coordinates, the r {\textstyle r} , ϕ {\textstyle \phi } , and θ {\textstyle \theta } momentum equations are (note the convention used: θ {\textstyle \theta } is polar angle, or colatitude, 0 ≤ θ ≤ π {\textstyle 0\leq \theta \leq \pi } ): r : ρ ( ∂ t u r + u r ∂ r u r + u φ r sin ⁡ θ ∂ φ u r + u θ r ∂ θ u r − u φ 2 + u θ 2 r ) = − ∂ r p + μ ( 1 r 2 ∂ r ( r 2 ∂ r u r ) + 1 r 2 sin 2 ⁡ θ ∂ φ 2 u r + 1 r 2 sin ⁡ θ ∂ θ ( sin ⁡ θ ∂ θ u r ) − 2 u r + ∂ θ u θ + u θ cot ⁡ θ r 2 − 2 r 2 sin ⁡ θ ∂ φ u φ ) + 1 3 μ ∂ r ( 1 r 2 ∂ r ( r 2 u r ) + 1 r sin ⁡ θ ∂ θ ( u θ sin ⁡ θ ) + 1 r sin ⁡ θ ∂ φ u φ ) + ρ g r {\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{r}}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{r}}-{\frac {u_{\varphi }^{2}+u_{\theta }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac 
{1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{r}}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{r}}\right)-2{\frac {u_{r}+{\partial _{\theta }u_{\theta }}+u_{\theta }\cot \theta }{r^{2}}}-{\frac {2}{r^{2}\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}} φ : ρ ( ∂ t u φ + u r ∂ r u φ + u φ r sin ⁡ θ ∂ φ u φ + u θ r ∂ θ u φ + u r u φ + u φ u θ cot ⁡ θ r ) = − 1 r sin ⁡ θ ∂ φ p + μ ( 1 r 2 ∂ r ( r 2 ∂ r u φ ) + 1 r 2 sin 2 ⁡ θ ∂ φ 2 u φ + 1 r 2 sin ⁡ θ ∂ θ ( sin ⁡ θ ∂ θ u φ ) + 2 sin ⁡ θ ∂ φ u r + 2 cos ⁡ θ ∂ φ u θ − u φ r 2 sin 2 ⁡ θ ) + 1 3 μ 1 r sin ⁡ θ ∂ φ ( 1 r 2 ∂ r ( r 2 u r ) + 1 r sin ⁡ θ ∂ θ ( u θ sin ⁡ θ ) + 1 r sin ⁡ θ ∂ φ u φ ) + ρ g φ {\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\varphi }}+{\frac {u_{r}u_{\varphi }+u_{\varphi }u_{\theta }\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r\sin \theta }}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\varphi }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\varphi }}\right)+{\frac {2\sin \theta {\partial _{\varphi }u_{r}}+2\cos \theta {\partial _{\varphi }u_{\theta }}-u_{\varphi }}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r\sin \theta }}\partial _{\varphi }\left({\frac {1}{r^{2}}}\partial 
_{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}} θ : ρ ( ∂ t u θ + u r ∂ r u θ + u φ r sin ⁡ θ ∂ φ u θ + u θ r ∂ θ u θ + u r u θ − u φ 2 cot ⁡ θ r ) = − 1 r ∂ θ p + μ ( 1 r 2 ∂ r ( r 2 ∂ r u θ ) + 1 r 2 sin 2 ⁡ θ ∂ φ 2 u θ + 1 r 2 sin ⁡ θ ∂ θ ( sin ⁡ θ ∂ θ u θ ) + 2 r 2 ∂ θ u r − u θ + 2 cos ⁡ θ ∂ φ u φ r 2 sin 2 ⁡ θ ) + 1 3 μ 1 r ∂ θ ( 1 r 2 ∂ r ( r 2 u r ) + 1 r sin ⁡ θ ∂ θ ( u θ sin ⁡ θ ) + 1 r sin ⁡ θ ∂ φ u φ ) + ρ g θ . {\displaystyle {\begin{aligned}\theta :\ &\rho \left({\partial _{t}u_{\theta }}+u_{r}{\partial _{r}u_{\theta }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\theta }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\theta }}+{\frac {u_{r}u_{\theta }-u_{\varphi }^{2}\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\theta }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\theta }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\theta }}\right)+{\frac {2}{r^{2}}}{\partial _{\theta }u_{r}}-{\frac {u_{\theta }+2\cos \theta {\partial _{\varphi }u_{\varphi }}}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\theta }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\theta }.\end{aligned}}} Mass continuity will read: ∂ t ρ + 1 r 2 ∂ r ( ρ r 2 u r ) + 1 r sin ⁡ θ ∂ φ ( ρ u φ ) + 1 r sin ⁡ θ ∂ θ ( sin ⁡ θ ρ u θ ) = 0. 
{\displaystyle {\partial _{t}\rho }+{\frac {1}{r^{2}}}\partial _{r}\left(\rho r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }(\rho u_{\varphi })}+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(\sin \theta \rho u_{\theta }\right)=0.} These equations could be (slightly) compacted by, for example, factoring 1 r 2 {\textstyle {\frac {1}{r^{2}}}} from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities. == See also == == Citations == == General references == Acheson, D. J. (1990), Elementary Fluid Dynamics, Oxford Applied Mathematics and Computing Science Series, Oxford University Press, ISBN 978-0-19-859679-0 Batchelor, G. K. (1967), An Introduction to Fluid Dynamics, Cambridge University Press, ISBN 978-0-521-66396-0 Currie, I. G. (1974), Fundamental Mechanics of Fluids, McGraw-Hill, ISBN 978-0-07-015000-3 V. Girault and P. A. Raviart. Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms. Springer Series in Computational Mathematics. Springer-Verlag, 1986. Landau, L. D.; Lifshitz, E. M. (1987), Fluid mechanics, vol. Course of Theoretical Physics Volume 6 (2nd revised ed.), Pergamon Press, ISBN 978-0-08-033932-0, OCLC 15017127 Polyanin, A. D.; Kutepov, A. M.; Vyazmin, A. V.; Kazenin, D. A. (2002), Hydrodynamics, Mass and Heat Transfer in Chemical Engineering, Taylor & Francis, London, ISBN 978-0-415-27237-7 Rhyming, Inge L. (1991), Dynamique des fluides, Presses polytechniques et universitaires romandes Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley, ISBN 0-47-1253499 Temam, Roger (1984): Navier–Stokes Equations: Theory and Numerical Analysis, ACM Chelsea Publishing, ISBN 978-0-8218-2737-6 Milne-Thomson, L.M. C.B.E (1962), Theoretical Hydrodynamics, Macmillan & Co Ltd. 
Tartar, L. (2006), An Introduction to Navier Stokes Equation and Oceanography, Springer, ISBN 3-540-35743-2 Birkhoff, Garrett (1960), Hydrodynamics, Princeton University Press Campos, D. (Editor) (2017) Handbook on Navier-Stokes Equations Theory and Applied Analysis, Nova Science Publishers, ISBN 978-1-53610-292-5 Doering, C.R. and Gibbon, J.D. (1995) Applied analysis of the Navier-Stokes equations, Cambridge University Press, ISBN 0-521-44557-1 Basset, A.B. (1888) Hydrodynamics Volume I and II, Cambridge: Deighton, Bell and Co Fox, R.W., McDonald, A.T. and Pritchard, P.J. (2004) Introduction to Fluid Mechanics, John Wiley and Sons, ISBN 0-471-2023-2 Foias, C., Manley, O., Rosa, R. and Temam, R. (2004) Navier–Stokes Equations and Turbulence, Cambridge University Press, ISBN 0-521-36032-3 Lions, P-L. (1998) Mathematical Topics in Fluid Mechanics Volume 1 and 2, Clarendon Press, ISBN 0-19-851488-3 Deville, M.O. and Gatski, T.B. (2012) Mathematical Modeling for Complex Fluids and Flows, Springer, ISBN 978-3-642-25294-5 Kochin, N.E., Kibel, I.A. and Roze, N.V. (1964) Theoretical Hydromechanics, John Wiley & Sons, Ltd. Lamb, H. (1879) Hydrodynamics, Cambridge University Press, White, Frank M. (2006), Viscous Fluid Flow, McGraw-Hill, ISBN 978-0-07-124493-0 == External links == Simplified derivation of the Navier–Stokes equations Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA
Wikipedia/Navier–Stokes_equations
In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is. == Usage in science and engineering == In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions: a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero-order approximation is also common. Cardinal numerals are occasionally used in expressions like an order-zero approximation, an order-one approximation, etc. The omission of the word order leads to phrases that have a less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity. The phrase to a zeroth approximation indicates a wild guess. The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or the order of magnitude. However, this may be confusing, as these informal usages do not directly refer to the order of terms in a series expansion. The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. A higher order of approximation is not always more useful than a lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy.
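The idea of progressively more refined approximations can be illustrated with truncated Taylor polynomials of the exponential function (a minimal sketch; the evaluation point 0.1 is chosen only for illustration):

```python
import math

def taylor_exp(x, order):
    """Taylor polynomial of e^x about 0, truncated at the given order."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

x = 0.1
for n in range(4):
    err = abs(math.exp(x) - taylor_exp(x, n))
    print(f"order {n}: approx = {taylor_exp(x, n):.6f}, error = {err:.2e}")
```

For |x| ≪ 1 each additional order shrinks the error by roughly a factor of x, which is why the first-order approximation e^x ≈ 1 + x is often sufficient near zero.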
In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree. The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion. This affects accuracy. The error usually varies within the interval. Thus the terms (zeroth, first, second, etc.) used above do not directly give information about percent error or significant figures. For example, in the Taylor series expansion of the exponential function, e x = 1 ⏟ 0 th + x ⏟ 1 st + x 2 2 ! ⏟ 2 nd + x 3 3 ! ⏟ 3 rd + x 4 4 ! ⏟ 4 th + … , {\displaystyle e^{x}=\underbrace {1} _{0^{\text{th}}}+\underbrace {x} _{1^{\text{st}}}+\underbrace {\frac {x^{2}}{2!}} _{2^{\text{nd}}}+\underbrace {\frac {x^{3}}{3!}} _{3^{\text{rd}}}+\underbrace {\frac {x^{4}}{4!}} _{4^{\text{th}}}+\ldots \;,} the zeroth-order term is 1 ; {\displaystyle 1;} the first-order term is x , {\displaystyle x,} the second-order term is x 2 / 2 , {\displaystyle x^{2}/2,} and so forth. If | x | < 1 , {\displaystyle |x|<1,} each higher-order term is smaller than the previous one. If | x | ≪ 1 , {\displaystyle |x|\ll 1,\,} then the first-order approximation, e x ≈ 1 + x , {\displaystyle e^{x}\approx 1+x,} is often sufficient. But at x = 1 , {\displaystyle x=1,} the first-order term, x , {\displaystyle x,} is not smaller than the zeroth-order term, 1. {\displaystyle 1.} And at x = 2 , {\displaystyle x=2,} even the third-order term, 2 3 / 3 ! = 4 / 3 , {\displaystyle 2^{3}/3!=4/3,\,} is greater than the zeroth-order term. === Zeroth-order === Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, "the town has a few thousand residents", when it has 3,914 people in actuality.
This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined. A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0. For example, x = [ 0 , 1 , 2 ] , {\displaystyle x=[0,1,2],} y = [ 3 , 3 , 5 ] , {\displaystyle y=[3,3,5],} y ∼ f ( x ) = 3.67 {\displaystyle y\sim f(x)=3.67} could be – if data point accuracy were reported – an approximate fit to the data, obtained by simply averaging the y values. However, data points represent results of measurements and they do differ from points in Euclidean geometry. Thus quoting an average value containing three significant digits in the output with just one significant digit in the input data could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth-order approximation could at best yield the result for y of ~3.7 ± 2.0 in the interval of x from −0.5 to 2.5, considering the standard deviation. If the data points are reported as x = [ 0.00 , 1.00 , 2.00 ] , {\displaystyle x=[0.00,1.00,2.00],} y = [ 3.00 , 3.00 , 5.00 ] , {\displaystyle y=[3.00,3.00,5.00],} the zeroth-order approximation results in y ∼ f ( x ) = 3.67. {\displaystyle y\sim f(x)=3.67.} The accuracy of the result justifies an attempt to derive a function refining that average, for example, y ∼ x + 2.67. {\displaystyle y\sim x+2.67.} One should be careful though, because such a function will be defined for the whole interval. If only three data points are available, one has no knowledge about the rest of the interval, which may be a large part of it. This means that y could have another component which equals 0 at the ends and in the middle of the interval.
A number of functions having this property are known, for example y = sin πx. Taylor series are useful and help predict analytic solutions, but the approximations alone do not provide conclusive evidence. === First-order === First-order approximation is the term scientists use for a slightly better answer. Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4×103, or four thousand, residents"). In the case of a first-order approximation, at least one number given is exact. In the zeroth-order example above, the quantity "a few" was given, but in the first-order example, the number "4" is given. A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation, a straight line with a slope: a polynomial of degree 1. For example: x = [ 0.00 , 1.00 , 2.00 ] , {\displaystyle x=[0.00,1.00,2.00],} y = [ 3.00 , 3.00 , 5.00 ] , {\displaystyle y=[3.00,3.00,5.00],} y ∼ f ( x ) = x + 2.67 {\displaystyle y\sim f(x)=x+2.67} is an approximate fit to the data. In this example there is a zeroth-order approximation that is the same as the first-order, but the method of getting there is different; i.e. a wild stab in the dark at a relationship happened to be as good as an "educated guess". === Second-order === Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3.9×103, or thirty-nine hundred, residents") is generally given. As in the examples above, the term "2nd order" refers to the number of exact numerals given for the imprecise quantity. In this case, "3" and "9" are given as the two successive levels of precision, instead of simply the "4" from the first order, or "a few" from the zeroth order found in the examples above.
A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically, a parabola: a polynomial of degree 2. For example: x = [ 0.00 , 1.00 , 2.00 ] , {\displaystyle x=[0.00,1.00,2.00],} y = [ 3.00 , 3.00 , 5.00 ] , {\displaystyle y=[3.00,3.00,5.00],} y ∼ f ( x ) = x 2 − x + 3 {\displaystyle y\sim f(x)=x^{2}-x+3} is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, the data points for most of the interval are not available, which advises caution (see "zeroth order"). === Higher-order === While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number. Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation. == Colloquial usage == These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g. "Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it." or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration.") In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect. The terminology, in this context, represents a high level of precision required to account for an effect which is inferred to be very small when compared to the overall subject matter. The higher the order, the more precision is required to measure the effect, and therefore the smallness of the effect in comparison to the overall measurement. 
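Returning to the data-fitting examples above, the zeroth-, first-, and second-order fits can be reproduced with an ordinary least-squares polynomial fit (a sketch using NumPy):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

# degree-0 fit: the constant 11/3, i.e. ~3.67 (the average of the y values)
c0 = np.polyfit(x, y, 0)
# degree-1 fit: the line x + 2.67
c1 = np.polyfit(x, y, 1)
# degree-2 fit: the parabola x**2 - x + 3, an exact interpolation of 3 points
c2 = np.polyfit(x, y, 2)

# coefficients are returned highest degree first
print(c0)  # ~ [3.667]
print(c1)  # ~ [1.0, 2.667]
print(c2)  # ~ [1.0, -1.0, 3.0]
```

As noted above, the degree-2 polynomial fits the three points exactly, while the lower-order fits only approximate them.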
== See also == Linearization Perturbation theory Taylor series Chapman–Enskog method Big O notation Order of accuracy == References ==
Wikipedia/First-order_approximation
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. == Elementary examples == === Trivial example === The system of one equation in one unknown 2 x = 4 {\displaystyle 2x=4} has the solution x = 2. 
{\displaystyle x=2.} However, most interesting linear systems have at least two equations. === Simple nontrivial example === The simplest kind of nontrivial linear system involves two equations and two variables: 2 x + 3 y = 6 4 x + 9 y = 15 . {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} : x = 3 − 3 2 y . {\displaystyle x=3-{\frac {3}{2}}y.} Now substitute this expression for x into the bottom equation: 4 ( 3 − 3 2 y ) + 9 y = 15. {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) == General form == A general system of m linear equations with n unknowns and coefficients can be written as { a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m , {\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. 
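Both example systems can also be solved numerically; a sketch with NumPy, reproducing the solutions quoted above:

```python
import numpy as np

# 3x3 system from the introduction: solution (x, y, z) = (1, -2, -2)
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
print(np.linalg.solve(A, b))    # ~ [ 1. -2. -2.]

# 2x2 system solved by substitution above: solution (x, y) = (3/2, 1)
A2 = np.array([[2.0, 3.0],
               [4.0, 9.0]])
b2 = np.array([6.0, 15.0])
print(np.linalg.solve(A2, b2))  # ~ [1.5 1. ]
```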
=== Vector equation === One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. x 1 [ a 11 a 21 ⋮ a m 1 ] + x 2 [ a 12 a 22 ⋮ a m 2 ] + ⋯ + x n [ a 1 n a 2 n ⋮ a m n ] = [ b 1 b 2 ⋮ b m ] {\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if there are m linearly independent column vectors, a solution is guaranteed regardless of the right-hand side (RHS); otherwise a solution is not guaranteed. === Matrix equation === The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] .
{\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix. == Solution set == A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: The system has infinitely many solutions. The system has a unique solution. The system has no solution. === Geometric interpretation === For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n.
=== General behavior === In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system. In general, a system with the same number of equations and unknowns has a single unique solution. In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations. The following pictures illustrate this trichotomy in the case of two variables: The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns.
When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations 3 x + 2 y = 6 and 6 x + 4 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12} are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations x − 2 y = − 1 3 x + 5 y = 8 4 x + 3 y = 7 {\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}} are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. === Consistency === A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, that may always be rewritten as the statement 0 = 1. For example, the equations 3 x + 2 y = 6 and 3 x + 2 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12} are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. 
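Linear dependence of equations can be detected numerically as a rank deficiency of the augmented matrix. A sketch of the three-equation example above, assuming NumPy:

```python
import numpy as np

# Augmented matrix of the system x - 2y = -1, 3x + 5y = 8, 4x + 3y = 7
M = np.array([[1.0, -2.0, -1.0],
              [3.0,  5.0,  8.0],
              [4.0,  3.0,  7.0]])

# Rank 2 < 3 rows: the three equations are not independent
rank = np.linalg.matrix_rank(M)
print(rank)  # 2

# The third row is indeed the sum of the first two
assert np.allclose(M[0] + M[1], M[2])
```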
For example, the equations x + y = 1 2 x + y = 1 3 x + 2 y = 3 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}} are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. === Equivalence === Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. 
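The Rouché–Capelli criterion translates directly into a rank comparison. The helper below is an illustrative sketch (the name `classify` is an invention for this example), assuming NumPy:

```python
import numpy as np

def classify(A, b):
    """Classify a linear system via the Rouché–Capelli theorem."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rAb > rA:
        return "inconsistent"          # augmented rank exceeds coefficient rank
    return "unique" if rA == A.shape[1] else "infinitely many"

# Inconsistent pair of parallel lines: 3x + 2y = 6 and 3x + 2y = 12
A1 = np.array([[3.0, 2.0], [3.0, 2.0]])
print(classify(A1, np.array([6.0, 12.0])))   # inconsistent
print(classify(A1, np.array([6.0, 6.0])))    # infinitely many (same line twice)
```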
It follows that two linear systems are equivalent if and only if they have the same solution set. == Solving a linear system == There are several algorithms for solving a system of linear equations. === Describing the solution === When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}} The solution set to this system can be described by the following equations: x = − 7 z − 1 and y = 3 z + 2 . {\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}} Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane or a higher-dimensional set.
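The parametric description can be tested by sweeping the free variable: every choice of z must satisfy both original equations. A quick check, assuming NumPy:

```python
import numpy as np

# System: x + 3y - 2z = 5,  3x + 5y + 6z = 7
# Claimed parametric solution: x = -7z - 1, y = 3z + 2, with z free
for z in np.linspace(-5.0, 5.0, 11):
    x = -7.0 * z - 1.0
    y = 3.0 * z + 2.0
    # Each choice of the free variable yields a valid solution
    assert abs(x + 3*y - 2*z - 5) < 1e-9
    assert abs(3*x + 5*y + 6*z - 7) < 1e-9
```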
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: y = − 3 7 x + 11 7 and z = − 1 7 x − 1 7 . {\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}} Here x is the free variable, and y and z are dependent. === Elimination of variables === The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: In the first equation, solve for one of the variables in terms of the others. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown. Repeat steps 1 and 2 until the system is reduced to a single linear equation. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: { x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}} Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equations yields { y = 3 z + 2 y = 7 2 z + 1 {\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}} Since the left-hand sides of both of these equations equal y, we can equate their right-hand sides. We now have: 3 z + 2 = 7 2 z + 1 ⇒ z = 2 {\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}} Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} .
=== Row reduction === In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] . {\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}} This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above: [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 0 1 2 ] ∼ [ 1 3 − 2 5 0 1 0 8 0 0 1 2 ] ∼ [ 1 3 0 9 0 1 0 8 0 0 1 2 ] ∼ [ 1 0 0 − 15 0 1 0 8 0 0 1 2 ] . {\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}} The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. 
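A minimal Gauss–Jordan reduction using the three elementary row operations can be sketched as follows. This is a didactic implementation with partial pivoting, not production code; library routines handle degenerate and ill-conditioned cases far more carefully.

```python
import numpy as np

def rref(M):
    """Reduce an augmented matrix to reduced row echelon form (in a copy)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):                     # last column is the RHS
        pivot = np.argmax(np.abs(M[r:, c])) + r   # partial pivoting
        if abs(M[pivot, c]) < 1e-12:
            continue                              # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]             # Type 1: swap rows
        M[r] /= M[r, c]                           # Type 2: scale pivot to 1
        for i in range(rows):                     # Type 3: clear the column
            if i != r:
                M[i] -= M[i, c] * M[r]
        r += 1
        if r == rows:
            break
    return M

aug = np.array([[1, 3, -2, 5],
                [3, 5,  6, 7],
                [2, 4,  3, 8]])
print(rref(aug)[:, -1])   # [-15.  8.  2.]
```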
A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. === Cramer's rule === Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}} is given by x = | 5 3 − 2 7 5 6 8 4 3 | | 1 3 − 2 3 5 6 2 4 3 | , y = | 1 5 − 2 3 7 6 2 8 3 | | 1 3 − 2 3 5 6 2 4 3 | , z = | 1 3 5 3 5 7 2 4 8 | | 1 3 − 2 3 5 6 2 4 3 | . {\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.} For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. 
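Cramer's rule for the 3×3 example maps directly onto determinant calls; the sketch below assumes NumPy and simply replaces one column at a time with the constant terms.

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
b = np.array([5.0, 7.0, 8.0])

d = np.linalg.det(A)          # common denominator
solution = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b              # replace column i with the constant terms
    solution.append(np.linalg.det(Ai) / d)
print(solution)  # approximately [-15.0, 8.0, 2.0]
```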
=== Matrix solution === If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by x = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} } where A − 1 {\displaystyle A^{-1}} is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A + {\displaystyle A^{+}} , as follows: x = A + b + ( I − A + A ) w {\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} } where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to x = A − 1 b + ( I − A − 1 A ) w = A − 1 b + ( I − I ) w = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} } as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. 
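The Moore–Penrose formulation can be exercised on an underdetermined system, where the free-parameter term does not vanish. A sketch assuming NumPy's `np.linalg.pinv`:

```python
import numpy as np

# Underdetermined system: one equation, two unknowns (infinitely many solutions)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

Ap = np.linalg.pinv(A)                     # Moore–Penrose inverse A+

# Consistency criterion: A A+ b == b
assert np.allclose(A @ Ap @ b, b)

# General solution x = A+ b + (I - A+ A) w for any free-parameter vector w
I = np.eye(2)
for w in (np.array([0.0, 0.0]), np.array([3.0, -1.0]), np.array([10.0, 4.0])):
    x = Ap @ b + (I - Ap @ A) @ w
    assert np.allclose(A @ x, b)           # every choice of w gives a solution
```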
In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation. === Other methods === While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. 
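The "factor once, solve many right-hand sides" advantage of the LU decomposition mentioned above can be illustrated with a bare-bones Doolittle factorization. This sketch omits pivoting (it assumes nonzero pivots), which real library routines never do:

```python
import numpy as np

def lu(A):
    """Doolittle LU decomposition without pivoting (assumes nonzero pivots)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for c in range(n - 1):
        for r in range(c + 1, n):
            L[r, c] = U[r, c] / U[c, c]
            U[r] -= L[r, c] * U[c]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution: Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):            # back substitution: Ux = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
L, U = lu(A)                                # factor once...
for b in ([5.0, 7.0, 8.0], [0.0, 1.0, 0.0]):
    x = lu_solve(L, U, np.array(b))         # ...solve many right-hand sides
    assert np.allclose(A @ x, b)
```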
One example of an iterative method is the Jacobi method, where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})} When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. There is also a quantum algorithm for linear systems of equations. == Homogeneous systems == A system of linear equations is homogeneous if all of the constant terms are zero: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0. {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}} A homogeneous system is equivalent to a matrix equation of the form A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. === Homogeneous solution set === Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. 
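The Jacobi iteration above translates almost verbatim into code. The example matrix below is strictly diagonally dominant, a choice made here (it is an assumption, not from the text) so that the iteration is guaranteed to converge; the 3×3 example used elsewhere in this article is not diagonally dominant, and Jacobi need not converge on it.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_(k+1) = D^-1 (b - (L+U) x_(k))."""
    D_inv = np.diag(1.0 / np.diag(A))   # inverse of the diagonal part D
    R = A - np.diag(np.diag(A))         # off-diagonal part L + U
    x = np.zeros_like(b)                # initial guess x^(0) = 0
    for _ in range(max_iter):
        x_new = D_inv @ (b - R @ x)
        if np.linalg.norm(x_new - x) < tol:   # converged
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant matrix, for which Jacobi converges
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([9.0, 16.0])
x = jacobi(A, b)
assert np.allclose(A @ x, b, atol=1e-6)
```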
This solution set has the following additional properties: If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. === Relation to nonhomogeneous systems === There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: A x = b and A x = 0 . {\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .} Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as { p + v : v is any solution to A x = 0 } . {\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.} Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. 
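The null-space description of a homogeneous system can be checked via the singular value decomposition: right singular vectors belonging to zero singular values span the solution set. A sketch assuming NumPy:

```python
import numpy as np

# Singular matrix: det(A) = 0, so Ax = 0 has nontrivial solutions
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Rows of Vt whose singular values vanish span the null space of A
_, s, Vt = np.linalg.svd(A)
null_vecs = Vt[s < 1e-10]
v = null_vecs[0]
assert np.allclose(A @ v, 0)

# Subspace properties: sums and scalar multiples of solutions are solutions
assert np.allclose(A @ (v + v), 0)
assert np.allclose(A @ (3.5 * v), 0)
```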
== See also == Arrangement of hyperplanes Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations Coates graph – A mathematical graph for solution of linear equations LAPACK – Software library for numerical linear algebra Linear equation over a ring Linear least squares – Least squares approximation of linear functions to data Matrix decomposition – Representation of a matrix as a product Matrix splitting – Representation of a matrix as a sum NAG Numerical Library – Software library of numerical-analysis algorithms Rybicki Press algorithm – An algorithm for inverting a matrix Simultaneous equations – Set of equations to be solved together == References == == Bibliography == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993 Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3 Whitelaw, T. A.
(1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5 == Further reading == Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0. Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7. Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001. Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3. Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International. Leon, Steven J. (2006). Linear Algebra With Applications (7th ed.). Pearson Prentice Hall. Strang, Gilbert (2005). Linear Algebra and Its Applications. Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679. == External links == Media related to System of linear equations at Wikimedia Commons
Wikipedia/Homogeneous_system_of_linear_equations
In linear algebra, two n-by-n matrices A and B are called similar if there exists an invertible n-by-n matrix P such that B = P − 1 A P . {\displaystyle B=P^{-1}AP.} Similar matrices represent the same linear map under two possibly different bases, with P being the change-of-basis matrix. A transformation A ↦ P−1AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and similar matrices are also called conjugate; however, in a given subgroup H of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that P be chosen to lie in H. == Motivating example == When defining a linear transformation, it can be the case that a change of basis can result in a simpler form of the same transformation. For example, the matrix representing a rotation in ℝ3 when the axis of rotation is not aligned with the coordinate axis can be complicated to compute. If the axis of rotation were aligned with the positive z-axis, then it would simply be S = [ cos ⁡ θ − sin ⁡ θ 0 sin ⁡ θ cos ⁡ θ 0 0 0 1 ] , {\displaystyle S={\begin{bmatrix}\cos \theta &-\sin \theta &0\\\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}},} where θ {\displaystyle \theta } is the angle of rotation. In the new coordinate system, the transformation would be written as y ′ = S x ′ , {\displaystyle y'=Sx',} where x' and y' are respectively the original and transformed vectors in a new basis containing a vector parallel to the axis of rotation. In the original basis, the transform would be written as y = T x , {\displaystyle y=Tx,} where vectors x and y and the unknown transform matrix T are in the original basis. 
To write T in terms of the simpler matrix, we use the change-of-basis matrix P that transforms x and y as x ′ = P x {\displaystyle x'=Px} and y ′ = P y {\displaystyle y'=Py} : y ′ = S x ′ ⇒ P y = S P x ⇒ y = ( P − 1 S P ) x = T x {\displaystyle {\begin{aligned}&&y'&=Sx'\\[1.6ex]&\Rightarrow &Py&=SPx\\[1.6ex]&\Rightarrow &y&=\left(P^{-1}SP\right)x=Tx\end{aligned}}} Thus, the matrix in the original basis, T {\displaystyle T} , is given by T = P − 1 S P {\displaystyle T=P^{-1}SP} . The transform in the original basis is found to be the product of three easy-to-derive matrices. In effect, the similarity transform operates in three steps: change to a new basis (P), perform the simple transformation (S), and change back to the old basis (P−1). == Properties == Similarity is an equivalence relation on the space of square matrices. Because matrices are similar if and only if they represent the same linear operator with respect to (possibly) different bases, similar matrices share all properties of their shared underlying operator: Rank Characteristic polynomial, and attributes that can be derived from it: Determinant Trace Eigenvalues, and their algebraic multiplicities Geometric multiplicities of eigenvalues (but not the eigenspaces, which are transformed according to the base change matrix P used). Minimal polynomial Frobenius normal form Jordan normal form, up to a permutation of the Jordan blocks Index of nilpotence Elementary divisors, which form a complete set of invariants for similarity of matrices over a principal ideal domain Because of this, for a given matrix A, one is interested in finding a simple "normal form" B which is similar to A—the study of A then reduces to the study of the simpler matrix B. For example, A is called diagonalizable if it is similar to a diagonal matrix. Not all matrices are diagonalizable, but at least over the complex numbers (or any algebraically closed field), every matrix is similar to a matrix in Jordan form. 
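The three-step recipe can be verified numerically: conjugating the simple rotation S by an invertible P gives the same result as applying the steps one at a time. The particular P below is an arbitrary illustrative choice, not from the original text.

```python
import numpy as np

theta = 0.7
# Rotation about the z-axis, in a basis aligned with the axis
S = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# An arbitrary invertible change-of-basis matrix P (det = 5)
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])

T = np.linalg.inv(P) @ S @ P        # the same transformation in the old basis

# Applying T equals the three steps: change basis, rotate, change back
x = np.array([1.0, -2.0, 0.5])
y_direct = T @ x
y_steps = np.linalg.inv(P) @ (S @ (P @ x))
assert np.allclose(y_direct, y_steps)
```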
Neither of these forms is unique (diagonal entries or Jordan blocks may be permuted) so they are not really normal forms; moreover their determination depends on being able to factor the minimal or characteristic polynomial of A (equivalently to find its eigenvalues). The rational canonical form does not have these drawbacks: it exists over any field, is truly unique, and it can be computed using only arithmetic operations in the field; A and B are similar if and only if they have the same rational canonical form. The rational canonical form is determined by the elementary divisors of A; these can be immediately read off from a matrix in Jordan form, but they can also be determined directly for any matrix by computing the Smith normal form, over the ring of polynomials, of the matrix (with polynomial entries) XIn − A (the same one whose determinant defines the characteristic polynomial). Note that this Smith normal form is not a normal form of A itself; moreover it is not similar to XIn − A either, but obtained from the latter by left and right multiplications by different invertible matrices (with polynomial entries). Similarity of matrices does not depend on the base field: if L is a field containing K as a subfield, and A and B are two matrices over K, then A and B are similar as matrices over K if and only if they are similar as matrices over L. This is so because the rational canonical form over K is also the rational canonical form over L. This means that one may use Jordan forms that only exist over a larger field to determine whether the given matrices are similar. In the definition of similarity, if the matrix P can be chosen to be a permutation matrix then A and B are permutation-similar; if P can be chosen to be a unitary matrix then A and B are unitarily equivalent. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. 
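The listed invariants are easy to spot-check: conjugating a matrix by any invertible P must preserve rank, trace, determinant, eigenvalues, and the characteristic polynomial. A sketch assuming NumPy (`np.poly` returns the characteristic-polynomial coefficients of a square matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible (det = 1)
B = np.linalg.inv(P) @ A @ P        # B is similar to A

# Invariants shared by similar matrices:
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
assert np.allclose(np.poly(A), np.poly(B))   # same characteristic polynomial
```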
Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities. == See also == Canonical forms Matrix congruence Matrix equivalence Jacobi rotation == References == === Citations === === General references ===
Wikipedia/Similar_(linear_algebra)
In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that this is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation. Historically, homographies (and projective spaces) have been introduced to study perspective and projections in Euclidean geometry, and the term homography, which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity. The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field (incidence geometry, see also synthetic geometry); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations". For the sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field. Equivalently Pappus's hexagon theorem and Desargues's theorem are supposed to be true.
A large part of the results remain true, or may be generalized to projective geometries for which these theorems do not hold. == Geometric motivation == Historically, the concept of homography had been introduced to understand, explain and study visual perspective, and, specifically, the difference in appearance of two plane objects viewed from different points of view. In three-dimensional Euclidean space, a central projection from a point O (the center) onto a plane P that does not contain O is the mapping that sends a point A to the intersection (if it exists) of the line OA and the plane P. The projection is not defined if the point A belongs to the plane passing through O and parallel to P. The notion of projective space was originally introduced by extending the Euclidean space, that is, by adding points at infinity to it, in order to define the projection for every point except O. Given another plane Q, which does not contain O, the restriction to Q of the above projection is called a perspectivity. With these definitions, a perspectivity is only a partial function, but it becomes a bijection if extended to projective spaces. Therefore, this notion is normally defined for projective spaces. The notion is also easily generalized to projective spaces of any dimension, over any field, in the following way: Given two projective spaces P and Q of dimension n, a perspectivity is a bijection from P to Q that may be obtained by embedding P and Q in a projective space R of dimension n + 1 and restricting to P a central projection onto Q. If f is a perspectivity from P to Q, and g a perspectivity from Q to P, with a different center, then g ⋅ f is a homography from P to itself, which is called a central collineation, when the dimension of P is at least two. (See § Central collineations below and Perspectivity § Perspective collineations.) Originally, a homography was defined as the composition of a finite number of perspectivities. 
It is a part of the fundamental theorem of projective geometry (see below) that this definition coincides with the more algebraic definition sketched in the introduction and detailed below. == Definition and expression in homogeneous coordinates == A projective space P(V) of dimension n over a field K may be defined as the set of the lines through the origin in a K-vector space V of dimension n + 1. If a basis of V has been fixed, a point of V may be represented by a point (x0, ..., xn) of Kn+1. A point of P(V), being a line in V, may thus be represented by the coordinates of any nonzero point of this line, which are therefore called homogeneous coordinates of the projective point. Given two projective spaces P(V) and P(W) of the same dimension, a homography is a mapping from P(V) to P(W), which is induced by an isomorphism of vector spaces f : V → W. Such an isomorphism induces a bijection from P(V) to P(W), because of the linearity of f. Two such isomorphisms, f and g, define the same homography if and only if there is a nonzero element a of K such that g = af. This may be written in terms of homogeneous coordinates in the following way: A homography φ may be defined by a nonsingular (n+1) × (n+1) matrix [ai,j], called the matrix of the homography. This matrix is defined up to the multiplication by a nonzero element of K. The homogeneous coordinates [x0 : ... : xn] of a point and the coordinates [y0 : ... : yn] of its image by φ are related by 
{\displaystyle {\begin{aligned}y_{0}&=a_{0,0}x_{0}+\dots +a_{0,n}x_{n}\\&\vdots \\y_{n}&=a_{n,0}x_{0}+\dots +a_{n,n}x_{n}.\end{aligned}}} When the projective spaces are defined by adding points at infinity to affine spaces (projective completion), the preceding formulas become, in affine coordinates, {\displaystyle {\begin{aligned}y_{1}&={\frac {a_{1,0}+a_{1,1}x_{1}+\dots +a_{1,n}x_{n}}{a_{0,0}+a_{0,1}x_{1}+\dots +a_{0,n}x_{n}}}\\&\vdots \\y_{n}&={\frac {a_{n,0}+a_{n,1}x_{1}+\dots +a_{n,n}x_{n}}{a_{0,0}+a_{0,1}x_{1}+\dots +a_{0,n}x_{n}}}\end{aligned}}} which generalizes the expression of the homographic function of the next section. This defines only a partial function between affine spaces, which is defined only outside the hyperplane where the denominator is zero. == Homographies of a projective line == The projective line over a field K may be identified with the union of K and a point, called the "point at infinity" and denoted by ∞ (see Projective line). With this representation of the projective line, the homographies are the mappings {\displaystyle z\mapsto {\frac {az+b}{cz+d}},{\text{ where }}ad-bc\neq 0,} which are called homographic functions or linear fractional transformations. In the case of the complex projective line, which can be identified with the Riemann sphere, the homographies are called Möbius transformations. These correspond precisely with those bijections of the Riemann sphere that preserve orientation and are conformal. In the study of collineations, the case of projective lines is special due to the small dimension. When the line is viewed as a projective space in isolation, any permutation of the points of a projective line is a collineation, since every set of points is collinear. 
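A homographic function of this form is easy to experiment with. The following is an illustrative sketch (not from the article), using exact rational arithmetic and a sentinel value for the point at infinity, and showing that composing two such maps corresponds to multiplying their matrices:

```python
from fractions import Fraction

INF = "inf"  # sentinel for the point at infinity of the projective line

def homography(a, b, c, d):
    """The map z -> (a*z + b)/(c*z + d) on K ∪ {∞}, requiring ad - bc ≠ 0."""
    assert a * d - b * c != 0
    def f(z):
        if z == INF:
            # the point at infinity maps to a/c, or stays at infinity if c = 0
            return INF if c == 0 else Fraction(a, c)
        z = Fraction(z)
        num, den = a * z + b, c * z + d
        # the pole z = -d/c maps to the point at infinity
        return INF if den == 0 else num / den
    return f

f = homography(2, 1, 1, 1)  # matrix [[2, 1], [1, 1]]
g = homography(1, 1, 0, 1)  # z -> z + 1, matrix [[1, 1], [0, 1]]
print(f(0), f(INF), f(-1))  # 1 2 inf

# Composing maps multiplies matrices: [[1,1],[0,1]] @ [[2,1],[1,1]] = [[3,2],[1,1]]
assert homography(3, 2, 1, 1)(5) == g(f(5))
```

The matrices here are arbitrary examples; any matrix with nonzero determinant, taken up to a nonzero scalar factor, gives such a map.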
However, if the projective line is embedded in a higher-dimensional projective space, the geometric structure of that space can be used to impose a geometric structure on the line. Thus, in synthetic geometry, the homographies and the collineations of the projective line that are considered are those obtained by restrictions to the line of collineations and homographies of spaces of higher dimension. This means that the fundamental theorem of projective geometry (see below) remains valid in the one-dimensional setting. A homography of a projective line may also be properly defined by insisting that the mapping preserves cross-ratios. == Projective frame and coordinates == A projective frame or projective basis of a projective space of dimension n is an ordered set of n + 2 points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension n has at most n + 1 vertices. Projective spaces over a commutative field K are considered in this section, although most results may be generalized to projective spaces over a division ring. Let P(V) be a projective space of dimension n, where V is a K-vector space of dimension n + 1, and p : V ∖ {0} → P(V) be the canonical projection that maps a nonzero vector to the vector line that contains it. For every frame of P(V), there exists a basis e0, ..., en of V such that the frame is (p(e0), ..., p(en), p(e0 + ... + en)), and this basis is unique up to the multiplication of all its elements by the same nonzero element of K. Conversely, if e0, ..., en is a basis of V, then (p(e0), ..., p(en), p(e0 + ... + en)) is a frame of P(V). It follows that, given two frames, there is exactly one homography mapping the first one onto the second one. In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms). 
It is sometimes called the first fundamental theorem of projective geometry. Every frame (p(e0), ..., p(en), p(e0 + ... + en)) allows one to define projective coordinates, also known as homogeneous coordinates: every point may be written as p(v); the projective coordinates of p(v) on this frame are the coordinates of v on the basis (e0, ..., en). It is not difficult to verify that changing the ei and v, without changing the frame or p(v), results in multiplying the projective coordinates by the same nonzero element of K. The projective space Pn(K) = P(Kn+1) has a canonical frame consisting of the image by p of the canonical basis of Kn+1 (consisting of the elements having only one nonzero entry, which is equal to 1), and (1, 1, ..., 1). On this basis, the homogeneous coordinates of p(v) are simply the entries (coefficients) of the tuple v. Given another projective space P(V) of the same dimension, and a frame F of it, there is one and only one homography h mapping F onto the canonical frame of Pn(K). The projective coordinates of a point a on the frame F are the homogeneous coordinates of h(a) on the canonical frame of Pn(K). == Central collineations == In the above sections, homographies have been defined through linear algebra. In synthetic geometry, they are traditionally defined as the composition of one or several special homographies called central collineations. It is a part of the fundamental theorem of projective geometry that the two definitions are equivalent. In a projective space, P, of dimension n ≥ 2, a collineation of P is a bijection from P onto P that maps lines onto lines. 
A central collineation (traditionally these were called perspectivities, but this term may be confusing, having another meaning; see Perspectivity) is a bijection α from P to P, such that there exists a hyperplane H (called the axis of α), which is fixed pointwise by α (that is, α(X) = X for all points X in H) and a point O (called the center of α), which is fixed linewise by α (any line through O is mapped to itself by α, but not necessarily pointwise). There are two types of central collineations. Elations are the central collineations in which the center is incident with the axis, and homologies are those in which the center is not incident with the axis. A central collineation is uniquely defined by its center, its axis, and the image α(P) of any given point P that differs from the center O and does not belong to the axis. (The image α(Q) of any other point Q is the intersection of the line defined by O and Q and the line passing through α(P) and the intersection with the axis of the line defined by P and Q.) A central collineation is a homography defined by an (n+1) × (n+1) matrix that has an eigenspace of dimension n. It is a homology if the matrix has another eigenvalue and is therefore diagonalizable. It is an elation if all the eigenvalues are equal and the matrix is not diagonalizable. The geometric view of a central collineation is easiest to see in a projective plane. Given a central collineation α, consider a line ℓ that does not pass through the center O, and its image under α, ℓ′ = α(ℓ). Setting R = ℓ ∩ ℓ′, the axis of α is some line M through R. The image A′ of any point A of ℓ under α is the intersection of OA with ℓ′. The image B′ of a point B that does not belong to ℓ may be constructed in the following way: let S = AB ∩ M; then B′ = SA′ ∩ OB. The composition of two central collineations is still a homography, but in general not a central collineation. In fact, every homography is the composition of a finite number of central collineations. 
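The matrix description of homologies and elations can be checked directly on homogeneous coordinates of the projective plane. The two matrices below are illustrative choices for this sketch (not taken from the article): a diagonalizable matrix with eigenvalues 2, 1, 1 gives a homology, and a shear with all eigenvalues equal to 1 gives an elation whose center lies on its axis:

```python
def apply(m, p):
    """Image of the point with homogeneous coordinates p under the 3x3 matrix m."""
    return tuple(sum(m[i][j] * p[j] for j in range(3)) for i in range(3))

def proj_eq(p, q):
    """Projective equality: p and q differ by a nonzero scalar factor."""
    return all(p[i] * q[j] == p[j] * q[i] for i in range(3) for j in range(3))

homology = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]  # diagonalizable, eigenvalues 2, 1, 1
elation  = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]  # all eigenvalues 1, not diagonalizable

# The homology fixes its axis x = 0 pointwise; its center (1, 0, 0) is off the axis.
for y, z in [(1, 0), (0, 1), (3, 5)]:
    assert proj_eq(apply(homology, (0, y, z)), (0, y, z))

# The elation fixes its axis z = 0 pointwise; its center (1, 0, 0) lies on the axis.
for x, y in [(1, 0), (0, 1), (3, 5)]:
    assert proj_eq(apply(elation, (x, y, 0)), (x, y, 0))
```

In both cases the eigenspace of the eigenvalue 1 resp. the axis hyperplane has dimension n = 2, matching the characterization in the text.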
In synthetic geometry, this property, which is a part of the fundamental theorem of projective geometry, is taken as the definition of homographies. == Fundamental theorem of projective geometry == There are collineations besides the homographies. In particular, any field automorphism σ of a field F induces a collineation of every projective space over F by applying σ to all homogeneous coordinates (over a projective frame) of a point. These collineations are called automorphic collineations. The fundamental theorem of projective geometry consists of the three following theorems. Given two projective frames of a projective space P, there is exactly one homography of P that maps the first frame onto the second one. If the dimension of a projective space P is at least two, every collineation of P is the composition of an automorphic collineation and a homography. In particular, over the reals, every collineation of a projective space of dimension at least two is a homography. Every homography is the composition of a finite number of perspectivities. In particular, if the dimension of the implied projective space is at least two, every homography is the composition of a finite number of central collineations. If projective spaces are defined by means of axioms (synthetic geometry), the third part is simply a definition. On the other hand, if projective spaces are defined by means of linear algebra, the first part is an easy corollary of the definitions. Therefore, the proof of the first part in synthetic geometry, and the proof of the third part in terms of linear algebra, are both fundamental steps of the proof of the equivalence of the two ways of defining projective spaces. == Homography groups == As every homography has an inverse mapping and the composition of two homographies is another homography, the homographies of a given projective space form a group. For example, the Möbius group is the homography group of any complex projective line. 
As all the projective spaces of the same dimension over the same field are isomorphic, the same is true for their homography groups. They are therefore considered as a single group acting on several spaces, and only the dimension and the field appear in the notation, not the specific projective space. Homography groups, also called projective linear groups, are denoted PGL(n + 1, F) when acting on a projective space of dimension n over a field F. The above definition of homographies shows that PGL(n + 1, F) may be identified with the quotient group GL(n + 1, F) / F×I, where GL(n + 1, F) is the general linear group of the invertible matrices, and F×I is the group of the products by a nonzero element of F of the identity matrix of size (n + 1) × (n + 1). When F is a Galois field GF(q) then the homography group is written PGL(n, q). For example, PGL(2, 7) acts on the eight points of the projective line over the finite field GF(7), while PGL(2, 4), which is isomorphic to the alternating group A5, is the homography group of the projective line with five points. The homography group PGL(n + 1, F) is a subgroup of the collineation group PΓL(n + 1, F) of the collineations of a projective space of dimension n. When the points and lines of the projective space are viewed as a block design, whose blocks are the sets of points contained in a line, it is common to call the collineation group the automorphism group of the design. == Cross-ratio == The cross-ratio of four collinear points is an invariant under homographies that is fundamental for the study of the homographies of the lines. Three distinct points a, b and c on a projective line over a field F form a projective frame of this line. There is therefore a unique homography h of this line onto F ∪ {∞} that maps a to ∞, b to 0, and c to 1. Given a fourth point on the same line, the cross-ratio of the four points a, b, c and d, denoted [a, b; c, d], is the element h(d) of F ∪ {∞}. 
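For points of F itself, the homography h sending a, b, c to ∞, 0, 1 has the closed form h(z) = (z − b)(c − a) / ((z − a)(c − b)), so the cross-ratio can be computed directly. This is an illustrative sketch (the normalization a → ∞, b → 0, c → 1 is the one used here; other sources order the four arguments differently):

```python
from fractions import Fraction

INF = "inf"  # sentinel for the point at infinity

def cross_ratio(a, b, c, d):
    """[a, b; c, d] = h(d), where h is the unique homography with
    h(a) = inf, h(b) = 0, h(c) = 1."""
    num = (d - b) * (c - a)
    den = (d - a) * (c - b)
    return INF if den == 0 else Fraction(num, den)

a, b, c = 2, 0, 1
# h maps the frame (a, b, c) to (inf, 0, 1):
assert cross_ratio(a, b, c, a) == INF
assert cross_ratio(a, b, c, b) == 0
assert cross_ratio(a, b, c, c) == 1
print(cross_ratio(a, b, c, 3))  # -3
```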
In other words, if d has homogeneous coordinates [k : 1] over the projective frame (a, b, c), then [a, b; c, d] = k. == Over a ring == Suppose A is a ring and U is its group of units. Homographies act on a projective line over A, written P(A), consisting of points U[a, b] with projective coordinates. The homographies on P(A) are described by matrix mappings {\displaystyle U[z,1]{\begin{pmatrix}a&c\\b&d\end{pmatrix}}=U[za+b,\ zc+d].} When A is a commutative ring, the homography may be written {\displaystyle z\mapsto {\frac {za+b}{zc+d}}\ ,} but otherwise the linear fractional transformation is seen as an equivalence: {\displaystyle U[za+b,\ zc+d]\thicksim U[(zc+d)^{-1}(za+b),\ 1].} The homography group of the ring of integers Z is the modular group PSL(2, Z). Ring homographies have been used in quaternion analysis, and with dual quaternions to facilitate screw theory. The conformal group of spacetime can be represented with homographies where A is the composition algebra of biquaternions. == Periodic homographies == The homography {\displaystyle h={\begin{pmatrix}1&1\\0&1\end{pmatrix}}} is periodic when the ring is Z/nZ (the integers modulo n) since then {\displaystyle h^{n}={\begin{pmatrix}1&n\\0&1\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}.} Arthur Cayley was interested in periodicity when he calculated iterates in 1879. In his review of a brute force approach to periodicity of homographies, H. S. M. Coxeter gave this analysis: A real homography is involutory (of period 2) if and only if a + d = 0. If it is periodic with period n > 2, then it is elliptic, and no loss of generality occurs by assuming that ad − bc = 1. Since the characteristic roots are exp(±hπi/m), where (h, m) = 1, the trace is a + d = 2 cos(hπ/m). 
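The periodicity of h over Z/nZ can be verified by brute force, in the spirit of the approach Coxeter reviewed. A minimal sketch, repeatedly multiplying by h modulo n until the identity matrix reappears:

```python
def matmul_mod(a, b, n):
    """2x2 matrix product modulo n."""
    return [[(a[0][0]*b[0][0] + a[0][1]*b[1][0]) % n,
             (a[0][0]*b[0][1] + a[0][1]*b[1][1]) % n],
            [(a[1][0]*b[0][0] + a[1][1]*b[1][0]) % n,
             (a[1][0]*b[0][1] + a[1][1]*b[1][1]) % n]]

def period(h, n):
    """Smallest k >= 1 with h^k = I over Z/nZ (guaranteed to exist here)."""
    identity = [[1, 0], [0, 1]]
    power, k = h, 1
    while power != identity:
        power = matmul_mod(power, h, n)
        k += 1
    return k

h = [[1, 1], [0, 1]]
for n in (2, 3, 5, 12):
    assert period(h, n) == n  # h^n = I modulo n, as in the text
```

The assertion matches the computation in the text: h^k = [[1, k], [0, 1]], which reduces to the identity modulo n exactly when k is a multiple of n.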
== See also == W-curve == Notes == == References == Artin, E. (1957), Geometric Algebra, Interscience Publishers Baer, Reinhold (2005) [First published 1952], Linear Algebra and Projective Geometry, Dover, ISBN 9780486445656 Berger, Marcel (2009), Geometry I, Springer-Verlag, ISBN 978-3-540-11658-5, translated from the 1977 French original by M. Cole and S. Levy, fourth printing of the 1987 English translation Beutelspacher, Albrecht; Rosenbaum, Ute (1998), Projective Geometry: From Foundations to Applications, Cambridge University Press, ISBN 0-521-48364-6 Hartshorne, Robin (1967), Foundations of Projective Geometry, New York: W.A. Benjamin, Inc. Hirschfeld, J. W. P. (1979), Projective Geometries Over Finite Fields, Oxford University Press, ISBN 978-0-19-850295-1 Meserve, Bruce E. (1983), Fundamental Concepts of Geometry, Dover, ISBN 0-486-63415-9 Yale, Paul B. (1968), Geometry and Symmetry, Holden-Day == Further reading == Patrick du Val (1964) Homographies, quaternions and rotations, Oxford Mathematical Monographs, Clarendon Press, Oxford, MR 0169108. Gunter Ewald (1971) Geometry: An Introduction, page 263, Belmont: Wadsworth Publishing, ISBN 0-534-00034-7. == External links == Media related to Homography at Wikimedia Commons
Wikipedia/Homography
Telegraphy is the long-distance transmission of messages where the sender uses symbolic codes, known to the recipient, rather than a physical exchange of an object bearing the message. Thus flag semaphore is a method of telegraphy, whereas pigeon post is not. Ancient signalling systems, although sometimes quite extensive and sophisticated as in China, were generally not capable of transmitting arbitrary text messages. Possible messages were fixed and predetermined, so such systems were not true telegraphs. The earliest true telegraph put into widespread use was the Chappe telegraph, an optical telegraph invented by Claude Chappe in the late 18th century. The system was used extensively in France, and in European nations occupied by France, during the Napoleonic era. The electric telegraph started to replace the optical telegraph in the mid-19th century. It was first taken up in Britain in the form of the Cooke and Wheatstone telegraph, initially used mostly as an aid to railway signalling. This was quickly followed by a different system developed in the United States by Samuel Morse. The electric telegraph was slower to develop in France due to the established optical telegraph system, but an electrical telegraph was put into use with a code compatible with the Chappe optical telegraph. The Morse system was adopted as the international standard in 1865, using a modified Morse code developed in Germany in 1848. The heliograph is a telegraph system using reflected sunlight for signalling. It was mainly used in areas where the electrical telegraph had not been established and generally used the same code. The most extensive heliograph network established was in Arizona and New Mexico during the Apache Wars. The heliograph was standard military equipment as late as World War II. 
Wireless telegraphy developed in the early 20th century became important for maritime use, and was a competitor to electrical telegraphy using submarine telegraph cables in international communications. Telegrams became a popular means of sending messages once telegraph prices had fallen sufficiently. Traffic became high enough to spur the development of automated systems—teleprinters and punched tape transmission. These systems led to new telegraph codes, starting with the Baudot code. However, telegrams were never able to compete with the letter post on price, and competition from the telephone, which removed their speed advantage, drove the telegraph into decline from 1920 onwards. The few remaining telegraph applications were largely taken over by alternatives on the internet towards the end of the 20th century. == Terminology == The word telegraph (from Ancient Greek: τῆλε (têle) 'at a distance' and γράφειν (gráphein) 'to write') was coined by the French inventor of the semaphore telegraph, Claude Chappe, who also coined the word semaphore. A telegraph is a device for transmitting and receiving messages over long distances, i.e., for telegraphy. The word telegraph alone generally refers to an electrical telegraph. Wireless telegraphy is transmission of messages over radio with telegraphic codes. Contrary to the extensive definition used by Chappe, Morse argued that the term telegraph can strictly be applied only to systems that transmit and record messages at a distance. This is to be distinguished from semaphore, which merely transmits messages. Smoke signals, for instance, are to be considered semaphore, not telegraph. According to Morse, telegraph dates only from 1832 when Pavel Schilling invented one of the earliest electrical telegraphs. A telegraph message sent by an electrical telegraph operator or telegrapher using Morse code (or a printing telegraph operator using plain text) was known as a telegram. 
A cablegram was a message sent by a submarine telegraph cable, often shortened to "cable" or "wire". The suffix -gram is derived from ancient Greek: γραμμα (gramma), meaning something written, i.e. telegram means something written at a distance and cablegram means something written via a cable, whereas telegraph implies the process of writing at a distance. Later, a Telex was a message sent by a Telex network, a switched network of teleprinters similar to a telephone network. A wirephoto or wire picture was a newspaper picture that was sent from a remote location by a facsimile telegraph. A diplomatic telegram, also known as a diplomatic cable, is a confidential communication between a diplomatic mission and the foreign ministry of its parent country. These continue to be called telegrams or cables regardless of the method used for transmission. == History == === Early signalling === Passing messages by signalling over distance is an ancient practice. One of the oldest examples is the signal towers of the Great Wall of China. By 400 BC, signals could be sent by beacon fires or drum beats, and by 200 BC complex flag signalling had developed. During the Han dynasty (202 BC – 220 AD), signallers mainly used flags and wood fires—via the light of the flames swung high into the air at night, and via dark smoke produced by the addition of wolf dung during the day—to send signals. By the Tang dynasty (618–907) a message could be sent 1,100 kilometres (700 mi) in 24 hours. The Ming dynasty (1368–1644) used artillery as another possible signalling method. While the signalling was complex (for instance, flags of different colours could be used to indicate enemy strength), only predetermined messages could be sent. The Chinese signalling system extended well beyond the Great Wall. Signal towers away from the wall were used to give early warning of an attack. Others were built even further out as part of the protection of trade routes, especially the Silk Road. 
Signal fires were widely used in Europe and elsewhere for military purposes. The Roman army made frequent use of them, as did their enemies, and the remains of some of the stations still exist. Few details have been recorded of European/Mediterranean signalling systems and the possible messages. One of the few for which details are known is a system invented by Aeneas Tacticus (4th century BC). Tacticus's system had water-filled pots at the two signal stations, which were drained in synchronisation. Annotations on a floating scale indicated which message was being sent or received. Signals sent by means of torches indicated when to start and stop draining to keep the synchronisation. None of the signalling systems discussed above are true telegraphs in the sense of a system that can transmit arbitrary messages over arbitrary distances. Lines of signalling relay stations can send messages to any required distance, but all these systems are limited to one extent or another in the range of messages that they can send. A system like flag semaphore, with an alphabetic code, can certainly send any given message, but the system is designed for short-range communication between two persons. An engine order telegraph, used to send instructions from the bridge of a ship to the engine room, fails to meet both criteria; it has a limited distance and a very simple message set. There was only one ancient signalling system described that does meet these criteria. That was a system using the Polybius square to encode an alphabet. Polybius (2nd century BC) suggested using two successive groups of torches to identify the coordinates of the letter of the alphabet being transmitted. The number of torches held up in each group signalled the coordinates of the grid square that contained the letter. There is no definite record of the system ever being used, but there are several passages in ancient texts that some think are suggestive. 
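The Polybius-square encoding just described can be sketched in a few lines. This is an illustrative modern reconstruction using the 5×5 Latin-alphabet square with I and J sharing a cell (Polybius himself described the scheme for the Greek alphabet):

```python
# A 5x5 Polybius square for the Latin alphabet; I and J share a cell.
SQUARE = ["ABCDE", "FGHIK", "LMNOP", "QRSTU", "VWXYZ"]

def encode(text):
    """Each letter becomes (row, column): the torch counts shown in the
    first and second group of torches, respectively."""
    pairs = []
    for ch in text.upper():
        if ch == "J":
            ch = "I"
        for r, row in enumerate(SQUARE, start=1):
            if ch in row:
                pairs.append((r, row.index(ch) + 1))
    return pairs

print(encode("HELP"))  # [(2, 3), (1, 5), (3, 1), (3, 5)]
```

So the letter H, for example, would be signalled by holding up two torches in the first group and three in the second.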
Holzmann and Pehrson, for instance, suggest that Livy is describing its use by Philip V of Macedon in 207 BC during the First Macedonian War. Nothing else that could be described as a true telegraph existed until the 17th century.: 26–29  Possibly the first alphabetic telegraph code in the modern era is due to Franz Kessler who published his work in 1616. Kessler used a lamp placed inside a barrel with a moveable shutter operated by the signaller. The signals were observed at a distance with the newly invented telescope.: 32–34  === Optical telegraph === An optical telegraph is a telegraph consisting of a line of stations in towers or natural high points which signal to each other by means of shutters or paddles. Signalling by means of indicator pointers was called semaphore. Early proposals for an optical telegraph system were made to the Royal Society by Robert Hooke in 1684 and were first implemented on an experimental level by Sir Richard Lovell Edgeworth in 1767. The first successful optical telegraph network was invented by Claude Chappe and operated in France from 1793. The two most extensive systems were Chappe's in France, with branches into neighbouring countries, and the system of Abraham Niclas Edelcrantz in Sweden.: ix–x, 47  During 1790–1795, at the height of the French Revolution, France needed a swift and reliable communication system to thwart the war efforts of its enemies. In 1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. On 2 March 1791, at 11 am, they sent the message "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory) between Brulon and Parce, a distance of 16 kilometres (10 mi). The first means used a combination of black and white panels, clocks, telescopes, and codebooks to send their message. 
In 1792, Claude was appointed Ingénieur-Télégraphiste and charged with establishing a line of stations between Paris and Lille, a distance of 230 kilometres (140 mi). It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. A decision to replace the system with an electric telegraph was made in 1846, but it took a decade before it was fully taken out of service. The fall of Sevastopol was reported by Chappe telegraph in 1855.: 92–94  The Prussian system was put into effect in the 1830s. However, they were highly dependent on good weather and daylight to work and even then could accommodate only about two words per minute. The last commercial semaphore link ceased operation in Sweden in 1880. As of 1895, France still operated coastal commercial semaphore telegraph stations, for ship-to-shore communication. === Electrical telegraph === Early ideas for an electric telegraph included a 1753 proposal using electrostatic deflections of pith balls, and proposals for electrochemical bubbles in acid by Campillo in 1804 and by von Sömmering in 1809. The first experimental system over a substantial distance was by Ronalds in 1816 using an electrostatic generator. Ronalds offered his invention to the British Admiralty, but it was rejected as unnecessary, the existing optical telegraph connecting the Admiralty in London to their main fleet base in Portsmouth being deemed adequate for their purposes. As late as 1844, after the electrical telegraph had come into use, the Admiralty's optical telegraph was still used, although it was accepted that poor weather ruled it out on many days of the year.: 16, 37  France had an extensive optical telegraph system dating from Napoleonic times and was even slower to take up electrical systems.: 217–218  Eventually, electrostatic telegraphs were abandoned in favour of electromagnetic systems. 
An early experimental system (Schilling, 1832) led to a proposal to establish a telegraph between St Petersburg and Kronstadt, but it was never completed. The first operative electric telegraph (Gauss and Weber, 1833) connected Göttingen Observatory to the Institute of Physics about 1 km away during experimental investigations of the geomagnetic field. The first commercial telegraph was by Cooke and Wheatstone following their English patent of 10 June 1837. It was demonstrated on the London and Birmingham Railway in July of the same year. In July 1839, a five-needle, five-wire system was installed to provide signalling over a record distance of 21 km on a section of the Great Western Railway between London Paddington station and West Drayton. However, in trying to get railway companies to take up his telegraph more widely for railway signalling, Cooke was rejected several times in favour of the more familiar, but shorter range, steam-powered pneumatic signalling. Even when his telegraph was taken up, it was considered experimental and the company backed out of a plan to finance extending the telegraph line out to Slough. However, this led to a breakthrough for the electric telegraph, as up to this point the Great Western had insisted on exclusive use and refused Cooke permission to open public telegraph offices. Cooke extended the line at his own expense and agreed that the railway could have free use of it in exchange for the right to open it up to the public.: 19–20  Most of the early electrical systems required multiple wires (Ronalds' system was an exception), but the system developed in the United States by Morse and Vail was a single-wire system. This was the system that first used the soon-to-become-ubiquitous Morse code. By 1844, the Morse system connected Baltimore to Washington, and by 1861 the west coast of the continent was connected to the east coast. 
The Cooke and Wheatstone telegraph, in a series of improvements, also ended up with a one-wire system, but still using their own code and needle displays. The electric telegraph quickly became a means of more general communication. The Morse system was officially adopted as the standard for continental European telegraphy in 1851 with a revised code, which later became the basis of International Morse Code. However, Great Britain and the British Empire continued to use the Cooke and Wheatstone system, in some places as late as the 1930s. Likewise, the United States continued to use American Morse code internally, requiring translation operators skilled in both codes for international messages. === Railway telegraphy === Railway signal telegraphy was developed in Britain from the 1840s onward. It was used to manage railway traffic and to prevent accidents as part of the railway signalling system. On 12 June 1837 Cooke and Wheatstone were awarded a patent for an electric telegraph. This was demonstrated between Euston railway station—where Wheatstone was located—and the engine house at Camden Town—where Cooke was stationed, together with Robert Stephenson, the London and Birmingham Railway line's chief engineer. The messages were for the operation of the rope-haulage system for pulling trains up the 1 in 77 bank. The world's first permanent railway telegraph was completed in July 1839 between London Paddington and West Drayton on the Great Western Railway with an electric telegraph using a four-needle system. The concept of a signalling "block" system was proposed by Cooke in 1842. Railway signal telegraphy did not change in essence from Cooke's initial concept for more than a century. In this system each line of railway was divided into sections or blocks of varying length. Entry to and exit from the block was to be authorised by electric telegraph and signalled by the line-side semaphore signals, so that only a single train could occupy the rails. 
In Cooke's original system, a single-needle telegraph was adapted to indicate just two messages: "Line Clear" and "Line Blocked". The signaller would adjust his line-side signals accordingly. As first implemented in 1844 each station had as many needles as there were stations on the line, giving a complete picture of the traffic. As lines expanded, a sequence of pairs of single-needle instruments were adopted, one pair for each block in each direction. === Wigwag === Wigwag is a form of flag signalling using a single flag. Unlike most forms of flag signalling, which are used over relatively short distances, wigwag is designed to maximise the distance covered—up to 32 km (20 mi) in some cases. Wigwag achieved this by using a large flag—a single flag can be held with both hands unlike flag semaphore which has a flag in each hand—and using motions rather than positions as its symbols since motions are more easily seen. It was invented by US Army surgeon Albert J. Myer in the 1850s who later became the first head of the Signal Corps. Wigwag was used extensively during the American Civil War where it filled a gap left by the electrical telegraph. Although the electrical telegraph had been in use for more than a decade, the network did not yet reach everywhere and portable, ruggedized equipment suitable for military use was not immediately available. Permanent or semi-permanent stations were established during the war, some of them towers of enormous height and the system was extensive enough to be described as a communications network. === Heliograph === A heliograph is a telegraph that transmits messages by flashing sunlight with a mirror, usually using Morse code. The idea for a telegraph of this type was first proposed as a modification of surveying equipment (Gauss, 1821). 
Various uses of mirrors were made for communication in the following years, mostly for military purposes, but the first device to become widely used was a heliograph with a moveable mirror (Mance, 1869). The system was used by the French during the 1870–71 siege of Paris, with night-time signalling using kerosene lamps as the source of light. An improved version (Begbie, 1870) was used by the British military in many colonial wars, including the Anglo-Zulu War (1879). At some point, a Morse key was added to the apparatus to give the operator the same degree of control as in the electric telegraph. Another type of heliograph was the heliostat or heliotrope fitted with a Colomb shutter. The heliostat was essentially a surveying instrument with a fixed mirror and so could not transmit a code by itself. The term heliostat is sometimes used as a synonym for heliograph because of this origin. The Colomb shutter (Bolton and Colomb, 1862) was originally invented to enable the transmission of Morse code by signal lamp between Royal Navy ships at sea. The heliograph was heavily used by Nelson A. Miles in Arizona and New Mexico after he took over command (1886) of the fight against Geronimo and other Apache bands in the Apache Wars. Miles had previously set up the first heliograph line in the US between Fort Keogh and Fort Custer in Montana. He used the heliograph to fill in vast, thinly populated areas that were not covered by the electric telegraph. Twenty-six stations covered an area 320 by 480 km (200 by 300 mi). In a test of the system, a message was relayed 640 km (400 mi) in four hours. Miles' enemies used smoke signals and flashes of sunlight from metal, but lacked a sophisticated telegraph code. The heliograph was ideal for use in the American Southwest due to its clear air and mountainous terrain on which stations could be located. 
It was found necessary to lengthen the Morse dash (which is much shorter in American Morse code than in the modern International Morse code) to aid in differentiating it from the Morse dot. Use of the heliograph declined from 1915 onwards, but it remained in service in Britain and British Commonwealth countries for some time. Australian forces used the heliograph as late as 1942 in the Western Desert Campaign of World War II. Some form of heliograph was used by the mujahideen in the Soviet–Afghan War (1979–1989). === Teleprinter === A teleprinter is a telegraph machine that can send messages from a typewriter-like keyboard and print incoming messages in readable text, with no need for the operators to be trained in the telegraph code used on the line. It developed from various earlier printing telegraphs and resulted in improved transmission speeds. The Morse telegraph (1837) was originally conceived as a system marking indentations on paper tape. A chemical telegraph making blue marks improved the speed of recording (Bain, 1846), but was delayed by a patent challenge from Morse. The first true printing telegraph (that is, printing in plain text) used a spinning wheel of types in the manner of a daisy wheel printer (House, 1846, improved by Hughes, 1855). The system was adopted by Western Union. Early teleprinters used the Baudot code, a five-bit sequential binary code. This was a telegraph code developed for use on the French telegraph using a five-key keyboard (Baudot, 1874). Teleprinters generated the same code from a full alphanumeric keyboard. A feature of the Baudot code, and of subsequent telegraph codes, was that, unlike Morse code, every character has a code of the same length, making it more machine-friendly. The Baudot code was used on the earliest ticker tape machines (Calahan, 1867), a system for mass-distributing information on the current prices of publicly listed companies. 
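The fixed-length property is easiest to see by contrast with International Morse code, whose frequent letters are deliberately short. A small illustration in Python (the Morse mappings shown are standard International Morse; the flat five-bit cost stands in for a Baudot-style code, with the actual bit patterns left out as irrelevant here):

```python
# Variable-length Morse vs a fixed-length five-bit code.
MORSE = {"E": ".", "T": "-", "A": ".-", "S": "...", "O": "---", "Q": "--.-"}

def morse_length(text):
    """Number of dot/dash symbols needed, ignoring inter-letter gaps."""
    return sum(len(MORSE[c]) for c in text)

def fixed_length(text, bits_per_char=5):
    """A fixed-length code costs the same for every character."""
    return len(text) * bits_per_char

print(morse_length("ETA"), fixed_length("ETA"))  # 4 symbols vs a flat 15 bits
print(morse_length("SOS"), fixed_length("SOS"))  # 9 symbols vs the same 15 bits
```

Morse is shorter for common letters, but the uniform cost per character is what makes a fixed-length code mechanically simple: a machine can clock out exactly five units per character without decoding anything first.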
=== Automated punched-tape transmission === In a punched-tape system, the message is first typed onto punched tape using the code of the telegraph system—Morse code for instance. It is then, either immediately or at some later time, run through a transmission machine which sends the message to the telegraph network. Multiple messages can be sequentially recorded on the same run of tape. The advantage of doing this is that messages can be sent at a steady, fast rate making maximum use of the available telegraph lines. The economic advantage of doing this is greatest on long, busy routes where the cost of the extra step of preparing the tape is outweighed by the cost of providing more telegraph lines. The first machine to use punched tape was Bain's teleprinter (Bain, 1843), but the system saw only limited use. Later versions of Bain's system achieved speeds up to 1000 words per minute, far faster than a human operator could achieve. The first widely used system (Wheatstone, 1858) was first put into service with the British General Post Office in 1867. A novel feature of the Wheatstone system was the use of bipolar encoding. That is, both positive and negative polarity voltages were used. Bipolar encoding has several advantages, one of which is that it permits duplex communication. The Wheatstone tape reader was capable of a speed of 400 words per minute.: 190  === Oceanic telegraph cables === A worldwide communication network meant that telegraph cables would have to be laid across oceans. On land cables could be run uninsulated suspended from poles. Underwater, a good insulator that was both flexible and capable of resisting the ingress of seawater was required. A solution presented itself with gutta-percha, a natural rubber from the Palaquium gutta tree, after William Montgomerie sent samples to London from Singapore in 1843. 
The new material was tested by Michael Faraday and in 1845 Wheatstone suggested that it should be used on the cable planned between Dover and Calais by John Watkins Brett. The idea was proved viable when the South Eastern Railway company successfully tested a three-kilometre (two-mile) gutta-percha insulated cable with telegraph messages to a ship off the coast of Folkestone. The cable to France was laid in 1850 but was almost immediately severed by a French fishing vessel. It was relaid the next year and connections to Ireland and the Low Countries soon followed. Getting a cable across the Atlantic Ocean proved much more difficult. The Atlantic Telegraph Company, formed in London in 1856, had several failed attempts. A cable laid in 1858 worked poorly for a few days, sometimes taking all day to send a message despite the use of the highly sensitive mirror galvanometer developed by William Thomson (the future Lord Kelvin) before being destroyed by applying too high a voltage. Its failure and slow speed of transmission prompted Thomson and Oliver Heaviside to find better mathematical descriptions of long transmission lines. The company finally succeeded in 1866 with an improved cable laid by SS Great Eastern, the largest ship of its day, designed by Isambard Kingdom Brunel. An overland telegraph from Britain to India was first connected in 1866 but was unreliable so a submarine telegraph cable was connected in 1870. Several telegraph companies were combined to form the Eastern Telegraph Company in 1872. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable-laying ships in the world and twenty-four of them were owned by British companies. 
In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide. === Facsimile === In 1843, Scottish inventor Alexander Bain invented a device that could be considered the first facsimile machine. He called his invention a "recording telegraph". Bain's telegraph was able to transmit images by electrical wires. Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. In 1855, an Italian priest, Giovanni Caselli, also created an electric telegraph that could transmit images. Caselli called his invention the "Pantelegraph". The Pantelegraph was successfully tested and approved for a telegraph line between Paris and Lyon. In 1881, English inventor Shelford Bidwell constructed the scanning phototelegraph, the first telefax machine able to scan any two-dimensional original without requiring manual plotting or drawing. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, which became widespread in continental Europe, especially after a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, and remained in use until the wider distribution of the radiofax. Its main competitors were first the Bélinographe of Édouard Belin and then, from the 1930s, the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission. === Wireless telegraphy === The late 1880s through to the 1890s saw the discovery and then development of a newly understood phenomenon into a form of wireless telegraphy, called Hertzian wave wireless telegraphy, radiotelegraphy, or (later) simply "radio". 
Between 1886 and 1888, Heinrich Rudolf Hertz published the results of his experiments where he was able to transmit electromagnetic waves (radio waves) through the air, proving James Clerk Maxwell's 1873 theory of electromagnetic radiation. Many scientists and inventors experimented with this new phenomenon, but the consensus was that these new waves (similar to light) would be just as short range as light, and, therefore, useless for long range communication. At the end of 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a commercial wireless telegraphy system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Building on the ideas of previous scientists and inventors, Marconi re-engineered their apparatus by trial and error, attempting to build a radio-based wireless telegraphic system that would function the same as wired telegraphy. He would work on the system through 1895 in his lab and then in field tests, making improvements to extend its range. After many breakthroughs, including applying the wired telegraphy concept of grounding the transmitter and receiver, Marconi was able, by early 1896, to transmit radio far beyond the short ranges that had been predicted. Having failed to interest the Italian government, the 22-year-old inventor brought his telegraphy system to Britain in 1896 and met William Preece, a Welshman, who was a major figure in the field and Chief Engineer of the General Post Office. A series of demonstrations for the British government followed—by March 1897, Marconi had transmitted Morse code signals over a distance of about 6 km (3½ mi) across Salisbury Plain. On 13 May 1897, Marconi, assisted by George Kemp, a Cardiff Post Office engineer, transmitted the first wireless signals over water to Lavernock (near Penarth in Wales) from Flat Holm. 
His star rising, he was soon sending signals across the English Channel (1899), from shore to ship (1899) and finally across the Atlantic (1901). A study of these demonstrations of radio, with scientists trying to work out how a phenomenon predicted to have a short range could transmit "over the horizon", led to the discovery of a radio reflecting layer in the Earth's atmosphere in 1902, later called the ionosphere. Radiotelegraphy proved effective for rescue work in sea disasters by enabling effective communication between ships and from ship to shore. In 1904, Marconi began the first commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907. Notably, Marconi's apparatus was used to help rescue efforts after the sinking of RMS Titanic. Britain's postmaster-general summed up, referring to the Titanic disaster, "Those who have been saved, have been saved through one man, Mr. Marconi...and his marvellous invention." ==== Non-radio wireless telegraphy ==== The successful development of radiotelegraphy was preceded by a 50-year history of ingenious but ultimately unsuccessful experiments by inventors to achieve wireless telegraphy by other means. ===== Ground, water, and air conduction ===== Several wireless electrical signaling schemes based on the (sometimes erroneous) idea that electric currents could be conducted long-range through water, ground, and air were investigated for telegraphy before practical radio systems became available. The original telegraph lines used two wires between the two stations to form a complete electrical circuit or "loop". In 1837, however, Carl August von Steinheil of Munich, Germany, found that by connecting one leg of the apparatus at each station to metal plates buried in the ground, he could eliminate one wire and use a single wire for telegraphic communication. 
This led to speculation that it might be possible to eliminate both wires and therefore transmit telegraph signals through the ground without any wires connecting the stations. Other attempts were made to send the electric current through bodies of water, to span rivers, for example. Prominent experimenters along these lines included Samuel F. B. Morse in the United States and James Bowman Lindsay in Great Britain, who, in August 1854, was able to demonstrate transmission across a mill dam at a distance of 500 yards (457 metres). US inventors William Henry Ward (1871) and Mahlon Loomis (1872) developed electrical conduction systems based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. They thought that atmospheric current, combined with a return path using "Earth currents", would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile. In the 1890s inventor Nikola Tesla worked on an air and ground conduction wireless electric power transmission system, similar to Loomis's, in which he planned to include wireless telegraphy. Tesla's experiments had led him to incorrectly conclude that he could use the entire globe of the Earth to conduct electrical energy, and his 1901 large-scale application of his ideas, a high-voltage wireless power station, now called Wardenclyffe Tower, lost funding and was abandoned after a few years. Telegraphic communication using earth conductivity was eventually found to be limited to impractically short distances, as was communication conducted through water, or between trenches during World War I. 
===== Electrostatic and electromagnetic induction ===== Both electrostatic and electromagnetic induction were used to develop wireless telegraph systems that saw limited commercial application. In the United States, Thomas Edison, in the mid-1880s, patented an electromagnetic induction system he called "grasshopper telegraphy", which allowed telegraphic signals to jump the short distance between a running train and telegraph wires running parallel to the tracks. This system was successful technically but not economically, as there turned out to be little interest by train travelers in the use of an on-board telegraph service. During the Great Blizzard of 1888, this system was used to send and receive wireless messages from trains buried in snowdrifts. The disabled trains were able to maintain communications via their Edison induction wireless telegraph systems, perhaps the first successful use of wireless telegraphy to send distress calls. Edison would also help to patent a ship-to-shore communication system based on electrostatic induction. The most successful creator of an electromagnetic induction telegraph system was William Preece, chief engineer of Post Office Telegraphs of the General Post Office (GPO) in the United Kingdom. Preece first noticed the effect in 1884 when overhead telegraph wires in Grays Inn Road were accidentally carrying messages sent on buried cables. Tests in Newcastle succeeded in sending a quarter of a mile using parallel rectangles of wire.: 243  In tests across the Bristol Channel in 1892, Preece was able to telegraph across gaps of about 5 kilometres (3.1 miles). However, his induction system required extensive lengths of antenna wires, many kilometers long, at both the sending and receiving ends. The length of those sending and receiving wires needed to be about the same length as the width of the water or land to be spanned. 
For example, for Preece's station to span the English Channel from Dover, England, to the coast of France would require sending and receiving wires of about 30 miles (48 kilometres) along the two coasts. These facts made the system impractical on ships, boats, and ordinary islands, which are much smaller than Great Britain or Greenland. Also, the relatively short distances that a practical Preece system could span meant that it had few advantages over underwater telegraph cables. === Telegram services === A telegram service is a company or public entity that delivers telegraphed messages directly to the recipient. Telegram services were not inaugurated until electric telegraphy became available. Earlier optical systems were largely limited to official government and military purposes. Historically, telegrams were sent between a network of interconnected telegraph offices. A person visiting a local telegraph office paid by the word to have a message telegraphed to another office and delivered to the addressee on a paper form.: 276  Messages (i.e. telegrams) sent by telegraph could be delivered by telegraph messenger faster than mail, and even in the telephone age, the telegram remained popular for social and business correspondence. At their peak in 1929, an estimated 200 million telegrams were sent.: 274  In 1919, the Central Bureau for Registered Addresses was established in the financial district of New York City. The bureau was created to ease the growing problem of messages being delivered to the wrong recipients. To combat this issue, the bureau offered telegraph customers the option to register unique code names for their telegraph addresses. Customers were charged $2.50 per year per code. By 1934, 28,000 codes had been registered. 
Telegram services still operate in much of the world (see worldwide use of telegrams by country), but e-mail and text messaging have rendered telegrams obsolete in many countries, and the number of telegrams sent annually has been declining rapidly since the 1980s. Where telegram services still exist, the transmission method between offices is no longer by telegraph, but by telex or IP link. ==== Telegram length ==== As telegrams have been traditionally charged by the word, messages were often abbreviated to pack information into the smallest possible number of words, in what came to be called "telegram style". The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer. According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters. For German telegrams, the mean length is 11.5 words or 72.4 characters. At the end of the 19th century, the average length of a German telegram was calculated as 14.2 words. === Telex === Telex (telegraph exchange) was a public switched network of teleprinters. It used rotary-telephone-style pulse dialling for automatic routing through the network. It initially used the Baudot code for messages. Telex development began in Germany in 1926, becoming an operational service in 1933 run by the Reichspost (the German imperial postal service). It had a speed of 50 baud—approximately 66 words per minute. Up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Telex was introduced into Canada in July 1957, and the United States in 1958. A new code, ASCII, was introduced in 1963 by the American Standards Association. ASCII was a seven-bit code and could thus support a larger number of characters than Baudot. 
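The figures quoted above follow from simple arithmetic. A sketch (assuming two common conventions not stated in the text: a Telex character occupies 7.5 signal units on the line, i.e. a start unit, five data units, and 1.5 stop units, and a "word" is counted as six characters including the following space):

```python
# Rough arithmetic behind the Telex and Baudot/ASCII figures above.
def code_points(bits):
    """How many distinct characters an n-bit code can represent."""
    return 2 ** bits

def words_per_minute(baud, units_per_char=7.5, chars_per_word=6):
    """Signalling rate in baud converted to words per minute."""
    chars_per_second = baud / units_per_char
    return chars_per_second * 60 / chars_per_word

print(code_points(5))          # Baudot: 32 code points per shift state
print(code_points(7))          # ASCII: 128 code points
print(words_per_minute(50))    # about 66.7, matching "approximately 66"
```

Baudot stretched its 32 code points by using letter/figure shift characters, which is why a seven-bit code with 128 points could drop the shifting and still cover both cases.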
In particular, ASCII supported upper and lower case whereas Baudot was upper case only. === Decline === Telegraph use began to permanently decline around 1920.: 248  The decline began with the growth of the use of the telephone.: 253  Ironically, the invention of the telephone grew out of the development of the harmonic telegraph, a device which was supposed to increase the efficiency of telegraph transmission and improve the profits of telegraph companies. Western Union gave up its patent battle with Alexander Graham Bell because it believed the telephone was not a threat to its telegraph business. The Bell Telephone Company was formed in 1877 and had 230 subscribers which grew to 30,000 by 1880. By 1886 there were a quarter of a million phones worldwide,: 276–277  and nearly 2 million by 1900.: 204  The decline was briefly postponed by the rise of special occasion congratulatory telegrams. Traffic continued to grow between 1867 and 1893 despite the introduction of the telephone in this period,: 274  but by 1900 the telegraph was definitely in decline.: 277  There was a brief resurgence in telegraphy during World War I but the decline continued as the world entered the Great Depression years of the 1930s.: 277  After the Second World War new technology improved communication in the telegraph industry. Telegraph lines continued to be an important means of distributing news feeds from news agencies by teleprinter machine until the rise of the internet in the 1990s. For Western Union, one service remained highly profitable—the wire transfer of money. This service kept Western Union in business long after the telegraph had ceased to be important.: 277  In the modern era, the telegraph that began in 1837 has been gradually replaced by digital data transmission based on computer information systems. == Social implications == Optical telegraph lines were installed by governments, often for a military purpose, and reserved for official use only. 
In many countries, this situation continued after the introduction of the electric telegraph. Starting in Germany and the UK, electric telegraph lines were installed by railway companies. Railway use quickly led to private telegraph companies in the UK and the US offering a telegraph service to the public using telegraph along railway lines. The availability of this new form of communication brought on widespread social and economic changes. The electric telegraph freed communication from the time constraints of postal mail and revolutionized the global economy and society. By the end of the 19th century, the telegraph was becoming an increasingly common medium of communication for ordinary people. The telegraph isolated the message (information) from the physical movement of objects or the process. There was some fear of the new technology. According to author Allan J. Kimmel, some people "feared that the telegraph would erode the quality of public discourse through the transmission of irrelevant, context-free information." Henry David Thoreau thought of the Transatlantic cable "...perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough." Kimmel says these fears anticipate many of the characteristics of the modern internet age. Initially, the telegraph was expensive, but it had an enormous effect on three industries: finance, newspapers, and railways. Telegraphy facilitated the growth of organizations "in the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms". In the US, there were 200 to 300 stock exchanges before the telegraph, but most of these were unnecessary and unprofitable once the telegraph made financial transactions at a distance easy and drove down transaction costs.: 274–75  This immense growth in the business sectors influenced society to embrace the use of telegrams once the cost had fallen. 
Worldwide telegraphy changed the gathering of information for news reporting. Journalists were using the telegraph for war reporting as early as 1846 when the Mexican–American War broke out. News agencies were formed, such as the Associated Press, for the purpose of reporting news by telegraph.: 274–75  Messages and information would now travel far and wide, and the telegraph demanded a language "stripped of the local, the regional; and colloquial", to better facilitate a worldwide media language. Media language had to be standardized, which led to the gradual disappearance of different forms of speech and styles of journalism and storytelling. The spread of the railways created a need for an accurate standard time to replace local standards based on local noon. The means of achieving this synchronisation was the telegraph. This emphasis on precise time has led to major societal changes such as the concept of the time value of money.: 273–74  During the telegraph era there was widespread employment of women in telegraphy. The shortage of men to work as telegraph operators in the American Civil War opened up to women the opportunity of a well-paid, skilled job.: 274  In the UK, there was widespread employment of women as telegraph operators even earlier – from the 1850s by all the major companies. The attraction of women for the telegraph companies was that they could pay them less than men. Nevertheless, the jobs were popular with women for the same reason as in the US; most other work available for women was very poorly paid.: 77 : 85  The economic impact of the telegraph was not much studied by economic historians until parallels started to be drawn with the rise of the internet. In fact, the electric telegraph was as important as the invention of printing in this respect. According to economist Ronnie J. Phillips, the reason for this may be that institutional economists paid more attention to advances that required greater capital investment. 
The investment required to build railways, for instance, is orders of magnitude greater than that for the telegraph.: 269–70  == In popular culture == The optical telegraph was quickly forgotten once it went out of service. While it was in operation, it was very familiar to the public across Europe. Examples appear in many paintings of the period. Poems include "Le Télégraphe" by Victor Hugo, and the collection Telegrafen: Optisk kalender för 1858 by Elias Sehlstedt is dedicated to the telegraph. In novels, the telegraph is a major component in Lucien Leuwen by Stendhal, and it features in The Count of Monte Cristo, by Alexandre Dumas.: vii–ix  Joseph Chudy's 1796 opera, Der Telegraph oder die Fernschreibmaschine, was written to publicise Chudy's telegraph (a binary code with five lamps) when it became clear that Chappe's design was being taken up.: 42–43  Rudyard Kipling wrote a poem in praise of submarine telegraph cables; "And a new Word runs between: whispering, 'Let us be one!'" Kipling's poem represented a widespread idea in the late nineteenth century that international telegraphy (and new technology in general) would bring peace and mutual understanding to the world. When a submarine telegraph cable first connected America and Britain, the New York Post declared: It is the harbinger of an age when international difficulties will not have time to ripen into bloody results, and when, in spite of the fatuity and perverseness of rulers, war will be impossible. === Newspaper names === Numerous newspapers and news outlets in various countries, such as The Daily Telegraph in Britain, The Telegraph in India, De Telegraaf in the Netherlands, and the Jewish Telegraphic Agency in the US, were given names which include the word "telegraph" due to their having received news by means of electric telegraphy. Some of these names are retained even though different means of news acquisition are now used. 
== See also == Familygram First transcontinental telegraph Globotype Radiogram Telecommunications == References == == Further reading == == External links == "Telegraph". Encyclopædia Britannica (11th ed.). 1911. Telegraph at the Encyclopædia Britannica The Porthcurno Telegraph Museum (Archived 27 September 2013 at the Wayback Machine)—The biggest telegraph station in the world, now a museum Distant Writing—The History of the Telegraph Companies in Britain between 1838 and 1868 Western Union Telegraph Company Records, 1820–1995—Archives Center, National Museum of American History, Smithsonian Institution. Early telegraphy and fax engineering, still operable in a German computer museum (Archived 20 April 2012 at the Wayback Machine) "Telegram Falls Silent Stop Era Ends Stop", The New York Times, 6 February 2006 International Facilities of the American Carriers—an overview of the U.S. international cable network in 1950 Elizabeth Bruton: "Communication Technology", in the 1914-1918-online. International Encyclopedia of the First World War
Wikipedia/Telegraph
A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. GPUs were later found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields including artificial intelligence (AI) where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training of neural networks and cryptocurrency mining. == History == === 1970s === Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito. The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer). 
6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU. === 1980s === The NEC μPD7220 was the first implementation of a personal computer graphics display processor as a single large-scale integration (LSI) integrated circuit chip. This enabled the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology. It became the best-known GPU until the mid-1980s. It was the first fully integrated VLSI (very large-scale integration) metal–oxide–semiconductor (NMOS) graphics display processor for PCs, supported up to 1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units. The Williams Electronics arcade games Robotron 2084, Joust, Sinistar, and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps. In 1984, Hitachi released the ARTC HD63484, the first major CMOS graphics processor for personal computers. The ARTC could display up to 4K resolution when in monochrome mode. It was used in a number of graphics cards and terminals during the late 1980s. In 1985, the Amiga was released with a custom graphics chip including a blitter for bitmap manipulation, line drawing, and area fill. It also included a coprocessor with its own simple instruction set, that was capable of manipulating graphics hardware registers in sync with the video beam (e.g. for per-scanline palette switches, sprite multiplexing, and hardware windowing), or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first fully programmable graphics processor. It could run general-purpose code but also had a graphics-oriented instruction set. 
During 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture ("TIGA") Windows accelerator cards. In 1987, the IBM 8514 graphics system was released. It was one of the first video cards for IBM PC compatibles that implemented fixed-function 2D primitives in electronic hardware. Sharp's X68000, released in 1987, used a custom graphics chipset with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields. It served as a development machine for Capcom's CP System arcade board. Fujitsu's FM Towns computer, released in 1989, had support for a 16,777,216 color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. IBM introduced its proprietary Video Graphics Array (VGA) display standard in 1987, with a maximum resolution of 640×480 pixels. In November 1988, NEC Home Electronics announced its creation of the Video Electronics Standards Association (VESA) to develop and promote a Super VGA (SVGA) computer display standard as a successor to VGA. Super VGA enabled graphics display resolutions up to 800×600 pixels, a 56% increase. === 1990s === In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised. The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. Fixed-function Windows accelerators surpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market. Throughout the 1990s, 2D GUI acceleration evolved. As manufacturing capabilities improved, so did the level of integration of graphics chips. 
Additional application programming interfaces (APIs) arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x, and their later DirectDraw interface for hardware acceleration of 2D games in Windows 95 and later. In the early- and mid-1990s, real-time 3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22, and Sega Model 2, and the fifth-generation video game consoles such as the Saturn, PlayStation, and Nintendo 64. Arcade systems such as the Sega Model 2 and SGI Onyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before appearing in consumer graphics cards. Another early example is the Super FX chip, a RISC-based on-cartridge graphics chip used in some SNES games, notably Doom and Star Fox. Some systems used DSPs to accelerate transformations. Fujitsu, which worked on the Sega Model 2 arcade system, began working on integrating T&L into a single LSI solution for use in home computers in 1995; the Fujitsu Pinolite, the first 3D geometry processor for personal computers, released in 1997. The first hardware T&L GPU on home video game consoles was the Nintendo 64's Reality Coprocessor, released in 1996. In 1997, Mitsubishi released the 3Dpro/2MP, a GPU capable of transformation and lighting, for workstations and Windows NT desktops; ATi used it for its FireGL 4000 graphics card, released in 1997. The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994. In the PC world, notable failed attempts for low-cost 3D graphics chips included the S3 ViRGE, ATI Rage, and Matrox Mystique. 
These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D graphical user interface (GUI) acceleration entirely) such as the PowerVR and the 3dfx Voodoo. However, as manufacturing technology continued to progress, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were among the first to do this well. In 1997, Rendition collaborated with Hercules and Fujitsu on a "Thriller Conspiracy" project which combined a Fujitsu FXG-1 Pinolite geometry processor with a Vérité V2200 core to create a graphics card with a full T&L engine years before Nvidia's GeForce 256; this card, designed to reduce the load placed upon the system's CPU, never made it to market. Nvidia's RIVA 128 was one of the first consumer GPUs to integrate a 3D processing unit and a 2D processing unit on a single chip. OpenGL was introduced in the early 1990s by Silicon Graphics as a professional graphics API, with proprietary hardware support for 3D rasterization. In 1994, Microsoft acquired Softimage, the dominant CGI movie production tool used for early CGI movie hits like Jurassic Park, Terminator 2 and Titanic. With that deal came a strategic relationship with SGI and a commercial license of their OpenGL libraries, enabling Microsoft to port the API to the Windows NT OS but not to the upcoming release of Windows 95. Although it was little known at the time, SGI had contracted with Microsoft to transition from Unix to the forthcoming Windows NT OS; the deal, signed in 1995, was not announced publicly until 1998. In the intervening period, Microsoft worked closely with SGI to port OpenGL to Windows NT. 
In that era, OpenGL had no standard driver model under which competing hardware accelerators could compete on the basis of support for higher-level 3D texturing and lighting functionality. In 1994 Microsoft announced DirectX 1.0 and support for gaming in the forthcoming Windows 95 consumer OS. In 1995 Microsoft announced the acquisition of UK-based RenderMorphics Ltd and the Direct3D driver model for the acceleration of consumer 3D graphics. The Direct3D driver model shipped with DirectX 2.0 in 1996. It included standards and specifications for 3D chip makers to compete to support 3D texture, lighting and Z-buffering. ATI, which was later to be acquired by AMD, began development on the first Direct3D GPUs. Nvidia quickly pivoted from a failed deal with Sega in 1996 to aggressively embracing support for Direct3D. In this era Microsoft merged their internal Direct3D and OpenGL teams and worked closely with SGI to unify driver standards for both industrial and consumer 3D graphics hardware accelerators. Microsoft ran annual events for 3D chip makers called "Meltdowns" to test their 3D hardware and drivers to work both with Direct3D and OpenGL. It was during this period of strong Microsoft influence over 3D standards that 3D accelerator cards moved beyond being simple rasterizers to become more powerful general-purpose processors as support for hardware-accelerated texture mapping, lighting, Z-buffering and compute created the modern GPU. During this period the same Microsoft team responsible for Direct3D and OpenGL driver standardization introduced their own Microsoft 3D chip design called Talisman. Details of this era are documented extensively in the books "Game of X" v.1 and v.2 by Rusel DeMaria, "Renegades of the Empire" by Mike Drummond, "Opening the Xbox" by Dean Takahashi and "Masters of Doom" by David Kushner. The Nvidia GeForce 256 (also known as NV10) was the first consumer-level card with hardware-accelerated T&L. 
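The "transform" half of the hardware T&L described above amounts to multiplying each vertex by a 4×4 matrix and dividing by the homogeneous coordinate. A minimal sketch in plain Python (illustrative only; real GPUs run this in fixed-function or shader hardware, and the matrix here is an arbitrary example):

```python
# Sketch of the "transform" stage of a T&L pipeline: apply a 4x4 matrix to
# each homogeneous vertex, then perform the perspective divide.

def transform_vertex(m, v):
    """Apply a 4x4 row-major matrix m to a homogeneous vertex v = (x, y, z, w)."""
    out = [sum(m[row][k] * v[k] for k in range(4)) for row in range(4)]
    w = out[3]
    # Perspective divide maps clip space to normalized device coordinates.
    return (out[0] / w, out[1] / w, out[2] / w)

# Example matrix: translate geometry 2 units along +x.
translate_x2 = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

print(transform_vertex(translate_x2, (1.0, 1.0, 0.0, 1.0)))  # (3.0, 1.0, 0.0)
```

Hardware T&L moved exactly this per-vertex arithmetic, previously done on the CPU, onto the graphics chip.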
While the OpenGL API provided software support for texture mapping and lighting, the first 3D hardware acceleration for these features arrived with the first Direct3D-accelerated consumer GPUs. === 2000s === Nvidia released the GeForce 256, marketed as the world's first GPU, integrating transform and lighting engines for advanced 3D graphics rendering. Nvidia was first to produce a chip capable of programmable shading: the GeForce 3. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. Used in the Xbox console, this chip competed with the one in the PlayStation 2, which used a custom vector unit for hardware-accelerated vertex processing (commonly referred to as VU0/VU1). The earliest incarnations of shader execution engines used in Xbox were not general-purpose and could not execute arbitrary pixel code. Vertices and pixels were processed by different units, each with its own resources, with pixel shaders having tighter constraints (because they execute at higher frequencies than vertices). Pixel shading engines were more akin to a highly customizable function block and did not "run" a program. Many of these disparities between vertex and pixel shading were not addressed until the Unified Shader Model. In October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. Pixel shading is often used for bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded. With the introduction of the Nvidia GeForce 8 series and new generic stream processing units, GPUs became more generalized computing devices. 
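The programmable-shading model described above can be pictured as one small function invoked once per pixel, with no dependence between invocations. A minimal Python sketch of that idea (the gradient shader and framebuffer dimensions are made up for illustration; a real pixel shader runs on GPU hardware, not in Python):

```python
# Conceptual model of programmable pixel shading: the same short "shader"
# function runs independently for every pixel, which is what makes the
# workload embarrassingly parallel on GPU hardware.

WIDTH, HEIGHT = 4, 4

def gradient_shader(x, y):
    """Return an (r, g, b) color from the pixel's normalized coordinates."""
    u, v = x / (WIDTH - 1), y / (HEIGHT - 1)
    return (int(255 * u), int(255 * v), 128)

# "Rasterize": invoke the shader once per pixel to fill the framebuffer.
framebuffer = [[gradient_shader(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(framebuffer[0][0], framebuffer[3][3])  # (0, 0, 128) (255, 255, 128)
```

Because no invocation reads another's output, a GPU is free to run thousands of such shader instances concurrently.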
Parallel GPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU for general purpose computing on GPU, has found applications in fields as diverse as machine learning, oil exploration, scientific image processing, linear algebra, statistics, 3D reconstruction, and stock options pricing. GPGPU was the precursor to what is now called a compute shader (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader. This entails some overheads since units like the scan converter are involved where they are not needed (nor are triangle manipulations even a concern—except to invoke the pixel shader). Nvidia's CUDA platform, first introduced in 2007, was the earliest widely adopted programming model for GPU computing. OpenCL is an open standard defined by the Khronos Group that allows for the development of code for both GPUs and CPUs with an emphasis on portability. OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report in 2011 by Evans Data, OpenCL had become the second most popular HPC tool. === 2010s === In 2010, Nvidia partnered with Audi to power their cars' dashboards, using the Tegra GPU to provide increased functionality to cars' navigation and entertainment systems. Advances in GPU technology in cars helped advance self-driving technology. AMD's Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M Series discrete GPUs for mobile devices. The Kepler line of graphics cards by Nvidia was released in 2012 and was used in Nvidia's 600 and 700 series cards. A feature in this GPU microarchitecture included GPU boost, a technology that adjusts the clock-speed of a video card to increase or decrease it according to its power draw. The Kepler microarchitecture was manufactured on TSMC's 28 nm process. 
The PS4 and Xbox One were released in 2013; they both use GPUs based on AMD's Radeon HD 7850 and 7790. Nvidia's Kepler line of GPUs was followed by the Maxwell line, manufactured on the same process. Nvidia's 28 nm chips were manufactured by TSMC in Taiwan using the 28 nm process. Compared to the 40 nm technology from the past, this manufacturing process allowed a 20 percent boost in performance while drawing less power. Virtual reality headsets have high system requirements; manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release. Cards based on the Pascal microarchitecture were released in 2016. The GeForce 10 series of cards are of this generation of graphics cards. They are made using the 16 nm manufacturing process which improves upon previous microarchitectures. Nvidia released one non-consumer card under the new Volta architecture, the Titan V. Changes from the Titan XP, Pascal's high-end card, include an increase in the number of CUDA cores, the addition of tensor cores, and HBM2. Tensor cores are designed for deep learning, while high-bandwidth memory is on-die, stacked, lower-clocked memory that offers an extremely wide memory bus. To emphasize that the Titan V is not a gaming card, Nvidia removed the "GeForce GTX" suffix it adds to consumer gaming cards. In 2018, Nvidia launched the RTX 20 series GPUs that added ray-tracing cores to GPUs, improving their performance on lighting effects. Polaris 11 and Polaris 10 GPUs from AMD are fabricated by a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards. AMD also released the Vega GPU series for the high end market as a competitor to Nvidia's high end Pascal cards, also featuring HBM2 like the Titan V. In 2019, AMD released the successor to their Graphics Core Next (GCN) microarchitecture/instruction set. Dubbed RDNA, the first product featuring it was the Radeon RX 5000 series of video cards. 
The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled the Radeon RX 6000 series, its RDNA 2 graphics cards with support for hardware-accelerated ray tracing. The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT. The RX 6700 XT, which is based on Navi 22, was launched in early 2021. The PlayStation 5 and Xbox Series X and Series S were released in 2020; they both use GPUs based on the RDNA 2 microarchitecture with incremental improvements and different GPU configurations in each system's implementation. Intel first entered the GPU market in the late 1990s, but produced lackluster 3D accelerators compared to the competition at the time. Rather than attempting to compete with the high-end manufacturers Nvidia and ATI/AMD, they began integrating Intel Graphics Technology GPUs into motherboard chipsets, beginning with the Intel 810 for the Pentium III, and later into CPUs. They began with the Intel Atom 'Pineview' laptop processor in 2009, continuing in 2010 with desktop processors in the first generation of the Intel Core line and with contemporary Pentiums and Celerons. This resulted in a large nominal market share, as the majority of computers with an Intel CPU also featured this embedded graphics processor. These generally lagged behind discrete processors in performance. Intel re-entered the discrete GPU market in 2022 with its Arc series, which competed with the then-current GeForce 30 series and Radeon 6000 series cards at competitive prices. === 2020s === In the 2020s, GPUs have been increasingly used for calculations involving embarrassingly parallel problems, such as training of neural networks on enormous datasets that are needed for large language models. 
Specialized processing cores on some modern workstation GPUs are dedicated for deep learning since they have significant FLOPS performance increases, using 4×4 matrix multiply-accumulate operations, resulting in hardware performance up to 128 TFLOPS in some applications. These tensor cores are expected to appear in consumer cards, as well. == GPU companies == Many companies have produced GPUs under a number of brand names. In 2009, Intel, Nvidia, and AMD/ATI were the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition, Matrox produces GPUs. Chinese companies such as Jingjia Micro have also produced GPUs for the domestic market although in terms of worldwide sales, they still lag behind market leaders. Modern smartphones use mostly Adreno GPUs from Qualcomm, PowerVR GPUs from Imagination Technologies, and Mali GPUs from ARM. == Computational functions == Modern GPUs have traditionally used most of their transistors to do calculations related to 3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000–HD7000 lack dedicated 2D acceleration; it is emulated by 3D hardware. GPUs were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons. Later, dedicated hardware was added to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations that are supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. 
Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in the semiconductor device fabrication, the clock signal frequency, and the number and size of various on-chip memory caches. Performance is also affected by the number of streaming multiprocessors (SM) for Nvidia GPUs, or compute units (CU) for AMD GPUs, or Xe cores for Intel discrete GPUs, which describe the number of on-silicon processor core units within the GPU chip that perform the core calculations, typically working in parallel with other SM/CUs on the GPU. GPU performance is typically measured in floating point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate. === GPU accelerated video decoding and encoding === Most GPUs made since 1995 support the YUV color space and hardware overlays, important for digital video playback, and many GPUs made since 2000 also support MPEG primitives such as motion compensation and iDCT. This hardware-accelerated video decoding, in which portions of the video decoding process and video post-processing are offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding", or "GPU hardware assisted video decoding". Recent graphics cards decode high-definition video on the card, offloading the central processing unit. The most common APIs for GPU accelerated video decoding are DxVA for Microsoft Windows operating systems and VDPAU, VAAPI, XvMC, and XvBA for Linux-based and UNIX-like operating systems. 
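The quoted TFLOPS figures mentioned above are theoretical peaks derived from simple arithmetic: shader cores × clock × operations per core per cycle (conventionally 2, counting a fused multiply-add as two FLOPs). The core count and clock below are hypothetical, not any specific product's specification:

```python
# Back-of-envelope theoretical peak throughput, the basis of quoted TFLOPS
# figures. A fused multiply-add counts as 2 floating point operations.

def peak_tflops(shader_cores, clock_ghz, flops_per_cycle=2):
    """Theoretical peak in TFLOPS = cores x clock (Hz) x FLOPs/cycle / 1e12."""
    return shader_cores * clock_ghz * 1e9 * flops_per_cycle / 1e12

# Hypothetical mid-range part: 2560 shader cores at 1.8 GHz.
print(peak_tflops(2560, 1.8))  # about 9.2 TFLOPS
```

As the article notes, this is only an upper bound; memory bandwidth, occupancy, and other factors determine the rate actually achieved.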
All except XvMC are capable of decoding videos encoded with MPEG-1, MPEG-2, MPEG-4 ASP (MPEG-4 Part 2), MPEG-4 AVC (H.264 / DivX 6), VC-1, WMV3/WMV9, Xvid / OpenDivX (DivX 4), and DivX 5 codecs, while XvMC is only capable of decoding MPEG-1 and MPEG-2. There are several dedicated hardware video decoding and encoding solutions. ==== Video decoding processes that can be accelerated ==== Video decoding processes that can be accelerated by modern GPU hardware are: Motion compensation (mocomp) Inverse discrete cosine transform (iDCT) Inverse telecine 3:2 and 2:2 pull-down correction Inverse modified discrete cosine transform (iMDCT) In-loop deblocking filter Intra-frame prediction Inverse quantization (IQ) Variable-length decoding (VLD), more commonly known as slice-level acceleration Spatial-temporal deinterlacing and automatic interlace/progressive source detection Bitstream processing (Context-adaptive variable-length coding/Context-adaptive binary arithmetic coding) and perfect pixel positioning These operations also have applications in video editing, encoding, and transcoding. === 2D graphics APIs === An earlier GPU may support one or more 2D graphics API for 2D acceleration, such as GDI and DirectDraw. === 3D graphics APIs === A GPU can support one or more 3D graphics API, such as DirectX, Metal, OpenGL, OpenGL ES, Vulkan. == GPU forms == === Terminology === In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output. In 1994, Sony used the term (now standing for graphics processing unit) in reference to the PlayStation console's Toshiba-designed Sony GPU. The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU". It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". 
Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002. The AMD Alveo MA35D features dual VPUs, each using the 5 nm process, in 2023. In personal computers, there are two main forms of GPUs. Each has many synonyms: Dedicated graphics, also called discrete graphics. Integrated graphics, also called shared graphics solutions, integrated graphics processors (IGP), or unified memory architecture (UMA). ==== Usage-specific GPU ==== Most GPUs are designed for a specific use, real-time 3D graphics, or other mass calculations: Gaming GeForce GTX, RTX Nvidia Titan Radeon HD, R5, R7, R9, RX, Vega and Navi series Radeon VII Intel Arc Cloud Gaming Nvidia GRID Radeon Sky Workstation Nvidia Quadro Nvidia RTX AMD FirePro AMD Radeon Pro Intel Arc Pro Cloud Workstation Nvidia Tesla AMD FireStream Artificial Intelligence training and Cloud Nvidia Tesla AMD Radeon Instinct Automated/Driverless car Nvidia Drive PX === Dedicated graphics processing unit === Dedicated graphics processing units use RAM that is dedicated to the GPU rather than relying on the computer's main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card (see GDDR). Sometimes systems with dedicated discrete GPUs were called "DIS" systems as opposed to "UMA" systems (see next section). Dedicated GPUs are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts. 
Graphics cards with dedicated GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP). They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available. Technologies such as Scan-Line Interleave by 3dfx, SLI and NVLink by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them. Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video (processing multiple videos at once) and 3D rendering, for VFX, GPGPU workloads and for simulations, and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs. === Integrated graphics processing unit === Integrated graphics processing units (IGPU), integrated graphics, shared graphics solutions, integrated graphics processors (IGP), or unified memory architectures (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset, or on the same die (integrated circuit) with the CPU (like AMD APU or Intel HD Graphics). On certain motherboards, AMD's IGPs can use dedicated sideport memory: a separate fixed block of high performance memory that is dedicated for use by the GPU. As of early 2007 computers with integrated graphics account for about 90% of all PC shipments. They are less costly to implement than dedicated graphics processing, but tend to be less capable. 
Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004. However, modern integrated graphics processors such as AMD Accelerated Processing Unit and Intel Graphics Technology (HD, UHD, Iris, Iris Pro, Iris Plus, and Xe-LP) can handle 2D graphics or low-stress 3D graphics. Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between its VRAM and GPU core. This memory bus bandwidth can limit the performance of the GPU, though multi-channel memory can mitigate this deficiency. Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it. On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics, modern Intel processors with integrated graphics, Apple processors, the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on memory needs (without needing a large static split of the RAM) and thanks to zero copy transfers, removes the need for either copying data over a bus between physically separate RAM pools or copying between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data. === Hybrid graphics processing === Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI's HyperMemory and Nvidia's TurboCache. 
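The bandwidth gap between shared system RAM and dedicated VRAM described above follows from a simple formula: bus width (bits) × transfer rate (GT/s) ÷ 8 gives GB/s. The configurations below are hypothetical examples chosen only to illustrate the scale of the difference:

```python
# Peak memory bandwidth = bus width (bits) x transfer rate (GT/s) / 8 bits/byte.
# Hypothetical figures showing why an IGP sharing system RAM trails a discrete
# card's dedicated VRAM.

def bandwidth_gbs(bus_width_bits, transfer_gtps):
    """Peak bandwidth in GB/s for a given bus width and per-pin transfer rate."""
    return bus_width_bits * transfer_gtps / 8

igp_shared_dram = bandwidth_gbs(128, 4.8)   # e.g. dual-channel DDR5-4800-class system RAM
discrete_vram   = bandwidth_gbs(256, 16.0)  # e.g. a 256-bit GDDR6 bus at 16 GT/s

print(igp_shared_dram, discrete_vram)  # roughly 77 vs 512 GB/s
```

Multi-channel memory widens the effective bus, which is why the article notes it can partially mitigate the deficiency.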
Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache, to make up for the high latency of the system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory. === Stream processing and general purpose GPUs (GPGPU) === It is common to use a general purpose graphics processing unit (GPGPU) as a modified form of stream processor (or a vector processor), running compute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics processing unit" above) GPU designers, AMD and Nvidia, are pursuing this approach with an array of applications. Both Nvidia and AMD teamed with Stanford University to create a GPU-based client for the Folding@home distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications. GPGPUs can be used for many types of embarrassingly parallel tasks including ray tracing. They are generally suited to high-throughput computations that exhibit data-parallelism to exploit the wide vector width SIMD architecture of the GPU. GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration. GPUs support API extensions to the C programming language such as OpenCL and OpenMP. 
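The compute-kernel model mentioned above (the basis of CUDA and OpenCL) treats a kernel as one function applied independently to every element of a buffer. A serial Python emulation of the classic SAXPY kernel makes the idea concrete (illustrative only; the function and data here are made up, and a GPU would run the instances in parallel):

```python
# Sketch of the stream-processing model: a "compute kernel" is a function
# applied independently to each element of a buffer. Emulated serially here;
# a GPU launches one instance per element and runs many at once.

def saxpy_kernel(i, a, x, y):
    """One kernel instance: compute a * x[i] + y[i] (the classic SAXPY)."""
    return a * x[i] + y[i]

n = 8
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [saxpy_kernel(i, 2.0, x, y) for i in range(n)]  # "launch" n instances
print(out)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

Because each instance touches only its own index, the same code maps directly onto the wide SIMD hardware the article describes.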
Furthermore, each GPU vendor introduced its own API which only works with their cards: AMD APP SDK from AMD, and CUDA from Nvidia. These allow functions called compute kernels to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API. Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular. Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's SIMD architecture. However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there. Acceleration can then be obtained by either interpreting multiple programs simultaneously, simultaneously running multiple example problems, or combinations of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs. === External GPU (eGPU) === An external GPU is a graphics processor located outside of the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. 
On-board graphics chips are often not powerful enough for playing video games, or for other graphically intensive tasks, such as editing video or 3D animation/rendering. Therefore, it is desirable to attach a GPU to some external bus of a notebook. PCI Express is the only bus used for this purpose. The port may be, for example, an ExpressCard or mPCIe port (PCIe ×1, up to 5 or 2.5 Gbit/s respectively), a Thunderbolt 1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively), a USB4 port with Thunderbolt compatibility, or an OCuLink port. Those ports are only available on certain notebook systems. eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts. == Energy efficiency == == Sales == In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of PC GPUs totaled around 75.5 million units, down 19% year-over-year. == See also == === Hardware === List of AMD graphics processing units List of Nvidia graphics processing units List of Intel graphics processing units List of discrete and integrated graphics processing units Intel GMA Larrabee Nvidia PureVideo – the bit-stream technology from Nvidia used in their graphics chips to accelerate video decoding on hardware GPU with DXVA. SoC UVD (Unified Video Decoder) – the video decoding bit-stream technology from ATI to support hardware (GPU) decode with DXVA === APIs === === Applications === GPU cluster Mathematica – includes built-in support for CUDA and OpenCL GPU execution Molecular modeling on GPU Deeplearning4j – open-source, distributed deep learning for Java == References == == Sources == Peddie, Jon (1 January 2023). The History of the GPU – New Developments. Springer Nature. ISBN 978-3-03-114047-1. OCLC 1356877844. == External links ==
Renewable energy (also called green energy) is energy made from renewable natural resources that are replenished on a human timescale. The most widely used renewable energy types are solar energy, wind power, and hydropower. Bioenergy and geothermal power are also significant in some countries. Some also consider nuclear power a renewable power source, although this is controversial, as nuclear energy requires mining uranium, a nonrenewable resource. Renewable energy installations can be large or small and are suited for both urban and rural areas. Renewable energy is often deployed together with further electrification. This has several benefits: electricity can move heat and vehicles efficiently and is clean at the point of consumption. Variable renewable energy sources are those that have a fluctuating nature, such as wind power and solar power. In contrast, controllable renewable energy sources include dammed hydroelectricity, bioenergy, and geothermal power. Renewable energy systems have rapidly become more efficient and cheaper over the past 30 years. A large majority of worldwide newly installed electricity capacity is now renewable. Renewable energy sources, such as solar and wind power, have seen significant cost reductions over the past decade, making them more competitive with traditional fossil fuels. In most countries, photovoltaic solar or onshore wind are the cheapest new-build electricity. From 2011 to 2021, renewable energy grew from 20% to 28% of global electricity supply. Power from the sun and wind accounted for most of this increase, growing from a combined 2% to 10%. Use of fossil energy shrank from 68% to 62%. In 2024, renewables accounted for over 30% of global electricity generation and are projected to reach over 45% by 2030. Many countries already have renewables contributing more than 20% of their total energy supply, with some generating over half or even all their electricity from renewable sources.
The main motivation to use renewable energy instead of fossil fuels is to slow and eventually stop climate change, which is mostly caused by their greenhouse gas emissions. In general, renewable energy sources pollute much less than fossil fuels. The International Energy Agency estimates that to achieve net zero emissions by 2050, 90% of global electricity will need to be generated by renewables. Renewables also cause much less air pollution than fossil fuels, improving public health, and are less noisy. The deployment of renewable energy still faces obstacles, especially fossil fuel subsidies, lobbying by incumbent power providers, and local opposition to the use of land for renewable installations. Like all mining, the extraction of minerals required for many renewable energy technologies also results in environmental damage. In addition, although most renewable energy sources are sustainable, some are not. == Overview == === Definition === Renewable energy is usually understood as energy harnessed from continuously occurring natural phenomena. The International Energy Agency defines it as "energy derived from natural processes that are replenished at a faster rate than they are consumed". Solar power, wind power, hydroelectricity, geothermal energy, and biomass are widely agreed to be the main types of renewable energy. Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services. Although almost all forms of renewable energy produce far lower carbon emissions than fossil fuels, the term is not synonymous with low-carbon energy. Some non-renewable sources of energy, such as nuclear power, generate almost no emissions, while some renewable energy sources can be very carbon-intensive, such as the burning of biomass if it is not offset by planting new plants.
Renewable energy is also distinct from sustainable energy, a more abstract concept that seeks to group energy sources based on their overall permanent impact on future generations of humans. For example, biomass is often associated with unsustainable deforestation. === Role in addressing climate change === As part of the global effort to limit climate change, most countries have committed to net zero greenhouse gas emissions. In practice, this means phasing out fossil fuels and replacing them with low-emissions energy sources. This process, termed "low-carbon substitution" in contrast to transition processes that merely add new energy sources, needs to accelerate severalfold to successfully mitigate climate change. At the 2023 United Nations Climate Change Conference, around three-quarters of the world's countries set a goal of tripling renewable energy capacity by 2030. The European Union aims to generate 40% of its electricity from renewables by the same year. === Other benefits === Renewable energy is more evenly distributed around the world than fossil fuels, which are concentrated in a limited number of countries. It also brings health benefits by reducing air pollution caused by the burning of fossil fuels. The potential worldwide savings in health care costs have been estimated at trillions of dollars annually. === Intermittency === The two most important forms of renewable energy, solar and wind, are intermittent energy sources: they are not available constantly, resulting in lower capacity factors. In contrast, fossil fuel power plants, nuclear power plants and hydropower are usually able to produce precisely the amount of energy an electricity grid requires at a given time. Solar energy can only be captured during the day, and ideally in cloudless conditions. Wind power generation can vary significantly not only day-to-day, but even month-to-month.
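The capacity factor mentioned above can be made concrete with a small worked example. All figures below are illustrative assumptions, not data from this article; roughly, solar PV plants often achieve 10–25%, onshore wind 25–45%, and nuclear baseload plants over 90%.

```python
# Capacity factor: actual energy delivered over a period, divided by the
# energy the plant would deliver running at rated power the whole time.

def capacity_factor(energy_mwh, rated_mw, hours=8760):
    """Fraction of theoretical maximum output actually produced
    (8760 hours = one non-leap year)."""
    return energy_mwh / (rated_mw * hours)

# A hypothetical 100 MW solar farm delivering 175,200 MWh over a year:
cf = capacity_factor(175_200, 100)
print(f"{cf:.0%}")  # 20%
```

The same rated capacity therefore yields very different annual energy depending on the resource, which is why installed gigawatts of solar or wind are not directly comparable with gigawatts of dispatchable generation.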
This poses a challenge when transitioning away from fossil fuels: energy demand will often be higher or lower than what renewables can provide. In the medium term, this variability may require keeping some gas-fired power plants or other dispatchable generation on standby until there is enough energy storage, demand response, grid improvement, or base load power from non-intermittent sources. In the long term, energy storage is an important way of dealing with intermittency. Using diversified renewable energy sources and smart grids can also help smooth out mismatches between supply and demand. Sector coupling of the power generation sector with other sectors may increase flexibility: for example, the transport sector can be coupled by charging electric vehicles and sending electricity from vehicle to grid. Similarly, the industry sector can be coupled by hydrogen produced by electrolysis, and the buildings sector by thermal energy storage for space heating and cooling. Building overcapacity for wind and solar generation can help ensure sufficient electricity production even during poor weather. In optimal weather, it may be necessary to curtail energy generation if it is not possible to use or store excess electricity. ==== Electrical energy storage ==== Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power, solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 85% of all grid power storage. Batteries are increasingly being deployed for storage and grid ancillary services and for domestic storage. In terms of capital expenditure, green hydrogen is a more economical means of long-term renewable energy storage than pumped hydroelectricity or batteries.
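A minimal sketch of the charge/discharge logic described above, with entirely hypothetical hourly figures and an idealized lossless store, shows how storage reduces, but may not eliminate, the need for dispatchable backup:

```python
# Toy model of storage bridging variable renewable generation and demand:
# charge when generation exceeds demand, discharge when it falls short.

def dispatch(generation, demand, capacity):
    """Greedy charge/discharge of a lossless store. Returns the energy
    that must still come from dispatchable backup in each hour."""
    stored, backup = 0.0, []
    for gen, load in zip(generation, demand):
        surplus = gen - load
        if surplus >= 0:                       # charge; curtail any excess
            stored = min(capacity, stored + surplus)
            backup.append(0.0)
        else:                                  # discharge, then fall back
            draw = min(stored, -surplus)
            stored -= draw
            backup.append(-surplus - draw)
    return backup

# A windy night followed by a calm evening peak (GW, hourly, hypothetical):
gen  = [5, 6, 4, 1, 0]
load = [3, 3, 4, 4, 4]
print(dispatch(gen, load, capacity=4))  # backup still needed per hour
```

Real dispatch also accounts for round-trip losses, ramp rates, and forecast uncertainty, but the basic trade-off is the same: larger storage capacity shifts more surplus into the shortfall hours.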
==== Energy supply security ==== Two main renewable energy sources, solar power and wind power, are usually deployed in distributed generation architecture, which offers specific benefits and comes with specific risks. In the photovoltaic sector, notable risks arise from the concentration of 90% of supply chains in a single country (China). Mass-scale installation of remotely controllable photovoltaic inverters with security vulnerabilities or backdoors creates the risk of cyberattacks that could disable generation from millions of physically decentralised panels at once, removing hundreds of gigawatts of installed power from the grid in a single moment. Similar attacks have targeted wind power farms through vulnerabilities in their remote control and monitoring systems. The European NIS2 directive partially responds to these challenges by extending the scope of cybersecurity regulations to the energy generation market. == Mainstream technologies == === Solar energy === Solar power produced around 1,300 terawatt-hours (TWh) worldwide in 2022, representing 4.6% of the world's electricity. Almost all of this capacity has been added since 2010. Solar energy can be harnessed anywhere that receives sunlight; however, the amount of solar energy that can be harnessed for electricity generation is influenced by weather conditions, geographic location and time of day. There are two mainstream ways of harnessing solar energy: solar thermal, which converts solar energy into heat; and photovoltaics (PV), which converts it into electricity. PV is far more widespread, accounting for around two thirds of the global solar energy capacity as of 2022. It is also growing at a much faster rate, with 170 GW newly installed capacity in 2021, compared to 25 GW of solar thermal. Passive solar refers to a range of construction strategies and technologies that aim to optimize the distribution of solar heat in a building.
Examples include solar chimneys, orienting a building to the sun, using construction materials that can store heat, and designing spaces that naturally circulate air. From 2020 to 2022, solar technology investments almost doubled from USD 162 billion to USD 308 billion, driven by the sector's increasing maturity and cost reductions, particularly in solar photovoltaic (PV), which accounted for 90% of total investments. China and the United States were the main recipients, collectively making up about half of all solar investments since 2013. Despite reductions in Japan and India due to policy changes and COVID-19, growth in China, the United States, and a significant increase from Vietnam's feed-in tariff program offset these declines. Globally, the solar sector added 714 gigawatts (GW) of solar PV and concentrated solar power (CSP) capacity between 2013 and 2021, with a notable rise in large-scale solar heating installations in 2021, especially in China, Europe, Turkey, and Mexico. ==== Photovoltaics ==== A photovoltaic system, consisting of solar cells assembled into panels, converts light into electrical direct current via the photovoltaic effect. PV has several advantages that make it by far the fastest-growing renewable energy technology. It is cheap, low-maintenance and scalable; adding to an existing PV installation as demand arises is simple. Its main disadvantage is its poor performance in cloudy weather. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. A household's solar panels can either be used for just that household or, if connected to an electrical grid, can be aggregated with millions of others. The first utility-scale solar power plant was built in 1982 in Hesperia, California by ARCO. The plant was not profitable and was sold eight years later. However, over the following decades, PV cells became significantly more efficient and cheaper.
As a result, PV adoption has grown exponentially since 2010. Global capacity increased from 230 GW at the end of 2015 to 890 GW in 2021. PV grew fastest in China between 2016 and 2021, adding 560 GW, more than all advanced economies combined. Four of the ten biggest solar power stations are in China, including the biggest, Golmud Solar Park. Solar panels are recycled to reduce electronic waste and create a source for materials that would otherwise need to be mined, but such business is still small and work is ongoing to improve and scale up the process. ==== Solar thermal ==== Unlike photovoltaic cells that convert sunlight directly into electricity, solar thermal systems convert it into heat. They use mirrors or lenses to concentrate sunlight onto a receiver, which in turn heats a water reservoir. The heated water can then be used in homes. The advantage of solar thermal is that the heated water can be stored until it is needed, eliminating the need for a separate energy storage system. Solar thermal power can also be converted to electricity by using the steam generated from the heated water to drive a turbine connected to a generator. However, because generating electricity this way is much more expensive than photovoltaic power plants, there are very few in use today. ==== Floatovoltaics ==== Floatovoltaics, or floating solar panels, are solar panels that float on bodies of water. The approach has both advantages and drawbacks. Advantages include increased panel efficiency and the lower cost of space on water compared to land; a drawback is that floating installations can be more expensive to build. ==== Agrivoltaics ==== Agrivoltaics is the simultaneous use of land for energy production and agriculture. It likewise has advantages and drawbacks. An advantage is more efficient use of land, which leads to lower land costs.
A drawback is that the plants grown underneath must be able to grow well in shade, such as polka dot plant, pineapple sage, and begonia. Agrivoltaics not only optimizes land use and reduces costs by enabling dual revenue streams from both energy production and agriculture, but it can also help moderate temperatures beneath the panels, potentially reducing water loss and improving microclimates for crop growth. However, careful design and crop selection are crucial, as the shading effect may limit the types of plants that can thrive, necessitating the use of shade-tolerant species and innovative management practices. === Wind power === Humans have harnessed wind energy since at least 3500 BC. Until the 20th century, it was primarily used to power ships, windmills and water pumps. Today, the vast majority of wind power is used to generate electricity using wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine. Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of electricity demand, while Ireland, Portugal and Spain each met nearly 20%. Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome.
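The cube-law relationship noted above can be written out directly. The formula P = ½·ρ·A·v³·Cp is standard wind-turbine physics; the rotor size, power coefficient, and wind speeds below are illustrative assumptions (in reality Cp is capped by the Betz limit of about 0.593):

```python
# Power extracted from the wind: P = 1/2 * rho * A * v^3 * Cp,
# where rho is air density, A the swept rotor area, v the wind speed,
# and Cp the power coefficient of the turbine.
import math

def wind_power_kw(rotor_diameter_m, wind_speed_ms, cp=0.4, rho=1.225):
    """Mechanical power extracted from the wind, in kW."""
    area = math.pi * (rotor_diameter_m / 2) ** 2  # swept area, m^2
    return 0.5 * rho * area * wind_speed_ms ** 3 * cp / 1000

# Doubling the wind speed yields 2**3 = 8 times the power:
p_calm = wind_power_kw(100, 6)
p_windy = wind_power_kw(100, 12)
print(round(p_windy / p_calm))  # 8
```

This cubic dependence is why sites with only moderately stronger winds, such as offshore locations, deliver disproportionately more energy.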
This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore, and likely also industrial use of new types of vertical-axis wind turbines (VAWTs) in addition to the horizontal-axis units currently in use. As offshore wind speeds average about 90% greater than those on land, offshore resources can contribute substantially more energy than land-based turbines. Investments in wind technologies reached USD 161 billion in 2020, with onshore wind dominating at 80% of total investments from 2013 to 2022. Offshore wind investments nearly doubled to USD 41 billion between 2019 and 2020, primarily due to policy incentives in China and expansion in Europe. Global wind capacity increased by 557 GW between 2013 and 2021, with capacity additions increasing by an average of 19% each year. === Hydropower === Since water is about 800 times denser than air, even a slow-flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. Water can generate electricity with a conversion efficiency of about 90%, which is the highest rate in renewable energy. There are many forms of water energy: Historically, hydroelectric power came from constructing large hydroelectric dams and reservoirs, which are still popular in developing countries. The largest of them are the Three Gorges Dam (2003) in China and the Itaipu Dam (1984) built by Brazil and Paraguay. Small hydro systems are hydroelectric power installations that typically produce up to 50 MW of power. They are often used on small rivers or as a low-impact development on larger rivers. China is the largest producer of hydroelectricity in the world and has more than 45,000 small hydro installations. Run-of-the-river hydroelectricity plants derive energy from rivers without the creation of a large reservoir.
The water is typically conveyed along the side of the river valley (using channels, pipes or tunnels) until it is high above the valley floor, whereupon it can be allowed to fall through a penstock to drive a turbine. A run-of-river plant may still produce a large amount of electricity, such as the Chief Joseph Dam on the Columbia River in the United States. However, many run-of-the-river hydropower plants are micro hydro or pico hydro plants. Much hydropower is flexible, thus complementing wind and solar, as it is not intermittent. In 2021, world renewable hydropower capacity was 1,360 GW. Only a third of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. New hydropower projects face opposition from local communities due to their large impact, including relocation of communities and flooding of wildlife habitats and farming land. High costs, long lead times from the permitting process (including environmental and risk assessments), and a lack of environmental and social acceptance are therefore the primary challenges for new developments. It is popular to repower old dams, increasing their efficiency and capacity as well as their responsiveness on the grid. Where circumstances permit, existing dams such as the Russell Dam, built in 1985, may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Because dispatchable power is more valuable than variable renewable energy (VRE), countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries that have limited hydro. === Bioenergy === Biomass is biological material derived from living, or recently living organisms. Most commonly, it refers to plants or plant-derived materials. As an energy source, biomass can either be used directly via combustion to produce heat, or converted to a more energy-dense biofuel like ethanol.
Wood is the most significant biomass energy source as of 2012 and is usually sourced from trees cleared for silvicultural reasons or fire prevention. Municipal wood waste – for instance, construction materials or sawdust – is also often burned for energy. The biggest per-capita producers of wood-based bioenergy are heavily forested countries like Finland, Sweden, Estonia, Austria, and Denmark. Bioenergy can be environmentally destructive if old-growth forests are cleared to make way for crop production. In particular, demand for palm oil to produce biodiesel has contributed to the deforestation of tropical rainforests in Brazil and Indonesia. In addition, burning biomass still produces carbon emissions, although much less than fossil fuels (39 grams of CO2 per megajoule of energy, compared to 75 g/MJ for fossil fuels). Some biomass sources are unsustainable at current rates of exploitation (as of 2017). ==== Biofuel ==== Biofuels are primarily used in transportation, providing 3.5% of the world's transport energy demand in 2022, up from 2.7% in 2010. Biojet is expected to be important for short-term reduction of carbon dioxide emissions from long-haul flights. Aside from wood, the major sources of bioenergy are bioethanol and biodiesel. Bioethanol is usually produced by fermenting the sugar components of crops like sugarcane and maize, while biodiesel is mostly made from oils extracted from plants, such as soybean oil and corn oil. Most of the crops used to produce bioethanol and biodiesel are grown specifically for this purpose, although used cooking oil accounted for 14% of the oil used to produce biodiesel as of 2015. The biomass used to produce biofuels varies by region. Maize is the major feedstock in the United States, while sugarcane dominates in Brazil. In the European Union, where biodiesel is more common than bioethanol, rapeseed oil and palm oil are the main feedstocks.
China, although it produces comparatively much less biofuel, uses mostly corn and wheat. In many countries, biofuels are either subsidized or mandated to be included in fuel mixtures. There are many other sources of bioenergy that are more niche, or not yet viable at large scales. For instance, bioethanol could be produced from the cellulosic parts of crops, rather than only the seed as is common today. Sweet sorghum may be a promising alternative source of bioethanol, due to its tolerance of a wide range of climates. Cow dung can be converted into methane. There is also a great deal of research involving algal fuel, which is attractive because algae is a non-food resource, grows around 20 times faster than most food crops, and can be grown almost anywhere. === Geothermal energy === Geothermal energy is thermal energy (heat) extracted from the Earth's crust. It originates from several different sources, of which the most significant is slow radioactive decay of minerals contained in the Earth's interior, as well as some leftover heat from the formation of the Earth. Some of the heat is generated near the Earth's surface in the crust, but some also flows from deep within the Earth from the mantle and core. Geothermal energy extraction is viable mostly in countries located on tectonic plate edges, where the Earth's hot mantle is more exposed. As of 2023, the United States has by far the most geothermal capacity (2.7 GW, or less than 0.2% of the country's total energy capacity), followed by Indonesia and the Philippines. Global capacity in 2022 was 15 GW. Geothermal energy can be either used directly to heat homes, as is common in Iceland where almost all of its energy is renewable, or to generate electricity. Iceland is a global leader in renewable energy, relying almost entirely on its abundant geothermal and hydroelectric resources derived from volcanic activity and glaciers. 
At smaller scales, geothermal power can be generated with geothermal heat pumps, which can extract heat from ground temperatures of under 30 °C (86 °F), allowing them to be used at relatively shallow depths of a few meters. Electricity generation requires large plants and ground temperatures of at least 150 °C (302 °F). In some countries, electricity produced from geothermal energy accounts for a large portion of the total, such as Kenya (43%) and Indonesia (5%). Technical advances may eventually make geothermal power more widely available. For example, enhanced geothermal systems involve drilling around 10 kilometres (6.2 mi) into the Earth, breaking apart hot rocks and extracting the heat using water. In theory, this type of geothermal energy extraction could be done anywhere on Earth. == Emerging technologies == There are also other renewable energy technologies that are still under development, including enhanced geothermal systems, concentrated solar power, cellulosic ethanol, and marine energy. These technologies are not yet widely demonstrated or have limited commercialization. Some may have potential comparable to other renewable energy technologies, but still depend on further breakthroughs from research, development and engineering. === Enhanced geothermal systems === Enhanced geothermal systems (EGS) are a new type of geothermal power which does not require natural hot water reservoirs or steam to generate power. Most of the underground heat within drilling reach is trapped in solid rocks, not in water. EGS technologies use hydraulic fracturing to break apart these rocks and release the heat they contain, which is then harvested by pumping water into the ground. The process is sometimes known as "hot dry rock" (HDR). Unlike conventional geothermal energy extraction, EGS may be feasible anywhere in the world, depending on the cost of drilling. 
EGS projects have so far primarily been limited to demonstration plants, as the technology is capital-intensive due to the high cost of drilling. === Marine energy === Marine energy (also sometimes referred to as ocean energy) is the energy carried by ocean waves, tides, salinity, and ocean temperature differences. Technologies to harness the energy of moving water include wave power, marine current power, and tidal power. Reverse electrodialysis (RED) is a technology for generating electricity by mixing fresh water and salty sea water in large power cells. Most marine energy harvesting technologies are still at low technology readiness levels and not used at large scales. Tidal energy is generally considered the most mature, but has not seen wide deployment. The world's largest tidal power station is on Sihwa Lake, South Korea, which produces around 550 gigawatt-hours of electricity per year. === Earth infrared thermal radiation === Earth emits roughly 10¹⁷ W of infrared thermal radiation that flows toward cold outer space. Solar energy hits the surface and atmosphere of the Earth and produces heat. Using various theorized devices like the emissive energy harvester (EEH) or the thermoradiative diode, this energy flow can be converted into electricity. In theory, this technology can be used during the nighttime. === Others === ==== Algae fuels ==== Producing liquid fuels from oil-rich (fat-rich) varieties of algae is an ongoing research topic. Various microalgae grown in open or closed systems are being tried, including some systems that can be set up in brownfield and desert lands. ==== Space-based solar power ==== There have been numerous proposals for space-based solar power, in which very large satellites with photovoltaic panels would be equipped with microwave transmitters to beam power back to terrestrial receivers.
A 2024 study by the NASA Office of Science and Technology Policy examined the concept and concluded that with current and near-future technologies it would be economically uncompetitive. ==== Water vapor ==== Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative air humidity over 60%. ==== Nuclear energy ==== Breeder reactors could, in principle, depending on the fuel cycle employed, extract almost all of the energy contained in uranium or thorium, decreasing fuel requirements by a factor of 100 compared to widely used once-through light water reactors, which extract less than 1% of the energy in the actinide metal (uranium or thorium) mined from the earth. The high fuel-efficiency of breeder reactors could greatly reduce concerns about fuel supply, energy used in mining, and storage of radioactive waste. With seawater uranium extraction (currently too expensive to be economical), there is enough fuel for breeder reactors to satisfy the world's energy needs for 5 billion years at 1983's total energy consumption rate, thus making nuclear energy effectively a renewable energy. In addition to seawater, average crustal granite contains significant quantities of uranium and thorium, with which breeder reactors could supply abundant energy for the remaining lifespan of the Sun on the main sequence of stellar evolution. ==== Artificial photosynthesis ==== Artificial photosynthesis uses techniques including nanotechnology to store solar electromagnetic energy in chemical bonds by splitting water to produce hydrogen and then using carbon dioxide to make methanol.
Researchers in this field have striven to design molecular mimics of photosynthesis that use a wider region of the solar spectrum, employ catalytic systems made from abundant, inexpensive materials that are robust, readily repaired, non-toxic and stable in a variety of environmental conditions, and perform more efficiently, allowing a greater proportion of photon energy to end up in the storage compounds, i.e., carbohydrates (rather than building and sustaining living cells). However, prominent research efforts have faced hurdles: Sun Catalytix, an MIT spin-off, stopped scaling up its prototype fuel cell in 2012 because it offered few savings over other ways to make hydrogen from sunlight. Recent research emphasizes that while artificial photosynthesis shows promise in splitting water to generate hydrogen, its broader significance lies in the ability to produce dense, carbon-based solar fuels suitable for transport applications, such as aviation and long-haul shipping. These fuels, if derived from carbon dioxide and water using sunlight, could close the carbon loop and reduce reliance on fossil-based hydrocarbons. However, realizing this potential requires overcoming major technical hurdles, including the development of efficient, durable catalysts for water oxidation and CO₂ reduction, and careful attention to land use and public perception. == Market and industry trends == Most new renewables are solar, followed by wind, then hydro, then bioenergy. Investment in renewables, especially solar, tends to be more effective in creating jobs than coal, gas or oil. Worldwide, renewables employ about 12 million people as of 2020, with solar PV being the technology employing the most at almost 4 million. However, as of February 2024, the supply of trained workers for solar energy lags greatly behind demand, as universities worldwide still train more workers for fossil fuel industries than for renewable energy industries.
In 2021, China accounted for almost half of the global increase in renewable electricity. There are 3,146 gigawatts of renewable capacity installed in 135 countries, while 156 countries have laws regulating the renewable energy sector. Globally in 2020 there were over 10 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer. The clean energy sectors added about 4.7 million jobs globally between 2019 and 2022, totaling 35 million jobs by 2022.: 5  === Usage by sector or application === Some studies say that a global transition to 100% renewable energy across all sectors – power, heat, transport and industry – is feasible and economically viable. One of the efforts to decarbonize transportation is the increased use of electric vehicles (EVs). Despite that and the use of biofuels, such as biojet, less than 4% of transport energy is from renewables. Occasionally hydrogen fuel cells are used for heavy transport. Electrofuels may also play a greater role in the future in decarbonizing hard-to-abate sectors like aviation and maritime shipping. Solar water heating makes an important contribution to renewable heat in many countries, most notably in China, which now has 70% of the global total (180 GWth). Most of these systems are installed on multi-family apartment buildings and meet a portion of the hot water needs of an estimated 50–60 million households in China. Worldwide, total installed solar water heating systems meet a portion of the water heating needs of over 70 million households. Heat pumps provide both heating and cooling, and also flatten the electric demand curve, and are thus an increasing priority. Renewable thermal energy is also growing rapidly. About 10% of heating and cooling energy is from renewables. === Cost comparison === The International Renewable Energy Agency (IRENA) stated that ~86% (187 GW) of renewable capacity added in 2022 had lower costs than electricity generated from fossil fuels. 
IRENA also stated that capacity added since 2000 reduced electricity bills in 2022 by at least $520 billion, and that in non-OECD countries, the lifetime savings of 2022 capacity additions will reduce costs by up to $580 billion. === Growth of renewables === A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions causing climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies. In the decade of 2010–2019, worldwide investment in renewable energy capacity excluding large hydropower amounted to US$2.7 trillion, of which the top countries China contributed US$818 billion, the United States US$392.3 billion, Japan US$210.9 billion, Germany US$183.4 billion, and the United Kingdom US$126.5 billion. This was more than three, and possibly four, times the equivalent amount invested in the decade of 2000–2009 (no data are available for 2000–2003). As of 2022, an estimated 28% of the world's electricity was generated by renewables, up from 19% in 1990. ==== Future projections ==== A December 2022 report by the IEA forecasts that over 2022–2027, renewables will grow by almost 2,400 GW in its main scenario, equal to the entire installed power capacity of China in 2021. This is an 85% acceleration from the previous five years, and almost 30% higher than what the IEA forecast in its 2021 report, its largest ever upward revision. Renewables are set to account for over 90% of global electricity capacity expansion over the forecast period. To achieve net zero emissions by 2050, the IEA believes that 90% of global electricity generation will need to be produced from renewable sources. 
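As a quick arithmetic check, the five national contributions quoted above for 2010–2019 can be summed and compared against the US$2.7 trillion worldwide total. A minimal sketch; the variable names are illustrative:

```python
# Decade (2010–2019) investment in renewable energy capacity,
# excluding large hydropower, in US$ billion (figures from the text).
top_contributors = {
    "China": 818.0,
    "United States": 392.3,
    "Japan": 210.9,
    "Germany": 183.4,
    "United Kingdom": 126.5,
}
world_total = 2700.0  # US$2.7 trillion

top5 = sum(top_contributors.values())
print(f"Top five combined: ${top5:.1f}B "
      f"({top5 / world_total:.0%} of the worldwide total)")
```

Together the five countries account for US$1,731.1 billion, roughly 64% of the decade's total.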
In June 2022, IEA Executive Director Fatih Birol said that countries should invest more in renewables to "ease the pressure on consumers from high fossil fuel prices, make our energy systems more secure, and get the world on track to reach our climate goals." China's five-year plan to 2025 includes increasing direct heating by renewables such as geothermal and solar thermal. REPowerEU, the EU plan to escape dependence on fossil Russian gas, is expected to call for much more green hydrogen. After a transitional period, renewable energy production is expected to make up most of the world's energy production. In 2018, the risk management firm DNV GL forecast that the world's primary energy mix will be split equally between fossil and non-fossil sources by 2050. Middle Eastern nations are also planning to reduce their reliance on fossil fuels. Planned green projects are expected to contribute 26% of the region's energy supply by 2050, achieving emission reductions of 1.1 Gt CO2 per year. Major renewable energy projects in the Middle East include: Mohammed bin Rashid Al Maktoum Solar Park in Dubai, UAE Shuaibah Two (2) Solar Facility in Mecca Province, Saudi Arabia NEOM Green Hydrogen Project in NEOM, Saudi Arabia Gulf of Suez Wind Power Project in Suez, Egypt Al-Ajban Solar Park in Abu Dhabi, UAE Besides future energy carriers like hydrogen, research indicates that ammonia also has significant potential as an energy carrier. Its high energy density and ease of storage make it a promising option for large-scale energy applications. The underground storage of ammonia on a wide scale could provide a safe and efficient method for energy storage, but this avenue still requires more comprehensive investigation to address associated challenges and ensure its viability for future energy systems. 
=== Demand === In July 2014, WWF and the World Resources Institute convened a discussion among a number of major US companies that had declared their intention to increase their use of renewable energy. These discussions identified a number of "principles" which companies seeking greater access to renewable energy considered important market deliverables. These principles included choice (between suppliers and between products), cost competitiveness, longer-term fixed-price supplies, access to third-party financing vehicles, and collaboration. UK statistics released in September 2020 noted that "the proportion of demand met from renewables varies from a low of 3.4 per cent (for transport, mainly from biofuels) to highs of over 20 per cent for 'other final users', which is largely the service and commercial sectors that consume relatively large quantities of electricity, and industry". In some locations, individual households can opt to purchase renewable energy through a consumer green energy program. === Developing countries === In Kenya, the Olkaria V Geothermal Power Station is one of the largest in the world. The Grand Ethiopia Renaissance Dam project incorporates wind turbines. Once completed, Morocco's Ouarzazate Solar Power Station is projected to provide power to over a million people. == Policy == Policies to support renewable energy have been vital to its expansion. Whereas Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy. The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed in 2009, with 75 countries signing the charter of IRENA. As of April 2019, IRENA has 160 member states. 
The then United Nations Secretary-General Ban Ki-moon said that renewable energy can lift the poorest nations to new levels of prosperity, and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy. The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies. In 2017, a total of 121 countries adopted some form of renewable energy policy. National targets that year existed in 176 countries. In addition, there is a wide range of policies at the state/provincial and local levels. Some public utilities help plan or install residential energy upgrades. Many national, state and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies. Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. Global and national policies related to renewable energy can be divided based on sectors, such as agriculture, transport, buildings, industry: Climate neutrality (net zero emissions) by the year 2050 is the main goal of the European Green Deal. For the European Union to reach its target of climate neutrality, one goal is to decarbonise its energy system by aiming to achieve "net-zero greenhouse gas emissions by 2050." == Finance == The International Renewable Energy Agency's (IRENA) 2023 report on renewable energy finance highlights steady investment growth since 2018: USD 348 billion in 2020 (a 5.6% increase from 2019), USD 430 billion in 2021 (24% up from 2020), and USD 499 billion in 2022 (16% higher). This trend is driven by increasing recognition of renewable energy's role in mitigating climate change and enhancing energy security, along with investor interest in alternatives to fossil fuels. 
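The year-over-year growth rates quoted from the IRENA finance report can be reproduced directly from the annual totals. A minimal sketch; the function and variable names are illustrative:

```python
# Global renewable energy investment per the IRENA 2023 finance
# report, in US$ billion (figures from the text above).
annual_investment = {2020: 348.0, 2021: 430.0, 2022: 499.0}

def yoy_growth(prev, curr):
    """Year-over-year growth as a fraction, e.g. 0.24 for +24%."""
    return curr / prev - 1

for year in (2021, 2022):
    g = yoy_growth(annual_investment[year - 1], annual_investment[year])
    print(f"{year}: {g:+.1%}")
```

The computed rates, about 23.6% for 2021 and 16.0% for 2022, round to the 24% and 16% figures quoted in the report.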
Policies such as feed-in tariffs in China and Vietnam have significantly increased renewable adoption. Furthermore, from 2013 to 2022, installation costs for solar photovoltaic (PV), onshore wind, and offshore wind fell by 69%, 33%, and 45%, respectively, making renewables more cost-effective. Between 2013 and 2022, the renewable energy sector underwent a significant realignment of investment priorities. Investment in solar and wind energy technologies markedly increased. In contrast, other renewable technologies such as hydropower (including pumped storage hydropower), biomass, biofuels, geothermal, and marine energy experienced a substantial decrease in financial investment. Notably, from 2017 to 2022, investment in these alternative renewable technologies declined by 45%, falling from USD 35 billion to USD 17 billion. In 2023, the renewable energy sector experienced a significant surge in investments, particularly in solar and wind technologies, totaling approximately USD 200 billion—a 75% increase from the previous year. The increased investments in 2023 contributed between 1% and 4% to the GDP in key regions including the United States, China, the European Union, and India. The energy sector receives investments of approximately USD 3 trillion each year, with USD 1.9 trillion directed towards clean energy technologies and infrastructure. To meet the targets set in the Net Zero Emissions (NZE) Scenario by 2035, this investment must increase to USD 5.3 trillion per year.: 15  == Debates == === Nuclear power proposed as renewable energy === === Geopolitics === The geopolitical impact of the growing use of renewable energy is a subject of ongoing debate and research. Many fossil-fuel producing countries, such as Qatar, Russia, Saudi Arabia and Norway, are currently able to exert diplomatic or geopolitical influence as a result of their oil wealth. 
Most of these countries are expected to be among the geopolitical "losers" of the energy transition, although some, like Norway, are also significant producers and exporters of renewable energy. Fossil fuels and the infrastructure to extract them may, in the long term, become stranded assets. It has been speculated that countries dependent on fossil fuel revenue may one day find it in their interests to quickly sell off their remaining fossil fuels. Conversely, nations abundant in renewable resources, and the minerals required for renewables technology, are expected to gain influence. In particular, China has become the world's dominant manufacturer of the technology needed to produce or store renewable energy, especially solar panels, wind turbines, and lithium-ion batteries. Nations rich in solar and wind energy could become major energy exporters. Some may produce and export green hydrogen, although electricity is projected to be the dominant energy carrier in 2050, accounting for almost 50% of total energy consumption (up from 22% in 2015). Countries with large uninhabited areas such as Australia, China, and many African and Middle Eastern countries have a potential for huge installations of renewable energy. The production of renewable energy technologies requires rare-earth elements with new supply chains. Countries with already weak governments that rely on fossil fuel revenue may face even higher political instability or popular unrest. Analysts consider Nigeria, Angola, Chad, Gabon, and Sudan, all countries with a history of military coups, to be at risk of instability due to dwindling oil income. A study found that transition from fossil fuels to renewable energy systems reduces risks from mining, trade and political dependence because renewable energy systems don't need fuel – they depend on trade only for the acquisition of materials and components during construction. 
In October 2021, European Commissioner for Climate Action Frans Timmermans suggested "the best answer" to the 2021 global energy crisis is "to reduce our reliance on fossil fuels." He said those blaming the European Green Deal were doing so "for perhaps ideological reasons or sometimes economic reasons in protecting their vested interests." Some critics blamed the European Union Emissions Trading System (EU ETS) and closure of nuclear plants for contributing to the energy crisis. European Commission President Ursula von der Leyen said that Europe is "too reliant" on natural gas and too dependent on natural gas imports. According to von der Leyen, "The answer has to do with diversifying our suppliers ... and, crucially, with speeding up the transition to clean energy." === Metal and mineral extraction === The transition to renewable energy requires increased extraction of certain metals and minerals. Like all mining, this impacts the environment and can lead to environmental conflict. For example, lithium mining uses around 65% of the water in the Salar de Atacama, forcing farmers and llama herders to abandon their ancestral settlements and causing environmental degradation. In several African countries, the green energy transition has created a mining boom, causing deforestation and threatening already endangered species. Wind power requires large amounts of copper and zinc, as well as smaller amounts of the rarer metal neodymium. Solar power is less resource-intensive, but still requires significant amounts of aluminum. The expansion of electrical grids requires both copper and aluminum. Batteries, which are critical to enable storage of renewable energy, use large quantities of copper, nickel, aluminum and graphite. Demand for lithium is expected to grow 42-fold from 2020 to 2040. Demand for nickel, cobalt and graphite is expected to grow by a factor of about 20–25. 
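The projected demand multiples above imply steep compound annual growth rates. A minimal sketch of that arithmetic, taking the 20-year horizon 2020–2040 from the text (the function name is illustrative):

```python
def implied_cagr(multiple, years):
    """Compound annual growth rate implied by an overall demand multiple."""
    return multiple ** (1 / years) - 1

# Lithium: 42-fold growth over 2020-2040 (20 years).
lithium = implied_cagr(42, 20)
# Nickel, cobalt, graphite: roughly 20- to 25-fold over the same period.
upper_bound = implied_cagr(25, 20)

print(f"Lithium: {lithium:.1%} per year")
print(f"Nickel/cobalt/graphite (upper bound): {upper_bound:.1%} per year")
```

A 42-fold increase over 20 years corresponds to roughly 20.5% compound growth per year; a 25-fold increase to roughly 17.5% per year.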
For each of the most relevant minerals and metals, its mining is dominated by a single country: copper in Chile, nickel in Indonesia, rare earths in China, cobalt in the Democratic Republic of the Congo (DRC), and lithium in Australia. China dominates the processing of all of these. Recycling these metals after the devices they are embedded in are spent is essential to create a circular economy and ensure renewable energy is sustainable. By 2040, recycled copper, lithium, cobalt, and nickel from spent batteries could reduce combined primary supply requirements for these minerals by around 10%. A controversial approach is deep sea mining. Minerals can be collected from new sources like polymetallic nodules lying on the seabed. This would damage local biodiversity, but proponents point out that biomass on resource-rich seabeds is much scarcer than in the mining regions on land, which are often found in vulnerable habitats like rainforests. Due to the co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in the production of low-level radioactive waste. === Conservation areas === Installations used to produce wind, solar and hydropower are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts. More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The study's authors emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions. The key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity. 
In 2020 scientists published a world map of areas that contain renewable energy materials as well as estimations of their overlaps with "Key Biodiversity Areas", "Remaining Wilderness" and "Protected Areas". The authors assessed that careful strategic planning is needed. === Impact of climate change on renewable energy production === Climate change is making weather patterns less predictable. This can seriously hamper the use of renewable energy. For example, in 2023 hydropower production in Sudan and Namibia dropped by more than half due to a drastic reduction in rainfall; in China, India and some regions of Africa, unusual weather phenomena reduced the amount of wind energy produced; heatwaves and cloud cover reduce the effectiveness of solar panels; and melting glaciers are creating problems for hydropower. Nuclear energy is also affected, as droughts create water shortages, so nuclear power plants sometimes do not have enough water for cooling. == Society and culture == === Public support === Solar power plants may compete with arable land, while on-shore wind farms often face opposition due to aesthetic concerns and noise. Such opponents are often described as NIMBYs ("not in my back yard"). Some environmentalists are concerned about fatal collisions of birds and bats with wind turbines. Although protests against new wind farms occasionally occur around the world, regional and national surveys generally find broad support for both solar and wind power. Community-owned wind energy is sometimes proposed as a way to increase local support for wind farms. A 2011 UK Government document stated that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake." In the 2000s and early 2010s, many renewable projects in Germany, Sweden and Denmark were owned by local communities, particularly through cooperative structures. 
In the years since, more installations in Germany have been undertaken by large companies, but community ownership remains strong in Denmark. == History == Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. The oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. The use of biomass for fire did not become commonplace until many hundreds of thousands of years later. Probably the second oldest usage of renewable energy is harnessing the wind to drive ships over water. This practice can be traced back some 7,000 years, to ships in the Persian Gulf and on the Nile. From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times. Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood (a traditional biomass). In 1885, Werner Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote: In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten. Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905. Development of solar engines continued until the outbreak of World War I. 
The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race". The theory of peak oil was published in 1956. In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil and as an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly for building solar farms until 1980. New government spending, regulation and policies helped the renewables industry weather the 2008 financial crisis and the Great Recession better than many other sectors. In 2022, renewables accounted for 30% of global electricity generation, up from 21% in 1985. === Ancient historical examples === Among the most notable historical uses of renewable energy (in the form of ancient and traditional methods), the following examples can be highlighted: Windmills in Europe and Asia (such as the windmills of the Netherlands and Nashtifan in Iran). The earliest verified windmill designs date back to Iran, between 700 and 900 CE. Water mills (Ancient China and Ancient Persia). Archimedes' burning lens. Traditional cooling and ventilation systems based on windcatchers and solar updraft towers (or solar chimneys). Traditional architecture aware of natural heat transfer and natural energy transformation processes. Gravity-based fountains. Using animal biomass in ancient fuel bricks. Solar ovens and furnaces in ancient China, India, Egypt, and Persia. Solar energy applications for traditional agricultural processing (drying), engineering material properties (solar curing of pottery and ceramics), and ancient health practices (natural disinfection by solar radiation). 
Long-distance gravitational water flow control in ancient qanat technology for water transport and supply. Cargo and passenger transportation using sails on rivers, seas, and oceans. Cargo and passenger transportation based on understanding water currents in rivers, seas, and oceans. Using renewable vegetation (such as desert shrubs, agricultural waste, and pruned branches) for producing light and heat. Using renewable oils (vegetable or animal-based) for producing light and heat. Maximizing use of natural sunlight during the day and moonlight at night in building architecture for purposes such as lighting, decorative applications (e.g., reflective tilework, mirror work, and surface polishing on stone or brick), timekeeping (sundials, noon markers, prayer time indicators, seasonal change markers), etc. == See also == Distributed generation – Decentralised electricity generation Efficient energy use – Methods for higher energy efficiency Fossil fuel phase-out – Gradual reduction of the use and production of fossil fuels Thermal energy storage – Technologies to store thermal energy List of countries by renewable electricity production List of renewable energy topics by country and territory Renewable heat – Application of renewable energy == References == === Sources === "Renewable Power Generation Costs in 2019" (PDF). IRENA. 2020. "Renewable Capacity Statistics 2020". IRENA. 2020. "Renewable Energy Statistics 2020". IRENA. 2020. == External links == Energypedia – a wiki platform for collaborative knowledge exchange on renewable energy in developing countries Renewable Energy Conference – a global platform for industry professionals, academics, and policymakers to exchange knowledge and discuss advancements in renewable energy technologies, with a focus on innovation, sustainability, and future energy solutions.
Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them.: 3  Originally applied to water (hydromechanics), it found applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical, and biomedical engineering, as well as geophysics, oceanography, meteorology, astrophysics, and biology. It can be divided into fluid statics, the study of various fluids at rest; and fluid dynamics, the study of the effect of forces on fluid motion.: 3  It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than from microscopic. Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. == History == The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes investigated fluid statics and buoyancy and formulated his famous law known now as the Archimedes' principle, which was published in his work On Floating Bodies—generally considered to be the first major work on fluid mechanics. Iranian scholar Abu Rayhan Biruni and later Al-Khazini applied experimental scientific methods to fluid mechanics. 
Rapid advancement in fluid mechanics began with Leonardo da Vinci (observations and experiments), Evangelista Torricelli (invented the barometer), Isaac Newton (investigated viscosity) and Blaise Pascal (researched hydrostatics, formulated Pascal's law), and was continued by Daniel Bernoulli with the introduction of mathematical fluid dynamics in Hydrodynamica (1739). Inviscid flow was further analyzed by various mathematicians (Jean le Rond d'Alembert, Joseph Louis Lagrange, Pierre-Simon Laplace, Siméon Denis Poisson) and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Further mathematical justification was provided by Claude-Louis Navier and George Gabriel Stokes in the Navier–Stokes equations, and boundary layers were investigated (Ludwig Prandtl, Theodore von Kármán), while various scientists such as Osborne Reynolds, Andrey Kolmogorov, and Geoffrey Ingram Taylor advanced the understanding of fluid viscosity and turbulence. == Main branches == === Fluid statics === Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium; and is contrasted with fluid dynamics, the study of fluids in motion. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of water is always level whatever the shape of its container. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics (for example, in understanding plate tectonics and anomalies in the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields. 
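The everyday phenomena above follow from the hydrostatic relation p = p₀ + ρgh for an incompressible fluid at rest. A minimal sketch of that relation; the constants are standard textbook values and the function name is illustrative:

```python
RHO_WATER = 1000.0   # kg/m^3, density of fresh water
G = 9.81             # m/s^2, standard gravity
P_ATM = 101_325.0    # Pa, standard atmospheric pressure

def hydrostatic_pressure(depth_m, rho=RHO_WATER, p0=P_ATM):
    """Absolute pressure at a given depth below a free surface at rest."""
    return p0 + rho * G * depth_m

# At about 10.3 m of water depth the added (gauge) pressure equals
# one atmosphere, so the absolute pressure roughly doubles.
print(f"{hydrostatic_pressure(10.33) / P_ATM:.2f} atm")
```

The same relation, with density varying with altitude, underlies why atmospheric pressure falls as one climbs.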
=== Fluid dynamics === Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of liquids and gases in motion. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and movements on aircraft, determining the mass flow rate of petroleum through pipelines, predicting evolving weather patterns, understanding nebulae in interstellar space and modeling explosions. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics. == Relationship to continuum mechanics == Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. == Assumptions == The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations. 
Fundamentally, every fluid mechanical system is assumed to obey: Conservation of mass Conservation of energy Conservation of momentum The continuum assumption For example, the assumption that mass is conserved means that for any fixed control volume (for example, a spherical volume)—enclosed by a control surface—the rate of change of the mass contained in that volume is equal to the rate at which mass is passing through the surface from outside to inside, minus the rate at which mass is passing from inside to outside. This can be expressed as an equation in integral form over the control volume.: 74  The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under the continuum assumption, macroscopic (observed/measurable) properties such as density, pressure, temperature, and bulk velocity are taken to be well-defined at "infinitesimal" volume elements—small in comparison to the characteristic length scale of the system, but large in comparison to the molecular length scale. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows, or molecular flows on the nano scale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis, but a molecular approach (statistical mechanics) can be applied to find the fluid motion for larger Knudsen numbers. 
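The Knudsen-number criterion above amounts to a one-line check. A minimal sketch; the function names are illustrative, and the 68 nm mean free path for sea-level air is an approximate textbook value:

```python
def knudsen_number(mean_free_path, characteristic_length):
    """Kn = lambda / L: molecular mean free path over system length scale."""
    return mean_free_path / characteristic_length

def continuum_applies(mean_free_path, characteristic_length, limit=0.1):
    """Continuum hypothesis is considered valid for Kn below ~0.1."""
    return knudsen_number(mean_free_path, characteristic_length) < limit

MFP_AIR = 68e-9  # m, approximate mean free path of air at sea level

# Flow around a 1 m body: Kn ~ 7e-8, so the continuum treatment applies.
print(continuum_applies(MFP_AIR, 1.0))
# Flow through a 100 nm nanochannel: Kn = 0.68, so a molecular
# (statistical-mechanics) treatment is needed instead.
print(continuum_applies(MFP_AIR, 100e-9))
```

The first case prints True and the second False, matching the supersonic/nanoscale caveat in the text.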
== Navier–Stokes equations == The Navier–Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are differential equations that describe the force balance at a given point within a fluid. For an incompressible fluid with vector velocity field $\mathbf{u}$, the Navier–Stokes equations are $\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u}$. These differential equations are the analogues for deformable materials to Newton's equations of motion for particles – the Navier–Stokes equations describe changes in momentum (force) in response to pressure $p$ and viscosity, parameterized by the kinematic viscosity $\nu$. Occasionally, body forces, such as the gravitational force or Lorentz force, are added to the equations. Solutions of the Navier–Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved exactly in this way. These cases generally involve non-turbulent, steady flow in which the Reynolds number is small. For more complex cases, especially those involving turbulence, such as global weather systems, aerodynamics, hydrodynamics and many more, solutions of the Navier–Stokes equations can currently only be found with the help of computers. This branch of science is called computational fluid dynamics. == Inviscid and viscous fluids == An inviscid fluid has no viscosity, $\nu = 0$. In practice, an inviscid flow is an idealization, one that facilitates mathematical treatment. In fact, purely inviscid flows are only known to be realized in the case of superfluidity. 
Otherwise, fluids are generally viscous, a property that is often most important within a boundary layer near a solid surface, where the flow must match onto the no-slip condition at the solid. In some cases, the mathematics of a fluid mechanical system can be treated by assuming that the fluid outside of boundary layers is inviscid, and then matching its solution onto that for a thin laminar boundary layer. For fluid flow over a porous boundary, the fluid velocity can be discontinuous between the free fluid and the fluid in the porous medium (this is related to the Beavers and Joseph condition). Further, it is useful at low subsonic speeds to assume that gas is incompressible—that is, the density of the gas does not change even though the speed and static pressure change.

== Newtonian versus non-Newtonian fluids ==
A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that, regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved slowly through the fluid is proportional to the force applied to the object. (Compare friction.) Important fluids, like water as well as most gases, behave—to good approximation—as a Newtonian fluid under normal conditions on Earth. By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time—this behavior is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints).
There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property—for example, most fluids with long molecular chains can react in a non-Newtonian manner.

=== Equations for a Newtonian fluid ===
The constant of proportionality between the viscous stress tensor and the velocity gradient is known as the viscosity. A simple equation to describe incompressible Newtonian fluid behavior is

τ = −μ du/dn

where τ is the shear stress exerted by the fluid ("drag"), μ is the fluid viscosity—a constant of proportionality—and du/dn is the velocity gradient perpendicular to the direction of shear. For a Newtonian fluid, the viscosity, by definition, depends only on temperature, not on the forces acting upon it. If the fluid is incompressible, the equation governing the viscous stress (in Cartesian coordinates) is

τ_ij = μ (∂v_i/∂x_j + ∂v_j/∂x_i)

where τ_ij is the shear stress on the i-th face of a fluid element in the j-th direction, v_i is the velocity in the i-th direction, and x_j is the j-th direction coordinate.
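The scalar relation τ = −μ du/dn can be checked with a minimal sketch. The numbers below (water's viscosity, a plate speed, and a gap width) are illustrative assumptions chosen for a plane Couette flow, where the velocity profile is linear and the gradient is constant:

```python
def shear_stress(mu, du_dn):
    """Newtonian drag per unit area, tau = -mu * du/dn
    (sign convention as in the relation above)."""
    return -mu * du_dn

# Plane Couette flow: a plate moving at speed U over a gap h gives a linear
# velocity profile u(y) = U * y / h, so du/dy = U / h everywhere in the gap.
mu_water = 1.0e-3   # Pa·s, dynamic viscosity of water at roughly 20 °C
U, h = 0.5, 1.0e-3  # plate speed (m/s) and gap width (m), assumed values
tau = shear_stress(mu_water, U / h)
print(tau)  # -0.5 Pa: the fluid drags back against the moving plate
```

Because the relation is linear, doubling either the viscosity or the velocity gradient doubles the stress, which is exactly the defining property of a Newtonian fluid.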
If the fluid is not incompressible, the general form for the viscous stress in a Newtonian fluid is

τ_ij = μ (∂v_i/∂x_j + ∂v_j/∂x_i − (2/3) δ_ij ∇·v) + κ δ_ij ∇·v

where κ is the second viscosity coefficient (or bulk viscosity). If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types. Non-Newtonian fluids can be plastic, Bingham plastic, pseudoplastic, dilatant, thixotropic, rheopectic, or viscoelastic. In some applications, another rough broad division among fluids is made: ideal and non-ideal fluids. An ideal fluid is non-viscous and offers no resistance whatsoever to a shearing force. An ideal fluid really does not exist, but in some calculations, the assumption is justifiable. One example of this is the flow far from solid surfaces. In many cases, the viscous effects are concentrated near the solid boundaries (such as in boundary layers), while in regions of the flow field far away from the boundaries the viscous effects can be neglected and the fluid there is treated as if it were inviscid (ideal flow). When the viscosity is neglected, the term containing the viscous stress tensor τ in the Navier–Stokes equation vanishes. The equation in this reduced form is called the Euler equation.
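The compressible stress relation above can be evaluated directly from a velocity-gradient matrix. The sketch below is ours (the helper name and the numeric values are assumptions, not from the article); it checks the two limiting cases: pure shear, where the bulk-viscosity term vanishes, and uniform expansion, where only the bulk term survives.

```python
import numpy as np

def viscous_stress(grad_v, mu, kappa):
    """Compressible Newtonian stress tensor:
    tau_ij = mu*(dv_i/dx_j + dv_j/dx_i - (2/3)*delta_ij*div v)
             + kappa*delta_ij*div v,
    where grad_v[i, j] = dv_i/dx_j."""
    div_v = np.trace(grad_v)          # divergence of v
    I = np.eye(grad_v.shape[0])
    return mu * (grad_v + grad_v.T - (2.0 / 3.0) * div_v * I) + kappa * div_v * I

# Pure shear: the velocity gradient is traceless, so div v = 0, the
# bulk-viscosity term drops out, and only the symmetric shear part remains.
g_shear = np.array([[0.0, 2.0, 0.0],
                    [0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])
tau = viscous_stress(g_shear, mu=1.0e-3, kappa=0.5)

# Uniform expansion: grad_v = I, div v = 3; the mu-term cancels exactly,
# leaving tau = 3*kappa*I -- the bulk viscosity resists volume change.
tau_exp = viscous_stress(np.eye(3), mu=1.0e-3, kappa=0.5)
```

The cancellation in the expansion case is why the −(2/3)∇·v term appears: it makes the μ-part traceless, so shear viscosity and bulk viscosity act on independent parts of the deformation.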
== See also == Transport phenomena Aerodynamics Applied mechanics Bernoulli's principle Communicating vessels Computational fluid dynamics Compressor map Secondary flow Different types of boundary conditions in fluid dynamics Fluid–structure interaction Immersed boundary method Stochastic Eulerian Lagrangian method Stokesian dynamics Smoothed-particle hydrodynamics == References == == Further reading == Falkovich, Gregory (2011), Fluid Mechanics (A short course for physicists), Cambridge University Press, doi:10.1017/CBO9780511794353, ISBN 978-1-107-00575-4 Kundu, Pijush K.; Cohen, Ira M. (2008), Fluid Mechanics (4th revised ed.), Academic Press, ISBN 978-0-12-373735-9 Currie, I. G. (1974), Fundamental Mechanics of Fluids, McGraw-Hill, Inc., ISBN 0-07-015000-1 Massey, B.; Ward-Smith, J. (2005), Mechanics of Fluids (8th ed.), Taylor & Francis, ISBN 978-0-415-36206-1 Nazarenko, Sergey (2014), Fluid Dynamics via Examples and Solutions, CRC Press (Taylor & Francis group), ISBN 978-1-43-988882-7 == External links == Free Fluid Mechanics books Annual Review of Fluid Mechanics. Archived 2009-01-19 at the Wayback Machine. CFDWiki – the Computational Fluid Dynamics reference wiki. Educational Particle Image Velocimetry Archived 2017-08-03 at the Wayback Machine – resources and demonstrations
Control Data Corporation (CDC) was a mainframe and supercomputer company that in the 1960s was one of the nine major U.S. computer companies, a group that included IBM, the Burroughs Corporation, the Digital Equipment Corporation (DEC), the NCR Corporation (NCR), General Electric, Honeywell, RCA, and UNIVAC. For most of the 1960s, the strength of CDC was the work of the electrical engineer Seymour Cray, who developed a series of fast computers, then considered the fastest computing machines in the world; in the 1970s, Cray left the Control Data Corporation and founded Cray Research (CRI) to design and make supercomputers. In 1988, after much financial loss, the Control Data Corporation began withdrawing from making computers and sold the affiliated companies of CDC; in 1992, CDC established Control Data Systems, Inc. The remaining affiliate companies of CDC currently do business as the software company Dayforce. == Background: World War II – 1957 == During World War II the U.S. Navy had built up a classified team of engineers to build codebreaking machinery for both Japanese and German electro-mechanical ciphers. A number of these were produced by a team dedicated to the task working in the Washington, D.C., area. With the post-war wind-down of military spending, the Navy grew increasingly worried that this team would break up and scatter into various companies, and it started looking for ways to keep the code-breaking team together. Eventually they found their solution: John Parker, the owner of a Chase Aircraft affiliate named Northwestern Aeronautical Corporation located in St. Paul, Minnesota, was about to lose all his contracts due to the ending of the war. The Navy never told Parker exactly what the team did, since it would have taken too long to get top secret clearance. Instead they simply said the team was important, and they would be very happy if he hired them all.
Parker was obviously wary, but after several meetings with increasingly high-ranking Naval officers it became apparent that whatever it was, they were serious, and he eventually agreed to give this team a home in his military glider factory. The result was Engineering Research Associates (ERA). Formed in 1946, this contract engineering company worked on a number of seemingly unrelated projects in the early 1950s. Among these was the ERA Atlas, an early military stored program computer, the basis of the Univac 1101, which was followed by the 1102, and then the 36-bit ERA 1103 (UNIVAC 1103). The Atlas was built for the Navy, which intended to use it in their non-secret code-breaking centers. In the early 1950s a minor political debate broke out in Congress about the Navy essentially "owning" ERA, and the ensuing debates and legal wrangling left the company drained of both capital and spirit. In 1952, Parker sold ERA to Remington Rand. Although Rand kept the ERA team together and developing new products, it was most interested in ERA's magnetic drum memory systems. Rand soon merged with Sperry Corporation to become Sperry Rand. In the process of merging the companies, the ERA division was folded into Sperry's UNIVAC division. At first this did not cause too many changes at ERA, since the company was used primarily to provide engineering talent to support a variety of projects. However, one major project was moved from UNIVAC to ERA, the UNIVAC II project, which led to lengthy delays and upsets to nearly everyone involved. Since the Sperry "big company" mentality encroached on the decision-making powers of the ERA employees, a number left Sperry to form the Control Data Corp. in September 1957, setting up shop in an old warehouse across the river from Sperry's St. Paul laboratory, in Minneapolis at 501 Park Avenue. Of the members forming CDC, William Norris was the unanimous choice to become the chief executive officer of the new company. 
Seymour Cray soon became the chief designer, though at the time of CDC's formation he was still in the process of completing a prototype for the Naval Tactical Data System (NTDS), and he did not leave Sperry to join CDC until it was complete. The M-460 was Cray's first transistor computer, though the power supply rectifiers were still tubes. == Early designs and Cray's big plan == CDC started business by selling subsystems, mostly drum memory systems, to other companies. Cray joined the next year, and he immediately built a small transistor-based 6-bit machine known as the "CDC Little Character" to test his ideas on large-system design and transistor-based machines. "Little Character" was a great success. In 1959, CDC released a 48-bit transistorized version of their 1103 re-design under the name CDC 1604; the first machine was delivered to the U.S. Navy in 1960 at the Naval Postgraduate School in Monterey, California. Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-Univac 1103. A 12-bit cut-down version was also released as the CDC 160A in 1960, often considered among the first minicomputers. The 160A was particularly notable as it was built into a standard office desk, which was unusual packaging for that era. New versions of the basic 1604 architecture were rebuilt into the CDC 3000 series, which sold through the early and mid-1960s. Cray immediately turned to the design of a machine that would be the fastest (or in the terminology of the day, largest) machine in the world, setting the goal at 50 times the speed of the 1604. This required radical changes in design, and as the project "dragged on" — it had gone on for about four years by then — management got increasingly upset and demanded greater oversight. Cray in turn demanded (in 1962) to have his own remote lab, saying that otherwise he would quit.
Norris agreed, and Cray and his team moved to Cray's home town, Chippewa Falls, Wisconsin. Not even Bill Norris, the founder and president of CDC, could visit Cray's laboratory without an invitation. == Peripherals business == In the early 1960s, the corporation moved to the Highland Park neighborhood of St. Paul, where Norris lived. Through this period, Norris became increasingly worried that CDC had to develop a "critical mass" to compete with IBM. To do this, he started an aggressive program of buying up various companies to round out CDC's peripheral lineup. In general, they tried to offer a product to compete with any of IBM's, but running 10% faster and costing 10% less. This was not always easy to achieve. One of its first peripherals was a tape transport, which led to some internal wrangling as the Peripherals Equipment Division attempted to find a reasonable way to charge other divisions of the company for supplying the devices. If the division simply "gave" them away at cost as part of a system purchase, they would never have a real budget of their own. Instead, a plan was established in which it would share profits with the divisions selling its peripherals, a plan eventually used throughout the company. The tape transport was followed by the 405 Card Reader and the 415 Card Punch, followed by a series of tape drives and drum printers, all of which were designed in-house. The printer business was initially supported by Holley Carburetor in Rochester, Michigan, a suburb outside of Detroit. They later formalized this by creating a jointly held company, Holley Computer Products. Holley later sold its stake back to CDC, the remainder becoming the Rochester Division. Train printers and band printers in Rochester were developed in a joint venture with NCR and ICL, with CDC holding a controlling interest. This joint venture was known as Computer Peripherals, Inc. (CPI). In the early 1980s, it was merged with dot-matrix computer printer manufacturer Centronics.
Norris was particularly interested in breaking out of the punched card–based workflow, where IBM held a stranglehold. He eventually decided to buy Rabinow Engineering, one of the pioneers of optical character recognition (OCR) systems. The idea was to bypass the entire punched-card stage by having the operators simply type onto normal paper pages with an OCR-friendly typewriter font, and then submit those pages to the computer. Since a typewritten page contains much more information than a punched card (which holds essentially one line of text from a page), this would offer savings all around. This seemingly simple task turned out to be much harder than anyone expected, and while CDC became a major player in the early days of OCR systems, OCR has remained a niche product to this day. Rabinow's plant in Rockville, Maryland, was closed in 1976, and CDC left the business. With the continued delays on the OCR project, it became clear that punched cards were not going to go away any time soon, and CDC had to address this as quickly as possible. Although the 405 remained in production, it was an expensive machine to build. So another purchase was made, Bridge Engineering, which offered a line of lower-cost as well as higher-speed card punches. All card-handling products were moved to what became the Valley Forge Division after Bridge moved to a new factory, with the tape transports to follow. Later, the Valley Forge and Rochester divisions were spun off to form a new joint company with National Cash Register (later NCR Corporation), Computer Peripherals Inc (CPI), to share development and production costs across the two companies. ICL later joined the effort. Eventually the Rochester Division was sold to Centronics in 1982. Another side effect of Norris's attempts to diversify was the creation of a number of service bureaus that ran jobs on behalf of smaller companies that could not afford to buy computers.
This was never very profitable, and in 1965, several managers suggested that the unprofitable centers be closed as a cost-cutting measure. Nevertheless, Norris was so convinced of the idea that he refused to accept this, and ordered an across-the-board "belt tightening" instead. == Control Data Institute == Control Data created an international technical/computer vocational school from the mid-1960s to the late 1980s. By the late 1970s there were sixty-nine learning centers worldwide, serving 18,000 students. == CDC 6600: defining supercomputing == Meanwhile, at the new Chippewa Falls lab, Seymour Cray, Jim Thornton, and Dean Roush put together a team of 34 engineers, which continued work on the new computer design. One of the ways they hoped to improve on the CDC 1604 was to use better transistors, and Cray used the new silicon transistors made with the planar process developed by Fairchild Semiconductor. These were much faster than the germanium transistors in the 1604, without the drawbacks of the older mesa silicon transistors. The speed-of-light restriction forced a more compact design, with refrigeration designed by Dean Roush. In 1964, the resulting computer was released onto the market as the CDC 6600, out-performing everything on the market by roughly ten times. It sold over 100 units at $8 million ($81 million in 2024 dollars) each, and it was considered a supercomputer. The 6600 had a 100 ns, transistor-based CPU (central processing unit) with multiple asynchronous functional units and core memory, using 10 logical, external I/O processors to off-load many common tasks. That way, the CPU could devote all of its time and circuitry to processing actual data, while the other controllers dealt with mundane tasks like punching cards and running disk drives. Using late-model compilers, the machine attained a standard mathematical operations rate of 500 kiloFLOPS, but handcrafted assembly managed to deliver approximately 1 megaFLOPS.
A simpler, albeit much slower and less expensive version, implemented using a more traditional serial processor design rather than the 6600's parallel functional units, was released as the CDC 6400, and a two-processor version of the 6400 was called the CDC 6500. A FORTRAN compiler, known as MNF (Minnesota FORTRAN), was developed by Lawrence A. Liddiard and E. James Mundstock at the University of Minnesota for the 6600. After the delivery of the 6600, IBM took notice of this new company. In 1965 IBM started an effort to build a machine that would be faster than the 6600, the ACS-1. Two hundred people were gathered on the U.S. West Coast to work on the project, away from corporate prodding, in an attempt to mirror Cray's off-site lab. The project produced interesting computer architecture and technology, but it was not compatible with IBM's hugely successful System/360 line of computers. The engineers were directed to make it 360-compatible, but that compromised its performance. The ACS was canceled in 1969, without ever being produced for customers. Many of the engineers left the company, leading to a brain drain in IBM's high-performance departments. In the meantime, IBM announced a new System/360 model, the Model 92, which would be just as fast as CDC's 6600. Although this machine did not exist, sales of the 6600 dropped drastically while people waited for the release of the mythical Model 92. Norris did not take this tactic, dubbed fear, uncertainty and doubt (FUD), lying down, and in an extensive antitrust lawsuit launched against IBM a year later, he eventually won a settlement valued at $80 million. As part of the settlement, he picked up IBM's subsidiary, Service Bureau Corporation (SBC), which ran computer processing for other corporations on its own computers. SBC fitted nicely into CDC's existing service bureau offerings. During the design of the 6600, CDC had set up Project SPIN to supply the system with a high-speed hard disk memory system.
At the time it was unclear whether disks would replace magnetic memory drums, or whether fixed or removable disks would become more prevalent. SPIN explored all of these approaches, and eventually delivered a 28" diameter fixed disk and a smaller multi-platter 14" removable disk-pack system. Over time, the hard disk business pioneered in SPIN became a major product line. == CDC 7600 and 8600 == In the same month it won its lawsuit against IBM, CDC announced its new computer, the CDC 7600 (previously referred to as the 6800 within CDC). This machine's hardware clock speed was almost four times that of the 6600 (36 MHz vs. 10 MHz), with a 27.5 ns clock cycle, and it offered considerably more than four times the total throughput, with much of the speed increase coming from extensive use of pipelining. The 7600 did not sell well because it was introduced during the 1969 downturn in the U.S. national economy, and its complexity had led to poor reliability. The machine was not totally compatible with the 6000-series and required a completely different operating system, which, like most new OSs, was primitive. The 7600 project paid for itself, but damaged CDC's reputation. The 7600 had a split primary and secondary memory that required user management, but it was more than fast enough to make it the fastest uniprocessor from 1969 to 1976. A few dozen 7600s were the computers of choice at supercomputer centers around the world. Cray then turned to the design of the CDC 8600. This design included four 7600-like processors in a single, smaller case. The smaller size and shorter signal paths allowed the 8600 to run at much higher clock speeds which, together with faster memory, provided most of the performance gains. The 8600, however, belonged to the "old school" in terms of its physical construction, and it used individual components soldered to circuit boards.
The design was so compact that cooling the CPU modules proved effectively impossible, and access for maintenance was difficult. An abundance of hot-running solder joints ensured that the machines did not work reliably; Cray recognized that a re-design was needed. == The STAR and the Cyber == In addition to the redesign of the 8600, CDC had another project called the CDC STAR-100 under way, led by Cray's former collaborator on the 6600/7600, Jim Thornton. Unlike the 8600's "four computers in one box" solution to the speed problem, the STAR was a new design using a unit known today as the vector processor. By highly pipelining mathematical instructions with purpose-built instructions and hardware, mathematical processing was dramatically improved in a machine that was otherwise slower than a 7600. Although the particular set of problems it was best at solving was limited compared to the general-purpose 7600, it was for solving exactly these problems that customers would buy CDC machines. Since these two projects competed for limited funds during the late 1960s, Norris felt that the company could not support simultaneous development of the STAR and a complete redesign of the 8600. Therefore, Cray left CDC to form the Cray Research company in 1972. Norris remained, however, a staunch supporter of Cray, and invested money into Cray's new company. In 1974 CDC released the STAR, designated as the Cyber 203. It turned out to have "real world" performance that was considerably worse than expected. STAR's chief designer, Jim Thornton, then left CDC to form the Network Systems Corporation. In 1975, a STAR-100 was placed into service in a Control Data service center, which was considered the first supercomputer in a data center. Founder William C. Norris presided at the podium for the press conference announcing the new service.
Publicity was a key factor in making the announcement a success: by coordinating the event with Guinness, the Star-100 was established as "the most powerful and fastest computer" in the Guinness Book of World Records. The late Duane Andrews, of Public Relations, was responsible for coordinating this event. Andrews successfully attracted many influential editors, including the research editor at Business Week, who chronicled the publicity release "... as the most exciting public event he attended in 20 years". Sharing the podium were William C. Norris; Boyd Jones, V.P.; and S. Steve Adkins, Data Center Manager. It was extremely rare for Bill Norris to take the podium, as he was a very private individual. During the lunch at a local country club, Norris signed a huge stack of certificates attesting to the record, which were printed by the Star 100 on printer paper produced in CDC's Lincoln, Nebraska, plant. The paper included a half-tone photo of the Star 100. The main customers of the STAR-100 data center were oil companies running oil reservoir simulations. Most notable was a simulation, controlled from a terminal in Texas, that solved oil-extraction problems for oil fields in Kuwait. A front-page Wall Street Journal news article resulted in acquiring a new user, Allis-Chalmers, for simulation of a damaged hydroelectric turbine in a Norwegian mountain hydropower plant. A variety of systems based on the basic 6600/7600 architecture were repackaged in different price/performance categories of the CDC Cyber, which became CDC's main product line in the 1970s. An updated version of the STAR architecture, the Cyber 205, had considerably better performance than the original. By this time, however, Cray's own designs, like the Cray-1, were using the same basic design techniques as the STAR, but were computing much faster.
The Star 100 was able to process vectors of up to 64K (65,536) elements, versus 64 elements for the Cray-1, but the Star 100 took much longer to initiate an operation, so the Cray-1 outperformed it on short vectors. Sales of the STAR were weak, but Control Data Corp. produced a successor system, the Cyber 200/205, that gave Cray Research some competition. CDC also embarked on a number of special projects for its clients, producing a small number of black-project computers. The CDC Advanced Flexible Processor (AFP), also known as CYBER PLUS, was one such machine. Another design direction was the "Cyber 80" project, which was aimed at release in 1980. This machine could run old 6600-style programs, and also had a completely new 64-bit architecture. The concept behind Cyber 80 was that current 6000-series users would migrate to these machines with relative ease. The design and debugging of these machines went on past 1980, and the machines were eventually released under other names. CDC was also attempting to diversify its revenue from hardware into services, and this included its promotion of the PLATO computer-aided learning system, which ran on Cyber hardware and incorporated many early computer-interface innovations, including bit-mapped touchscreen terminals. == Magnetic Peripherals Inc. == Meanwhile, several very large Japanese manufacturing firms were entering the market. The supercomputer market was too small to support more than a handful of companies, so CDC started looking for other markets. One of these was the hard disk drive (HDD) market. Magnetic Peripherals Inc., later Imprimis Technology, was originally a joint venture with Honeywell formed in 1975 to manufacture HDDs for both companies. CII-Honeywell Bull later purchased a 3 percent interest in MPI from Honeywell. Sperry became a partner in 1983 with 17 percent, making the ownership split CDC (67%) and Honeywell (17%). MPI was a captive supplier to its parents.
It sold on an OEM basis only to them, while CDC sold MPI product to third parties under its brand name. It became a major player in the HDD market. It was the worldwide leader in 14-inch disk drive technology in the OEM marketplace in the late 1970s and early 1980s especially with its SMD (Storage Module Device) and CMD (Cartridge Module Drive), with its plant at Brynmawr in the South Wales valleys running 24/7 production. The Magnetic Peripherals division in Brynmawr had produced 1 million disks and 3 million magnetic tapes by October 1979. CDC was an early developer of the eight-inch drive technology with products from its MPI Oklahoma City Operation. Its CDC Wren series drives were particularly popular with high end users, although it was behind the capacity growth and performance curves of numerous startups such as Micropolis, Atasi, Maxtor, and Quantum. CDC also co-developed the now universal Advanced Technology Attachment (ATA) interface with Compaq and Western Digital, which was aimed at lowering the cost of adding low-performance drives. CDC founded a separate division called Rigidyne in Simi Valley, California, to develop 3.5-inch drives using technology from the Wren series. These were marketed by CDC as the "Swift" series, and were among the first high-performance 3.5-inch drives on the market at their introduction in 1987. In September 1988, CDC merged Rigidyne and MPI into the umbrella subsidiary of Imprimis Technology. The next year, Seagate Technology purchased Imprimis for $250 million in cash, 10.7 million in Seagate stock and a $50 million promissory note. == Investments == Control Data held interests in other companies including computer research company Arbitron, Commercial Credit Corporation and Ticketron. === Commercial Credit Corporation === In 1968, Commercial Credit Corporation was the target of a hostile takeover by Loews Inc. Loews had acquired nearly 10% of CCC, which it intended to break up on acquisition. 
To avoid the takeover, CCC forged a deal with CDC, lending them the money to purchase control of CCC instead, and "That is how a computer company came to own a fleet of fishing boats in the Chesapeake Bay." By the 1980s, Control Data had entered an unstable period, which resulted in the company liquidating many of its assets. In 1986, Sandy Weill convinced the Control Data management to spin off their Commercial Credit subsidiary to prevent the company's potential liquidation. Over a period of years, Weill used Commercial Credit to build an empire that became Citigroup. In 1999, Commercial Credit was renamed CitiFinancial, and in 2011, the full-service network of US CitiFinancial branches was renamed OneMain Financial. === Ticketron === In 1969, Control Data acquired 51% of Ticketron for $3.9 million from Cemp Investments. In 1970, Ticketron became the sole computerized ticketing provider in the United States. In 1973, Control Data increased the size of its investment. Ticketron also provided ticketing terminals and back-end infrastructure for parimutuel betting, and provided similar services for a number of US lotteries, including those in New York, Illinois, Pennsylvania, Delaware, Washington and Maryland. By the mid-1980s, Ticketron was CDC's most profitable business, with revenue of $120 million, and CDC, which was loss-making at the time, considered selling the business. In 1990 the majority of Ticketron's assets and business, with the exception of a small antitrust carve-out for Broadway's "Telecharge" business unit, were bought by The Carlyle Group, which sold it the following year to rival Ticketmaster. == ETA Systems, wind-down and sale of assets == CDC decided to fight for the high-performance niche, but Norris considered that the company had become moribund and unable to quickly design competitive machines. In 1983 he set up a spinoff company, ETA Systems, whose design goal was a machine processing data at 10 GFLOPS, about 40 times the speed of the Cray-1.
The design never fully matured, and it was unable to reach its goals. Nevertheless, the product was one of the fastest computers on the market, and 7 liquid nitrogen-cooled and 27 smaller air-cooled versions of the computer were sold during the next few years. They used the new CMOS chips, which produced much less heat. The effort ended after half-hearted attempts to sell ETA Systems. In 1989, most of the employees of ETA Systems were laid off, and the remaining ones were folded into CDC. Despite having valuable technology, CDC still suffered huge losses in 1985 ($567 million) and 1986 while attempting to reorganize. As a result, in 1987 it sold its PathLab Laboratory Information System to 3M. While CDC was still making computers, hardware manufacturing was no longer as profitable as it had been, and so in 1988 the company decided to leave the industry, bit by bit. The first division to go was Imprimis. After that, CDC sold other assets such as VTC (a chip maker that specialized in mass-storage circuitry and was closely linked with MPI), and non-computer-related assets like Ticketron. In 1992, the company separated into two independent companies – the computer businesses were spun out as Control Data Systems, Inc. (CDS), while the information service businesses became the Ceridian Corporation. CDS later became the owner of ICEM Technologies, makers of ICEM DDN and ICEM Surf software, and sold the business to PTC for $40.6m in 1998. In 1999, CDS was bought out by Syntegra, a subsidiary of the BT Group, and merged into BT's Global Services organization. Ceridian continues as a successful outsourced IT company focusing on human resources. CDC's Energy Management Division was one of its most successful business units, providing control systems solutions that managed as much as 25% of all electricity on the planet, and went to Ceridian in the split. This division was renamed Empros and was sold to Siemens in 1993.
In 1997, General Dynamics acquired the Computing Devices International Division of Ceridian, which was a defense electronics and systems integration business headquartered in Bloomington, Minnesota – originally Control Data's Government Systems Division. In March 2001, Ceridian separated into two independent companies, with the old Ceridian Corporation renaming itself Arbitron Inc. while the rest of the company (consisting of the human resources services and Comdata businesses) took the Ceridian Corporation name. Ceridian was later split again in 2013, with the formation of Ceridian HCM Holding Inc. (human resources services) and Comdata Inc. (payments business), completing the split of the CDC assets for good. == Timeline of systems releases ==
CDC 1604 et al – 1604, 1604-A, 1604-B, 1604-C, 924 (a "cut down" 1604 sibling)
CDC 160 series – 160, 160A (160-A), 160G (160-G)
CDC 3000 series – 3100, 3200, 3300, 3400, 3500, 3600, 3800
CDC 6000 series – 6200, 6400, 6500, 6700
CDC 6600
CDC 7600
CDC CYBER – 17, 18, 71, 72, 73, 74, 76, 170, 171, 172, 173, 174, 175, 176, 203, 205, Omega/480, 700
CDC STAR-100
1957 – Founding
1959 – 1604
1960 – 1604-B
1961 – 160
1962 – 924 (a 24-bit 1604)
1963 – 160A (160-A), 1604-A, 3400, 6600
1964 – 160G (160-G), 3100, 3200, 3600, 6400
1965 – 1604-C, 1700, 3300, 3500, 8050, 8090
1966 – 3800, 6200, 6500, Station 6000
1968 – 7600
1969 – 6700
1970 – STAR-100
1971 – Cyber 71, Cyber 72, Cyber 73, Cyber 74, Cyber 76
1972 – 5600, 8600
1973 – Cyber 170, Cyber 172, Cyber 173, Cyber 174, Cyber 175, Cyber 17
1976 – Cyber 18
1977 – Cyber 171, Cyber 176, Omega/480
1979 – Cyber 203, Cyber 720, Cyber 730, Cyber 740, Cyber 750, Cyber 760
1980 – Cyber 205
1982 – Cyber 815, Cyber 825, Cyber 835, Cyber 845, Cyber 855, Cyber 865, Cyber 875
1983 – ETA10
1984 – Cyber 810, Cyber 830, Cyber 840, Cyber 850, Cyber 860, Cyber 990, CyberPlus
1987 – Cyber 910, Cyber 930, Cyber 995
1988 – Cyber 960
1989 – Cyber 920, Cyber 2000
Note: The 8xx & 9xx Cyber models, introduced
beginning in 1982, formed the 64-bit Cyber 180 series, and their Peripheral Processors (PPs) were 16-bit. The 180 series had virtual memory capability, using CDC's NOS/VE operating system. The more complete nomenclature for these was 180/xxx, although at times the shorter form (e.g. Cyber 990) was used. == Peripheral Systems Group == Control Data Corporation's Peripheral Systems Group was both a hardware and a software development unit that functioned in the 1970s and 1980s. Its services included the development and marketing of IBM-oriented (operating) systems software. One of the Peripheral Systems Group's software products was named CUPID, "Control Data's Program for Unlike Data Set Concatenation." It was aimed at customers of IBM's MVS operating system, and the intended audience was systems programmers. The product's General Information and Reference Manual included SysGen-like options and information about internal user-accessible control blocks. == Film and science fiction references == Mars Needs Women (1967) – a CDC 3400 is used for radio communication and to direct the actions of the military as they intercept the Martian spaceships. Colossus: The Forbin Project (1970) – The title sequences to this film include tape drives and other early CDC equipment. The Mad Bomber (1973) – The police department has a CDC 3100 that they use to profile the bomber. The Adolescence of P-1 (1977), by Thomas Ryan – Control Data computers were very enticing to young P-1. The New Avengers – In episode 2-10 (#23) ("Complex", 1977) Purdey uses a CDC card reader. Mi-Sex – Computer Games: 1979 pop music video. The band enters the computer room in the Control Data North Sydney building and proceeds to play with CDC equipment. Tron (1982) – In the wide-screen version of the film, when Flynn and Lora sneak into Encom, a CDC 7600 is visible in the background, alongside a Cray-1. This scene was shot at the Lawrence Livermore National Laboratory.
Die Hard (1988) – The computer room shot up by one of the terrorists contained a number of working Cyber 180 computers and a mock-up of an ETA-10 supercomputer, along with a number of other peripheral devices, all provided by CDC Demonstration Services/Benchmark Lab. This equipment was requested on short notice after another computer manufacturer backed out at the last minute. Paul Derby, manager of the Benchmark Lab, arranged to send two van-loads of equipment to Hollywood for the shoot, accompanied by Jerry Sterns of the Benchmark Lab who supervised the equipment while it was on the set. After the machines were returned to Minnesota, they were inspected and tested, and as each machine was sold, a notation was made in the corporate records that the machine had appeared in the film. They Live (1988), by John Carpenter – As Roddy Piper's character is trying on his new "sunglasses" that allow him to see the world as it is, he looks at an advertisement for Control Data Corporation and sees the word OBEY. The film's credits include "special thanks" to CDC. == References == == Further reading == Lundstrom, David. A Few Good Men from Univac. Cambridge, Massachusetts: MIT Press, 1987. ISBN 0-262-12120-4. Misa, Thomas J., ed. Building the Control Data Legacy: The Career of Robert M. Price. Minneapolis: Charles Babbage Institute, 2012 ISBN 1300058188 Murray, Charles J. The Supermen: The Story of Seymour Cray and the Technical Wizards behind the Supercomputer. New York: John Wiley, 1997. ISBN 0-471-04885-2. Price, Robert M. The Eye for Innovation: Recognizing Possibilities and Managing the Creative Enterprise. New Haven: Yale University Press, 2005 ISBN 030010877X Thornton, J. E. Design of a Computer: The Control Data 6600. Glenview, Ill.: Scott, Foresman, 1970 Worthy, James C. William C. Norris: Portrait of a Maverick. Ballinger Pub Co., May 1987. 
ISBN 978-0-88730-087-5 == External links == Control Data Corporation Records at the Charles Babbage Institute, University of Minnesota, Minneapolis; CDC records donated by Ceridian Corporation in 1991; finding guide contains historical timeline, product timeline, acquisitions list, and joint venture list. Oral history interview with William Norris discusses ERA years, acquisition of ERA by Remington Rand, the Univac File computer, work as head of the Univac Division, and the formation of CDC. Charles Babbage Institute, University of Minnesota, Minneapolis. Oral history interview with Willis K. Drake discusses Remington-Rand, the Eckert-Mauchly Computer Company, ERA, and the formation of Control Data Corporation. Charles Babbage Institute, University of Minnesota, Minneapolis. Organized discussion moderated by Neil R. Lincoln with eighteen Control Data Corporation (CDC) engineers on computer architecture and design. Charles Babbage Institute, University of Minnesota, Minneapolis. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, CDC 8600, CDC STAR-100 and Seymour Cray. Information about the spin-out of Commercial Credit from Control Data by Sandy Weill. Information about the Control Data CDC 3800 Computer – on display at the National Air and Space Museum Steven F. Udvar-Hazy Center near Washington Dulles International Airport. Private Collection of historical documents about CDC. Control Data User Manuals Library @ Computing History. Computing history describing the use of a range of CDC systems and equipment 1970–1985. A German collection of CDC, Cray and other large computer systems, some of them in operation.
Wikipedia/Control_Data_Corporation
In quantum physics, a wave function (or wavefunction) is a mathematical description of the quantum state of an isolated quantum system. The most common symbols for a wave function are the Greek letters ψ and Ψ (lower-case and capital psi, respectively). Wave functions are complex-valued. For example, a wave function might assign a complex number to each point in a region of space. The Born rule provides the means to turn these complex probability amplitudes into actual probabilities. In one common form, it says that the squared modulus of a wave function that depends upon position is the probability density of measuring a particle as being at a given place. The integral of a wavefunction's squared modulus over all the system's degrees of freedom must be equal to 1, a condition called normalization. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured; its value does not, in isolation, tell anything about the magnitudes or directions of measurable observables. One has to apply quantum operators, whose eigenvalues correspond to sets of possible results of measurements, to the wave function ψ and calculate the statistical distributions for measurable quantities. Wave functions can be functions of variables other than position, such as momentum. The information represented by a wave function that is dependent upon position can be converted into a wave function dependent upon momentum and vice versa, by means of a Fourier transform. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has internal degrees of freedom, the wave function at each point in the continuous degrees of freedom (e.g., a point in space) assigns a complex number for each possible value of the discrete degrees of freedom (e.g., z-component of spin). 
These values are often displayed in a column matrix (e.g., a 2 × 1 column vector for a non-relativistic electron with spin 1⁄2). According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions and form a Hilbert space. The inner product of two wave functions is a measure of the overlap between the corresponding physical states and is used in the foundational probabilistic interpretation of quantum mechanics, the Born rule, relating transition probabilities to inner products. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, as of 2023 still open to different interpretations, which fundamentally differs from that of classic mechanical waves. == Historical background == In 1900, Max Planck postulated the proportionality between the frequency f {\displaystyle f} of a photon and its energy E {\displaystyle E} , E = h f {\displaystyle E=hf} , and in 1916 the corresponding relation between a photon's momentum p {\displaystyle p} and wavelength λ {\displaystyle \lambda } , λ = h p {\displaystyle \lambda ={\frac {h}{p}}} , where h {\displaystyle h} is the Planck constant. In 1923, De Broglie was the first to suggest that the relation λ = h p {\displaystyle \lambda ={\frac {h}{p}}} , now called the De Broglie relation, holds for massive particles, the chief clue being Lorentz invariance, and this can be viewed as the starting point for the modern development of quantum mechanics. The equations represent wave–particle duality for both massless and massive particles. 
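The de Broglie relation λ = h/p is straightforward to evaluate numerically. A minimal sketch (the 100 V accelerating voltage is an illustrative choice, not from the text):

```python
# Illustrative numbers: de Broglie wavelength, lambda = h / p, for an
# electron accelerated through 100 V, with p = sqrt(2 m E) (non-relativistic).
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
q_e = 1.602176634e-19   # elementary charge, C

energy = 100 * q_e                    # kinetic energy gained from 100 V, in joules
momentum = (2 * m_e * energy) ** 0.5  # non-relativistic momentum
wavelength = h / momentum
print(f"{wavelength:.3e} m")          # about 1.23e-10 m, roughly one angstrom
```

At such energies the wavelength is comparable to atomic spacings, which is why electron diffraction off crystals was an early confirmation of the relation.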
In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing "matrix mechanics". Schrödinger subsequently showed that the two approaches were equivalent. In 1926, Schrödinger published the famous wave equation now named after him, the Schrödinger equation. This equation was based on classical conservation of energy using quantum operators and the de Broglie relations and the solutions of the equation are the wave functions for the quantum system. However, no one was clear on how to interpret it. At first, Schrödinger and others thought that wave functions represent particles that are spread out with most of the particle being where the wave function is large. This was shown to be incompatible with the elastic scattering of a wave packet (representing a particle) off a target; it spreads out in all directions. While a scattered particle may scatter in any direction, it does not break up and take off in all directions. In 1926, Born provided the perspective of probability amplitude. This relates calculations of quantum mechanics directly to probabilistic experimental observations. It is accepted as part of the Copenhagen interpretation of quantum mechanics. There are many other interpretations of quantum mechanics. In 1927, Hartree and Fock made the first step in an attempt to solve the N-body wave function, and developed the self-consistency cycle: an iterative algorithm to approximate the solution. Now it is also known as the Hartree–Fock method. The Slater determinant and permanent (of a matrix) were part of the method, provided by John C. Slater.
Schrödinger did encounter an equation for the wave function that satisfied relativistic energy conservation before he published the non-relativistic one, but discarded it as it predicted negative probabilities and negative energies. In 1927, Klein, Gordon and Fock also found it, but incorporated the electromagnetic interaction and proved that it was Lorentz invariant. De Broglie also arrived at the same equation in 1928. This relativistic wave equation is now most commonly known as the Klein–Gordon equation. In 1927, Pauli phenomenologically found a non-relativistic equation to describe spin-1/2 particles in electromagnetic fields, now called the Pauli equation. Pauli found the wave function was not described by a single complex function of space and time, but needed two complex numbers, which respectively correspond to the spin +1/2 and −1/2 states of the fermion. Soon after in 1928, Dirac found an equation from the first successful unification of special relativity and quantum mechanics applied to the electron, now called the Dirac equation. In this, the wave function is a spinor represented by four complex-valued components: two for the electron and two for the electron's antiparticle, the positron. In the non-relativistic limit, the Dirac wave function resembles the Pauli wave function for the electron. Later, other relativistic wave equations were found. === Wave functions and wave equations in modern theories === All these wave equations are of enduring importance. The Schrödinger equation and the Pauli equation are under many circumstances excellent approximations of the relativistic variants. They are considerably easier to solve in practical problems than the relativistic counterparts. The Klein–Gordon equation and the Dirac equation, while being relativistic, do not represent full reconciliation of quantum mechanics and special relativity. 
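For reference, the three equations named above take their standard textbook forms (conventions for units and signs vary across sources; these keep ħ and c explicit):

```latex
% Klein–Gordon equation (spin 0):
\left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2
      + \frac{m^2 c^2}{\hbar^2} \right) \psi = 0

% Dirac equation (spin 1/2), with gamma matrices \gamma^\mu:
\left( i\hbar \gamma^\mu \partial_\mu - mc \right) \psi = 0

% Pauli equation (non-relativistic spin 1/2 in an electromagnetic field):
i\hbar \frac{\partial}{\partial t} |\psi\rangle =
  \left[ \frac{1}{2m} \bigl( \boldsymbol{\sigma} \cdot
        (\mathbf{p} - q\mathbf{A}) \bigr)^2 + q\phi \right] |\psi\rangle
```

In the Pauli equation the two-component structure of |ψ⟩ is exactly the pair of complex functions for the spin +1/2 and −1/2 states described in the text.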
The branch of quantum mechanics where these equations are studied the same way as the Schrödinger equation, often called relativistic quantum mechanics, while very successful, has its limitations (see e.g. Lamb shift) and conceptual problems (see e.g. Dirac sea). Relativity makes it inevitable that the number of particles in a system is not constant. For full reconciliation, quantum field theory is needed. In this theory, the wave equations and the wave functions have their place, but in a somewhat different guise. The main objects of interest are not the wave functions, but rather operators, so-called field operators (or just fields where "operator" is understood) on the Hilbert space of states (to be described in the next section). It turns out that the original relativistic wave equations and their solutions are still needed to build the Hilbert space. Moreover, the free-field operators, i.e. when interactions are assumed not to exist, turn out to (formally) satisfy the same equation as do the fields (wave functions) in many cases. Thus the Klein–Gordon equation (spin 0) and the Dirac equation (spin 1⁄2) in this guise remain in the theory. Higher spin analogues include the Proca equation (spin 1), Rarita–Schwinger equation (spin 3⁄2), and, more generally, the Bargmann–Wigner equations. For massless free fields two examples are the free field Maxwell equation (spin 1) and the free field Einstein equation (spin 2) for the field operators. All of them are essentially a direct consequence of the requirement of Lorentz invariance. Their solutions must transform under Lorentz transformations in a prescribed way, i.e. under a particular representation of the Lorentz group, and that, together with a few other reasonable demands (e.g. the cluster decomposition property, with implications for causality), is enough to fix the equations. This applies to free field equations; interactions are not included.
If a Lagrangian density (including interactions) is available, then the Lagrangian formalism will yield an equation of motion at the classical level. This equation may be very complex and not amenable to solution. Any solution would refer to a fixed number of particles and would not account for the term "interaction" as referred to in these theories, which involves the creation and annihilation of particles and not external potentials as in ordinary "first quantized" quantum theory. In string theory, the situation remains analogous. For instance, a wave function in momentum space has the role of Fourier expansion coefficient in a general state of a particle (string) with momentum that is not sharply defined. == Definition (one spinless particle in one dimension) == For now, consider the simple case of a non-relativistic single particle, without spin, in one spatial dimension. More general cases are discussed below. According to the postulates of quantum mechanics, the state of a physical system, at fixed time t {\displaystyle t} , is given by the wave function belonging to a separable complex Hilbert space. As such, the inner product of two wave functions Ψ1 and Ψ2 can be defined as the complex number (at time t) ( Ψ 1 , Ψ 2 ) = ∫ − ∞ ∞ Ψ 1 ∗ ( x , t ) Ψ 2 ( x , t ) d x < ∞ {\displaystyle (\Psi _{1},\Psi _{2})=\int _{-\infty }^{\infty }\,\Psi _{1}^{*}(x,t)\Psi _{2}(x,t)\,dx<\infty } . More details are given below. However, the inner product of a wave function Ψ with itself, ( Ψ , Ψ ) = ‖ Ψ ‖ 2 {\displaystyle (\Psi ,\Psi )=\|\Psi \|^{2}} , is always a positive real number. The number ‖Ψ‖ (not ‖Ψ‖2) is called the norm of the wave function Ψ. The separable Hilbert space being considered is infinite-dimensional, which means there is no finite set of square integrable functions which can be added together in various combinations to create every possible square integrable function. 
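The inner product and norm just defined can be checked numerically by discretizing the integral; a minimal sketch with two Gaussian wave packets (grid size, centers and widths are arbitrary illustrative choices):

```python
import numpy as np

# Discretize the real line and approximate the integrals by Riemann sums.
x = np.linspace(-15.0, 15.0, 6001)
dx = x[1] - x[0]

def gaussian(x0, sigma):
    """Normalized Gaussian wave packet centered at x0 with width sigma."""
    return (np.pi * sigma**2) ** -0.25 * np.exp(-((x - x0) ** 2) / (2 * sigma**2))

def inner(psi1, psi2):
    """(Psi1, Psi2) = integral of Psi1*(x) Psi2(x) dx, approximated on the grid."""
    return np.sum(psi1.conj() * psi2) * dx

psi_a = gaussian(0.0, 1.0)
psi_b = gaussian(2.0, 1.0)

# (Psi, Psi) = ||Psi||^2 is a positive real number; both packets are normalized.
print(inner(psi_a, psi_a).real)   # ≈ 1
print(inner(psi_b, psi_b).real)   # ≈ 1
# The overlap of two distinct states is a complex number of modulus <= 1.
print(abs(inner(psi_a, psi_b)))
```

For two identical real Gaussians separated by d, the overlap is exp(−d²/(4σ²)), so the last value is exp(−1) ≈ 0.368 here.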
=== Position-space wave functions === The state of such a particle is completely described by its wave function, Ψ ( x , t ) , {\displaystyle \Psi (x,t)\,,} where x is position and t is time. This is a complex-valued function of two real variables x and t. For one spinless particle in one dimension, the wave function is interpreted as a probability amplitude: the square modulus of the wave function, the positive real number | Ψ ( x , t ) | 2 = Ψ ∗ ( x , t ) Ψ ( x , t ) = ρ ( x ) , {\displaystyle \left|\Psi (x,t)\right|^{2}=\Psi ^{*}(x,t)\Psi (x,t)=\rho (x),} is interpreted as the probability density for a measurement of the particle's position at a given time t. The asterisk indicates the complex conjugate. If the particle's position is measured, its location cannot be determined from the wave function, but is described by a probability distribution. ==== Normalization condition ==== The probability that its position x will be in the interval a ≤ x ≤ b is the integral of the density over this interval: P a ≤ x ≤ b ( t ) = ∫ a b | Ψ ( x , t ) | 2 d x {\displaystyle P_{a\leq x\leq b}(t)=\int _{a}^{b}\,|\Psi (x,t)|^{2}dx} where t is the time at which the particle was measured. This leads to the normalization condition: ∫ − ∞ ∞ | Ψ ( x , t ) | 2 d x = 1 , {\displaystyle \int _{-\infty }^{\infty }\,|\Psi (x,t)|^{2}dx=1\,,} because if the particle is measured, there is 100% probability that it will be somewhere. For a given system, the set of all possible normalizable wave functions (at any given time) forms an abstract mathematical vector space, meaning that it is possible to add together different wave functions, and multiply wave functions by complex numbers. Technically, wave functions form a ray in a projective Hilbert space rather than an ordinary vector space. ==== Quantum states as vectors ==== At a particular instant of time, all values of the wave function Ψ(x, t) are components of a vector.
There are uncountably infinitely many of them and integration is used in place of summation. In Bra–ket notation, this vector is written | Ψ ( t ) ⟩ = ∫ Ψ ( x , t ) | x ⟩ d x {\displaystyle |\Psi (t)\rangle =\int \Psi (x,t)|x\rangle dx} and is referred to as a "quantum state vector", or simply "quantum state". There are several advantages to understanding wave functions as representing elements of an abstract vector space: All the powerful tools of linear algebra can be used to manipulate and understand wave functions. For example: Linear algebra explains how a vector space can be given a basis, and then any vector in the vector space can be expressed in this basis. This explains the relationship between a wave function in position space and a wave function in momentum space and suggests that there are other possibilities too. Bra–ket notation can be used to manipulate wave functions. The idea that quantum states are vectors in an abstract vector space is completely general in all aspects of quantum mechanics and quantum field theory, whereas the idea that quantum states are complex-valued "wave" functions of space is only true in certain situations. The time parameter is often suppressed, and will be in the following. The x coordinate is a continuous index. The |x⟩ are called improper vectors which, unlike proper vectors that are normalizable to unity, can only be normalized to a Dirac delta function. ⟨ x ′ | x ⟩ = δ ( x ′ − x ) {\displaystyle \langle x'|x\rangle =\delta (x'-x)} thus ⟨ x ′ | Ψ ⟩ = ∫ Ψ ( x ) ⟨ x ′ | x ⟩ d x = Ψ ( x ′ ) {\displaystyle \langle x'|\Psi \rangle =\int \Psi (x)\langle x'|x\rangle dx=\Psi (x')} and | Ψ ⟩ = ∫ | x ⟩ ⟨ x | Ψ ⟩ d x = ( ∫ | x ⟩ ⟨ x | d x ) | Ψ ⟩ {\displaystyle |\Psi \rangle =\int |x\rangle \langle x|\Psi \rangle dx=\left(\int |x\rangle \langle x|dx\right)|\Psi \rangle } which illuminates the identity operator I = ∫ | x ⟩ ⟨ x | d x . 
{\displaystyle I=\int |x\rangle \langle x|dx\,.} which is analogous to completeness relation of orthonormal basis in N-dimensional Hilbert space. Finding the identity operator in a basis allows the abstract state to be expressed explicitly in a basis, and more (the inner product between two state vectors, and other operators for observables, can be expressed in the basis). === Momentum-space wave functions === The particle also has a wave function in momentum space: Φ ( p , t ) {\displaystyle \Phi (p,t)} where p is the momentum in one dimension, which can be any value from −∞ to +∞, and t is time. Analogous to the position case, the inner product of two wave functions Φ1(p, t) and Φ2(p, t) can be defined as: ( Φ 1 , Φ 2 ) = ∫ − ∞ ∞ Φ 1 ∗ ( p , t ) Φ 2 ( p , t ) d p . {\displaystyle (\Phi _{1},\Phi _{2})=\int _{-\infty }^{\infty }\,\Phi _{1}^{*}(p,t)\Phi _{2}(p,t)dp\,.} One particular solution to the time-independent Schrödinger equation is Ψ p ( x ) = e i p x / ℏ , {\displaystyle \Psi _{p}(x)=e^{ipx/\hbar },} a plane wave, which can be used in the description of a particle with momentum exactly p, since it is an eigenfunction of the momentum operator. These functions are not normalizable to unity (they are not square-integrable), so they are not really elements of physical Hilbert space. The set { Ψ p ( x , t ) , − ∞ ≤ p ≤ ∞ } {\displaystyle \{\Psi _{p}(x,t),-\infty \leq p\leq \infty \}} forms what is called the momentum basis. This "basis" is not a basis in the usual mathematical sense. For one thing, since the functions are not normalizable, they are instead normalized to a delta function, ( Ψ p , Ψ p ′ ) = δ ( p − p ′ ) . {\displaystyle (\Psi _{p},\Psi _{p'})=\delta (p-p').} For another thing, though they are linearly independent, there are too many of them (they form an uncountable set) for a basis for physical Hilbert space. They can still be used to express all functions in it using Fourier transforms as described next. 
=== Relations between position and momentum representations === The x and p representations are | Ψ ⟩ = I | Ψ ⟩ = ∫ | x ⟩ ⟨ x | Ψ ⟩ d x = ∫ Ψ ( x ) | x ⟩ d x , | Ψ ⟩ = I | Ψ ⟩ = ∫ | p ⟩ ⟨ p | Ψ ⟩ d p = ∫ Φ ( p ) | p ⟩ d p . {\displaystyle {\begin{aligned}|\Psi \rangle =I|\Psi \rangle &=\int |x\rangle \langle x|\Psi \rangle dx=\int \Psi (x)|x\rangle dx,\\|\Psi \rangle =I|\Psi \rangle &=\int |p\rangle \langle p|\Psi \rangle dp=\int \Phi (p)|p\rangle dp.\end{aligned}}} Now take the projection of the state Ψ onto eigenfunctions of momentum using the last expression in the two equations, ∫ Ψ ( x ) ⟨ p | x ⟩ d x = ∫ Φ ( p ′ ) ⟨ p | p ′ ⟩ d p ′ = ∫ Φ ( p ′ ) δ ( p − p ′ ) d p ′ = Φ ( p ) . {\displaystyle \int \Psi (x)\langle p|x\rangle dx=\int \Phi (p')\langle p|p'\rangle dp'=\int \Phi (p')\delta (p-p')dp'=\Phi (p).} Then utilizing the known expression for suitably normalized eigenstates of momentum in the position representation solutions of the free Schrödinger equation ⟨ x | p ⟩ = p ( x ) = 1 2 π ℏ e i ℏ p x ⇒ ⟨ p | x ⟩ = 1 2 π ℏ e − i ℏ p x , {\displaystyle \langle x|p\rangle =p(x)={\frac {1}{\sqrt {2\pi \hbar }}}e^{{\frac {i}{\hbar }}px}\Rightarrow \langle p|x\rangle ={\frac {1}{\sqrt {2\pi \hbar }}}e^{-{\frac {i}{\hbar }}px},} one obtains Φ ( p ) = 1 2 π ℏ ∫ Ψ ( x ) e − i ℏ p x d x . {\displaystyle \Phi (p)={\frac {1}{\sqrt {2\pi \hbar }}}\int \Psi (x)e^{-{\frac {i}{\hbar }}px}dx\,.} Likewise, using eigenfunctions of position, Ψ ( x ) = 1 2 π ℏ ∫ Φ ( p ) e i ℏ p x d p . {\displaystyle \Psi (x)={\frac {1}{\sqrt {2\pi \hbar }}}\int \Phi (p)e^{{\frac {i}{\hbar }}px}dp\,.} The position-space and momentum-space wave functions are thus found to be Fourier transforms of each other. They are two representations of the same state; containing the same information, and either one is sufficient to calculate any property of the particle. In practice, the position-space wave function is used much more often than the momentum-space wave function. 
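The Fourier pair above can be verified numerically. A sketch with ħ = 1 and a Gaussian packet, using a direct Riemann-sum transform rather than an FFT so the conventions stay explicit (grid sizes are illustrative):

```python
import numpy as np

# Sketch (hbar = 1): transform a Gaussian position-space wave function to
# momentum space, Phi(p) = (2*pi*hbar)^(-1/2) * integral of psi(x) e^{-ipx/hbar} dx.
hbar = 1.0
x = np.linspace(-20, 20, 2001)
dx = x[1] - x[0]
sigma = 1.0
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

p = np.linspace(-8, 8, 801)
dp = p[1] - p[0]
kernel = np.exp(-1j * np.outer(p, x) / hbar) / np.sqrt(2 * np.pi * hbar)
phi = kernel @ psi * dx

# The transform preserves the norm, and a Gaussian maps to a Gaussian.
print(np.sum(np.abs(phi) ** 2) * dp)            # ≈ 1
phi_exact = (sigma**2 / np.pi) ** 0.25 * np.exp(-(p * sigma) ** 2 / 2)
print(np.max(np.abs(np.abs(phi) - phi_exact)))  # very small
```

The same information is thus available in either representation, as the text states; transforming back with the opposite sign in the exponent recovers ψ(x).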
The potential entering the relevant equation (Schrödinger, Dirac, etc.) determines in which basis the description is easiest. For the harmonic oscillator, x and p enter symmetrically, so there it does not matter which description one uses. The same equation (modulo constants) results. From this, with a little bit of afterthought, it follows that solutions to the wave equation of the harmonic oscillator are eigenfunctions of the Fourier transform in L2. == Definitions (other cases) == Following are the general forms of the wave function for systems in higher dimensions and more particles, as well as including other degrees of freedom than position coordinates or momentum components. === Finite dimensional Hilbert space === While Hilbert spaces originally refer to infinite dimensional complete inner product spaces they, by definition, include finite dimensional complete inner product spaces as well. In physics, they are often referred to as finite dimensional Hilbert spaces. For every finite dimensional Hilbert space there exist orthonormal basis kets that span the entire Hilbert space. If the N-dimensional set { | ϕ i ⟩ } {\textstyle \{|\phi _{i}\rangle \}} is orthonormal, then the projection operator for the space spanned by these states is given by: P = ∑ i | ϕ i ⟩ ⟨ ϕ i | = I {\displaystyle P=\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|=I} where the projection is equivalent to identity operator since { | ϕ i ⟩ } {\textstyle \{|\phi _{i}\rangle \}} spans the entire Hilbert space, thus leaving any vector from Hilbert space unchanged. This is also known as completeness relation of finite dimensional Hilbert space. The wavefunction is instead given by: | ψ ⟩ = I | ψ ⟩ = ∑ i | ϕ i ⟩ ⟨ ϕ i | ψ ⟩ {\displaystyle |\psi \rangle =I|\psi \rangle =\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|\psi \rangle } where { ⟨ ϕ i | ψ ⟩ } {\textstyle \{\langle \phi _{i}|\psi \rangle \}} , is a set of complex numbers which can be used to construct a wavefunction using the above formula. 
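The completeness relation for a finite orthonormal set can be checked directly in a small numerical experiment (the random Hermitian matrix here is an illustrative way to obtain an orthonormal basis, not anything from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Any Hermitian matrix has an orthonormal eigenbasis {|phi_i>}.
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (a + a.conj().T) / 2
_, vecs = np.linalg.eigh(h)            # columns are the basis kets |phi_i>

# Completeness: sum_i |phi_i><phi_i| equals the identity operator.
proj = sum(np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(n))
assert np.allclose(proj, np.eye(n))

# Expanding a normalized |psi> in the basis and resumming reproduces it,
# and the squared magnitudes of the components <phi_i|psi> sum to 1.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
coeffs = vecs.conj().T @ psi           # the components <phi_i|psi>
assert np.allclose(vecs @ coeffs, psi)
print(np.sum(np.abs(coeffs) ** 2))     # 1.0 up to rounding
```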
==== Probability interpretation of inner product ==== If the set { | ϕ i ⟩ } {\textstyle \{|\phi _{i}\rangle \}} are eigenkets of a non-degenerate observable with eigenvalues λ i {\textstyle \lambda _{i}} , by the postulates of quantum mechanics, the probability of measuring the observable to be λ i {\textstyle \lambda _{i}} is given according to the Born rule as: P ψ ( λ i ) = | ⟨ ϕ i | ψ ⟩ | 2 {\displaystyle P_{\psi }(\lambda _{i})=|\langle \phi _{i}|\psi \rangle |^{2}} For a degenerate observable, if an eigenvalue λ {\textstyle \lambda } has a subset of eigenvectors labelled as { | λ ( j ) ⟩ } {\textstyle \{|\lambda ^{(j)}\rangle \}} , by the postulates of quantum mechanics, the probability of measuring the observable to be λ {\textstyle \lambda } is given by: P ψ ( λ ) = ∑ j | ⟨ λ ( j ) | ψ ⟩ | 2 = | P ^ λ | ψ ⟩ | 2 {\displaystyle P_{\psi }(\lambda )=\sum _{j}|\langle \lambda ^{(j)}|\psi \rangle |^{2}=|{\widehat {P}}_{\lambda }|\psi \rangle |^{2}} where P ^ λ = ∑ j | λ ( j ) ⟩ ⟨ λ ( j ) | {\textstyle {\widehat {P}}_{\lambda }=\sum _{j}|\lambda ^{(j)}\rangle \langle \lambda ^{(j)}|} is the projection operator onto the subspace spanned by { | λ ( j ) ⟩ } {\textstyle \{|\lambda ^{(j)}\rangle \}} . The equality follows from the orthonormal nature of { | ϕ i ⟩ } {\textstyle \{|\phi _{i}\rangle \}} . Hence the components { ⟨ ϕ i | ψ ⟩ } {\textstyle \{\langle \phi _{i}|\psi \rangle \}} , which specify the state of the quantum mechanical system, have magnitudes whose squares give the probabilities of measuring the respective | ϕ i ⟩ {\textstyle |\phi _{i}\rangle } states. ==== Physical significance of relative phase ==== While the relative phase has observable effects in experiments, the global phase of the system is experimentally indistinguishable.
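The distinction between global and relative phase can be illustrated in a two-state system; a minimal sketch using the Pauli matrix σx as the probe observable (the specific states and phase angle are illustrative):

```python
import numpy as np

# Sketch: the global phase of a state is unobservable, but the relative
# phase between superposed components is not.  sigma_x probes the latter.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def expectation(psi, op):
    return np.real(psi.conj() @ op @ psi)

psi = np.array([1, 1j]) / np.sqrt(2)        # relative phase pi/2 between components
psi_global = np.exp(1j * 0.7) * psi         # the same state times a global phase

# Probabilities and expectation values do not see the global phase...
assert np.allclose(np.abs(psi) ** 2, np.abs(psi_global) ** 2)
assert np.isclose(expectation(psi, sigma_x), expectation(psi_global, sigma_x))

# ...but changing the relative phase changes <sigma_x>: 0 versus 1 here.
psi_rel = np.array([1, 1]) / np.sqrt(2)     # relative phase 0
print(expectation(psi, sigma_x), expectation(psi_rel, sigma_x))
```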
For example, for a particle in a superposition of two states, the global phase of the particle cannot be distinguished by finding the expectation value of an observable or the probabilities of observing different states, but relative phases can affect the expectation values of observables. While the overall phase of the system is considered to be arbitrary, the relative phase for each state | ϕ i ⟩ {\textstyle |\phi _{i}\rangle } of a prepared state in superposition can be determined based on the physical meaning of the prepared state and its symmetry. For example, the construction of spin states along the x direction as a superposition of spin states along the z direction can be done by applying the appropriate rotation transformation to the spin-along-z states, which provides the appropriate phase of the states relative to each other. ==== Application to include spin ==== An example of a finite dimensional Hilbert space can be constructed using the spin eigenkets of an s {\textstyle s} -spin particle, which form a 2 s + 1 {\textstyle 2s+1} dimensional Hilbert space. However, the general wavefunction of a particle that fully describes its state is always from an infinite dimensional Hilbert space, since it involves a tensor product with the Hilbert space relating to the position or momentum of the particle. Nonetheless, the techniques developed for finite dimensional Hilbert spaces are useful, since they can either be treated independently or treated in consideration of the linearity of the tensor product. Since the spin operator for a given s {\textstyle s} -spin particle can be represented as a finite ( 2 s + 1 ) 2 {\textstyle (2s+1)^{2}} matrix which acts on 2 s + 1 {\textstyle 2s+1} independent spin vector components, it is usually preferable to denote spin components using matrix/column/row notation as applicable.
For example, each |sz⟩ is usually identified as a column vector: | s ⟩ ↔ [ 1 0 ⋮ 0 0 ] , | s − 1 ⟩ ↔ [ 0 1 ⋮ 0 0 ] , … , | − ( s − 1 ) ⟩ ↔ [ 0 0 ⋮ 1 0 ] , | − s ⟩ ↔ [ 0 0 ⋮ 0 1 ] {\displaystyle |s\rangle \leftrightarrow {\begin{bmatrix}1\\0\\\vdots \\0\\0\\\end{bmatrix}}\,,\quad |s-1\rangle \leftrightarrow {\begin{bmatrix}0\\1\\\vdots \\0\\0\\\end{bmatrix}}\,,\ldots \,,\quad |-(s-1)\rangle \leftrightarrow {\begin{bmatrix}0\\0\\\vdots \\1\\0\\\end{bmatrix}}\,,\quad |-s\rangle \leftrightarrow {\begin{bmatrix}0\\0\\\vdots \\0\\1\\\end{bmatrix}}} but it is a common abuse of notation, because the kets |sz⟩ are not synonymous or equal to the column vectors. Column vectors simply provide a convenient way to express the spin components. Corresponding to the notation, the z-component spin operator can be written as: 1 ℏ S ^ z = [ s 0 ⋯ 0 0 0 s − 1 ⋯ 0 0 ⋮ ⋮ ⋱ ⋮ ⋮ 0 0 ⋯ − ( s − 1 ) 0 0 0 ⋯ 0 − s ] {\displaystyle {\frac {1}{\hbar }}{\hat {S}}_{z}={\begin{bmatrix}s&0&\cdots &0&0\\0&s-1&\cdots &0&0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &-(s-1)&0\\0&0&\cdots &0&-s\end{bmatrix}}} since the eigenvectors of z-component spin operator are the above column vectors, with eigenvalues being the corresponding spin quantum numbers. Corresponding to the notation, a vector from such a finite dimensional Hilbert space is hence represented as: | ϕ ⟩ = [ ⟨ s | ϕ ⟩ ⟨ s − 1 | ϕ ⟩ ⋮ ⟨ − ( s − 1 ) | ϕ ⟩ ⟨ − s | ϕ ⟩ ] = [ ε s ε s − 1 ⋮ ε − s + 1 ε − s ] {\displaystyle |\phi \rangle ={\begin{bmatrix}\langle s|\phi \rangle \\\langle s-1|\phi \rangle \\\vdots \\\langle -(s-1)|\phi \rangle \\\langle -s|\phi \rangle \\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{s}\\\varepsilon _{s-1}\\\vdots \\\varepsilon _{-s+1}\\\varepsilon _{-s}\\\end{bmatrix}}} where { ε i } {\textstyle \{\varepsilon _{i}\}} are corresponding complex numbers. 
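The diagonal z-spin matrix above can be built and checked with a brief numerical sketch (ħ factored out as in the display; the spin value 3/2 and the helper name `Sz_over_hbar` are illustrative):

```python
import numpy as np

# Sketch: the diagonal matrix for S_z/hbar acting on the 2s+1 spin components.
def Sz_over_hbar(s):
    m = np.arange(s, -s - 1, -1)       # eigenvalues s, s-1, ..., -s
    return np.diag(m)

Sz = Sz_over_hbar(1.5)                  # spin-3/2 gives a 4x4 matrix
assert Sz.shape == (4, 4)
assert np.allclose(np.diag(Sz), [1.5, 0.5, -0.5, -1.5])

# The standard-basis column vectors are eigenvectors, with the spin
# quantum numbers as eigenvalues:
e0 = np.zeros(4)
e0[0] = 1.0
assert np.allclose(Sz @ e0, 1.5 * e0)
```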
In the following discussion involving spin, the complete wavefunction is considered as a tensor product of spin states from finite dimensional Hilbert spaces and the wavefunction which was developed previously. The basis kets for this Hilbert space are hence: | r , s z ⟩ = | r ⟩ | s z ⟩ {\displaystyle |\mathbf {r} ,s_{z}\rangle =|\mathbf {r} \rangle |s_{z}\rangle } . === One-particle states in 3d position space === The position-space wave function of a single particle without spin in three spatial dimensions is similar to the case of one spatial dimension above: Ψ ( r , t ) {\displaystyle \Psi (\mathbf {r} ,t)} where r is the position vector in three-dimensional space, and t is time. As always Ψ(r, t) is a complex-valued function of real variables. As a single vector in Dirac notation | Ψ ( t ) ⟩ = ∫ d 3 r Ψ ( r , t ) | r ⟩ {\displaystyle |\Psi (t)\rangle =\int d^{3}\!\mathbf {r} \,\Psi (\mathbf {r} ,t)\,|\mathbf {r} \rangle } All the previous remarks on inner products, momentum space wave functions, Fourier transforms, and so on extend to higher dimensions. For a particle with spin, ignoring the position degrees of freedom, the wave function is a function of spin only (time is a parameter); ξ ( s z , t ) {\displaystyle \xi (s_{z},t)} where sz is the spin projection quantum number along the z axis. (The z axis is an arbitrary choice; other axes can be used instead if the wave function is transformed appropriately, see below.) The sz parameter, unlike r and t, is a discrete variable. For example, for a spin-1/2 particle, sz can only be +1/2 or −1/2, and not any other value. (In general, for spin s, sz can be s, s − 1, ..., −s + 1, −s). Inserting each quantum number gives a complex valued function of space and time; there are 2s + 1 of them.
These can be arranged into a column vector ξ = [ ξ ( s , t ) ξ ( s − 1 , t ) ⋮ ξ ( − ( s − 1 ) , t ) ξ ( − s , t ) ] = ξ ( s , t ) [ 1 0 ⋮ 0 0 ] + ξ ( s − 1 , t ) [ 0 1 ⋮ 0 0 ] + ⋯ + ξ ( − ( s − 1 ) , t ) [ 0 0 ⋮ 1 0 ] + ξ ( − s , t ) [ 0 0 ⋮ 0 1 ] {\displaystyle \xi ={\begin{bmatrix}\xi (s,t)\\\xi (s-1,t)\\\vdots \\\xi (-(s-1),t)\\\xi (-s,t)\\\end{bmatrix}}=\xi (s,t){\begin{bmatrix}1\\0\\\vdots \\0\\0\\\end{bmatrix}}+\xi (s-1,t){\begin{bmatrix}0\\1\\\vdots \\0\\0\\\end{bmatrix}}+\cdots +\xi (-(s-1),t){\begin{bmatrix}0\\0\\\vdots \\1\\0\\\end{bmatrix}}+\xi (-s,t){\begin{bmatrix}0\\0\\\vdots \\0\\1\\\end{bmatrix}}} In bra–ket notation, these easily arrange into the components of a vector: | ξ ( t ) ⟩ = ∑ s z = − s s ξ ( s z , t ) | s z ⟩ {\displaystyle |\xi (t)\rangle =\sum _{s_{z}=-s}^{s}\xi (s_{z},t)\,|s_{z}\rangle } The entire vector ξ is a solution of the Schrödinger equation (with a suitable Hamiltonian), which unfolds to a coupled system of 2s + 1 ordinary differential equations with solutions ξ(s, t), ξ(s − 1, t), ..., ξ(−s, t). The term "spin function" instead of "wave function" is used by some authors. This contrasts with the solutions to position space wave functions, the position coordinates being continuous degrees of freedom, because then the Schrödinger equation does take the form of a wave equation. More generally, for a particle in 3d with any spin, the wave function can be written in "position–spin space" as: Ψ ( r , s z , t ) {\displaystyle \Psi (\mathbf {r} ,s_{z},t)} and these can also be arranged into a column vector Ψ ( r , t ) = [ Ψ ( r , s , t ) Ψ ( r , s − 1 , t ) ⋮ Ψ ( r , − ( s − 1 ) , t ) Ψ ( r , − s , t ) ] {\displaystyle \Psi (\mathbf {r} ,t)={\begin{bmatrix}\Psi (\mathbf {r} ,s,t)\\\Psi (\mathbf {r} ,s-1,t)\\\vdots \\\Psi (\mathbf {r} ,-(s-1),t)\\\Psi (\mathbf {r} ,-s,t)\\\end{bmatrix}}} in which the spin dependence is placed in indexing the entries, and the wave function is a complex vector-valued function of space and time only.
All values of the wave function, not only for discrete but continuous variables also, collect into a single vector | Ψ ( t ) ⟩ = ∑ s z ∫ d 3 r Ψ ( r , s z , t ) | r , s z ⟩ {\displaystyle |\Psi (t)\rangle =\sum _{s_{z}}\int d^{3}\!\mathbf {r} \,\Psi (\mathbf {r} ,s_{z},t)\,|\mathbf {r} ,s_{z}\rangle } For a single particle, the tensor product ⊗ of its position state vector |ψ⟩ and spin state vector |ξ⟩ gives the composite position-spin state vector | ψ ( t ) ⟩ ⊗ | ξ ( t ) ⟩ = ∑ s z ∫ d 3 r ψ ( r , t ) ξ ( s z , t ) | r ⟩ ⊗ | s z ⟩ {\displaystyle |\psi (t)\rangle \!\otimes \!|\xi (t)\rangle =\sum _{s_{z}}\int d^{3}\!\mathbf {r} \,\psi (\mathbf {r} ,t)\,\xi (s_{z},t)\,|\mathbf {r} \rangle \!\otimes \!|s_{z}\rangle } with the identifications | Ψ ( t ) ⟩ = | ψ ( t ) ⟩ ⊗ | ξ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle =|\psi (t)\rangle \!\otimes \!|\xi (t)\rangle } Ψ ( r , s z , t ) = ψ ( r , t ) ξ ( s z , t ) {\displaystyle \Psi (\mathbf {r} ,s_{z},t)=\psi (\mathbf {r} ,t)\,\xi (s_{z},t)} | r , s z ⟩ = | r ⟩ ⊗ | s z ⟩ {\displaystyle |\mathbf {r} ,s_{z}\rangle =|\mathbf {r} \rangle \!\otimes \!|s_{z}\rangle } The tensor product factorization of energy eigenstates is always possible if the orbital and spin angular momenta of the particle are separable in the Hamiltonian operator underlying the system's dynamics (in other words, the Hamiltonian can be split into the sum of orbital and spin terms). The time dependence can be placed in either factor, and time evolution of each can be studied separately. Under such Hamiltonians, any tensor product state evolves into another tensor product state, which essentially means any unentangled state remains unentangled under time evolution. This is said to happen when there is no physical interaction between the states of the tensor products. 
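The product-state identification Ψ(r, sz, t) = ψ(r, t) ξ(sz, t) can be sketched numerically on a discretized one-dimensional grid (the grid, the Gaussian spatial profile, and the spinor values are illustrative assumptions, not from the article):

```python
import numpy as np

# Sketch: a spatial wavefunction on a 1d grid tensored with a spin-1/2 spinor.
x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
psi_x = np.exp(-x**2 / 2).astype(complex)
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)    # normalize spatial part

xi = np.array([1, 1j]) / np.sqrt(2)                # normalized spinor
Psi = np.kron(psi_x, xi)                           # Psi(r, s_z) = psi(r) xi(s_z)

# Sum over spin and integrate over position: the product state stays normalized.
norm = np.sum(np.abs(Psi)**2) * dx
assert np.isclose(norm, 1.0)
```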
In the case of non-separable Hamiltonians, energy eigenstates are some linear combination of such states, which need not be factorizable; examples include a particle in a magnetic field, and spin–orbit coupling. The preceding discussion is not limited to spin as a discrete variable; the total angular momentum J may also be used. Other discrete degrees of freedom, like isospin, can be expressed similarly to the case of spin above. === Many-particle states in 3d position space === If there are many particles, in general there is only one wave function, not a separate wave function for each particle. The fact that one wave function describes many particles is what makes quantum entanglement and the EPR paradox possible. The position-space wave function for N particles is written: Ψ ( r 1 , r 2 ⋯ r N , t ) {\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2}\cdots \mathbf {r} _{N},t)} where ri is the position of the i-th particle in three-dimensional space, and t is time. Altogether, this is a complex-valued function of 3N + 1 real variables. In quantum mechanics there is a fundamental distinction between identical particles and distinguishable particles. For example, any two electrons are identical and fundamentally indistinguishable from each other; the laws of physics make it impossible to "stamp an identification number" on a certain electron to keep track of it. This translates to a requirement on the wave function for a system of identical particles: Ψ ( … r a , … , r b , … ) = ± Ψ ( … r b , … , r a , … ) {\displaystyle \Psi \left(\ldots \mathbf {r} _{a},\ldots ,\mathbf {r} _{b},\ldots \right)=\pm \Psi \left(\ldots \mathbf {r} _{b},\ldots ,\mathbf {r} _{a},\ldots \right)} where the + sign occurs if the particles are all bosons and − sign if they are all fermions. In other words, the wave function is either totally symmetric in the positions of bosons, or totally antisymmetric in the positions of fermions.
The physical interchange of particles corresponds to mathematically switching arguments in the wave function. The antisymmetry feature of fermionic wave functions leads to the Pauli principle. Generally, bosonic and fermionic symmetry requirements are the manifestation of particle statistics and are present in other quantum state formalisms. For N distinguishable particles (no two being identical, i.e. no two having the same set of quantum numbers), there is no requirement for the wave function to be either symmetric or antisymmetric. For a collection of particles, some identical with coordinates r1, r2, ... and others distinguishable x1, x2, ... (not identical with each other, and not identical to the aforementioned identical particles), the wave function is symmetric or antisymmetric in the identical particle coordinates ri only: Ψ ( … r a , … , r b , … , x 1 , x 2 , … ) = ± Ψ ( … r b , … , r a , … , x 1 , x 2 , … ) {\displaystyle \Psi \left(\ldots \mathbf {r} _{a},\ldots ,\mathbf {r} _{b},\ldots ,\mathbf {x} _{1},\mathbf {x} _{2},\ldots \right)=\pm \Psi \left(\ldots \mathbf {r} _{b},\ldots ,\mathbf {r} _{a},\ldots ,\mathbf {x} _{1},\mathbf {x} _{2},\ldots \right)} Again, there is no symmetry requirement for the distinguishable particle coordinates xi. The wave function for N particles each with spin is the complex-valued function Ψ ( r 1 , r 2 ⋯ r N , s z 1 , s z 2 ⋯ s z N , t ) {\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2}\cdots \mathbf {r} _{N},s_{z\,1},s_{z\,2}\cdots s_{z\,N},t)} Accumulating all these components into a single vector, | Ψ ⟩ = ∑ s z 1 , … , s z N ⏞ discrete labels ∫ R N d 3 r N ⋯ ∫ R 1 d 3 r 1 ⏞ continuous labels Ψ ( r 1 , … , r N , s z 1 , … , s z N ) ⏟ wave function (component of state vector along basis state) | r 1 , … , r N , s z 1 , … , s z N ⟩ ⏟ basis state (basis ket) . 
{\displaystyle |\Psi \rangle =\overbrace {\sum _{s_{z\,1},\ldots ,s_{z\,N}}} ^{\text{discrete labels}}\overbrace {\int _{R_{N}}d^{3}\mathbf {r} _{N}\cdots \int _{R_{1}}d^{3}\mathbf {r} _{1}} ^{\text{continuous labels}}\;\underbrace {{\Psi }(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N},s_{z\,1},\ldots ,s_{z\,N})} _{\begin{array}{c}{\text{wave function (component of }}\\{\text{ state vector along basis state)}}\end{array}}\;\underbrace {|\mathbf {r} _{1},\ldots ,\mathbf {r} _{N},s_{z\,1},\ldots ,s_{z\,N}\rangle } _{\text{basis state (basis ket)}}\,.} For identical particles, symmetry requirements apply to both position and spin arguments of the wave function so it has the overall correct symmetry. The formulae for the inner products are integrals over all coordinates or momenta and sums over all spin quantum numbers. For the general case of N particles with spin in 3-d, ( Ψ 1 , Ψ 2 ) = ∑ s z N ⋯ ∑ s z 2 ∑ s z 1 ∫ a l l s p a c e d 3 r 1 ∫ a l l s p a c e d 3 r 2 ⋯ ∫ a l l s p a c e d 3 r N Ψ 1 ∗ ( r 1 ⋯ r N , s z 1 ⋯ s z N , t ) Ψ 2 ( r 1 ⋯ r N , s z 1 ⋯ s z N , t ) {\displaystyle (\Psi _{1},\Psi _{2})=\sum _{s_{z\,N}}\cdots \sum _{s_{z\,2}}\sum _{s_{z\,1}}\int \limits _{\mathrm {all\,space} }d^{3}\mathbf {r} _{1}\int \limits _{\mathrm {all\,space} }d^{3}\mathbf {r} _{2}\cdots \int \limits _{\mathrm {all\,space} }d^{3}\mathbf {r} _{N}\Psi _{1}^{*}\left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)\Psi _{2}\left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)} this is altogether N three-dimensional volume integrals and N sums over the spins. The differential volume elements d3ri are also written "dVi" or "dxi dyi dzi". The multidimensional Fourier transforms of the position or position–spin space wave functions yields momentum or momentum–spin space wave functions. 
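The exchange-symmetry requirement for identical particles can be sketched numerically. Here `Psi[a, b]` stands for a two-particle amplitude sampled on a grid, so exchanging the particles transposes the array (the random sample is purely illustrative):

```python
import numpy as np

# Sketch: (anti)symmetrizing a sampled two-particle wave function.
rng = np.random.default_rng(0)
Psi = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

Psi_boson = (Psi + Psi.T) / 2        # totally symmetric under exchange
Psi_fermi = (Psi - Psi.T) / 2        # totally antisymmetric under exchange

assert np.allclose(Psi_boson, Psi_boson.T)
assert np.allclose(Psi_fermi, -Psi_fermi.T)
# Pauli principle: the antisymmetric amplitude vanishes when both particles
# share the same position argument.
assert np.allclose(np.diag(Psi_fermi), 0)
```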
==== Probability interpretation ==== For the general case of N particles with spin in 3d, if Ψ is interpreted as a probability amplitude, the probability density is ρ ( r 1 ⋯ r N , s z 1 ⋯ s z N , t ) = | Ψ ( r 1 ⋯ r N , s z 1 ⋯ s z N , t ) | 2 {\displaystyle \rho \left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)=\left|\Psi \left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)\right|^{2}} and the probability that particle 1 is in region R1 with spin sz1 = m1 and particle 2 is in region R2 with spin sz2 = m2 etc. at time t is the integral of the probability density over these regions, evaluated at these spin numbers: P r 1 ∈ R 1 , s z 1 = m 1 , … , r N ∈ R N , s z N = m N ( t ) = ∫ R 1 d 3 r 1 ∫ R 2 d 3 r 2 ⋯ ∫ R N d 3 r N | Ψ ( r 1 ⋯ r N , m 1 ⋯ m N , t ) | 2 {\displaystyle P_{\mathbf {r} _{1}\in R_{1},s_{z\,1}=m_{1},\ldots ,\mathbf {r} _{N}\in R_{N},s_{z\,N}=m_{N}}(t)=\int _{R_{1}}d^{3}\mathbf {r} _{1}\int _{R_{2}}d^{3}\mathbf {r} _{2}\cdots \int _{R_{N}}d^{3}\mathbf {r} _{N}\left|\Psi \left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},m_{1}\cdots m_{N},t\right)\right|^{2}} ==== Physical significance of phase ==== In non-relativistic quantum mechanics, it can be shown using Schrödinger's time dependent wave equation that the equation: ∂ ρ ∂ t + ∇ ⋅ J = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0} is satisfied, where ρ ( x , t ) = | ψ ( x , t ) | 2 {\textstyle \rho (\mathbf {x} ,t)=|\psi (\mathbf {x} ,t)|^{2}} is the probability density and J ( x , t ) = ℏ 2 i m ( ψ ∗ ∇ ψ − ψ ∇ ψ ∗ ) = ℏ m Im ( ψ ∗ ∇ ψ ) {\textstyle \mathbf {J} (\mathbf {x} ,t)={\frac {\hbar }{2im}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {\hbar }{m}}{\text{Im}}(\psi ^{*}\nabla \psi )} is known as the probability flux, in accordance with the continuity equation form of the above equation.
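The probability flux just defined can be evaluated numerically for a plane wave (a hedged sketch with ħ = m = 1; the grid and wavenumber are illustrative choices):

```python
import numpy as np

# Sketch: probability flux J = (hbar/m) Im(psi* dpsi/dx) for exp(ikx),
# evaluated with finite differences (hbar = m = 1).
k = 2.0
x = np.linspace(0.0, 10.0, 2001)
psi = np.exp(1j * k * x)

J = np.imag(np.conj(psi) * np.gradient(psi, x))   # hbar/m = 1

# For exp(ikx) the density is rho = 1 and the phase is S = k x,
# so J = rho * (dS/dx)/m = k everywhere (away from grid edges).
assert np.allclose(J[10:-10], k, atol=1e-3)
```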
Using the following expression for the wavefunction: ψ ( x , t ) = ρ ( x , t ) exp ⁡ i S ( x , t ) ℏ {\displaystyle \psi (\mathbf {x} ,t)={\sqrt {\rho (\mathbf {x} ,t)}}\exp {\frac {iS(\mathbf {x} ,t)}{\hbar }}} where ρ ( x , t ) = | ψ ( x , t ) | 2 {\textstyle \rho (\mathbf {x} ,t)=|\psi (\mathbf {x} ,t)|^{2}} is the probability density and S ( x , t ) {\textstyle S(\mathbf {x} ,t)} is the phase of the wavefunction, it can be shown that: J ( x , t ) = ρ ∇ S m {\displaystyle \mathbf {J} (\mathbf {x} ,t)={\frac {\rho \nabla S}{m}}} Hence the spatial variation of the phase characterizes the probability flux. In classical analogy, for J = ρ v {\textstyle \mathbf {J} =\rho \mathbf {v} } , the quantity ∇ S m {\textstyle {\frac {\nabla S}{m}}} is analogous to velocity. Note that this does not imply a literal interpretation of ∇ S m {\textstyle {\frac {\nabla S}{m}}} as velocity, since velocity and position cannot be simultaneously determined as per the uncertainty principle. Substituting this form of the wavefunction in Schrödinger's time dependent wave equation, and taking the classical limit ℏ | ∇ 2 S | ≪ | ∇ S | 2 {\textstyle \hbar |\nabla ^{2}S|\ll |\nabla S|^{2}} , gives: 1 2 m | ∇ S ( x , t ) | 2 + V ( x ) + ∂ S ∂ t = 0 {\displaystyle {\frac {1}{2m}}|\nabla S(\mathbf {x} ,t)|^{2}+V(\mathbf {x} )+{\frac {\partial S}{\partial t}}=0} which is analogous to the Hamilton–Jacobi equation from classical mechanics. This interpretation fits with Hamilton–Jacobi theory, in which P class. = ∇ S {\textstyle \mathbf {P} _{\text{class.}}=\nabla S} , where S is Hamilton's principal function. == Time dependence == For systems in time-independent potentials, the wave function can always be written as a function of the degrees of freedom multiplied by a time-dependent phase factor, the form of which is given by the Schrödinger equation.
For N particles, considering their positions only and suppressing other degrees of freedom, Ψ ( r 1 , r 2 , … , r N , t ) = e − i E t / ℏ ψ ( r 1 , r 2 , … , r N ) , {\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t)=e^{-iEt/\hbar }\,\psi (\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N})\,,} where E is the energy eigenvalue of the system corresponding to the eigenstate Ψ. Wave functions of this form are called stationary states. The time dependence of the quantum state and the operators can be placed according to unitary transformations on the operators and states. For any quantum state |Ψ⟩ and operator O, in the Schrödinger picture |Ψ(t)⟩ changes with time according to the Schrödinger equation while O is constant. In the Heisenberg picture it is the other way round: |Ψ⟩ is constant while O(t) evolves with time according to the Heisenberg equation of motion. The Dirac (or interaction) picture is intermediate: time dependence is placed in both operators and states, which evolve according to equations of motion. It is useful primarily in computing S-matrix elements. == Non-relativistic examples == The following are solutions to the Schrödinger equation for one non-relativistic spinless particle. === Finite potential barrier === One of the most prominent features of wave mechanics is the possibility for a particle to reach a location with a prohibitive (in classical mechanics) force potential. A common model is the "potential barrier"; the one-dimensional case has the potential V ( x ) = { V 0 | x | < a 0 | x | ≥ a {\displaystyle V(x)={\begin{cases}V_{0}&|x|<a\\0&|x|\geq a\end{cases}}} and the steady-state solutions to the wave equation have the form (for some constants k, κ) Ψ ( x ) = { A r e i k x + A l e − i k x x < − a , B r e κ x + B l e − κ x | x | ≤ a , C r e i k x + C l e − i k x x > a .
{\displaystyle \Psi (x)={\begin{cases}A_{\mathrm {r} }e^{ikx}+A_{\mathrm {l} }e^{-ikx}&x<-a,\\B_{\mathrm {r} }e^{\kappa x}+B_{\mathrm {l} }e^{-\kappa x}&|x|\leq a,\\C_{\mathrm {r} }e^{ikx}+C_{\mathrm {l} }e^{-ikx}&x>a.\end{cases}}} Note that these wave functions are not normalized; see scattering theory for discussion. The standard interpretation of this is as a stream of particles being fired at the step from the left (the direction of negative x): setting Ar = 1 corresponds to firing particles singly; the terms containing Ar and Cr signify motion to the right, while Al and Cl – to the left. Under this beam interpretation, put Cl = 0 since no particles are coming from the right. By applying the continuity of wave functions and their derivatives at the boundaries, it is hence possible to determine the constants above. In a semiconductor crystallite whose radius is smaller than the size of its exciton Bohr radius, the excitons are squeezed, leading to quantum confinement. The energy levels can then be modeled using the particle in a box model in which the energy of different states is dependent on the length of the box. === Quantum harmonic oscillator === The wave functions for the quantum harmonic oscillator can be expressed in terms of Hermite polynomials Hn, they are Ψ n ( x ) = 1 2 n n ! ⋅ ( m ω π ℏ ) 1 / 4 ⋅ e − m ω x 2 2 ℏ ⋅ H n ( m ω ℏ x ) {\displaystyle \Psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}{\left({\sqrt {\frac {m\omega }{\hbar }}}x\right)}} where n = 0, 1, 2, .... === Hydrogen atom === The wave functions of an electron in a Hydrogen atom are expressed in terms of spherical harmonics and generalized Laguerre polynomials (these are defined differently by different authors—see main article on them and the hydrogen atom). 
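The boundary-matching procedure for the potential barrier described above can be sketched numerically: with Ar = 1 and Cl = 0, continuity of Ψ and Ψ′ at x = ±a gives four linear equations for Al, Br, Bl, Cr (a hedged sketch with ħ = m = 1 and illustrative values V0 = 2, E = 1, a = 1):

```python
import numpy as np

# Sketch: solve the matching conditions for tunnelling through the barrier
# (hbar = m = 1, E < V0 so kappa is real; all numbers are illustrative).
V0, E, a = 2.0, 1.0, 1.0
k = np.sqrt(2 * E)
kappa = np.sqrt(2 * (V0 - E))

# Unknowns: [A_l, B_r, B_l, C_r]; rows are continuity of Psi and Psi'
# at x = -a (first two rows) and at x = +a (last two rows).
M = np.array([
    [np.exp(1j*k*a),        -np.exp(-kappa*a),       -np.exp(kappa*a),        0],
    [-1j*k*np.exp(1j*k*a),  -kappa*np.exp(-kappa*a),  kappa*np.exp(kappa*a),  0],
    [0,                      np.exp(kappa*a),         np.exp(-kappa*a),      -np.exp(1j*k*a)],
    [0,                      kappa*np.exp(kappa*a),  -kappa*np.exp(-kappa*a), -1j*k*np.exp(1j*k*a)],
], dtype=complex)
b = np.array([-np.exp(-1j*k*a), -1j*k*np.exp(-1j*k*a), 0, 0])

A_l, B_r, B_l, C_r = np.linalg.solve(M, b)
R, T = abs(A_l)**2, abs(C_r)**2
assert np.isclose(R + T, 1.0)    # flux conservation
assert 0 < T < 1                 # nonzero tunnelling through the barrier
```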
It is convenient to use spherical coordinates, and the wave function can be separated into functions of each coordinate, Ψ n ℓ m ( r , θ , ϕ ) = R ( r ) Y ℓ m ( θ , ϕ ) {\displaystyle \Psi _{n\ell m}(r,\theta ,\phi )=R(r)\,\,Y_{\ell }^{m}\!(\theta ,\phi )} where R are radial functions and Ymℓ(θ, φ) are spherical harmonics of degree ℓ and order m. This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximative methods. The family of solutions is: Ψ n ℓ m ( r , θ , ϕ ) = ( 2 n a 0 ) 3 ( n − ℓ − 1 ) ! 2 n [ ( n + ℓ ) ! ] e − r / n a 0 ( 2 r n a 0 ) ℓ L n − ℓ − 1 2 ℓ + 1 ( 2 r n a 0 ) ⋅ Y ℓ m ( θ , ϕ ) {\displaystyle \Psi _{n\ell m}(r,\theta ,\phi )={\sqrt {{\left({\frac {2}{na_{0}}}\right)}^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\phi )} where a0 = 4πε0ħ2/mee2 is the Bohr radius, L2ℓ + 1n − ℓ − 1 are the generalized Laguerre polynomials of degree n − ℓ − 1, n = 1, 2, ... is the principal quantum number, ℓ = 0, 1, ..., n − 1 the azimuthal quantum number, m = −ℓ, −ℓ + 1, ..., ℓ − 1, ℓ the magnetic quantum number. Hydrogen-like atoms have very similar solutions. This solution does not take into account the spin of the electron. In the figure of the hydrogen orbitals, the 19 sub-images are images of wave functions in position space (their norm squared). The wave functions represent the abstract state characterized by the triple of quantum numbers (n, ℓ, m), in the lower right of each image. These are the principal quantum number, the orbital angular momentum quantum number, and the magnetic quantum number. Together with one spin-projection quantum number of the electron, this is a complete set of observables. The figure can serve to illustrate some further properties of the function spaces of wave functions. In this case, the wave functions are square integrable. 
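The square-integrability just noted can be checked numerically for the radial factors of the hydrogen solutions above. The sketch below sets a0 = 1 and evaluates the generalized Laguerre polynomials by the standard three-term recurrence (the grid, cutoff radius, and helper names are illustrative assumptions):

```python
import numpy as np
from math import factorial

# Sketch: check that integral of R_nl(r)^2 r^2 dr = 1 (with a0 = 1).
def genlaguerre(kdeg, alpha, x):
    # Generalized Laguerre polynomial L_k^alpha(x) via the standard recurrence.
    Lm, L = np.ones_like(x), 1 + alpha - x
    if kdeg == 0:
        return Lm
    for j in range(1, kdeg):
        Lm, L = L, ((2 * j + 1 + alpha - x) * L - (j + alpha) * Lm) / (j + 1)
    return L

def R_nl(n, l, r):
    rho = 2 * r / n
    norm = np.sqrt((2 / n) ** 3 * factorial(n - l - 1) / (2 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2) * rho ** l * genlaguerre(n - l - 1, 2 * l + 1, rho)

trapezoid = np.trapezoid if hasattr(np, "trapezoid") else np.trapz
r = np.linspace(1e-8, 80.0, 40001)
norms = []
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    norms.append(trapezoid(R_nl(n, l, r) ** 2 * r ** 2, r))
assert all(abs(v - 1.0) < 1e-4 for v in norms)
```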
One can initially take the function space as the space of square integrable functions, usually denoted L2. The displayed functions are solutions to the Schrödinger equation. Obviously, not every function in L2 satisfies the Schrödinger equation for the hydrogen atom. The function space is thus a subspace of L2. The displayed functions form part of a basis for the function space. To each triple (n, ℓ, m), there corresponds a basis wave function. If spin is taken into account, there are two basis functions for each triple. The function space thus has a countable basis. The basis functions are mutually orthonormal. == Wave functions and function spaces == The concept of function spaces enters naturally in the discussion about wave functions. A function space is a set of functions, usually with some defining requirements on the functions (in the present case that they are square integrable), sometimes with an algebraic structure on the set (in the present case a vector space structure with an inner product), together with a topology on the set. The latter will be used only sparingly here; it is needed only to obtain a precise definition of what it means for a subset of a function space to be closed. It will be concluded below that the function space of wave functions is a Hilbert space. This observation is the foundation of the predominant mathematical formulation of quantum mechanics. === Vector space structure === A wave function is an element of a function space partly characterized by the following concrete and abstract descriptions. The Schrödinger equation is linear. This means that the solutions to it, wave functions, can be added and multiplied by scalars to form a new solution. The set of solutions to the Schrödinger equation is a vector space. The superposition principle of quantum mechanics: if Ψ and Φ are two states in the abstract space of states of a quantum mechanical system, and a and b are any two complex numbers, then aΨ + bΦ is a valid state as well.
(Whether the null vector counts as a valid state ("no system present") is a matter of definition. The null vector does not at any rate describe the vacuum state in quantum field theory.) The set of allowable states is a vector space. This similarity is of course not accidental. There are also distinctions between the spaces to keep in mind. === Representations === Basic states are characterized by a set of quantum numbers. This is a set of eigenvalues of a maximal set of commuting observables. Physical observables are represented by linear operators, also called observables, on the vector space. Maximality means that no further algebraically independent observables that commute with the ones already present can be added to the set. A choice of such a set may be called a choice of representation. It is a postulate of quantum mechanics that a physically observable quantity of a system, such as position, momentum, or spin, is represented by a linear Hermitian operator on the state space. The possible outcomes of measurement of the quantity are the eigenvalues of the operator. At a deeper level, most observables, perhaps all, arise as generators of symmetries. The physical interpretation is that such a set represents what can – in theory – simultaneously be measured with arbitrary precision. The Heisenberg uncertainty relation prohibits simultaneous exact measurements of two non-commuting observables. The set is non-unique. It may, for a one-particle system, for example, be position and spin z-projection, (x, Sz), or it may be momentum and spin y-projection, (p, Sy). In this case, the operator corresponding to position (a multiplication operator in the position representation) and the operator corresponding to momentum (a differential operator in the position representation) do not commute. Once a representation is chosen, there is still arbitrariness. It remains to choose a coordinate system.
This may, for example, correspond to a choice of x, y- and z-axis, or a choice of curvilinear coordinates as exemplified by the spherical coordinates used for the hydrogen atomic wave functions. This final choice also fixes a basis in abstract Hilbert space. The basic states are labeled by the quantum numbers corresponding to the maximal set of commuting observables and an appropriate coordinate system. The abstract states are "abstract" only in that an arbitrary choice necessary for a particular explicit description of them is not given. This is the same as saying that no choice of maximal set of commuting observables has been given. This is analogous to a vector space without a specified basis. Wave functions corresponding to a state are accordingly not unique. This non-uniqueness reflects the non-uniqueness in the choice of a maximal set of commuting observables. For a particle with spin in one dimension, to a particular state there correspond two wave functions, Ψ(x, Sz) and Ψ(p, Sy), both describing the same state. For each choice of maximal commuting sets of observables for the abstract state space, there is a corresponding representation that is associated to a function space of wave functions. Between all these different function spaces and the abstract state space, there are one-to-one correspondences (here disregarding normalization and unobservable phase factors), the common denominator here being a particular abstract state. The relationship between the momentum and position space wave functions, for instance, describing the same state is the Fourier transform. Each choice of representation should be thought of as specifying a unique function space in which wave functions corresponding to that choice of representation live. This distinction is best kept, even if one could argue that two such function spaces are mathematically equal, e.g. being the set of square integrable functions.
One can then think of the function spaces as two distinct copies of that set. === Inner product === There is an additional algebraic structure on the vector spaces of wave functions and the abstract state space. Physically, different wave functions are interpreted to overlap to some degree. A system in a state Ψ that does not overlap with a state Φ cannot be found to be in the state Φ upon measurement. But if Φ1, Φ2, … overlap Ψ to some degree, there is a chance that a system described by Ψ will be found upon measurement to be in the states Φ1, Φ2, …. Selection rules are also observed to apply. These are usually formulated in the preservation of some quantum numbers. This means that certain processes allowable from some perspectives (e.g. energy and momentum conservation) do not occur because the initial and final total wave functions do not overlap. Mathematically, it turns out that solutions to the Schrödinger equation for particular potentials are orthogonal in some manner; this is usually described by an integral ∫ Ψ m ∗ Ψ n w d V = δ n m , {\displaystyle \int \Psi _{m}^{*}\Psi _{n}w\,dV=\delta _{nm},} where m, n are (sets of) indices (quantum numbers) labeling different solutions, the strictly positive function w is called a weight function, and δnm is the Kronecker delta. The integration is taken over all of the relevant space. This motivates the introduction of an inner product on the vector space of abstract quantum states, compatible with the mathematical observations above when passing to a representation. It is denoted (Ψ, Φ), or in the Bra–ket notation ⟨Ψ|Φ⟩. It yields a complex number. With the inner product, the function space is an inner product space. The explicit appearance of the inner product (usually an integral or a sum of integrals) depends on the choice of representation, but the complex number (Ψ, Φ) does not. Much of the physical interpretation of quantum mechanics stems from the Born rule.
It states that the probability p of finding upon measurement the state Φ given the system is in the state Ψ is p = | ( Φ , Ψ ) | 2 , {\displaystyle p=|(\Phi ,\Psi )|^{2},} where Φ and Ψ are assumed normalized. Consider a scattering experiment. In quantum field theory, if Φout describes a state in the "distant future" (an "out state") after interactions between scattering particles have ceased, and Ψin an "in state" in the "distant past", then the quantities (Φout, Ψin), with Φout and Ψin varying over complete sets of out states and in states respectively, constitute the S-matrix or scattering matrix. Knowledge of it is, effectively, having solved the theory at hand, at least as far as predictions go. Measurable quantities such as decay rates and scattering cross sections are calculable from the S-matrix. === Hilbert space === The above observations encapsulate the essence of the function spaces of which wave functions are elements. However, the description is not yet complete. There is a further technical requirement on the function space, that of completeness, that allows one to take limits of sequences in the function space, and be ensured that, if the limit exists, it is an element of the function space. A complete inner product space is called a Hilbert space. The property of completeness is crucial in advanced treatments and applications of quantum mechanics. For instance, the existence of projection operators or orthogonal projections relies on the completeness of the space. These projection operators, in turn, are essential for the statement and proof of many useful theorems, e.g. the spectral theorem. It is not very important in introductory quantum mechanics. The space L2 is a Hilbert space, with inner product presented later. The function space of the example of the figure is a subspace of L2. A subspace of a Hilbert space is a Hilbert space if it is closed.
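As a concrete sketch of an orthonormal set in L2, the particle-in-a-box functions √2 sin(nπx) on [0, 1] can be checked against the inner-product integral numerically (the grid resolution and the first four modes are illustrative choices):

```python
import numpy as np

# Sketch: verify (f_m, f_n) = delta_mn for f_n(x) = sqrt(2) sin(n pi x) in L^2([0, 1]).
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def inner(f, g):
    # (f, g) = integral of conj(f) g dx, by the composite trapezoid rule.
    h = np.conj(f) * g
    return (h.sum() - 0.5 * (h[0] + h[-1])) * dx

basis = [np.sqrt(2) * np.sin(n * np.pi * x) for n in range(1, 5)]
G = np.array([[inner(f, g) for g in basis] for f in basis])   # Gram matrix
assert np.allclose(G, np.eye(4), atol=1e-5)
```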
In summary, the set of all possible normalizable wave functions for a system with a particular choice of basis, together with the null vector, constitute a Hilbert space. Not all functions of interest are elements of some Hilbert space, say L2. The most glaring example is the set of functions e^(2πip·x/h). These are plane wave solutions of the Schrödinger equation for a free particle that are not normalizable, hence not in L2. But they are nonetheless fundamental for the description. One can, using them, express functions that are normalizable using wave packets. They are, in a sense, a basis (but not a Hilbert space basis, nor a Hamel basis) in which wave functions of interest can be expressed. There is also the artifact "normalization to a delta function" that is frequently employed for notational convenience, see further down. The delta functions themselves are not square integrable either. The above description of the function space containing the wave functions is mostly mathematically motivated. The function spaces are, due to completeness, very large in a certain sense. Not all functions are realistic descriptions of any physical system. For instance, in the function space L2 one can find the function that takes on the value 0 for all rational numbers and −i for the irrationals in the interval [0, 1]. This is square integrable, but can hardly represent a physical state. === Common Hilbert spaces === While the space of solutions as a whole is a Hilbert space, there are many other Hilbert spaces that commonly occur as ingredients. Square integrable complex-valued functions on the interval [0, 2π]. The set {e^(int)/√(2π), n ∈ Z} is a Hilbert space basis, i.e. a maximal orthonormal set. The Fourier transform takes functions in the above space to elements of l2(Z), the space of square summable functions Z → C. The latter space is a Hilbert space and the Fourier transform is an isomorphism of Hilbert spaces. Its basis is {ei, i ∈ Z} with ei(j) = δij, i, j ∈ Z.
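The correspondence just described between L2([0, 2π]) and l2(Z) can be sketched numerically (a rough check, not part of the article): expand a function in the basis e^(int)/√(2π) and compare ∫|f|² dt with Σ|cn|², which agree by Parseval's identity precisely because the map f ↦ (cn) is an isomorphism of Hilbert spaces.

```python
import numpy as np

# Expand f in the orthonormal basis e^{int}/sqrt(2*pi) on [0, 2*pi] and
# check Parseval's identity: ∫ |f|^2 dt = Σ_n |c_n|^2.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
dt = t[1] - t[0]
f = np.cos(3 * t) + 0.5 * np.sin(t)   # a smooth trial function

def trap(y):
    """Trapezoidal rule on the grid t (works for complex y too)."""
    return np.sum((y[:-1] + y[1:]) / 2.0) * dt

def c(n):
    return trap(f * np.exp(-1j * n * t) / np.sqrt(2.0 * np.pi))

lhs = trap(np.abs(f) ** 2)                       # norm squared in L^2([0, 2*pi])
rhs = sum(abs(c(n)) ** 2 for n in range(-5, 6))  # norm squared in l^2(Z)
print(lhs, rhs)  # both ≈ 1.25*pi ≈ 3.927
```

Only finitely many coefficients are nonzero for this trial function, so truncating the sum at |n| ≤ 5 loses nothing.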
The most basic example of spanning polynomials is in the space of square integrable functions on the interval [–1, 1], for which the Legendre polynomials are a Hilbert space basis (complete orthonormal set). The square integrable functions on the unit sphere S2 form a Hilbert space. The basis functions in this case are the spherical harmonics. The Legendre polynomials are ingredients in the spherical harmonics. Most problems with rotational symmetry will have "the same" (known) solution with respect to that symmetry, so the original problem is reduced to a problem of lower dimensionality. The associated Laguerre polynomials appear in the hydrogenic wave function problem after factoring out the spherical harmonics. These span the Hilbert space of square integrable functions on the semi-infinite interval [0, ∞). More generally, one may consider a unified treatment of all second order polynomial solutions to the Sturm–Liouville equations in the setting of Hilbert space. These include the Legendre and Laguerre polynomials as well as Chebyshev polynomials, Jacobi polynomials and Hermite polynomials. All of these actually appear in physical problems, the latter ones in the harmonic oscillator, and what is otherwise a bewildering maze of properties of special functions becomes an organized body of facts. For this, see Byron & Fuller (1992, Chapter 5). Finite-dimensional Hilbert spaces also occur. The space Cn is a Hilbert space of dimension n. The inner product is the standard inner product on these spaces. In it, the "spin part" of a single particle wave function resides. In the non-relativistic description of an electron one has n = 2 and the total wave function is a solution of the Pauli equation. In the corresponding relativistic treatment, n = 4 and the wave function solves the Dirac equation. With more particles, the situation is more complicated.
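The orthogonality of the Legendre polynomials on [–1, 1] can be verified directly; a small sketch using NumPy's polynomial module (the choice of degrees 3 and 4 is arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre polynomials P_n are orthogonal on [-1, 1]:
# ∫ P_m(x) P_n(x) dx = 2/(2n+1) * delta_mn.
# Gauss-Legendre quadrature with 20 nodes integrates polynomials of
# degree <= 39 exactly, so the check is exact up to round-off.
nodes, weights = legendre.leggauss(20)

def P(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0          # select the degree-n Legendre polynomial
    return legendre.legval(x, coeffs)

def inner(m, n):
    return float(np.sum(weights * P(m, nodes) * P(n, nodes)))

print(inner(3, 3))  # ≈ 2/7 ≈ 0.2857; rescaling gives an orthonormal set
print(inner(3, 4))  # ≈ 0
```

Dividing each Pn by √(2/(2n+1)) turns the orthogonal family into the complete orthonormal set mentioned above.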
One has to employ tensor products and use representation theory of the symmetry groups involved (the rotation group and the Lorentz group respectively) to extract from the tensor product the spaces in which the (total) spin wave functions reside. (Further problems arise in the relativistic case unless the particles are free. See the Bethe–Salpeter equation.) Corresponding remarks apply to the concept of isospin, for which the symmetry group is SU(2). The models of the nuclear forces of the sixties (still useful today, see nuclear force) used the symmetry group SU(3). In this case as well, the part of the wave functions corresponding to the inner symmetries resides in some Cn or subspaces of tensor products of such spaces. In quantum field theory the underlying Hilbert space is Fock space. It is built from free single-particle states, i.e. wave functions when a representation is chosen, and can accommodate any finite, not necessarily constant in time, number of particles. The interesting (or rather the tractable) dynamics lies not in the wave functions but in the field operators that are operators acting on Fock space. Thus the Heisenberg picture is the most common choice (constant states, time varying operators). Due to the infinite-dimensional nature of the system, the appropriate mathematical tools are objects of study in functional analysis. === Simplified description === Not all introductory textbooks take the long route and introduce the full Hilbert space machinery; instead, the focus is on the non-relativistic Schrödinger equation in position representation for certain standard potentials. The following constraints on the wave function are sometimes explicitly formulated for the calculations and physical interpretation to make sense: The wave function must be square integrable. This is motivated by the Copenhagen interpretation of the wave function as a probability amplitude. It must be everywhere continuous and everywhere continuously differentiable.
This is motivated by the form of the Schrödinger equation for most physically reasonable potentials. It is possible to relax these conditions somewhat for special purposes. If these requirements are not met, it is not possible to interpret the wave function as a probability amplitude. Exceptions to the rule of continuous derivatives can arise at points where the potential is infinitely discontinuous. For example, for a particle in a box the derivative of the wave function can be discontinuous at the boundary of the box, where the potential jumps to infinity. This does not alter the structure of the Hilbert space that these particular wave functions inhabit; however, the subspace of L2 consisting of the square-integrable functions satisfying the second requirement is not closed in L2, hence is not a Hilbert space in itself. The functions that do not meet the requirements are still needed for both technical and practical reasons. == More on wave functions and abstract state space == As has been demonstrated, the set of all possible wave functions in some representation for a system constitute an, in general, infinite-dimensional Hilbert space. Due to the multiple possible choices of representation basis, these Hilbert spaces are not unique. One therefore talks about an abstract Hilbert space, state space, where the choice of representation and basis is left undetermined. Specifically, each state is represented as an abstract vector in state space.
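The particle-in-a-box exception noted above can be made concrete in a short numerical sketch (not from the article's sources): the ground-state wave function is square integrable and continuous, but its one-sided derivatives at a wall differ.

```python
import numpy as np

# Infinite square well on [0, L]: psi_1(x) = sqrt(2/L) sin(pi*x/L) inside,
# identically 0 outside.  psi is continuous, but its derivative jumps at the
# walls, where the potential is infinitely discontinuous.
L = 1.0

def psi(x):
    x = np.asarray(x, dtype=float)
    inside = (x >= 0.0) & (x <= L)
    return np.where(inside, np.sqrt(2.0 / L) * np.sin(np.pi * x / L), 0.0)

x = np.linspace(-0.5, 1.5, 200001)
dx = x[1] - x[0]
norm = float(np.sum(np.abs(psi(x)) ** 2) * dx)   # square integrable, ≈ 1

h = 1e-6
slope_inside = float(psi(h) / h)      # ≈ pi*sqrt(2/L): one-sided limit from inside
slope_outside = float(-psi(-h) / h)   # 0: one-sided limit from outside
print(norm, slope_inside, slope_outside)
```

The mismatch between the two one-sided slopes at x = 0 is exactly the kink the text describes; the function itself is still continuous and normalized.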
A quantum state |Ψ⟩ in any representation is generally expressed as a vector | Ψ ⟩ = ∑ α ∫ d m ω Ψ t ( α , ω ) | α , ω ⟩ {\displaystyle |\Psi \rangle =\sum _{\boldsymbol {\alpha }}\int d^{m}\!{\boldsymbol {\omega }}\,\,\Psi _{t}({\boldsymbol {\alpha }},{\boldsymbol {\omega }})\,|{\boldsymbol {\alpha }},{\boldsymbol {\omega }}\rangle } where
|α, ω⟩ are the basis vectors of the chosen representation;
dmω = dω1dω2...dωm is a differential volume element in the continuous degrees of freedom;
Ψ t ( α , ω ) {\displaystyle {\boldsymbol {\Psi }}_{t}({\boldsymbol {\alpha }},{\boldsymbol {\omega }})} is a component of the vector | Ψ ⟩ {\displaystyle |\Psi \rangle } , called the wave function of the system;
α = (α1, α2, ..., αn) are dimensionless discrete quantum numbers;
ω = (ω1, ω2, ..., ωm) are continuous variables (not necessarily dimensionless).
These quantum numbers index the components of the state vector. More, all α are in an n-dimensional set A = A1 × A2 × ... × An where each Ai is the set of allowed values for αi; all ω are in an m-dimensional "volume" Ω ⊆ ℝm where Ω = Ω1 × Ω2 × ... × Ωm and each Ωi ⊆ R is the set of allowed values for ωi, a subset of the real numbers R. For generality n and m are not necessarily equal.
Example: The probability density of finding the system at time t {\displaystyle t} in the state |α, ω⟩ is ρ α , ω ( t ) = | Ψ ( α , ω , t ) | 2 {\displaystyle \rho _{\alpha ,\omega }(t)=|\Psi ({\boldsymbol {\alpha }},{\boldsymbol {\omega }},t)|^{2}} The probability of finding the system with α in some or all possible discrete-variable configurations, D ⊆ A, and ω in some or all possible continuous-variable configurations, C ⊆ Ω, is the sum and integral over the density, P ( t ) = ∑ α ∈ D ∫ C d m ω ρ α , ω ( t ) {\displaystyle P(t)=\sum _{{\boldsymbol {\alpha }}\in D}\int _{C}d^{m}\!{\boldsymbol {\omega }}\,\,\rho _{\alpha ,\omega }(t)} Since the sum of all probabilities must be 1, the normalization condition 1 = ∑ α ∈ A ∫ Ω d m ω ρ α , ω ( t ) {\displaystyle 1=\sum _{{\boldsymbol {\alpha }}\in A}\int _{\Omega }d^{m}\!{\boldsymbol {\omega }}\,\,\rho _{\alpha ,\omega }(t)} must hold at all times during the evolution of the system. The normalization condition requires ρ dmω to be dimensionless; by dimensional analysis, Ψ must have the same units as (ω1ω2...ωm)−1/2.
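The mixed sum-and-integral normalization can be illustrated with a hypothetical spin-1/2 particle on a line: one discrete quantum number α (spin up or down) and one continuous variable x. The particular amplitudes below are arbitrary choices for the sketch.

```python
import numpy as np

# Hypothetical state Psi(alpha, x) = c_alpha * g(x) with one discrete and
# one continuous degree of freedom.  Normalization requires
# sum over alpha of ∫ |Psi(alpha, x)|^2 dx = 1.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
g = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)   # unit-norm Gaussian in x

c = {"up": 0.6, "down": 0.8}                 # |0.6|^2 + |0.8|^2 = 1
psi = {a: c[a] * g for a in c}

total = sum(np.sum(np.abs(psi[a]) ** 2) * dx for a in c)
print(total)  # ≈ 1: sum over spin, integral over position
```

Because the spatial factor is already normalized, the condition reduces to Σα |cα|² = 1, which the chosen coefficients satisfy.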
Einstein thought that a complete description of physical reality should refer directly to physical space and time, as distinct from the wave function, which refers to an abstract mathematical space. == External links == Quantum Mechanics for Engineers Spin wave functions NYU Identical Particles Revisited, Michael Fowler The Nature of Many-Electron Wavefunctions Quantum Mechanics and Quantum Computation at BerkeleyX Archived 2013-05-13 at the Wayback Machine Einstein, The quantum theory of radiation
Wikipedia/Wave_function
In mathematics, for a function f : X → Y {\displaystyle f:X\to Y} , the image of an input value x {\displaystyle x} is the single output value produced by f {\displaystyle f} when passed x {\displaystyle x} . The preimage of an output value y {\displaystyle y} is the set of input values that produce y {\displaystyle y} . More generally, evaluating f {\displaystyle f} at each element of a given subset A {\displaystyle A} of its domain X {\displaystyle X} produces a set, called the "image of A {\displaystyle A} under (or through) f {\displaystyle f} ". Similarly, the inverse image (or preimage) of a given subset B {\displaystyle B} of the codomain Y {\displaystyle Y} is the set of all elements of X {\displaystyle X} that map to a member of B . {\displaystyle B.} The image of the function f {\displaystyle f} is the set of all output values it may produce, that is, the image of X {\displaystyle X} . The preimage of f {\displaystyle f} is the preimage of the codomain Y {\displaystyle Y} . Because it always equals X {\displaystyle X} (the domain of f {\displaystyle f} ), it is rarely used. Image and inverse image may also be defined for general binary relations, not just functions. == Definition == The word "image" is used in three related ways. In these definitions, f : X → Y {\displaystyle f:X\to Y} is a function from the set X {\displaystyle X} to the set Y . {\displaystyle Y.} === Image of an element === If x {\displaystyle x} is a member of X , {\displaystyle X,} then the image of x {\displaystyle x} under f , {\displaystyle f,} denoted f ( x ) , {\displaystyle f(x),} is the value of f {\displaystyle f} when applied to x . {\displaystyle x.} f ( x ) {\displaystyle f(x)} is alternatively known as the output of f {\displaystyle f} for argument x . 
{\displaystyle x.} Given y , {\displaystyle y,} the function f {\displaystyle f} is said to take the value y {\displaystyle y} or take y {\displaystyle y} as a value if there exists some x {\displaystyle x} in the function's domain such that f ( x ) = y . {\displaystyle f(x)=y.} Similarly, given a set S , {\displaystyle S,} f {\displaystyle f} is said to take a value in S {\displaystyle S} if there exists some x {\displaystyle x} in the function's domain such that f ( x ) ∈ S . {\displaystyle f(x)\in S.} However, f {\displaystyle f} takes [all] values in S {\displaystyle S} and f {\displaystyle f} is valued in S {\displaystyle S} means that f ( x ) ∈ S {\displaystyle f(x)\in S} for every point x {\displaystyle x} in the domain of f {\displaystyle f} . === Image of a subset === Throughout, let f : X → Y {\displaystyle f:X\to Y} be a function. The image under f {\displaystyle f} of a subset A {\displaystyle A} of X {\displaystyle X} is the set of all f ( a ) {\displaystyle f(a)} for a ∈ A . {\displaystyle a\in A.} It is denoted by f [ A ] , {\displaystyle f[A],} or by f ( A ) {\displaystyle f(A)} when there is no risk of confusion. Using set-builder notation, this definition can be written as f [ A ] = { f ( a ) : a ∈ A } . {\displaystyle f[A]=\{f(a):a\in A\}.} This induces a function f [ ⋅ ] : P ( X ) → P ( Y ) , {\displaystyle f[\,\cdot \,]:{\mathcal {P}}(X)\to {\mathcal {P}}(Y),} where P ( S ) {\displaystyle {\mathcal {P}}(S)} denotes the power set of a set S ; {\displaystyle S;} that is the set of all subsets of S . {\displaystyle S.} See § Notation below for more. === Image of a function === The image of a function is the image of its entire domain, also known as the range of the function. This last usage should be avoided because the word "range" is also commonly used to mean the codomain of f . 
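The set-builder definition f[A] = {f(a) : a ∈ A} translates directly into code; a small sketch using the finite function from the Examples section of this article:

```python
# Image of a subset: f[A] = {f(a) : a in A}, as a set comprehension.
def image(f, A):
    return {f(a) for a in A}

# The finite function f : {1, 2, 3} -> {a, b, c, d} with
# 1 -> a, 2 -> a, 3 -> c, from the Examples section.
f = {1: "a", 2: "a", 3: "c"}.get

print(image(f, {2, 3}))     # the image of {2, 3} is {'a', 'c'}
print(image(f, {1, 2, 3}))  # the image of f itself is also {'a', 'c'}
```

Note that `image` is exactly the induced map on power sets described above: it takes a subset of the domain to a subset of the codomain.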
{\displaystyle f.} === Generalization to binary relations === If R {\displaystyle R} is an arbitrary binary relation on X × Y , {\displaystyle X\times Y,} then the set { y ∈ Y : x R y for some x ∈ X } {\displaystyle \{y\in Y:xRy{\text{ for some }}x\in X\}} is called the image, or the range, of R . {\displaystyle R.} Dually, the set { x ∈ X : x R y for some y ∈ Y } {\displaystyle \{x\in X:xRy{\text{ for some }}y\in Y\}} is called the domain of R . {\displaystyle R.} == Inverse image == Let f {\displaystyle f} be a function from X {\displaystyle X} to Y . {\displaystyle Y.} The preimage or inverse image of a set B ⊆ Y {\displaystyle B\subseteq Y} under f , {\displaystyle f,} denoted by f − 1 [ B ] , {\displaystyle f^{-1}[B],} is the subset of X {\displaystyle X} defined by f − 1 [ B ] = { x ∈ X : f ( x ) ∈ B } . {\displaystyle f^{-1}[B]=\{x\in X\,:\,f(x)\in B\}.} Other notations include f − 1 ( B ) {\displaystyle f^{-1}(B)} and f − ( B ) . {\displaystyle f^{-}(B).} The inverse image of a singleton set, denoted by f − 1 [ { y } ] {\displaystyle f^{-1}[\{y\}]} or by f − 1 ( y ) , {\displaystyle f^{-1}(y),} is also called the fiber or fiber over y {\displaystyle y} or the level set of y . {\displaystyle y.} The set of all the fibers over the elements of Y {\displaystyle Y} is a family of sets indexed by Y . {\displaystyle Y.} For example, for the function f ( x ) = x 2 , {\displaystyle f(x)=x^{2},} the inverse image of { 4 } {\displaystyle \{4\}} would be { − 2 , 2 } . {\displaystyle \{-2,2\}.} Again, if there is no risk of confusion, f − 1 [ B ] {\displaystyle f^{-1}[B]} can be denoted by f − 1 ( B ) , {\displaystyle f^{-1}(B),} and f − 1 {\displaystyle f^{-1}} can also be thought of as a function from the power set of Y {\displaystyle Y} to the power set of X . 
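Over a finite domain, the preimage can be computed by direct search. A sketch, restricting f(x) = x² to a finite slice of the integers so the search terminates:

```python
# Preimage of B under f, over an explicit finite domain X:
# f^{-1}[B] = {x in X : f(x) in B}.
def preimage(f, X, B):
    return {x for x in X if f(x) in B}

X = range(-5, 6)                 # finite stand-in for the domain
square = lambda x: x * x

print(sorted(preimage(square, X, {4})))     # [-2, 2], the fiber over 4
print(sorted(preimage(square, X, {4, 9})))  # [-3, -2, 2, 3]
print(preimage(square, X, {5}))             # set(): nothing maps to 5
```

The first call computes a single fiber; the last shows that a preimage can be empty when no domain element maps into B.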
{\displaystyle X.} The notation f − 1 {\displaystyle f^{-1}} should not be confused with that for inverse function, although it coincides with the usual one for bijections in that the inverse image of B {\displaystyle B} under f {\displaystyle f} is the image of B {\displaystyle B} under f − 1 . {\displaystyle f^{-1}.} == Notation for image and inverse image == The traditional notations used in the previous section do not distinguish the original function f : X → Y {\displaystyle f:X\to Y} from the image-of-sets function f : P ( X ) → P ( Y ) {\displaystyle f:{\mathcal {P}}(X)\to {\mathcal {P}}(Y)} ; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the powersets). Given the right context, this keeps the notation light and usually does not cause confusion. But if needed, an alternative is to give explicit names for the image and preimage as functions between power sets: === Arrow notation === f → : P ( X ) → P ( Y ) {\displaystyle f^{\rightarrow }:{\mathcal {P}}(X)\to {\mathcal {P}}(Y)} with f → ( A ) = { f ( a ) | a ∈ A } {\displaystyle f^{\rightarrow }(A)=\{f(a)\;|\;a\in A\}} f ← : P ( Y ) → P ( X ) {\displaystyle f^{\leftarrow }:{\mathcal {P}}(Y)\to {\mathcal {P}}(X)} with f ← ( B ) = { a ∈ X | f ( a ) ∈ B } {\displaystyle f^{\leftarrow }(B)=\{a\in X\;|\;f(a)\in B\}} === Star notation === f ⋆ : P ( X ) → P ( Y ) {\displaystyle f_{\star }:{\mathcal {P}}(X)\to {\mathcal {P}}(Y)} instead of f → {\displaystyle f^{\rightarrow }} f ⋆ : P ( Y ) → P ( X ) {\displaystyle f^{\star }:{\mathcal {P}}(Y)\to {\mathcal {P}}(X)} instead of f ← {\displaystyle f^{\leftarrow }} === Other terminology === An alternative notation for f [ A ] {\displaystyle f[A]} used in mathematical logic and set theory is f ″ A . 
{\displaystyle f\,''A.} Some texts refer to the image of f {\displaystyle f} as the range of f , {\displaystyle f,} but this usage should be avoided because the word "range" is also commonly used to mean the codomain of f . {\displaystyle f.} == Examples == f : { 1 , 2 , 3 } → { a , b , c , d } {\displaystyle f:\{1,2,3\}\to \{a,b,c,d\}} defined by { 1 ↦ a , 2 ↦ a , 3 ↦ c . {\displaystyle \left\{{\begin{matrix}1\mapsto a,\\2\mapsto a,\\3\mapsto c.\end{matrix}}\right.} The image of the set { 2 , 3 } {\displaystyle \{2,3\}} under f {\displaystyle f} is f ( { 2 , 3 } ) = { a , c } . {\displaystyle f(\{2,3\})=\{a,c\}.} The image of the function f {\displaystyle f} is { a , c } . {\displaystyle \{a,c\}.} The preimage of a {\displaystyle a} is f − 1 ( { a } ) = { 1 , 2 } . {\displaystyle f^{-1}(\{a\})=\{1,2\}.} The preimage of { a , b } {\displaystyle \{a,b\}} is also f − 1 ( { a , b } ) = { 1 , 2 } . {\displaystyle f^{-1}(\{a,b\})=\{1,2\}.} The preimage of { b , d } {\displaystyle \{b,d\}} under f {\displaystyle f} is the empty set { } = ∅ . {\displaystyle \{\ \}=\emptyset .} f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by f ( x ) = x 2 . {\displaystyle f(x)=x^{2}.} The image of { − 2 , 3 } {\displaystyle \{-2,3\}} under f {\displaystyle f} is f ( { − 2 , 3 } ) = { 4 , 9 } , {\displaystyle f(\{-2,3\})=\{4,9\},} and the image of f {\displaystyle f} is R + {\displaystyle \mathbb {R} ^{+}} (the set of all positive real numbers and zero). The preimage of { 4 , 9 } {\displaystyle \{4,9\}} under f {\displaystyle f} is f − 1 ( { 4 , 9 } ) = { − 3 , − 2 , 2 , 3 } . {\displaystyle f^{-1}(\{4,9\})=\{-3,-2,2,3\}.} The preimage of set N = { n ∈ R : n < 0 } {\displaystyle N=\{n\in \mathbb {R} :n<0\}} under f {\displaystyle f} is the empty set, because the negative numbers do not have square roots in the set of reals. f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } defined by f ( x , y ) = x 2 + y 2 . 
{\displaystyle f(x,y)=x^{2}+y^{2}.} The fibers f − 1 ( { a } ) {\displaystyle f^{-1}(\{a\})} are concentric circles about the origin, the origin itself, and the empty set (respectively), depending on whether a > 0 , a = 0 , or a < 0 {\displaystyle a>0,\ a=0,{\text{ or }}\ a<0} (respectively). (If a ≥ 0 , {\displaystyle a\geq 0,} then the fiber f − 1 ( { a } ) {\displaystyle f^{-1}(\{a\})} is the set of all ( x , y ) ∈ R 2 {\displaystyle (x,y)\in \mathbb {R} ^{2}} satisfying the equation x 2 + y 2 = a , {\displaystyle x^{2}+y^{2}=a,} that is, the origin-centered circle with radius a . {\displaystyle {\sqrt {a}}.} ) If M {\displaystyle M} is a manifold and π : T M → M {\displaystyle \pi :TM\to M} is the canonical projection from the tangent bundle T M {\displaystyle TM} to M , {\displaystyle M,} then the fibers of π {\displaystyle \pi } are the tangent spaces T x ( M ) for x ∈ M . {\displaystyle T_{x}(M){\text{ for }}x\in M.} This is also an example of a fiber bundle. A quotient group is a homomorphic image. 
== Properties == === General === For every function f : X → Y {\displaystyle f:X\to Y} and all subsets A ⊆ X {\displaystyle A\subseteq X} and B ⊆ Y , {\displaystyle B\subseteq Y,} the following properties hold: Also: f ( A ) ∩ B = ∅ if and only if A ∩ f − 1 ( B ) = ∅ {\displaystyle f(A)\cap B=\varnothing \,{\text{ if and only if }}\,A\cap f^{-1}(B)=\varnothing } === Multiple functions === For functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} with subsets A ⊆ X {\displaystyle A\subseteq X} and C ⊆ Z , {\displaystyle C\subseteq Z,} the following properties hold: ( g ∘ f ) ( A ) = g ( f ( A ) ) {\displaystyle (g\circ f)(A)=g(f(A))} ( g ∘ f ) − 1 ( C ) = f − 1 ( g − 1 ( C ) ) {\displaystyle (g\circ f)^{-1}(C)=f^{-1}(g^{-1}(C))} === Multiple subsets of domain or codomain === For function f : X → Y {\displaystyle f:X\to Y} and subsets A , B ⊆ X {\displaystyle A,B\subseteq X} and S , T ⊆ Y , {\displaystyle S,T\subseteq Y,} the following properties hold: The results relating images and preimages to the (Boolean) algebra of intersection and union work for any collection of subsets, not just for pairs of subsets: f ( ⋃ s ∈ S A s ) = ⋃ s ∈ S f ( A s ) {\displaystyle f\left(\bigcup _{s\in S}A_{s}\right)=\bigcup _{s\in S}f\left(A_{s}\right)} f ( ⋂ s ∈ S A s ) ⊆ ⋂ s ∈ S f ( A s ) {\displaystyle f\left(\bigcap _{s\in S}A_{s}\right)\subseteq \bigcap _{s\in S}f\left(A_{s}\right)} f − 1 ( ⋃ s ∈ S B s ) = ⋃ s ∈ S f − 1 ( B s ) {\displaystyle f^{-1}\left(\bigcup _{s\in S}B_{s}\right)=\bigcup _{s\in S}f^{-1}\left(B_{s}\right)} f − 1 ( ⋂ s ∈ S B s ) = ⋂ s ∈ S f − 1 ( B s ) {\displaystyle f^{-1}\left(\bigcap _{s\in S}B_{s}\right)=\bigcap _{s\in S}f^{-1}\left(B_{s}\right)} (Here, S {\displaystyle S} can be infinite, even uncountably infinite.) 
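The union and intersection identities above can be spot-checked on random subsets; note that the intersection case for images is only an inclusion, which is strict when f is not injective. A sketch with an arbitrarily chosen non-injective f:

```python
import random

# Check f(A ∪ B) = f(A) ∪ f(B) and f(A ∩ B) ⊆ f(A) ∩ f(B)
# for a non-injective f : {0,...,9} -> {0,...,3}, f(x) = x mod 4.
random.seed(0)
X = range(10)
f = lambda x: x % 4

def image(g, A):
    return {g(a) for a in A}

for _ in range(100):
    A = {x for x in X if random.random() < 0.5}
    B = {x for x in X if random.random() < 0.5}
    assert image(f, A | B) == image(f, A) | image(f, B)   # always equality
    assert image(f, A & B) <= image(f, A) & image(f, B)   # inclusion only

# Strict inclusion: A = {0} and B = {4} are disjoint, yet f(0) = f(4) = 0.
print(image(f, {0} & {4}), image(f, {0}) & image(f, {4}))  # set() {0}
```

The final line exhibits the failure of equality for intersections: the image of the (empty) intersection is empty, while the intersection of the images is not.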
With respect to the algebra of subsets described above, the inverse image function is a lattice homomorphism, while the image function is only a semilattice homomorphism (that is, it does not always preserve intersections). == See also == Bijection, injection and surjection – Properties of mathematical functions Fiber (mathematics) – Set of all points in a function's domain that all map to some single given point Image (category theory) – term in category theory Kernel of a function – Equivalence relation expressing that two elements have the same image under a function Set inversion – Mathematical problem of finding the set mapped by a specified function to a certain range This article incorporates material from Fibre on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Image_(function)