In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.
== History ==
The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models.
== Definition ==
A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and"), ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:
Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties).
A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.)
It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that
a = b ∧ a if and only if a ∨ b = b.
The relation ≤ defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤.
The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique.
The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.
== Examples ==
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, and is defined by the rules:
It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent.
The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input–output behavior. Furthermore, every possible input–output behavior can be modeled by a suitable Boolean expression.
The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force algorithm for small numbers of variables). This can for example be used to show that the following laws (Consensus theorems) are generally valid in all Boolean algebras:
(a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c)
(a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c)
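Because an identity holds in every Boolean algebra exactly when it holds in the two-element one, the consensus theorems above can be verified by the brute-force check just described. A minimal Python sketch (the function names `lhs` and `rhs` are illustrative, not from the article):

```python
from itertools import product

# Check (a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) = (a ∨ b) ∧ (¬a ∨ c)
# over the two-element Boolean algebra {0, 1}.
def lhs(a, b, c):
    return (a | b) & ((1 - a) | c) & (b | c)

def rhs(a, b, c):
    return (a | b) & ((1 - a) | c)

assert all(lhs(a, b, c) == rhs(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
```

By duality, the same loop with ∧ and ∨ exchanged verifies the second consensus law.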
The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the largest element 1 is the set S itself.
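The power set algebra can be sketched directly with Python frozensets (the set S = {1, 2, 3} is an arbitrary illustrative choice):

```python
from itertools import chain, combinations

S = {1, 2, 3}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A = powerset(S)
assert len(A) == 2 ** len(S)                    # 8 elements

a, b = frozenset({1}), frozenset({2, 3})
assert (a | b) in A and (a & b) in A            # closed under join (∪) and meet (∩)
assert frozenset(S) - a == frozenset({2, 3})    # complement relative to the top S
assert frozenset() in A and frozenset(S) in A   # bottom 0 and top 1
```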
After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms:
The set A of all subsets of S that are either finite or cofinite is a Boolean algebra and an algebra of sets called the finite–cofinite algebra. If S is infinite then the set of all cofinite subsets of S, which is called the Fréchet filter, is a free ultrafilter on A. However, the Fréchet filter is not an ultrafilter on the power set of S.
Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra.
Given any linearly ordered set L with a least element, the interval algebra is the smallest Boolean algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra.
For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice. This lattice is a Boolean algebra if and only if n is square-free. The bottom and the top elements of this Boolean algebra are the natural numbers 1 and n, respectively. The complement of a is given by n/a. The meet and the join of a and b are given by the greatest common divisor (gcd) and the least common multiple (lcm) of a and b, respectively. The ring addition a + b is given by lcm(a, b)/gcd(a, b). The picture shows an example for n = 30. As a counterexample, for the non-square-free n = 60 the candidate complement of 30 is 60/30 = 2, but the greatest common divisor of 30 and 2 is 2 rather than the bottom element 1.
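The divisor-lattice operations for square-free n can be checked computationally; a short sketch for n = 30 (variable names are illustrative):

```python
from math import gcd

n = 30  # square-free: 30 = 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

# meet = gcd, join = lcm, complement of a is n // a
for a in divisors:
    comp = n // a
    assert gcd(a, comp) == 1    # a ∧ ¬a = bottom element 1
    assert lcm(a, comp) == n    # a ∨ ¬a = top element n

# For non-square-free n = 60 the complement law fails at a = 30:
assert gcd(30, 60 // 30) == 2   # should be 1 in a Boolean algebra
```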
Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the collection of all subsets of X that are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection).
If R is an arbitrary ring then its set of central idempotents, which is the set
A = {e ∈ R : e² = e and ex = xe for all x ∈ R},
becomes a Boolean algebra when its operations are defined by e ∨ f := e + f − ef and e ∧ f := ef.
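A small sketch for the commutative ring ℤ/6ℤ, where every idempotent is automatically central (the function names are illustrative):

```python
n = 6
# Idempotents of Z/6Z; they are central because the ring is commutative.
idem = [e for e in range(n) if (e * e) % n == e]
assert idem == [0, 1, 3, 4]

def join(e, f):
    return (e + f - e * f) % n   # e ∨ f := e + f − ef

def meet(e, f):
    return (e * f) % n           # e ∧ f := ef

# 0 and 1 are bottom and top; 3 and 4 are complements of each other:
assert meet(3, 4) == 0 and join(3, 4) == 1
```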
== Homomorphisms and isomorphisms ==
A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A:
f(a ∨ b) = f(a) ∨ f(b),
f(a ∧ b) = f(a) ∧ f(b),
f(0) = 0,
f(1) = 1.
It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices.
An isomorphism between two Boolean algebras A and B is a homomorphism f : A → B with an inverse homomorphism, that is, a homomorphism g : B → A such that the composition g ∘ f : A → A is the identity function on A, and the composition f ∘ g : B → B is the identity function on B. A homomorphism of Boolean algebras is an isomorphism if and only if it is bijective.
== Boolean rings ==
Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) = (a ∨ b) ∧ ¬(a ∧ b) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b. The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings.
Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y.
Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent; in fact the categories are isomorphic.
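For an algebra of sets, the two constructions above can be seen concretely: symmetric difference plays the role of ring addition and intersection of ring multiplication. A minimal sketch (the chosen subsets are arbitrary):

```python
# Boolean ring structure on subsets of {1, 2}: ring addition is symmetric
# difference, ring multiplication is intersection.
a, b = frozenset({1}), frozenset({1, 2})

ring_sum = a ^ b        # a + b := (a ∧ ¬b) ∨ (b ∧ ¬a)
ring_prod = a & b       # a · b := a ∧ b
assert ring_sum == frozenset({2})
assert ring_prod == frozenset({1})

# Recover the lattice join from the ring operations: x ∨ y = x + y + x·y.
join = a ^ b ^ (a & b)
assert join == (a | b)

# Every element is a multiplicative idempotent: a · a = a.
assert (a & a) == a
```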
Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring.
More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions.
Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving.
== Ideals and filters ==
An ideal of the Boolean algebra A is a nonempty subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a ∈ A we have a ∧ ¬a = 0 ∈ I, so if I is prime then a ∈ I or ¬a ∈ I for every a ∈ A. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and ¬a ∉ I, then I ∪ {a} or I ∪ {¬a} generates another proper ideal J properly containing I. Hence such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic notions of prime ideal and maximal ideal in the Boolean ring A.
The dual of an ideal is a filter. A filter of the Boolean algebra A is a nonempty subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement that every filter in a Boolean algebra can be extended to an ultrafilter is called the ultrafilter lemma; it cannot be proven in Zermelo–Fraenkel set theory (ZF) alone, if ZF is consistent. Within ZF, the ultrafilter lemma is strictly weaker than the axiom of choice.
The ultrafilter lemma has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.
== Representations ==
It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two.
Stone's celebrated representation theorem for Boolean algebras states that every Boolean algebra A is isomorphic to the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space.
== Axiomatics ==
The first axiomatization of Boolean lattices/algebras in general was given by the English philosopher and mathematician Alfred North Whitehead in 1898.
It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0.
In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨, ¬, even proving the associativity laws (see box).
He also proved that these axioms are independent of each other.
In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary functional symbol n, to be read as 'complement', which satisfy the following laws:
Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit:
do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a Robbins algebra, the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996, William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the computer program EQP he designed. For a simplification of McCune's proof, see Dahn (1998).
Further work has been done for reducing the number of axioms; see Minimal axioms for Boolean algebra.
== Generalizations ==
Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice B is a generalized Boolean lattice, if it has a smallest element 0 and for any elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a \ b as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B, ∧, ∨, \, 0) is a generalized Boolean algebra, while (B, ∨, 0) is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals of Boolean lattices.
A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed linear subspaces for separable Hilbert spaces.
== See also ==
== Notes ==
== References ==
=== Works cited ===
Davey, B.A.; Priestley, H.A. (1990). Introduction to Lattices and Order. Cambridge Mathematical Textbooks. Cambridge University Press.
Cohn, Paul M. (2003), Basic Algebra: Groups, Rings, and Fields, Springer, pp. 51, 70–81, ISBN 9781852335878
Givant, Steven; Halmos, Paul (2009), Introduction to Boolean Algebras, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-40293-2.
Goodstein, R. L. (2012), "Chapter 2: The self-dual system of axioms", Boolean Algebra, Courier Dover Publications, pp. 21ff, ISBN 9780486154978
Hsiang, Jieh (1985). "Refutational Theorem Proving Using Term Rewriting Systems". Artificial Intelligence. 25 (3): 255–300. doi:10.1016/0004-3702(85)90074-8.
Huntington, Edward V. (1904). "Sets of Independent Postulates for the Algebra of Logic". Transactions of the American Mathematical Society. 5 (3): 288–309. doi:10.1090/s0002-9947-1904-1500675-4. JSTOR 1986459.
Padmanabhan, Ranganathan; Rudeanu, Sergiu (2008), Axioms for lattices and boolean algebras, World Scientific, ISBN 978-981-283-454-6.
Stone, Marshall H. (1936). "The Theory of Representations for Boolean Algebra". Transactions of the American Mathematical Society. 40: 37–111. doi:10.1090/s0002-9947-1936-1501865-8.
Whitehead, A.N. (1898). A Treatise on Universal Algebra. Cambridge University Press. ISBN 978-1-4297-0032-0. {{cite book}}: ISBN / Date incompatibility (help)
=== General references ===
Brown, Stephen; Vranesic, Zvonko (2002), Fundamentals of Digital Logic with VHDL Design (2nd ed.), McGraw–Hill, ISBN 978-0-07-249938-4. See Section 2.5.
Boudet, A.; Jouannaud, J.P.; Schmidt-Schauß, M. (1989). "Unification in Boolean Rings and Abelian Groups". Journal of Symbolic Computation. 8 (5): 449–477. doi:10.1016/s0747-7171(89)80054-9.
Cori, Rene; Lascar, Daniel (2000), Mathematical Logic: A Course with Exercises, Oxford University Press, ISBN 978-0-19-850048-3. See Chapter 2.
Dahn, B. I. (1998), "Robbins Algebras are Boolean: A Revision of McCune's Computer-Generated Solution of the Robbins Problem", Journal of Algebra, 208 (2): 526–532, doi:10.1006/jabr.1998.7467.
Halmos, Paul (1963), Lectures on Boolean Algebras, Van Nostrand, ISBN 978-0-387-90094-0 {{citation}}: ISBN / Date incompatibility (help).
Halmos, Paul; Givant, Steven (1998), Logic as Algebra, Dolciani Mathematical Expositions, vol. 21, Mathematical Association of America, ISBN 978-0-88385-327-6.
Huntington, E. V. (1933a), "New sets of independent postulates for the algebra of logic" (PDF), Transactions of the American Mathematical Society, 35 (1), American Mathematical Society: 274–304, doi:10.2307/1989325, JSTOR 1989325.
Huntington, E. V. (1933b), "Boolean algebra: A correction", Transactions of the American Mathematical Society, 35 (2): 557–558, doi:10.2307/1989783, JSTOR 1989783
Mendelson, Elliott (1970), Boolean Algebra and Switching Circuits, Schaum's Outline Series in Mathematics, McGraw–Hill, ISBN 978-0-07-041460-0
Monk, J. Donald; Bonnet, R., eds. (1989), Handbook of Boolean Algebras, North-Holland, ISBN 978-0-444-87291-3. In 3 volumes. (Vol.1:ISBN 978-0-444-70261-6, Vol.2:ISBN 978-0-444-87152-7, Vol.3:ISBN 978-0-444-87153-4)
Sikorski, Roman (1966), Boolean Algebras, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Verlag.
Stoll, R. R. (1963), Set Theory and Logic, W. H. Freeman, ISBN 978-0-486-63829-4 {{citation}}: ISBN / Date incompatibility (help). Reprinted by Dover Publications, 1979.
== External links ==
"Boolean algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra", by J. Donald Monk.
McCune W., 1997. Robbins Algebras Are Boolean JAR 19(3), 263–276
"Boolean Algebra" by Eric W. Weisstein, Wolfram Demonstrations Project, 2007.
Burris, Stanley N.; Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2.
Weisstein, Eric W. "Boolean Algebra". MathWorld.
In algebra, a domain is a nonzero ring in which ab = 0 implies a = 0 or b = 0. (Sometimes such a ring is said to "have the zero-product property".) Equivalently, a domain is a ring in which 0 is the only left zero divisor (or equivalently, the only right zero divisor). A commutative domain is called an integral domain. Mathematical literature contains multiple variants of the definition of "domain".
== Examples and non-examples ==
The ring ℤ/6ℤ is not a domain, because the images of 2 and 3 in this ring are nonzero elements with product 0. More generally, for a positive integer n, the ring ℤ/nℤ is a domain if and only if n is prime.
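This equivalence can be verified directly for small n by searching for zero divisors; a brief sketch (function names are illustrative):

```python
def is_domain_zn(n):
    """Check directly whether Z/nZ has no nonzero zero divisors."""
    return not any(a * b % n == 0 for a in range(1, n) for b in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 50):
    assert is_domain_zn(n) == is_prime(n)

assert not is_domain_zn(6)   # 2 * 3 ≡ 0 (mod 6)
```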
A finite domain is automatically a finite field, by Wedderburn's little theorem.
The quaternions form a noncommutative domain. More generally, any division ring is a domain, since every nonzero element is invertible.
The set of all Lipschitz quaternions, that is, quaternions of the form a + bi + cj + dk where a, b, c, d are integers, is a noncommutative subring of the quaternions, hence a noncommutative domain.
Similarly, the set of all Hurwitz quaternions, that is, quaternions of the form a + bi + cj + dk where a, b, c, d are either all integers or all half-integers, is a noncommutative domain.
A matrix ring Mn(R) for n ≥ 2 is never a domain: if R is nonzero, such a matrix ring has nonzero zero divisors and even nilpotent elements other than 0. For example, the square of the matrix unit E12 is 0.
The tensor algebra of a vector space, or equivalently, the algebra of polynomials in noncommuting variables over a field, K⟨x₁, …, x_n⟩, is a domain. This may be proved using an ordering on the noncommutative monomials.
If R is a domain and S is an Ore extension of R then S is a domain.
The Weyl algebra is a noncommutative domain.
The universal enveloping algebra of any Lie algebra over a field is a domain. The proof uses the standard filtration on the universal enveloping algebra and the Poincaré–Birkhoff–Witt theorem.
== Group rings and the zero divisor problem ==
Suppose that G is a group and K is a field. Is the group ring R = K[G] a domain? The identity
(1 − g)(1 + g + ⋯ + g^{n−1}) = 1 − g^n
shows that an element g of finite order n > 1 induces a zero divisor 1 − g in R. The zero divisor problem asks whether this is the only obstruction; in other words:
Given a field K and a torsion-free group G, is it true that K[G] contains no zero divisors?
No counterexamples are known, but the problem remains open in general (as of 2017).
For many special classes of groups, the answer is affirmative. Farkas and Snider proved in 1976 that if G is a torsion-free polycyclic-by-finite group and char K = 0 then the group ring K[G] is a domain. Later (1980) Cliff removed the restriction on the characteristic of the field. In 1988, Kropholler, Linnell and Moody generalized these results to the case of torsion-free solvable and solvable-by-finite groups. Earlier (1965) work of Michel Lazard, whose importance was not appreciated by the specialists in the field for about 20 years, had dealt with the case where K is the ring of p-adic integers and G is the pth congruence subgroup of GL(n, Z).
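The zero-divisor identity above can be observed concretely in a small group ring. The sketch below models F₂[C₃] (the cyclic group of order 3 over the two-element field) as coefficient tuples modulo x³ − 1; note that in characteristic 2, 1 − g equals 1 + g. All names are illustrative:

```python
n = 3  # order of the cyclic group C_3

def mult(p, q):
    """Multiply two elements of F2[x]/(x^n - 1), coefficients as length-n tuples."""
    r = [0] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[(i + j) % n] ^= pi & qj   # arithmetic over F2: XOR and AND
    return tuple(r)

one_minus_g = (1, 1, 0)   # 1 + g  (equals 1 - g in characteristic 2)
norm = (1, 1, 1)          # 1 + g + g^2

# Both factors are nonzero, yet their product is zero: 1 - g is a zero divisor.
assert mult(one_minus_g, norm) == (0, 0, 0)
```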
== Spectrum of an integral domain ==
Zero divisors have a topological interpretation, at least in the case of commutative rings: a ring R is an integral domain if and only if it is reduced and its spectrum Spec R is an irreducible topological space. The first property is often considered to encode some infinitesimal information, whereas the second one is more geometric.
An example: the ring k[x, y]/(xy), where k is a field, is not a domain, since the images of x and y in this ring are zero divisors. Geometrically, this corresponds to the fact that the spectrum of this ring, which is the union of the lines x = 0 and y = 0, is not irreducible. Indeed, these two lines are its irreducible components.
== See also ==
Zero divisor
Zero-product property
Divisor (ring theory)
Integral domain
== Notes ==
== References ==
Lam, Tsit-Yuen (2001). A First Course in Noncommutative Rings (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95325-0. MR 1838439.
Charles Lanski (2005). Concepts in abstract algebra. AMS Bookstore. ISBN 0-534-42323-X.
César Polcino Milies; Sudarshan K. Sehgal (2002). An introduction to group rings. Springer. ISBN 1-4020-0238-6.
Nathan Jacobson (2009). Basic Algebra I. Dover. ISBN 978-0-486-47189-1.
Louis Halle Rowen (1994). Algebra: groups, rings, and fields. A K Peters. ISBN 1-56881-028-8.
Let φ : M → N be a smooth map between smooth manifolds M and N. Then there is an associated linear map from the space of 1-forms on N (the linear space of sections of the cotangent bundle) to the space of 1-forms on M. This linear map is known as the pullback (by φ), and is frequently denoted by φ*. More generally, any covariant tensor field – in particular any differential form – on N may be pulled back to M using φ.
When the map φ is a diffeomorphism, then the pullback, together with the pushforward, can be used to transform any tensor field from N to M or vice versa. In particular, if φ is a diffeomorphism between open subsets of ℝⁿ and ℝⁿ, viewed as a change of coordinates (perhaps between different charts on a manifold M), then the pullback and pushforward describe the transformation properties of covariant and contravariant tensors used in more traditional (coordinate dependent) approaches to the subject.
The idea behind the pullback is essentially the notion of precomposition of one function with another. However, by combining this idea in several different contexts, quite elaborate pullback operations can be constructed. This article begins with the simplest operations, then uses them to construct more sophisticated ones. Roughly speaking, the pullback mechanism (using precomposition) turns several constructions in differential geometry into contravariant functors.
== Pullback of smooth functions and smooth maps ==
Let φ : M → N be a smooth map between (smooth) manifolds M and N, and suppose f : N → ℝ is a smooth function on N. Then the pullback of f by φ is the smooth function φ*f on M defined by (φ*f)(x) = f(φ(x)). Similarly, if f is a smooth function on an open set U in N, then the same formula defines a smooth function on the open set φ⁻¹(U). (In the language of sheaves, pullback defines a morphism from the sheaf of smooth functions on N to the direct image by φ of the sheaf of smooth functions on M.)
More generally, if f : N → A is a smooth map from N to any other manifold A, then (φ*f)(x) = f(φ(x)) is a smooth map from M to A.
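Since the pullback of a function is just precomposition, it can be checked numerically; in the sketch below, φ parametrizes the unit circle and f is the squared norm on ℝ² (both are illustrative choices, not from the article):

```python
import math

def phi(t):
    """Illustrative map φ : R → R^2 tracing the unit circle."""
    return (math.cos(t), math.sin(t))

def f(p):
    """Illustrative smooth function f : R^2 → R, the squared norm."""
    x, y = p
    return x * x + y * y

def pullback(phi, f):
    """(φ*f)(x) = f(φ(x)): the pullback is precomposition."""
    return lambda x: f(phi(x))

g = pullback(phi, f)
# f restricted along the unit circle is constantly 1:
assert abs(g(0.7) - 1.0) < 1e-12
```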
== Pullback of bundles and sections ==
If E is a vector bundle (or indeed any fiber bundle) over N and φ : M → N is a smooth map, then the pullback bundle φ*E is a vector bundle (or fiber bundle) over M whose fiber over x in M is given by (φ*E)_x = E_{φ(x)}.
In this situation, precomposition defines a pullback operation on sections of E: if s is a section of E over N, then the pullback section φ*s = s ∘ φ is a section of φ*E over M.
== Pullback of multilinear forms ==
Let Φ: V → W be a linear map between vector spaces V and W (i.e., Φ is an element of L(V, W), also denoted Hom(V, W)), and let
F : W × W × ⋯ × W → ℝ
be a multilinear form on W (also known as a tensor – not to be confused with a tensor field – of rank (0, s), where s is the number of factors of W in the product). Then the pullback Φ∗F of F by Φ is a multilinear form on V defined by precomposing F with Φ. More precisely, given vectors v1, v2, ..., vs in V, Φ∗F is defined by the formula
(Φ*F)(v₁, v₂, …, v_s) = F(Φ(v₁), Φ(v₂), …, Φ(v_s)),
which is a multilinear form on V. Hence Φ∗ is a (linear) operator from multilinear forms on W to multilinear forms on V. As a special case, note that if F is a linear form (or (0,1)-tensor) on W, so that F is an element of W∗, the dual space of W, then Φ∗F is an element of V∗, and so pullback by Φ defines a linear map between dual spaces which acts in the opposite direction to the linear map Φ itself:
Φ : V → W,  Φ* : W* → V*.
From a tensorial point of view, it is natural to try to extend the notion of pullback to tensors of arbitrary rank, i.e., to multilinear maps on W taking values in a tensor product of r copies of W, i.e., W ⊗ W ⊗ ⋅⋅⋅ ⊗ W. However, elements of such a tensor product do not pull back naturally: instead there is a pushforward operation from V ⊗ V ⊗ ⋅⋅⋅ ⊗ V to W ⊗ W ⊗ ⋅⋅⋅ ⊗ W given by
Φ_*(v₁ ⊗ v₂ ⊗ ⋯ ⊗ v_r) = Φ(v₁) ⊗ Φ(v₂) ⊗ ⋯ ⊗ Φ(v_r).
Nevertheless, it follows from this that if Φ is invertible, pullback can be defined using pushforward by the inverse function Φ−1. Combining these two constructions yields a pushforward operation, along an invertible linear map, for tensors of any rank (r, s).
== Pullback of cotangent vectors and 1-forms ==
Let φ : M → N be a smooth map between smooth manifolds. Then the differential of φ, written φ_*, dφ, or Dφ, is a vector bundle morphism (over M) from the tangent bundle TM of M to the pullback bundle φ*TN. The transpose of φ_* is therefore a bundle map from φ*T*N to T*M, the cotangent bundle of M.
Now suppose that α is a section of T*N (a 1-form on N), and precompose α with φ to obtain a pullback section of φ*T*N. Applying the above bundle map (pointwise) to this section yields the pullback of α by φ, which is the 1-form φ*α on M defined by
(φ*α)_x(X) = α_{φ(x)}(dφ_x(X))
for x in M and X in T_xM.
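As an illustrative numeric check of this formula (all maps below are example choices), take φ to be the polar-coordinate map φ(r, θ) = (r cos θ, r sin θ) and α = x dy − y dx; classically φ*α = r² dθ, which the defining formula reproduces pointwise:

```python
import math

def phi(p):
    r, t = p
    return (r * math.cos(t), r * math.sin(t))

def dphi(p, X):
    """Differential dφ_p applied to a tangent vector X (Jacobian-vector product)."""
    r, t = p
    Xr, Xt = X
    return (math.cos(t) * Xr - r * math.sin(t) * Xt,
            math.sin(t) * Xr + r * math.cos(t) * Xt)

def alpha(q, Y):
    """The 1-form x dy - y dx on N = R^2, evaluated at q on the vector Y."""
    x, y = q
    Yx, Yy = Y
    return x * Yy - y * Yx

def pullback_alpha(p, X):
    # (φ*α)_x(X) = α_{φ(x)}(dφ_x(X))
    return alpha(phi(p), dphi(p, X))

p, X = (2.0, 0.3), (0.5, -1.2)
r, Xt = p[0], X[1]
assert abs(pullback_alpha(p, X) - r * r * Xt) < 1e-12   # φ*(x dy − y dx) = r² dθ
```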
== Pullback of (covariant) tensor fields ==
The construction of the previous section generalizes immediately to tensor bundles of rank (0, s) for any natural number s: a (0, s) tensor field on a manifold N is a section of the tensor bundle on N whose fiber at y in N is the space of multilinear s-forms
F : T_yN × ⋯ × T_yN → ℝ.
By taking
ϕ
{\displaystyle \phi }
equal to the (pointwise) differential of a smooth map
ϕ
{\displaystyle \phi }
from
M
{\displaystyle M}
to
N
{\displaystyle N}
, the pullback of multilinear forms can be combined with the pullback of sections to yield a pullback
(
0
,
s
)
{\displaystyle (0,s)}
tensor field on
M
{\displaystyle M}
. More precisely if
S
{\displaystyle S}
is a
(
0
,
s
)
{\displaystyle (0,s)}
-tensor field on
N
{\displaystyle N}
, then the pullback of
S
{\displaystyle S}
by
ϕ
{\displaystyle \phi }
is the
(
0
,
s
)
{\displaystyle (0,s)}
-tensor field
ϕ
∗
S
{\displaystyle \phi ^{*}S}
on
M
{\displaystyle M}
defined by
(
ϕ
∗
S
)
x
(
X
1
,
…
,
X
s
)
=
S
ϕ
(
x
)
(
d
ϕ
x
(
X
1
)
,
…
,
d
ϕ
x
(
X
s
)
)
{\displaystyle (\phi ^{*}S)_{x}(X_{1},\ldots ,X_{s})=S_{\phi (x)}(d\phi _{x}(X_{1}),\ldots ,d\phi _{x}(X_{s}))}
for
x
{\displaystyle x}
in
M
{\displaystyle M}
and
X
j
{\displaystyle X_{j}}
in
T
x
M
{\displaystyle T_{x}M}
.
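At a single point this is plain multilinear algebra: if J = dφ_x is the Jacobian matrix and S is the matrix of a (0,2)-tensor at φ(x), then (φ^*S)_x has matrix JᵀSJ. A small numerical sketch — the Jacobian and tensor below are made-up values for illustration:

```python
import numpy as np

J = np.array([[1.0, 2.0],     # Jacobian d(phi)_x : T_x M -> T_phi(x) N,
              [0.0, 3.0],     # here with dim M = 2 and dim N = 3
              [1.0, 1.0]])
S = np.diag([1.0, 2.0, 5.0])  # a (0,2)-tensor at phi(x), e.g. a metric on N

# (phi^* S)_x (X, Y) = S(J X, J Y)  <=>  matrix J^T S J
pullback_S = J.T @ S @ J

# Check against the defining formula on two sample tangent vectors.
X, Y = np.array([1.0, -1.0]), np.array([2.0, 0.5])
assert np.isclose(X @ pullback_S @ Y, (J @ X) @ S @ (J @ Y))
print(pullback_S)
```

This is exactly how the first fundamental form of a surface arises: pulling back the Euclidean metric of the ambient space along the parametrization.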
== Pullback of differential forms ==
A particularly important case of the pullback of covariant tensor fields is the pullback of differential forms. If α is a differential k-form, i.e., a section of the exterior bundle Λ^k(T^*N) of (fiberwise) alternating k-forms on TN, then the pullback of α is the differential k-form on M defined by the same formula as in the previous section:
(φ^*α)_x(X_1, …, X_k) = α_φ(x)(dφ_x(X_1), …, dφ_x(X_k))
for x in M and X_j in T_xM.
The pullback of differential forms has two properties which make it extremely useful.
It is compatible with the wedge product in the sense that for differential forms α and β on N,
φ^*(α ∧ β) = φ^*α ∧ φ^*β.
It is compatible with the exterior derivative d: if α is a differential form on N, then
φ^*(dα) = d(φ^*α).
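For a 0-form (a function f on N) the second property reads φ^*(df) = d(f ∘ φ), which is precisely the chain rule. The sketch below verifies this special case symbolically; the sample f and φ are illustrative choices:

```python
import sympy as sp

u, v = sp.symbols('u v')        # coordinates on M
x, y = sp.symbols('x y')        # coordinates on N
phi = [u + v**2, u*v]           # a sample smooth map phi: M -> N
f = x**2 * y                    # a sample function (0-form) on N

subs = dict(zip((x, y), phi))

# phi^*(df): pull back the coefficients of df = f_x dx + f_y dy,
# replacing dx, dy by the differentials of the components of phi.
lhs = [sum(sp.diff(f, w).subs(subs) * sp.diff(comp, s)
           for w, comp in zip((x, y), phi))
       for s in (u, v)]

# d(phi^* f) = d(f o phi): differentiate the composed function directly.
rhs = [sp.diff(f.subs(subs), s) for s in (u, v)]

assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
print("phi^*(df) == d(phi^* f)")
```

The general case for k-forms follows the same pattern, with the alternating sum structure of d handled degree by degree.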
== Pullback by diffeomorphisms ==
When the map φ between manifolds is a diffeomorphism, that is, it has a smooth inverse, then pullback can be defined for vector fields as well as for 1-forms, and thus, by extension, for an arbitrary mixed tensor field on the manifold. The linear map
Φ = dφ_x ∈ GL(T_xM, T_φ(x)N)
can be inverted to give
Φ^(-1) = (dφ_x)^(-1) ∈ GL(T_φ(x)N, T_xM).
A general mixed tensor field will then transform using Φ and Φ^(-1) according to the tensor product decomposition of the tensor bundle into copies of TN and T^*N. When M = N, the pullback and the pushforward describe the transformation properties of a tensor on the manifold M. In traditional terms, the pullback describes the transformation properties of the covariant indices of a tensor; by contrast, the transformation of the contravariant indices is given by a pushforward.
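Concretely, for a (1,1)-tensor viewed as an endomorphism of the tangent space, the covariant slot uses Φ and the contravariant slot uses Φ^(-1), so in matrices the pullback sends T to J⁻¹TJ with J the Jacobian. A numerical sketch with made-up matrices:

```python
import numpy as np

J = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # Jacobian of a diffeomorphism (invertible)
T = np.array([[0.0, 1.0],
              [3.0, 2.0]])   # a (1,1)-tensor at phi(x), as an endomorphism

# Pullback: the covariant index transforms with J, the contravariant
# index with J^{-1}, giving a similarity transformation.
T_pulled = np.linalg.inv(J) @ T @ J

# Scalar invariants such as trace and determinant are unchanged.
assert np.isclose(np.trace(T_pulled), np.trace(T))
assert np.isclose(np.linalg.det(T_pulled), np.linalg.det(T))
print(T_pulled)
```

This similarity-transformation rule is the classical "change of basis" law for mixed tensors written in index-free form.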
== Pullback by automorphisms ==
The construction of the previous section has a representation-theoretic interpretation when φ is a diffeomorphism from a manifold M to itself. In this case the derivative dφ is a section of GL(TM, φ^*TM). This induces a pullback action on sections of any bundle associated to the frame bundle GL(M) of M by a representation of the general linear group GL(m) (where m = dim M).
== Pullback and Lie derivative ==
See Lie derivative. By applying the preceding ideas to the local 1-parameter group of diffeomorphisms defined by a vector field on
M
{\displaystyle M}
, and differentiating with respect to the parameter, a notion of Lie derivative on any associated bundle is obtained.
== Pullback of connections (covariant derivatives) ==
If ∇ is a connection (or covariant derivative) on a vector bundle E over N and φ is a smooth map from M to N, then there is a pullback connection φ^*∇ on φ^*E over M, determined uniquely by the condition that
(φ^*∇)_X(φ^*s) = φ^*(∇_dφ(X) s).
== See also ==
Pushforward (differential)
Pullback bundle
Pullback (category theory)
== References ==
Jost, Jürgen (2002). Riemannian Geometry and Geometric Analysis. Berlin: Springer-Verlag. ISBN 3-540-42627-2. See sections 1.5 and 1.6.
Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. ISBN 0-8053-0102-X. See sections 1.7 and 2.3. | Wikipedia/Pullback_(differential_geometry)
In mathematics, a Hopf algebra, named after Heinz Hopf, is a structure that is simultaneously a (unital associative) algebra and a (counital coassociative) coalgebra, with these structures' compatibility making it a bialgebra, and that moreover is equipped with an antihomomorphism satisfying a certain property. The representation theory of a Hopf algebra is particularly nice, since the existence of compatible comultiplication, counit, and antipode allows for the construction of tensor products of representations, trivial representations, and dual representations.
Hopf algebras occur naturally in algebraic topology, where they originated and are related to the H-space concept, in group scheme theory, in group theory (via the concept of a group ring), and in numerous other places, making them probably the most familiar type of bialgebra. Hopf algebras are also studied in their own right, with much work on specific classes of examples on the one hand and classification problems on the other. They have diverse applications ranging from condensed matter physics and quantum field theory to string theory and LHC phenomenology.
== Formal definition ==
Formally, a Hopf algebra is an (associative and coassociative) bialgebra H over a field K together with a K-linear map S: H → H (called the antipode) such that the following diagram commutes:
Here Δ is the comultiplication of the bialgebra, ∇ its multiplication, η its unit and ε its counit. In the sumless Sweedler notation, this property can also be expressed as
S(c(1))c(2) = c(1)S(c(2)) = ε(c)1 for all c ∈ H.
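The motivating example is a group algebra K[G], with Δ(e_g) = e_g ⊗ e_g, ε(e_g) = 1, and S(e_g) = e_g⁻¹; the antipode axiom then reduces to g⁻¹g = gg⁻¹ = 1. A dictionary-based sketch for the cyclic group Z/n (a purely illustrative implementation):

```python
n = 6  # work in the group algebra K[Z/n]

def delta(vec):
    """Comultiplication: Delta(e_g) = e_g (x) e_g, extended linearly.
    Elements are dicts {g: coefficient}; tensors are keyed by pairs."""
    return {(g, g): c for g, c in vec.items()}

def counit(vec):
    """Counit: eps(e_g) = 1, extended linearly."""
    return sum(vec.values())

def check_antipode_axiom(vec):
    """Verify S(c_(1)) c_(2) = eps(c) e_0, where S(e_g) = e_{-g}."""
    out = {}
    for (g1, g2), c in delta(vec).items():
        h = (g2 - g1) % n                 # S(e_g1) e_g2 = e_{g2 - g1}
        out[h] = out.get(h, 0) + c
    out = {g: c for g, c in out.items() if c != 0}
    expected = {0: counit(vec)} if counit(vec) != 0 else {}
    return out == expected

# The axiom holds on every basis element and hence, by linearity,
# on arbitrary elements; spot-check a linear combination too.
assert all(check_antipode_axiom({g: 1}) for g in range(n))
assert check_antipode_axiom({1: 2, 4: -3})
print("antipode axiom holds on K[Z/6]")
```

Since both sides of the axiom are linear in c, checking it on basis elements suffices.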
As for algebras, one can replace the underlying field K with a commutative ring R in the above definition.
The definition of Hopf algebra is self-dual (as reflected in the symmetry of the above diagram), so if one can define a dual of H (which is always possible if H is finite-dimensional), then it is automatically a Hopf algebra.
=== Structure constants ===
Fixing a basis {e_k} for the underlying vector space, one may define the algebra in terms of structure constants for multiplication:
e_i ∇ e_j = Σ_k μ^k_ij e_k,
for co-multiplication:
Δe_i = Σ_{j,k} ν_i^jk e_j ⊗ e_k,
and for the antipode:
Se_i = Σ_j τ_i^j e_j.
Associativity then requires that
μ^k_ij μ^m_kn = μ^k_jn μ^m_ik,
while co-associativity requires that
ν_k^ij ν_i^mn = ν_k^mi ν_i^nj.
The connecting axiom requires that
ν_k^ij τ_j^m μ^n_im = ν_k^jm τ_j^i μ^n_im.
=== Properties of the antipode ===
The antipode S is sometimes required to have a K-linear inverse, which is automatic in the finite-dimensional case, or if H is commutative or cocommutative (or more generally quasitriangular).
In general, S is an antihomomorphism, so S2 is a homomorphism, which is therefore an automorphism if S is invertible (as may be required).
If S2 = idH, then the Hopf algebra is said to be involutive (and the underlying algebra with involution is a *-algebra). If H is finite-dimensional semisimple over a field of characteristic zero, commutative, or cocommutative, then it is involutive.
If a bialgebra B admits an antipode S, then S is unique ("a bialgebra admits at most one Hopf algebra structure"). Thus, the antipode does not add any extra structure which we can choose: being a Hopf algebra is a property of a bialgebra.
The antipode is an analog to the inversion map on a group that sends g to g−1.
=== Hopf subalgebras ===
A subalgebra A of a Hopf algebra H is a Hopf subalgebra if it is a subcoalgebra of H and the antipode S maps A into A. In other words, a Hopf subalgebra A is a Hopf algebra in its own right when the multiplication, comultiplication, counit and antipode of H are restricted to A (and additionally the identity 1 of H is required to be in A). The Nichols–Zoeller freeness theorem of Warren Nichols and Bettina Zoeller (1989) established that the natural A-module H is free of finite rank if H is finite-dimensional: a generalization of Lagrange's theorem for subgroups. As a corollary of this and integral theory, a Hopf subalgebra of a semisimple finite-dimensional Hopf algebra is automatically semisimple.
A Hopf subalgebra A is said to be right normal in a Hopf algebra H if it satisfies the condition of stability, adr(h)(A) ⊆ A for all h in H, where the right adjoint mapping adr is defined by adr(h)(a) = S(h(1))ah(2) for all a in A, h in H. Similarly, a Hopf subalgebra A is left normal in H if it is stable under the left adjoint mapping defined by adl(h)(a) = h(1)aS(h(2)). The two conditions of normality are equivalent if the antipode S is bijective, in which case A is said to be a normal Hopf subalgebra.
A normal Hopf subalgebra A in H satisfies the condition (of equality of subsets of H): HA+ = A+H where A+ denotes the kernel of the counit on A. This normality condition implies that HA+ is a Hopf ideal of H (i.e. an algebra ideal in the kernel of the counit, a coalgebra coideal and stable under the antipode). As a consequence one has a quotient Hopf algebra H/HA+ and epimorphism H → H/A+H, a theory analogous to that of normal subgroups and quotient groups in group theory.
=== Hopf orders ===
A Hopf order O over an integral domain R with field of fractions K is an order in a Hopf algebra H over K which is closed under the algebra and coalgebra operations: in particular, the comultiplication Δ maps O to O⊗O.
=== Group-like elements ===
A group-like element is a nonzero element x such that Δ(x) = x⊗x. The group-like elements form a group with inverse given by the antipode. A primitive element x satisfies Δ(x) = x⊗1 + 1⊗x.
== Examples ==
Note that functions on a finite group can be identified with the group ring, though these are more naturally thought of as dual – the group ring consists of finite sums of elements, and thus pairs with functions on the group by evaluating the function on the summed elements.
=== Cohomology of Lie groups ===
The cohomology algebra (over a field K) of a Lie group G is a Hopf algebra: the multiplication is provided by the cup product, and the comultiplication
H^*(G, K) → H^*(G × G, K) ≅ H^*(G, K) ⊗ H^*(G, K)
by the group multiplication G × G → G. This observation was actually a source of the notion of Hopf algebra. Using this structure, Hopf proved a structure theorem for the cohomology algebra of Lie groups.
Theorem (Hopf). Let A be a finite-dimensional, graded commutative, graded cocommutative Hopf algebra over a field of characteristic 0. Then A (as an algebra) is a free exterior algebra with generators of odd degree.
=== Quantum groups and non-commutative geometry ===
Most examples above are either commutative (i.e. the multiplication is commutative) or co-commutative (i.e. Δ = T ∘ Δ where the twist map T: H ⊗ H → H ⊗ H is defined by T(x ⊗ y) = y ⊗ x). Other interesting Hopf algebras are certain "deformations" or "quantizations" of those from example 3 which are neither commutative nor co-commutative. These Hopf algebras are often called quantum groups, a term that is so far only loosely defined. They are important in noncommutative geometry, the idea being the following: a standard algebraic group is well described by its standard Hopf algebra of regular functions; we can then think of the deformed version of this Hopf algebra as describing a certain "non-standard" or "quantized" algebraic group (which is not an algebraic group at all). While there does not seem to be a direct way to define or manipulate these non-standard objects, one can still work with their Hopf algebras, and indeed one identifies them with their Hopf algebras. Hence the name "quantum group".
== Representation theory ==
Let A be a Hopf algebra, and let M and N be A-modules. Then M ⊗ N is also an A-module, with
a(m ⊗ n) := Δ(a)(m ⊗ n) = (a1 ⊗ a2)(m ⊗ n) = (a1m ⊗ a2n)
for m ∈ M, n ∈ N and Δ(a) = a1 ⊗ a2 (in Sweedler notation). Furthermore, we can define the trivial representation as the base field K with
a(m) := ε(a)m
for m ∈ K. Finally, the dual representation of A can be defined: if M is an A-module and M* is its dual space, then
(af)(m) := f(S(a)m)
where f ∈ M* and m ∈ M.
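For a group algebra K[G] these constructions recover the familiar operations on group representations: Δ(e_g) = e_g ⊗ e_g makes M ⊗ N the usual tensor-product representation (a Kronecker product of matrices), ε gives the trivial representation, and S(e_g) = e_g⁻¹ makes the dual representation act by the inverse-transpose matrix. A numpy sketch with a sample representation of Z/4 (the matrix is an illustrative choice):

```python
import numpy as np

# A representation of the generator g of Z/4 on R^2: rotation by 90 degrees.
rho_g = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

# Tensor-product representation: g acts on M (x) N by rho(g) (x) rho(g),
# coming from Delta(e_g) = e_g (x) e_g.
tensor_action = np.kron(rho_g, rho_g)

# Dual representation: (g.f)(m) = f(S(g) m) = f(rho(g)^{-1} m),
# i.e. g acts on M* by the inverse-transpose matrix.
dual_action = np.linalg.inv(rho_g).T

# The evaluation map M* (x) M -> K, f (x) m -> f(m), is then a module map:
# acting by g on both factors before evaluating changes nothing.
f = np.array([1.0, 2.0])      # a sample covector
m = np.array([3.0, -1.0])     # a sample vector
assert np.isclose((dual_action @ f) @ (rho_g @ m), f @ m)
print(tensor_action)
```

The final assertion is exactly the module-map property of evaluation mentioned in the next paragraph, made concrete for one group element.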
The relationship between Δ, ε, and S ensures that certain natural homomorphisms of vector spaces are indeed homomorphisms of A-modules. For instance, the natural isomorphisms of vector spaces M → M ⊗ K and M → K ⊗ M are also isomorphisms of A-modules. Also, the map of vector spaces M* ⊗ M → K with f ⊗ m ↦ f(m) is a homomorphism of A-modules. However, the map M ⊗ M* → K is not necessarily a homomorphism of A-modules.
== Related concepts ==
Graded Hopf algebras are often used in algebraic topology: they are the natural algebraic structure on the direct sum of all homology or cohomology groups of an H-space.
Locally compact quantum groups generalize Hopf algebras and carry a topology. The algebra of all continuous functions on a Lie group is a locally compact quantum group.
Quasi-Hopf algebras are generalizations of Hopf algebras, where coassociativity only holds up to a twist. They have been used in the study of the Knizhnik–Zamolodchikov equations.
Multiplier Hopf algebras, introduced by Alfons Van Daele in 1994, are generalizations of Hopf algebras in which the comultiplication maps from an algebra (with or without unit) to the multiplier algebra of the tensor product of the algebra with itself.
Hopf group-(co)algebras introduced by V. G. Turaev in 2000 are also generalizations of Hopf algebras.
=== Weak Hopf algebras ===
Weak Hopf algebras, or quantum groupoids, are generalizations of Hopf algebras. Like Hopf algebras, weak Hopf algebras form a self-dual class of algebras; i.e., if H is a (weak) Hopf algebra, so is H*, the dual space of linear forms on H (with respect to the algebra-coalgebra structure obtained from the natural pairing with H and its coalgebra-algebra structure). A weak Hopf algebra H is usually taken to be a finite-dimensional algebra and coalgebra with coproduct Δ: H → H ⊗ H and counit ε: H → k satisfying all the axioms of Hopf algebra except possibly Δ(1) ≠ 1 ⊗ 1 or ε(ab) ≠ ε(a)ε(b) for some a, b in H. Instead one requires the following:
(Δ(1) ⊗ 1)(1 ⊗ Δ(1)) = (1 ⊗ Δ(1))(Δ(1) ⊗ 1) = (Δ ⊗ Id)Δ(1)
ε(abc) = Σ ε(ab(1))ε(b(2)c) = Σ ε(ab(2))ε(b(1)c)
for all a, b, and c in H.
H has a weakened antipode S: H → H satisfying the axioms:
S(a(1))a(2) = 1(1)ε(a1(2))
for all a in H (the right-hand side is the interesting projection usually denoted by ΠR(a) or εs(a), with image a separable subalgebra denoted by HR or Hs);
a(1)S(a(2)) = ε(1(1)a)1(2)
for all a in H (another interesting projection, usually denoted by ΠL(a) or εt(a), with image a separable subalgebra HL or Ht, anti-isomorphic to HR via S);
S(a(1))a(2)S(a(3)) = S(a)
for all a in H.
Note that if Δ(1) = 1 ⊗ 1, these conditions reduce to the two usual conditions on the antipode of a Hopf algebra.
The axioms are partly chosen so that the category of H-modules is a rigid monoidal category. The unit H-module is the separable algebra HL mentioned above.
For example, a finite groupoid algebra is a weak Hopf algebra. In particular, the groupoid algebra on [n] with one pair of invertible arrows eij and eji between i and j in [n] is isomorphic to the algebra H of n × n matrices. The weak Hopf algebra structure on this particular H is given by coproduct Δ(eij) = eij ⊗ eij, counit ε(eij) = 1 and antipode S(eij) = eji. The separable subalgebras HL and HR coincide and are non-central commutative algebras in this particular case (the subalgebra of diagonal matrices).
Early theoretical contributions to weak Hopf algebras are to be found in as well as
=== Hopf algebroids ===
See Hopf algebroid
== Analogy with groups ==
Groups can be axiomatized by the same diagrams (equivalently, operations) as a Hopf algebra, where G is taken to be a set instead of a module. In this case:
the field K is replaced by the 1-point set
there is a natural counit (map to 1 point)
there is a natural comultiplication (the diagonal map)
the unit is the identity element of the group
the multiplication is the multiplication in the group
the antipode is the inverse
In this philosophy, a group can be thought of as a Hopf algebra over the "field with one element".
== Hopf algebras in braided monoidal categories ==
The definition of Hopf algebra is naturally extended to arbitrary braided monoidal categories. A Hopf algebra in such a category (C, ⊗, I, α, λ, ρ, γ) is a sextuple (H, ∇, η, Δ, ε, S) where H is an object in C, and
∇ : H ⊗ H → H (multiplication),
η : I → H (unit),
Δ : H → H ⊗ H (comultiplication),
ε : H → I (counit),
S : H → H (antipode)
are morphisms in C such that
1) the triple (H, ∇, η) is a monoid in the monoidal category (C, ⊗, I, α, λ, ρ, γ), i.e. the following diagrams are commutative:
2) the triple (H, Δ, ε) is a comonoid in the monoidal category (C, ⊗, I, α, λ, ρ, γ), i.e. the following diagrams are commutative:
3) the structures of monoid and comonoid on H are compatible: the multiplication ∇ and the unit η are morphisms of comonoids, and (this is equivalent in this situation) at the same time the comultiplication Δ and the counit ε are morphisms of monoids; this means that the following diagrams must be commutative:
where λ_I : I ⊗ I → I is the left unit morphism in C, and θ is the natural transformation of functors
θ : (A ⊗ B) ⊗ (C ⊗ D) → (A ⊗ C) ⊗ (B ⊗ D)
which is unique in the class of natural transformations of functors composed from the structural transformations (associativity, left and right units, transposition, and their inverses) in the category C.
The quintuple (H, ∇, η, Δ, ε) with the properties 1), 2), 3) is called a bialgebra in the category (C, ⊗, I, α, λ, ρ, γ);
4) the diagram of antipode is commutative:
The typical examples are the following.
Groups. In the monoidal category (Set, ×, 1) of sets (with the cartesian product × as the tensor product, and an arbitrary singleton, say 1 = {∅}, as the unit object) a triple (H, ∇, η) is a monoid in the categorical sense if and only if it is a monoid in the usual algebraic sense, i.e. if the operations ∇(x, y) = x ⋅ y and η(1) behave like usual multiplication and unit in H (but possibly without the invertibility of elements x ∈ H). At the same time, a triple (H, Δ, ε) is a comonoid in the categorical sense iff Δ is the diagonal operation Δ(x) = (x, x) (and the operation ε is defined uniquely as well: ε(x) = ∅). And any such structure of comonoid (H, Δ, ε) is compatible with any structure of monoid (H, ∇, η) in the sense that the diagrams in section 3 of the definition always commute. As a corollary, each monoid (H, ∇, η) in (Set, ×, 1) can naturally be considered as a bialgebra (H, ∇, η, Δ, ε) in (Set, ×, 1), and vice versa. The existence of the antipode S : H → H for such a bialgebra (H, ∇, η, Δ, ε) means exactly that every element x ∈ H has an inverse element x⁻¹ ∈ H with respect to the multiplication ∇(x, y) = x ⋅ y. Thus, in the category of sets (Set, ×, 1), Hopf algebras are exactly groups in the usual algebraic sense.
Classical Hopf algebras. In the special case when (C, ⊗, s, I) is the category of vector spaces over a given field K, the Hopf algebras in (C, ⊗, s, I) are exactly the classical Hopf algebras described above.
Functional algebras on groups. The standard functional algebras C(G), E(G), O(G), P(G) (of continuous, smooth, holomorphic, regular functions) on groups are Hopf algebras in the category (Ste, ⊙) of stereotype spaces.
Group algebras. The stereotype group algebras C⋆(G), E⋆(G), O⋆(G), P⋆(G) (of measures, distributions, analytic functionals and currents) on groups are Hopf algebras in the category (Ste, ⊛) of stereotype spaces. These Hopf algebras are used in the duality theories for non-commutative groups.
== See also ==
Quasitriangular Hopf algebra
Algebra/set analogy
Representation theory of Hopf algebras
Ribbon Hopf algebra
Superalgebra
Supergroup
Anyonic Lie algebra
Sweedler's Hopf algebra
Hopf algebra of permutations
Milnor–Moore theorem
== Notes and references ==
=== Notes ===
=== Citations ===
=== References ===
Dăscălescu, Sorin; Năstăsescu, Constantin; Raianu, Șerban (2001), Hopf Algebras. An introduction, Pure and Applied Mathematics, vol. 235 (1st ed.), Marcel Dekker, ISBN 978-0-8247-0481-0, Zbl 0962.16026.
Cartier, Pierre (2007), "A Primer of Hopf Algebras", in Cartier, P.; Moussa, P.; Julia, B.; Vanhove, P. (eds.), Frontiers in Number Theory, Physics, and Geometry, vol. II, Berlin: Springer, pp. 537–615, doi:10.1007/978-3-540-30308-4_12, ISBN 978-3-540-30307-7
Fuchs, Jürgen (1992), Affine Lie algebras and quantum groups. An introduction with applications in conformal field theory, Cambridge Monographs on Mathematical Physics, Cambridge: Cambridge University Press, ISBN 978-0-521-48412-1, Zbl 0925.17031
Heinz Hopf, Uber die Topologie der Gruppen-Mannigfaltigkeiten und ihrer Verallgemeinerungen, Annals of Mathematics 42 (1941), 22–52. Reprinted in Selecta Heinz Hopf, pp. 119–151, Springer, Berlin (1964). MR4784, Zbl 0025.09303
Montgomery, Susan (1993), Hopf algebras and their actions on rings, Regional Conference Series in Mathematics, vol. 82, Providence, Rhode Island: American Mathematical Society, ISBN 978-0-8218-0738-5, Zbl 0793.16029
Street, Ross (2007), Quantum groups: A Path To Current Algebra, Australian Mathematical Society Lecture Series, vol. 19, Cambridge University Press, ISBN 978-0-521-69524-4, MR 2294803, Zbl 1117.16031.
Sweedler, Moss E. (1969), Hopf algebras, Mathematics Lecture Note Series, W. A. Benjamin, Inc., New York, ISBN 9780805392548, MR 0252485, Zbl 0194.32901
Underwood, Robert G. (2011), An introduction to Hopf algebras, Berlin: Springer-Verlag, ISBN 978-0-387-72765-3, Zbl 1234.16022
Turaev, Vladimir; Virelizier, Alexis (2017), Monoidal Categories and Topological Field Theory, Progress in Mathematics, vol. 322, Springer, doi:10.1007/978-3-319-49834-8, ISBN 978-3-319-49833-1.
Akbarov, S.S. (2003). "Pontryagin duality in the theory of topological vector spaces and in topological algebra". Journal of Mathematical Sciences. 113 (2): 179–349. doi:10.1023/A:1020929201133. S2CID 115297067.
Akbarov, S.S. (2009). "Holomorphic functions of exponential type and duality for Stein groups with algebraic connected component of identity". Journal of Mathematical Sciences. 162 (4): 459–586. arXiv:0806.3205. doi:10.1007/s10958-009-9646-1. S2CID 115153766. | Wikipedia/Hopf_algebra |
In mathematics, a bialgebra over a field K is a vector space over K which is both a unital associative algebra and a counital coassociative coalgebra.: 46 The algebraic and coalgebraic structures are made compatible with a few more axioms. Specifically, the comultiplication and the counit are both unital algebra homomorphisms, or equivalently, the multiplication and the unit of the algebra both are coalgebra morphisms.: 46 (These statements are equivalent since they are expressed by the same commutative diagrams.): 46
Similar bialgebras are related by bialgebra homomorphisms. A bialgebra homomorphism is a linear map that is both an algebra and a coalgebra homomorphism.: 45
As reflected in the symmetry of the commutative diagrams, the definition of bialgebra is self-dual, so if one can define a dual of B (which is always possible if B is finite-dimensional), then it is automatically a bialgebra.
== Formal definition ==
(B, ∇, η, Δ, ε) is a bialgebra over K if it has the following properties:
B is a vector space over K;
there are K-linear maps (multiplication) ∇: B ⊗ B → B (equivalent to K-multilinear map ∇: B × B → B) and (unit) η: K → B, such that (B, ∇, η) is a unital associative algebra;
there are K-linear maps (comultiplication) Δ: B → B ⊗ B and (counit) ε: B → K, such that (B, Δ, ε) is a (counital coassociative) coalgebra;
compatibility conditions expressed by the following commutative diagrams:
Multiplication ∇ and comultiplication Δ: 147
where τ: B ⊗ B → B ⊗ B is the linear map defined by τ(x ⊗ y) = y ⊗ x for all x and y in B,
Multiplication ∇ and counit ε: 148
Comultiplication Δ and unit η: 148
Unit η and counit ε: 148
== Coassociativity and counit ==
The K-linear map Δ: B → B ⊗ B is coassociative if
(id_B ⊗ Δ) ∘ Δ = (Δ ⊗ id_B) ∘ Δ.
The K-linear map ε: B → K is a counit if
(id_B ⊗ ε) ∘ Δ = id_B = (ε ⊗ id_B) ∘ Δ.
Coassociativity and counit are expressed by the commutativity of the following two diagrams (they are the duals of the diagrams expressing associativity and unit of an algebra):
== Compatibility conditions ==
The four commutative diagrams can be read either as "comultiplication and counit are homomorphisms of algebras" or, equivalently, "multiplication and unit are homomorphisms of coalgebras".
These statements are meaningful once we explain the natural structures of algebra and coalgebra in all the vector spaces involved besides B: (K, ∇0, η0) is a unital associative algebra in an obvious way and (B ⊗ B, ∇2, η2) is a unital associative algebra with unit and multiplication
η2 := (η ⊗ η) : K ⊗ K ≡ K → (B ⊗ B),
∇2 := (∇ ⊗ ∇) ∘ (id ⊗ τ ⊗ id) : (B ⊗ B) ⊗ (B ⊗ B) → (B ⊗ B),
so that
∇2((x1 ⊗ x2) ⊗ (y1 ⊗ y2)) = ∇(x1 ⊗ y1) ⊗ ∇(x2 ⊗ y2)
or, omitting ∇ and writing multiplication as juxtaposition,
(x1 ⊗ x2)(y1 ⊗ y2) = x1y1 ⊗ x2y2;
similarly, (K, Δ0, ε0) is a coalgebra in an obvious way and B ⊗ B is a coalgebra with counit and comultiplication
ε2 := (ε ⊗ ε) : (B ⊗ B) → K ⊗ K ≡ K,
Δ2 := (id ⊗ τ ⊗ id) ∘ (Δ ⊗ Δ) : (B ⊗ B) → (B ⊗ B) ⊗ (B ⊗ B).
Then, diagrams 1 and 3 say that Δ: B → B ⊗ B is a homomorphism of unital (associative) algebras (B, ∇, η) and (B ⊗ B, ∇2, η2)
{\displaystyle \Delta \circ \nabla =\nabla _{2}\circ (\Delta \otimes \Delta ):(B\otimes B)\to (B\otimes B)}, or simply Δ(xy) = Δ(x) Δ(y),
{\displaystyle \Delta \circ \eta =\eta _{2}:K\to (B\otimes B)}, or simply Δ(1_B) = 1_{B⊗B};
diagrams 2 and 4 say that ε: B → K is a homomorphism of unital (associative) algebras (B, ∇, η) and (K, ∇0, η0):
{\displaystyle \epsilon \circ \nabla =\nabla _{0}\circ (\epsilon \otimes \epsilon ):(B\otimes B)\to K}, or simply ε(xy) = ε(x) ε(y),
{\displaystyle \epsilon \circ \eta =\eta _{0}:K\to K}, or simply ε(1_B) = 1_K.
Equivalently, diagrams 1 and 2 say that ∇: B ⊗ B → B is a homomorphism of (counital coassociative) coalgebras (B ⊗ B, Δ2, ε2) and (B, Δ, ε):
{\displaystyle (\nabla \otimes \nabla )\circ \Delta _{2}=\Delta \circ \nabla :(B\otimes B)\to (B\otimes B),}
{\displaystyle \nabla _{0}\circ \epsilon _{2}=\epsilon \circ \nabla :(B\otimes B)\to K};
diagrams 3 and 4 say that η: K → B is a homomorphism of (counital coassociative) coalgebras (K, Δ0, ε0) and (B, Δ, ε):
{\displaystyle \eta _{2}\circ \Delta _{0}=\Delta \circ \eta :K\to (B\otimes B),}
{\displaystyle \eta _{0}\circ \epsilon _{0}=\epsilon \circ \eta :K\to K},
where
{\displaystyle \epsilon _{0}=id_{K}=\eta _{0}}.
== Examples ==
=== Group bialgebra ===
An example of a bialgebra is the set of functions from a finite group G (or more generally, any finite monoid) to ℝ, which we may represent as a vector space ℝ^G consisting of linear combinations of standard basis vectors e_g for each g ∈ G; such a vector may represent a probability distribution over G when its coefficients are all non-negative and sum to 1. An example of a suitable comultiplication and counit yielding a counital coalgebra is
{\displaystyle \Delta (\mathbf {e} _{g})=\mathbf {e} _{g}\otimes \mathbf {e} _{g}\,,}
which represents making a copy of a random variable (we extend Δ to all of ℝ^G by linearity), and
{\displaystyle \varepsilon (\mathbf {e} _{g})=1\,,}
(again extended linearly to all of ℝ^G), which represents "tracing out" a random variable — i.e., forgetting the value of a random variable (represented by a single tensor factor) to obtain a marginal distribution on the remaining variables (the remaining tensor factors).
Given the interpretation of (Δ,ε) in terms of probability distributions as above, the bialgebra consistency conditions amount to constraints on (∇,η) as follows:
η is an operator preparing a normalized probability distribution which is independent of all other random variables;
The product ∇ maps a probability distribution on two variables to a probability distribution on one variable;
Copying a random variable in the distribution given by η is equivalent to having two independent random variables in the distribution η;
Taking the product of two random variables, and preparing a copy of the resulting random variable, has the same distribution as preparing copies of each random variable independently of one another, and multiplying them together in pairs.
A pair (∇,η) satisfying these constraints is given by the convolution operator
{\displaystyle \nabla {\bigl (}\mathbf {e} _{g}\otimes \mathbf {e} _{h}{\bigr )}=\mathbf {e} _{gh}\,,}
again extended to all of ℝ^G ⊗ ℝ^G by linearity; this produces a normalized probability distribution from a distribution on two random variables, and has as a unit the delta-distribution
{\displaystyle \eta =\mathbf {e} _{i}\;,}
where i ∈ G denotes the identity element of the group G.
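These maps are easy to experiment with numerically. The following sketch (an illustrative choice of group, not from the article) realizes ∇, Δ, ε and η for the cyclic group Z/3Z, with distributions stored as dictionaries of exact rational coefficients:

```python
from fractions import Fraction

# Elements of R^G for G = Z/3Z: dicts mapping group elements 0,1,2 to coefficients.
N = 3  # illustrative choice: the additive group Z/3Z

def conv(p, q):
    """Multiplication (nabla): convolution of two distributions on G."""
    out = {g: Fraction(0) for g in range(N)}
    for g, a in p.items():
        for h, b in q.items():
            out[(g + h) % N] += a * b
    return out

def delta(p):
    """Comultiplication (Delta): copy a random variable; returns a dict on G x G."""
    return {(g, g): a for g, a in p.items()}

def eps(p):
    """Counit (epsilon): sum of coefficients (marginalizes everything away)."""
    return sum(p.values())

unit = {0: Fraction(1)}                        # eta: delta-distribution at the identity
uniform = {g: Fraction(1, N) for g in range(N)}

# Convolving with the unit leaves a distribution unchanged,
# and eps(p conv q) = eps(p) * eps(q): eps is an algebra homomorphism.
assert conv(unit, uniform) == uniform
assert eps(conv(uniform, uniform)) == eps(uniform) * eps(uniform) == 1
```

The assertions check two of the compatibility conditions listed above on concrete distributions; the other conditions can be verified the same way.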
=== Other examples ===
Other examples of bialgebras include the tensor algebra, which can be made into a bialgebra by adding the appropriate comultiplication and counit; these are worked out in detail in that article.
Bialgebras can often be extended to Hopf algebras, if an appropriate antipode can be found; thus, all Hopf algebras are examples of bialgebras. Similar structures with different compatibility between the product and comultiplication, or different types of multiplication and comultiplication, include Lie bialgebras and Frobenius algebras. Additional examples are given in the article on coalgebras.
== See also ==
Quasi-bialgebra
== Notes ==
== References ==
Dăscălescu, Sorin; Năstăsescu, Constantin; Raianu, Șerban (2001), "4. Bialgebras and Hopf Algebras", Hopf Algebras: An introduction, Pure and Applied Mathematics, vol. 235, Marcel Dekker, ISBN 0-8247-0481-9.
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, V. (2010). "Bialgebras and Hopf algebras. Motivation, definitions, and examples". Algebras, Rings and Modules: Lie Algebras and Hopf Algebras. American Mathematical Society. pp. 131–173. ISBN 978-0-8218-5262-0.
Kassel, Christian (2012). "The Language of Hopf Algebras". Quantum Groups. Springer Science & Business Media. ISBN 978-1-4612-0783-2.
Underwood, Robert G. (28 August 2011). An Introduction to Hopf Algebras. Springer Science & Business Media. ISBN 978-0-387-72766-0.
In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space ℂⁿ. The existence of a complex derivative in a neighbourhood is a very strong condition: it implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (is analytic). Holomorphic functions are the central objects of study in complex analysis.
Though the term analytic function is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. That all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis.
Holomorphic functions are also sometimes referred to as regular functions. A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase "holomorphic at a point z₀" means not just differentiable at z₀, but differentiable everywhere within some neighbourhood of z₀ in the complex plane.
== Definition ==
Given a complex-valued function f of a single complex variable, the derivative of f at a point z₀ in its domain is defined as the limit
{\displaystyle f'(z_{0})=\lim _{z\to z_{0}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}.}
This is the same definition as for the derivative of a real function, except that all quantities are complex. In particular, the limit is taken as the complex number z tends to z₀, and this means that the same value is obtained for any sequence of complex values for z that tends to z₀. If the limit exists, f is said to be complex differentiable at z₀. This concept of complex differentiability shares several properties with real differentiability: it is linear and obeys the product rule, quotient rule, and chain rule.
A function is holomorphic on an open set U if it is complex differentiable at every point of U. A function f is holomorphic at a point z₀ if it is holomorphic on some neighbourhood of z₀.
A function is holomorphic on some non-open set A if it is holomorphic at every point of A.
A function may be complex differentiable at a point but not holomorphic at this point. For example, the function f(z) = |z|² = z z̄ is complex differentiable at 0, but is not complex differentiable anywhere else, in particular at no point near 0 (see the Cauchy–Riemann equations, below). So, it is not holomorphic at 0.
The relationship between real differentiability and complex differentiability is the following: if a complex function f(x + iy) = u(x, y) + i v(x, y) is holomorphic, then u and v have first partial derivatives with respect to x and y, and satisfy the Cauchy–Riemann equations:
{\displaystyle {\frac {\partial u}{\partial x}}={\frac {\partial v}{\partial y}}\qquad {\mbox{and}}\qquad {\frac {\partial u}{\partial y}}=-{\frac {\partial v}{\partial x}}\,}
or, equivalently, the Wirtinger derivative of f with respect to z̄, the complex conjugate of z, is zero:
{\displaystyle {\frac {\partial f}{\partial {\bar {z}}}}=0,}
which is to say that, roughly, f is functionally independent of z̄, the complex conjugate of z.
If continuity is not assumed, the converse is not necessarily true. A simple converse is that if u and v have continuous first partial derivatives and satisfy the Cauchy–Riemann equations, then f is holomorphic. A more satisfying converse, which is much harder to prove, is the Looman–Menchoff theorem: if f is continuous, u and v have first partial derivatives (but not necessarily continuous), and they satisfy the Cauchy–Riemann equations, then f is holomorphic.
== Terminology ==
The term holomorphic was introduced in 1875 by Charles Briot and Jean-Claude Bouquet, two of Augustin-Louis Cauchy's students, and derives from the Greek ὅλος (hólos) meaning "whole", and μορφή (morphḗ) meaning "form" or "appearance" or "type", in contrast to the term meromorphic derived from μέρος (méros) meaning "part". A holomorphic function resembles an entire function ("whole") in a domain of the complex plane while a meromorphic function (defined to mean holomorphic except at certain isolated poles), resembles a rational fraction ("part") of entire functions in a domain of the complex plane. Cauchy had instead used the term synectic.
Today, the term "holomorphic function" is sometimes preferred to "analytic function". An important result in complex analysis is that every holomorphic function is complex analytic, a fact that does not follow obviously from the definitions. The term "analytic" is however also in wide use.
== Properties ==
Because complex differentiation is linear and obeys the product, quotient, and chain rules, the sums, products and compositions of holomorphic functions are holomorphic, and the quotient of two holomorphic functions is holomorphic wherever the denominator is not zero. That is, if functions f and g are holomorphic in a domain U, then so are f + g, f − g, f g, and f ∘ g. Furthermore, f / g is holomorphic if g has no zeros in U; otherwise it is meromorphic.
If one identifies ℂ with the real plane ℝ², then the holomorphic functions coincide with those functions of two real variables with continuous first derivatives which solve the Cauchy–Riemann equations, a set of two partial differential equations.
Every holomorphic function can be separated into its real and imaginary parts f(x + iy) = u(x, y) + i v(x, y), and each of these is a harmonic function on ℝ² (each satisfies Laplace's equation ∇²u = ∇²v = 0), with v the harmonic conjugate of u.
Conversely, every harmonic function u(x, y) on a simply connected domain Ω ⊂ ℝ² is the real part of a holomorphic function: if v is the harmonic conjugate of u, unique up to a constant, then f(x + iy) = u(x, y) + i v(x, y) is holomorphic.
Cauchy's integral theorem implies that the contour integral of every holomorphic function along a loop vanishes:
{\displaystyle \oint _{\gamma }f(z)\,\mathrm {d} z=0.}
Here γ is a rectifiable path in a simply connected complex domain U ⊂ ℂ whose start point is equal to its end point, and f : U → ℂ is a holomorphic function.
Cauchy's integral formula states that every function holomorphic inside a disk is completely determined by its values on the disk's boundary. Furthermore: suppose U ⊂ ℂ is a complex domain, f : U → ℂ is a holomorphic function and the closed disk D = {z : |z − z₀| ≤ r} is completely contained in U. Let γ be the circle forming the boundary of D. Then for every a in the interior of D:
{\displaystyle f(a)={\frac {1}{2\pi i}}\oint _{\gamma }{\frac {f(z)}{z-a}}\,\mathrm {d} z}
where the contour integral is taken counter-clockwise.
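The formula lends itself to direct numerical verification. The sketch below (sample point and discretization chosen for illustration) approximates the contour integral over the unit circle by an equally spaced Riemann sum and recovers f(a) for f = exp:

```python
import cmath

def cauchy_value(f, a, z0=0.0, r=1.0, n=2000):
    """Approximate f(a) from boundary values via Cauchy's integral formula,
    discretizing the circle |z - z0| = r with n equally spaced points."""
    total = 0.0
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = z0 + r * cmath.exp(1j * t)
        dz = 1j * r * cmath.exp(1j * t) * (2 * cmath.pi / n)  # z'(t) dt
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

a = 0.3 + 0.1j                       # a point in the interior of the unit disk
approx = cauchy_value(cmath.exp, a)
assert abs(approx - cmath.exp(a)) < 1e-9
```

For a periodic analytic integrand this equally spaced rule converges extremely fast, which is why n = 2000 already gives near machine precision.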
The derivative f′(a) can be written as a contour integral using Cauchy's differentiation formula:
{\displaystyle f'\!(a)={\frac {1}{2\pi i}}\oint _{\gamma }{\frac {f(z)}{(z-a)^{2}}}\,\mathrm {d} z,}
for any simple loop positively winding once around a, and
{\displaystyle f'\!(a)=\lim \limits _{\gamma \to a}{\frac {i}{2{\mathcal {A}}(\gamma )}}\oint _{\gamma }f(z)\,\mathrm {d} {\bar {z}},}
for infinitesimal positive loops γ around a.
In regions where the first derivative is not zero, holomorphic functions are conformal: they preserve angles and the shape (but not size) of small figures.
Every holomorphic function is analytic. That is, a holomorphic function f has derivatives of every order at each point a in its domain, and it coincides with its own Taylor series at a in a neighbourhood of a. In fact, f coincides with its Taylor series at a in any disk centred at that point and lying within the domain of the function.
From an algebraic point of view, the set of holomorphic functions on an open set is a commutative ring and a complex vector space. Additionally, the set of holomorphic functions in an open set U is an integral domain if and only if the open set U is connected. In fact, it is a locally convex topological vector space, with the seminorms being the suprema on compact subsets.
From a geometric perspective, a function f is holomorphic at z₀ if and only if its exterior derivative df in a neighbourhood U of z₀ is equal to f′(z) dz for some continuous function f′. It follows from
{\displaystyle 0=\mathrm {d} ^{2}f=\mathrm {d} (f'\,\mathrm {d} z)=\mathrm {d} f'\wedge \mathrm {d} z}
that df′ is also proportional to dz, implying that f′ is itself holomorphic and thus that f is infinitely differentiable. Similarly,
{\displaystyle \mathrm {d} (f\,\mathrm {d} z)=f'\,\mathrm {d} z\wedge \mathrm {d} z=0}
implies that any function f that is holomorphic on the simply connected region U is also integrable on U.
(For a path γ from z₀ to z lying entirely in U, define
{\displaystyle F_{\gamma }(z)=F(0)+\int _{\gamma }f\,\mathrm {d} z};
in light of the Jordan curve theorem and the generalized Stokes' theorem, F_γ(z) is independent of the particular choice of path γ, and thus F(z) is a well-defined function on U having dF = f dz, or f = dF/dz.)
== Examples ==
All polynomial functions in z with complex coefficients are entire functions (holomorphic in the whole complex plane ℂ), and so are the exponential function exp z and the trigonometric functions
{\displaystyle \cos {z}={\tfrac {1}{2}}{\bigl (}\exp(+iz)+\exp(-iz){\bigr )}}
and
{\displaystyle \sin {z}=-{\tfrac {1}{2}}i{\bigl (}\exp(+iz)-\exp(-iz){\bigr )}}
(cf. Euler's formula). The principal branch of the complex logarithm function log z is holomorphic on the domain
{\displaystyle \mathbb {C} \smallsetminus \{z\in \mathbb {R} :z\leq 0\}}.
The square root function can be defined as
{\displaystyle {\sqrt {z}}\equiv \exp {\bigl (}{\tfrac {1}{2}}\log z{\bigr )}}
and is therefore holomorphic wherever the logarithm log z is. The reciprocal function 1/z is holomorphic on ℂ ∖ {0}. (The reciprocal function, and any other rational function, is meromorphic on ℂ.)
As a consequence of the Cauchy–Riemann equations, any real-valued holomorphic function must be constant. Therefore, the absolute value |z|, the argument arg z, the real part Re(z) and the imaginary part Im(z) are not holomorphic. Another typical example of a continuous function which is not holomorphic is the complex conjugate z̄. (The complex conjugate is antiholomorphic.)
== Several variables ==
The definition of a holomorphic function generalizes to several complex variables in a straightforward way. A function f : (z₁, z₂, …, z_n) ↦ f(z₁, z₂, …, z_n) in n complex variables is analytic at a point p if there exists a neighbourhood of p in which f is equal to a convergent power series in n complex variables; the function f is holomorphic in an open subset U of ℂⁿ if it is analytic at each point in U. Osgood's lemma shows (using the multivariate Cauchy integral formula) that, for a continuous function f, this is equivalent to f being holomorphic in each variable separately (meaning that if any n − 1 coordinates are fixed, then the restriction of f is a holomorphic function of the remaining coordinate). The much deeper Hartogs' theorem proves that the continuity assumption is unnecessary: f is holomorphic if and only if it is holomorphic in each variable separately.
More generally, a function of several complex variables that is square integrable over every compact subset of its domain is analytic if and only if it satisfies the Cauchy–Riemann equations in the sense of distributions.
Functions of several complex variables are in some basic ways more complicated than functions of a single complex variable. For example, the region of convergence of a power series is not necessarily an open ball; these regions are logarithmically-convex Reinhardt domains, the simplest example of which is a polydisk. However, they also come with some fundamental restrictions. Unlike functions of a single complex variable, the possible domains on which there are holomorphic functions that cannot be extended to larger domains are highly limited. Such a set is called a domain of holomorphy.
A complex differential (p, 0)-form α is holomorphic if and only if its antiholomorphic Dolbeault derivative is zero: {\displaystyle {\bar {\partial }}\alpha =0}.
== Extension to functional analysis ==
The concept of a holomorphic function can be extended to the infinite-dimensional spaces of functional analysis. For instance, the Fréchet or Gateaux derivative can be used to define a notion of a holomorphic function on a Banach space over the field of complex numbers.
== See also ==
== References ==
== Further reading ==
Blakey, Joseph (1958). University Mathematics (2nd ed.). London, UK: Blackie and Sons. OCLC 2370110.
== External links ==
"Analytic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In algebra, a unit or invertible element of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that
{\displaystyle vu=uv=1,}
where 1 is the multiplicative identity; the element v is unique for this property and is called the multiplicative inverse of u. The set of units of R forms a group R× under multiplication, called the group of units or unit group of R. Other notations for the unit group are R∗, U(R), and E(R) (from the German term Einheit).
Less commonly, the term unit is sometimes used to refer to the element 1 of the ring, in expressions like ring with a unit or unit ring, and also unit matrix. Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of a rng.
== Examples ==
The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unity in a ring R is a unit: if rⁿ = 1, then rⁿ⁻¹ is a multiplicative inverse of r.
In a nonzero ring, the element 0 is not a unit, so R× is not closed under addition.
A nonzero ring R in which every nonzero element is a unit (that is, R× = R ∖ {0}) is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers R is R ∖ {0}.
=== Integer ring ===
In the ring of integers Z, the only units are 1 and −1.
In the ring Z/nZ of integers modulo n, the units are the congruence classes (mod n) represented by integers coprime to n. They constitute the multiplicative group of integers modulo n.
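A short sketch (illustrative, and relying on Python 3.8+ for the three-argument `pow` modular inverse) enumerates these units and checks that each one indeed has a multiplicative inverse mod n:

```python
from math import gcd

def units_mod(n):
    """Unit group of Z/nZ: classes represented by integers coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def inverse_mod(a, n):
    """Modular inverse via built-in pow (Python 3.8+)."""
    return pow(a, -1, n)

U = units_mod(9)
assert U == [1, 2, 4, 5, 7, 8]   # phi(9) = 6 units
assert all((a * inverse_mod(a, 9)) % 9 == 1 for a in U)
```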
=== Ring of integers of a number field ===
In the ring Z[√3] obtained by adjoining the quadratic integer √3 to Z, one has (2 + √3)(2 − √3) = 1, so 2 + √3 is a unit, and so are its powers, so Z[√3] has infinitely many units.
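This can be checked with exact integer arithmetic. In the sketch below (representation chosen for illustration), an element a + b√3 is stored as the pair (a, b); the norm a² − 3b² is multiplicative and equals ±1 exactly on units, and the powers of 2 + √3 all have norm 1:

```python
def mul(p, q):
    """Multiply elements a + b*sqrt(3), represented as integer pairs (a, b)."""
    (a, b), (c, d) = p, q
    return (a * c + 3 * b * d, a * d + b * c)

def norm(p):
    """Field norm: N(a + b*sqrt(3)) = a**2 - 3*b**2."""
    a, b = p
    return a * a - 3 * b * b

u = (2, 1)    # 2 + sqrt(3)
v = (2, -1)   # its inverse 2 - sqrt(3)
assert mul(u, v) == (1, 0)   # (2 + sqrt(3))(2 - sqrt(3)) = 1

# Every power of u is again a unit (norm 1), giving infinitely many units.
w = (1, 0)
for _ in range(5):
    w = mul(w, u)
    assert norm(w) == 1
```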
More generally, for the ring of integers R in a number field F, Dirichlet's unit theorem states that R× is isomorphic to the group
{\displaystyle \mathbf {Z} ^{n}\times \mu _{R}}
where μ_R is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit group, is
{\displaystyle n=r_{1}+r_{2}-1,}
where r₁ and r₂ are the number of real embeddings and the number of pairs of complex embeddings of F, respectively.
This recovers the Z[√3] example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since r₁ = 2 and r₂ = 0.
=== Polynomials and power series ===
For a commutative ring R, the units of the polynomial ring R[x] are the polynomials
{\displaystyle p(x)=a_{0}+a_{1}x+\dots +a_{n}x^{n}}
such that a₀ is a unit in R and the remaining coefficients a₁, …, a_n are nilpotent, i.e., satisfy a_i^N = 0 for some N.
In particular, if R is a domain (or more generally reduced), then the units of R[x] are the units of R.
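For a concrete instance (an illustrative sketch), take R = Z/4Z, where the coefficient 2 is nilpotent (2² = 0 mod 4): the polynomial 1 + 2x is then a unit in R[x], and it turns out to be its own inverse:

```python
def polymul_mod(p, q, n):
    """Multiply polynomials (coefficient lists, lowest degree first) over Z/nZ."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % n
    while len(out) > 1 and out[-1] == 0:   # strip trailing zero coefficients
        out.pop()
    return out

# (1 + 2x)**2 = 1 + 4x + 4x**2 = 1 in (Z/4Z)[x]
p = [1, 2]
assert polymul_mod(p, p, 4) == [1]
```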
The units of the power series ring R[[x]] are the power series
{\displaystyle p(x)=\sum _{i=0}^{\infty }a_{i}x^{i}}
such that a₀ is a unit in R.
=== Matrix rings ===
The unit group of the ring Mn(R) of n × n matrices over a ring R is the group GLn(R) of invertible matrices. For a commutative ring R, an element A of Mn(R) is invertible if and only if the determinant of A is invertible in R. In that case, A−1 can be given explicitly in terms of the adjugate matrix.
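For 2 × 2 matrices the adjugate formula is short enough to verify directly. The following sketch (assuming Python 3.8+ for `pow(det, -1, n)`) inverts a matrix over Z/10Z, whose determinant is a unit mod 10:

```python
from math import gcd

def inv2x2_mod(m, n):
    """Inverse of a 2x2 matrix over Z/nZ via the adjugate formula,
    valid exactly when det(m) is a unit mod n."""
    (a, b), (c, d) = m
    det = (a * d - b * c) % n
    if gcd(det, n) != 1:
        raise ValueError("determinant is not a unit mod n")
    di = pow(det, -1, n)          # inverse of the determinant in Z/nZ
    adj = [[d, -b], [-c, a]]      # adjugate matrix
    return [[(di * e) % n for e in row] for row in adj]

def matmul_mod(x, y, n):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) % n
             for j in range(2)] for i in range(2)]

m = [[1, 2], [3, 5]]              # det = -1, a unit mod 10
mi = inv2x2_mod(m, 10)
assert matmul_mod(m, mi, 10) == [[1, 0], [0, 1]]
```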
=== In general ===
For elements x and y in a ring R, if 1 − xy is invertible, then 1 − yx is invertible with inverse 1 + y(1 − xy)⁻¹x; this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:
{\displaystyle (1-yx)^{-1}=\sum _{n\geq 0}(yx)^{n}=1+y\left(\sum _{n\geq 0}(xy)^{n}\right)x=1+y(1-xy)^{-1}x.}
See Hua's identity for similar results.
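The identity itself is easily confirmed in a genuinely noncommutative ring. The sketch below (with 2 × 2 rational matrices chosen so that xy ≠ yx) checks that 1 + y(1 − xy)⁻¹x is a two-sided inverse of 1 − yx:

```python
from fractions import Fraction

def mat(*rows):
    return [[Fraction(e) for e in row] for row in rows]

def mmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def msub(x, y):
    return [[x[i][j] - y[i][j] for j in range(2)] for i in range(2)]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(2)] for i in range(2)]

def minv(m):
    """Exact 2x2 inverse via the adjugate; requires det != 0."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

I = mat([1, 0], [0, 1])
x = mat([0, 1], [0, 0])   # chosen so that xy != yx
y = mat([0, 0], [2, 0])

# candidate inverse: 1 + y (1 - xy)^(-1) x
candidate = madd(I, mmul(mmul(y, minv(msub(I, mmul(x, y)))), x))
assert mmul(msub(I, mmul(y, x)), candidate) == I
assert mmul(candidate, msub(I, mmul(y, x))) == I
```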
== Group of units ==
A commutative ring is a local ring if R ∖ R× is a maximal ideal.
As it turns out, if R ∖ R× is an ideal, then it is necessarily a maximal ideal and R is local since a maximal ideal is disjoint from R×.
If R is a finite field, then R× is a cyclic group of order |R| − 1.
Every ring homomorphism f : R → S induces a group homomorphism R× → S×, since f maps units to units. In fact, the formation of the unit group defines a functor from the category of rings to the category of groups. This functor has a left adjoint which is the integral group ring construction.
The group scheme GL₁ is isomorphic to the multiplicative group scheme 𝔾_m over any base, so for any commutative ring R, the groups GL₁(R) and 𝔾_m(R) are canonically isomorphic to U(R). Note that the functor 𝔾_m (that is, R ↦ U(R)) is representable in the sense:
{\displaystyle \mathbb {G} _{m}(R)\simeq \operatorname {Hom} (\mathbb {Z} [t,t^{-1}],R)}
for commutative rings R (this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of ring homomorphisms ℤ[t, t⁻¹] → R and the set of unit elements of R (in contrast, ℤ[t] represents the additive group 𝔾_a, i.e., the forgetful functor from the category of commutative rings to the category of abelian groups).
== Associatedness ==
Suppose that R is commutative. Elements r and s of R are called associate if there exists a unit u in R such that r = us; then write r ~ s. In any ring, pairs of additive inverse elements x and −x are associate, since any ring includes the unit −1. For example, 6 and −6 are associate in Z. In general, ~ is an equivalence relation on R.
Associatedness can also be described in terms of the action of R× on R via multiplication: Two elements of R are associate if they are in the same R×-orbit.
In an integral domain, the set of associates of a given nonzero element has the same cardinality as R×.
The equivalence relation ~ can be viewed as any one of Green's semigroup relations specialized to the multiplicative semigroup of a commutative ring R.
== See also ==
S-units
Localization of a ring and a module
== Notes ==
== Citations ==
== Sources ==
In differential geometry, the pushforward is a linear approximation of a smooth map between smooth manifolds at the level of their tangent spaces. Suppose that φ : M → N is a smooth map between smooth manifolds; then the differential of φ at a point x, denoted dφₓ, is, in some sense, the best linear approximation of φ near x. It can be viewed as a generalization of the total derivative of ordinary calculus. Explicitly, the differential is a linear map from the tangent space of M at x to the tangent space of N at φ(x),
{\displaystyle \mathrm {d} \varphi _{x}\colon T_{x}M\to T_{\varphi (x)}N}.
Hence it can be used to push tangent vectors on M forward to tangent vectors on N. The differential of a map φ is also called, by various authors, the derivative or total derivative of φ.
== Motivation ==
Let φ : U → V be a smooth map from an open subset U of ℝᵐ to an open subset V of ℝⁿ. For any point x in U, the Jacobian of φ at x (with respect to the standard coordinates) is the matrix representation of the total derivative of φ at x, which is a linear map
{\displaystyle d\varphi _{x}:T_{x}\mathbb {R} ^{m}\to T_{\varphi (x)}\mathbb {R} ^{n}}
between their tangent spaces. Note that the tangent spaces Tₓℝᵐ and T_φ(x)ℝⁿ are isomorphic to ℝᵐ and ℝⁿ, respectively. The pushforward generalizes this construction to the case that φ is a smooth function between any smooth manifolds M and N.
== The differential of a smooth map ==
Let φ : M → N be a smooth map of smooth manifolds. Given x ∈ M, the differential of φ at x is a linear map
{\displaystyle d\varphi _{x}\colon \ T_{x}M\to T_{\varphi (x)}N\,}
from the tangent space of M at x to the tangent space of N at φ(x). The image dφₓX of a tangent vector X ∈ TₓM under dφₓ is sometimes called the pushforward of X by φ. The exact definition of this pushforward depends on the definition one uses for tangent vectors (for the various definitions see tangent space).
If tangent vectors are defined as equivalence classes of the curves γ for which γ(0) = x, then the differential is given by
{\displaystyle d\varphi _{x}(\gamma '(0))=(\varphi \circ \gamma )'(0).}
Here, γ is a curve in M with γ(0) = x, and γ′(0) is the tangent vector to the curve γ at 0. In other words, the pushforward of the tangent vector to the curve γ at 0 is the tangent vector to the curve φ ∘ γ at 0.
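The curves definition can be checked numerically. A minimal sketch (with an invented map φ and curve γ) verifying dφx(γ′(0)) = (φ ∘ γ)′(0) up to discretization error:

```python
import numpy as np

def velocity(c, t=0.0, h=1e-6):
    """Central-difference derivative of a curve c : R -> R^n at t."""
    return (np.asarray(c(t + h)) - np.asarray(c(t - h))) / (2 * h)

# Illustrative choices: a map phi : R^2 -> R^2 and a curve gamma with x = gamma(0)
phi = lambda p: np.array([p[0] ** 2 - p[1], p[0] * p[1]])
gamma = lambda t: np.array([1.0 + t, 2.0 * t])   # gamma(0) = (1, 0), gamma'(0) = (1, 2)

J = np.array([[2.0, -1.0],    # Jacobian of phi at (1, 0): rows (2x, -1) and (y, x)
              [0.0, 1.0]])

lhs = J @ velocity(gamma)                  # d(phi)_x applied to gamma'(0)
rhs = velocity(lambda t: phi(gamma(t)))    # (phi o gamma)'(0)
# Both sides equal (0, 2) up to discretization error
```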
Alternatively, if tangent vectors are defined as derivations acting on smooth real-valued functions, then the differential is given by
{\displaystyle d\varphi _{x}(X)(f)=X(f\circ \varphi ),}
for an arbitrary function f ∈ C∞(N) and an arbitrary derivation X ∈ TxM at the point x ∈ M (a derivation is defined as a linear map X : C∞(M) → ℝ that satisfies the Leibniz rule; see: definition of tangent space via derivations). By definition, the pushforward of X is in Tφ(x)N and is therefore itself a derivation, dφx(X) : C∞(N) → ℝ.
After choosing two charts around x and around φ(x), φ is locally determined by a smooth map φ̂ : U → V between open sets of ℝ^m and ℝ^n, and
{\displaystyle d\varphi _{x}\left({\frac {\partial }{\partial u^{a}}}\right)={\frac {\partial {\widehat {\varphi }}^{b}}{\partial u^{a}}}{\frac {\partial }{\partial v^{b}}},}
in the Einstein summation notation, where the partial derivatives are evaluated at the point in U corresponding to x in the given chart.
Extending by linearity gives the following matrix:
{\displaystyle \left(d\varphi _{x}\right)_{a}^{\;b}={\frac {\partial {\widehat {\varphi }}^{b}}{\partial u^{a}}}.}
Thus the differential is a linear transformation, between tangent spaces, associated to the smooth map φ at each point. Therefore, in some chosen local coordinates, it is represented by the Jacobian matrix of the corresponding smooth map from ℝ^m to ℝ^n. In general, the differential need not be invertible. However, if φ is a local diffeomorphism, then dφx is invertible, and the inverse gives the pullback of Tφ(x)N.
The differential is frequently expressed using a variety of other notations, such as
{\displaystyle D\varphi _{x},\left(\varphi _{*}\right)_{x},\varphi '(x),T_{x}\varphi .}
It follows from the definition that the differential of a composite is the composite of the differentials (i.e., functorial behaviour). This is the chain rule for smooth maps.
Also, the differential of a local diffeomorphism is a linear isomorphism of tangent spaces.
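The functoriality just mentioned can be verified symbolically. A small sketch using SymPy (the maps φ and ψ are arbitrary illustrative choices), checking that the Jacobian of a composite is the product of the Jacobians:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Illustrative maps phi : R^2 -> R^2 in (x, y) and psi : R^2 -> R^2 in (u, v)
phi = sp.Matrix([x + y, x * y])
psi = sp.Matrix([u ** 2, u - v])

J_phi = phi.jacobian([x, y])             # matrix of d(phi) at (x, y)
J_psi = psi.jacobian([u, v])             # matrix of d(psi) at (u, v)

comp = psi.subs([(u, phi[0]), (v, phi[1])])   # psi o phi
J_comp = comp.jacobian([x, y])

# Chain rule (functoriality): d(psi o phi)_p = d(psi)_{phi(p)} . d(phi)_p
chain = J_psi.subs([(u, phi[0]), (v, phi[1])]) * J_phi
assert sp.simplify(J_comp - chain) == sp.zeros(2, 2)
```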
== The differential on the tangent bundle ==
The differential of a smooth map φ induces, in an obvious manner, a bundle map (in fact a vector bundle homomorphism) from the tangent bundle of M to the tangent bundle of N, denoted by dφ, which fits into the commutative diagram expressing πN ∘ dφ = φ ∘ πM, where πM and πN denote the bundle projections of the tangent bundles of M and N respectively.
dφ induces a bundle map from TM to the pullback bundle φ∗TN over M via
{\displaystyle (m,v_{m})\mapsto (\varphi (m),\operatorname {d} \!\varphi (m,v_{m})),}
where m ∈ M and vm ∈ TmM. The latter map may in turn be viewed as a section of the vector bundle Hom(TM, φ∗TN) over M. The bundle map dφ is also denoted by Tφ and called the tangent map. In this way, T is a functor.
== Pushforward of vector fields ==
Given a smooth map φ : M → N and a vector field X on M, it is not usually possible to identify a pushforward of X by φ with some vector field Y on N. For example, if the map φ is not surjective, there is no natural way to define such a pushforward outside of the image of φ. Also, if φ is not injective there may be more than one choice of pushforward at a given point. Nevertheless, one can make this difficulty precise, using the notion of a vector field along a map.
A section of φ∗TN over M is called a vector field along φ. For example, if M is a submanifold of N and φ is the inclusion, then a vector field along φ is just a section of the tangent bundle of N along M; in particular, a vector field on M defines such a section via the inclusion of TM inside TN. This idea generalizes to arbitrary smooth maps.
Suppose that X is a vector field on M, i.e., a section of TM. Then dφ ∘ X yields, in the above sense, the pushforward φ∗X, which is a vector field along φ, i.e., a section of φ∗TN over M.
Any vector field Y on N defines a pullback section φ∗Y of φ∗TN with (φ∗Y)x = Yφ(x). A vector field X on M and a vector field Y on N are said to be φ-related if φ∗X = φ∗Y as vector fields along φ. In other words, for all x in M, dφx(X) = Yφ(x).
In some situations, given a vector field X on M, there is a unique vector field Y on N which is φ-related to X. This is true in particular when φ is a diffeomorphism. In this case, the pushforward defines a vector field Y on N, given by
{\displaystyle Y_{y}=\phi _{*}\left(X_{\phi ^{-1}(y)}\right).}
A more general situation arises when φ is surjective (for example the bundle projection of a fiber bundle). Then a vector field X on M is said to be projectable if for all y in N, dφx(Xx) is independent of the choice of x in φ−1({y}). This is precisely the condition that guarantees that a pushforward of X, as a vector field on N, is well defined.
=== Examples ===
==== Pushforward from multiplication on Lie groups ====
Given a Lie group G, we can use the multiplication map m(−, −) : G × G → G to get the left multiplication Lg = m(g, −) and right multiplication Rg = m(−, g) maps G → G. These maps can be used to construct left- or right-invariant vector fields on G from its tangent space at the origin 𝔤 = TeG (which is its associated Lie algebra). For example, given X ∈ 𝔤 we get an associated vector field 𝔛 on G defined by
{\displaystyle {\mathfrak {X}}_{g}=(L_{g})_{*}(X)\in T_{g}G}
for every g ∈ G. This can be readily computed using the curves definition of pushforward maps. If we have a curve γ : (−1, 1) → G with γ(0) = e and γ′(0) = X,
we get
{\displaystyle {\begin{aligned}(L_{g})_{*}(X)&=(L_{g}\circ \gamma )'(0)\\&=(g\cdot \gamma (t))'(0)\\&={\frac {dg}{dt}}\gamma (0)+g\cdot {\frac {d\gamma }{dt}}(0)\\&=g\cdot \gamma '(0)\end{aligned}}}
since Lg is constant with respect to γ, so that dg/dt = 0. This implies we can interpret the tangent spaces TgG as TgG = g ⋅ TeG = g ⋅ 𝔤.
==== Pushforward for some Lie groups ====
For example, if G is the Heisenberg group given by matrices
{\displaystyle H=\left\{{\begin{bmatrix}1&a&b\\0&1&c\\0&0&1\end{bmatrix}}:a,b,c\in \mathbb {R} \right\}}
it has Lie algebra given by the set of matrices
{\displaystyle {\mathfrak {h}}=\left\{{\begin{bmatrix}0&a&b\\0&0&c\\0&0&0\end{bmatrix}}:a,b,c\in \mathbb {R} \right\}}
since we can find a path γ : (−1, 1) → H giving any real number in one of the upper matrix entries with i < j (i-th row and j-th column). Then, for
{\displaystyle g={\begin{bmatrix}1&2&3\\0&1&4\\0&0&1\end{bmatrix}}}
we have
{\displaystyle T_{g}H=g\cdot {\mathfrak {h}}=\left\{{\begin{bmatrix}0&a&b+2c\\0&0&c\\0&0&0\end{bmatrix}}:a,b,c\in \mathbb {R} \right\}}
which is equal to the original set of matrices. This is not always the case; for example, in the group
{\displaystyle G=\left\{{\begin{bmatrix}a&b\\0&1/a\end{bmatrix}}:a,b\in \mathbb {R} ,a\neq 0\right\}}
we have its Lie algebra as the set of matrices
{\displaystyle {\mathfrak {g}}=\left\{{\begin{bmatrix}a&b\\0&-a\end{bmatrix}}:a,b\in \mathbb {R} \right\}}
hence for some matrix
{\displaystyle g={\begin{bmatrix}2&3\\0&1/2\end{bmatrix}}}
we have
{\displaystyle T_{g}G=\left\{{\begin{bmatrix}2a&2b-3a\\0&-a/2\end{bmatrix}}:a,b\in \mathbb {R} \right\}}
which is not the same set of matrices.
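Both computations can be reproduced symbolically. A short SymPy sketch, using the specific matrices g chosen above:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# Heisenberg group element g and a generic Lie algebra element X in h
g = sp.Matrix([[1, 2, 3], [0, 1, 4], [0, 0, 1]])
X = sp.Matrix([[0, a, b], [0, 0, c], [0, 0, 0]])
print(g * X)    # [[0, a, b + 2*c], [0, 0, c], [0, 0, 0]]: reparametrizes to h itself

# The 2x2 group: g2 * X2 has bottom-right entry -a/2 rather than -2a,
# so T_g G differs from the Lie algebra g as a set of matrices.
g2 = sp.Matrix([[2, 3], [0, sp.Rational(1, 2)]])
X2 = sp.Matrix([[a, b], [0, -a]])
print(g2 * X2)  # [[2*a, 2*b - 3*a], [0, -a/2]]
```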
== See also ==
Pullback (differential geometry)
Flow-based generative model
== References ==
Lee, John M. (2003). Introduction to Smooth Manifolds. Springer Graduate Texts in Mathematics. Vol. 218.
Jost, Jürgen (2002). Riemannian Geometry and Geometric Analysis. Berlin: Springer-Verlag. ISBN 3-540-42627-2. See section 1.6.
Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. ISBN 0-8053-0102-X. See section 1.7 and 2.3.
In mathematics, a Heyting algebra (also known as pseudo-Boolean algebra) is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b called implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced in 1930 by Arend Heyting to formalize intuitionistic logic.
Heyting algebras are distributive lattices. Every Boolean algebra is a Heyting algebra when a → b is defined as ¬a ∨ b, as is every complete distributive lattice satisfying a one-sided infinite distributive law when a → b is taken to be the supremum of the set of all c for which c ∧ a ≤ b. In the finite case, every nonempty distributive lattice, in particular every nonempty finite chain, is automatically complete and completely distributive, and hence a Heyting algebra.
It follows from the definition that 1 ≤ 0 → a, corresponding to the intuition that any proposition a is implied by a contradiction 0. Although the negation operation ¬a is not part of the definition, it is definable as a → 0. The intuitive content of ¬a is the proposition that to assume a would lead to a contradiction. The definition implies that a ∧ ¬a = 0. It can further be shown that a ≤ ¬¬a, although the converse, ¬¬a ≤ a, is not true in general, that is, double negation elimination does not hold in general in a Heyting algebra.
Heyting algebras generalize Boolean algebras in the sense that Boolean algebras are precisely the Heyting algebras satisfying a ∨ ¬a = 1 (excluded middle), equivalently ¬¬a = a. Those elements of a Heyting algebra H of the form ¬a comprise a Boolean lattice, but in general this is not a subalgebra of H (see below).
Heyting algebras serve as the algebraic models of propositional intuitionistic logic in the same way Boolean algebras model propositional classical logic. The internal logic of an elementary topos is based on the Heyting algebra of subobjects of the terminal object 1 ordered by inclusion, equivalently the morphisms from 1 to the subobject classifier Ω.
The open sets of any topological space form a complete Heyting algebra. Complete Heyting algebras thus become a central object of study in pointless topology.
Every Heyting algebra whose set of non-greatest elements has a greatest element (and forms another Heyting algebra) is subdirectly irreducible, whence every Heyting algebra can be made subdirectly irreducible by adjoining a new greatest element. It follows that even among the finite Heyting algebras there exist infinitely many that are subdirectly irreducible, no two of which have the same equational theory. Hence no finite set of finite Heyting algebras can supply all the counterexamples to non-laws of Heyting algebra. This is in sharp contrast to Boolean algebras, whose only subdirectly irreducible one is the two-element one, which on its own therefore suffices for all counterexamples to non-laws of Boolean algebra, the basis for the simple truth table decision method. Nevertheless, it is decidable whether an equation holds of all Heyting algebras.
Heyting algebras are less often called pseudo-Boolean algebras, or even Brouwer lattices, although the latter term may denote the dual definition, or have a slightly more general meaning.
== Formal definition ==
A Heyting algebra H is a bounded lattice such that for all a and b in H there is a greatest element x of H such that
{\displaystyle a\wedge x\leq b.}
This element is the relative pseudo-complement of a with respect to b, and is denoted a→b. We write 1 and 0 for the largest and the smallest element of H, respectively.
In any Heyting algebra, one defines the pseudo-complement ¬a of any element a by setting ¬a = (a→0). By definition, a ∧ ¬a = 0, and ¬a is the largest element having this property. However, it is not in general true that a ∨ ¬a = 1; thus ¬ is only a pseudo-complement, not a true complement, as would be the case in a Boolean algebra.
A complete Heyting algebra is a Heyting algebra that is a complete lattice.
A subalgebra of a Heyting algebra H is a subset H1 of H containing 0 and 1 and closed under the operations ∧, ∨ and →. It follows that it is also closed under ¬. A subalgebra is made into a Heyting algebra by the induced operations.
== Alternative definitions ==
=== Category-theoretic definition ===
A Heyting algebra H is a bounded lattice that has all exponential objects.
The lattice H is regarded as a category where meet, ∧, is the product. The exponential condition means that for any objects Y and Z in H an exponential Z^Y uniquely exists as an object in H.
A Heyting implication (often written using ⇒ or ⊸ to avoid confusions such as the use of → to indicate a functor) is just an exponential: Y ⇒ Z is an alternative notation for Z^Y. From the definition of exponentials we have that implication (⇒ : H × H → H) is right adjoint to meet (∧ : H × H → H). This adjunction can be written as (− ∧ Y) ⊣ (Y ⇒ −) or more fully as:
{\displaystyle (-\wedge Y):H{\stackrel {\longrightarrow }{\underset {\longleftarrow }{\top }}}H:(Y\Rightarrow -)}
=== Lattice-theoretic definitions ===
An equivalent definition of Heyting algebras can be given by considering the mappings
{\displaystyle {\begin{cases}f_{a}\colon H\to H\\f_{a}(x)=a\wedge x\end{cases}}}
for some fixed a in H. A bounded lattice H is a Heyting algebra if and only if every mapping fa is the lower adjoint of a monotone Galois connection. In this case the respective upper adjoint ga is given by ga(x) = a→x, where → is defined as above.
Yet another definition is as a residuated lattice whose monoid operation is ∧. The monoid unit must then be the top element 1. Commutativity of this monoid implies that the two residuals coincide as a→b.
=== Bounded lattice with an implication operation ===
Given a bounded lattice A with largest and smallest elements 1 and 0, and a binary operation →, these together form a Heyting algebra if and only if the following hold:
1. a → a = 1
2. a ∧ (a → b) = a ∧ b
3. b ∧ (a → b) = b
4. a → (b ∧ c) = (a → b) ∧ (a → c)
where equation 4 is the distributive law for →.
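These four identities are easy to test exhaustively on a small example. A sketch in Python, using the power set of {1, 2} (a Boolean algebra, hence a Heyting algebra) with A → B given by the complement of A joined with B:

```python
from itertools import product

# The power set of {1, 2} ordered by inclusion, with A -> B = complement(A) | B.
U = frozenset({1, 2})
elems = [frozenset(s) for s in ((), (1,), (2,), (1, 2))]
imp = lambda A, B: (U - A) | B

for A, B, C in product(elems, repeat=3):
    assert imp(A, A) == U                          # 1. a -> a = 1
    assert A & imp(A, B) == A & B                  # 2. a ∧ (a -> b) = a ∧ b
    assert B & imp(A, B) == B                      # 3. b ∧ (a -> b) = b
    assert imp(A, B & C) == imp(A, B) & imp(A, C)  # 4. -> distributes over ∧
```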
=== Characterization using the axioms of intuitionistic logic ===
This characterization of Heyting algebras makes the proof of the basic facts concerning the relationship between intuitionist propositional calculus and Heyting algebras immediate. (For these facts, see the sections "Provable identities" and "Universal constructions".) One should think of the element ⊤ as meaning, intuitively, "provably true". Compare with the axioms at Intuitionistic logic.
Given a set A with three binary operations →, ∧ and ∨, and two distinguished elements ⊥ and ⊤, then A is a Heyting algebra for these operations (and the relation ≤ defined by the condition that a ≤ b when a→b = ⊤) if and only if the following conditions hold for any elements x, y and z of A:
1. If x ≤ y and y ≤ x, then x = y,
2. If ⊤ ≤ y, then y = ⊤,
3. x ≤ y → x,
4. x → (y → z) ≤ (x → y) → (x → z),
5. x ∧ y ≤ x,
6. x ∧ y ≤ y,
7. x ≤ y → (x ∧ y),
8. x ≤ x ∨ y,
9. y ≤ x ∨ y,
10. x → z ≤ (y → z) → (x ∨ y → z),
11. ⊥ ≤ x.
Finally, we define ¬x to be x → ⊥.
Condition 1 says that equivalent formulas should be identified. Condition 2 says that provably true formulas are closed under modus ponens. Conditions 3 and 4 are "then" conditions. Conditions 5, 6 and 7 are "and" conditions. Conditions 8, 9 and 10 are "or" conditions. Condition 11 is a "false" condition.
Of course, if a different set of axioms were chosen for logic, we could modify ours accordingly.
== Examples ==
Every Boolean algebra is a Heyting algebra, with p→q given by ¬p∨q.
Every totally ordered set that has a least element 0 and a greatest element 1 is a Heyting algebra (if viewed as a lattice). In this case p→q equals 1 when p ≤ q, and q otherwise.
The simplest Heyting algebra that is not already a Boolean algebra is the totally ordered set {0, 1/2, 1} (viewed as a lattice), with the implication and negation operations determined as above for chains.
In this example, note that 1/2∨¬1/2 = 1/2∨(1/2 → 0) = 1/2∨0 = 1/2 falsifies the law of excluded middle.
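This three-element example is small enough to compute directly. A sketch in Python:

```python
from fractions import Fraction

# The chain {0, 1/2, 1} with min as ∧, max as ∨, and
# p -> q equal to 1 if p <= q and to q otherwise.
zero, half, one = Fraction(0), Fraction(1, 2), Fraction(1)
H = [zero, half, one]
imp = lambda p, q: one if p <= q else q
neg = lambda p: imp(p, zero)

assert neg(half) == zero             # ¬(1/2) = 1/2 -> 0 = 0
assert max(half, neg(half)) == half  # 1/2 ∨ ¬(1/2) = 1/2: excluded middle fails
assert all(min(p, neg(p)) == zero for p in H)  # but a ∧ ¬a = 0 always holds
```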
Every topology provides a complete Heyting algebra in the form of its open set lattice. In this case, the element A→B is the interior of the union of Ac and B, where Ac denotes the complement of the open set A. Not all complete Heyting algebras are of this form. These issues are studied in pointless topology, where complete Heyting algebras are also called frames or locales.
Every interior algebra provides a Heyting algebra in the form of its lattice of open elements. Every Heyting algebra is of this form as a Heyting algebra can be completed to a Boolean algebra by taking its free Boolean extension as a bounded distributive lattice and then treating it as a generalized topology in this Boolean algebra.
The Lindenbaum algebra of propositional intuitionistic logic is a Heyting algebra.
The global elements of the subobject classifier Ω of an elementary topos form a Heyting algebra; it is the Heyting algebra of truth values of the intuitionistic higher-order logic induced by the topos. More generally, the set of subobjects of any object X in a topos forms a Heyting algebra.
Łukasiewicz–Moisil algebras (LMn) are also Heyting algebras for any n (but they are not MV-algebras for n ≥ 5).
== Properties ==
=== General properties ===
The ordering ≤ on a Heyting algebra H can be recovered from the operation → as follows: for any elements a, b of H, a ≤ b if and only if a→b = 1.
In contrast to some many-valued logics, Heyting algebras share the following property with Boolean algebras: if negation has a fixed point (i.e. ¬a = a for some a), then the Heyting algebra is the trivial one-element Heyting algebra.
=== Provable identities ===
Given a formula F(A1, A2, ..., An) of propositional calculus (using, in addition to the variables, the connectives ∧, ∨, ¬, →, and the constants 0 and 1), it is a fact, proved early on in any study of Heyting algebras, that the following two conditions are equivalent:
1. The formula F is provably true in intuitionist propositional calculus.
2. The identity F(a1, a2, ..., an) = 1 is true for any Heyting algebra H and any elements a1, a2, ..., an ∈ H.
The metaimplication 1 ⇒ 2 is extremely useful and is the principal practical method for proving identities in Heyting algebras. In practice, one frequently uses the deduction theorem in such proofs.
Since for any a and b in a Heyting algebra H we have a ≤ b if and only if a→b = 1, it follows from 1 ⇒ 2 that whenever a formula F→G is provably true, we have F(a1, a2, ..., an) ≤ G(a1, a2, ..., an) for any Heyting algebra H, and any elements a1, a2, ..., an ∈ H. (It follows from the deduction theorem that F→G is provable (unconditionally) if and only if G is provable from F, that is, if G is a provable consequence of F.) In particular, if F and G are provably equivalent, then F(a1, a2, ..., an) = G(a1, a2, ..., an), since ≤ is an order relation.
1 ⇒ 2 can be proved by examining the logical axioms of the system of proof and verifying that their value is 1 in any Heyting algebra, and then verifying that the application of the rules of inference to expressions with value 1 in a Heyting algebra results in expressions with value 1. For example, let us choose the system of proof having modus ponens as its sole rule of inference, and whose axioms are the Hilbert-style ones given at Intuitionistic logic#Axiomatization. Then the facts to be verified follow immediately from the axiom-like definition of Heyting algebras given above.
1 ⇒ 2 also provides a method for proving that certain propositional formulas, though tautologies in classical logic, cannot be proved in intuitionist propositional logic. In order to prove that some formula F(A1, A2, ..., An) is not provable, it is enough to exhibit a Heyting algebra H and elements a1, a2, ..., an ∈ H such that F(a1, a2, ..., an) ≠ 1.
If one wishes to avoid mention of logic, then in practice it becomes necessary to prove as a lemma a version of the deduction theorem valid for Heyting algebras: for any elements a, b and c of a Heyting algebra H, we have
{\displaystyle (a\land b)\to c=a\to (b\to c).}
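This identity can be checked exhaustively on a small Heyting algebra. A sketch using a five-element chain (an illustrative choice), where p → q is 1 if p ≤ q and q otherwise:

```python
from itertools import product
from fractions import Fraction

# Five-element chain {0, 1/4, 1/2, 3/4, 1} as a Heyting algebra: meet is min.
H = [Fraction(k, 4) for k in range(5)]
imp = lambda p, q: Fraction(1) if p <= q else q

# (a ∧ b) -> c  =  a -> (b -> c)  for all triples
for a, b, c in product(H, repeat=3):
    assert imp(min(a, b), c) == imp(a, imp(b, c))
```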
For more on the metaimplication 2 ⇒ 1, see the section "Universal constructions" below.
=== Distributivity ===
Heyting algebras are always distributive. Specifically, we always have the identities
{\displaystyle a\wedge (b\vee c)=(a\wedge b)\vee (a\wedge c)}
{\displaystyle a\vee (b\wedge c)=(a\vee b)\wedge (a\vee c)}
The distributive law is sometimes stated as an axiom, but in fact it follows from the existence of relative pseudo-complements. The reason is that, being the lower adjoint of a Galois connection, ∧ preserves all existing suprema. Distributivity in turn is just the preservation of binary suprema by ∧.
By a similar argument, the following infinite distributive law holds in any complete Heyting algebra:
{\displaystyle x\wedge \bigvee Y=\bigvee \{x\wedge y\mid y\in Y\}}
for any element x in H and any subset Y of H. Conversely, any complete lattice satisfying the above infinite distributive law is a complete Heyting algebra, with
{\displaystyle a\to b=\bigvee \{c\mid a\land c\leq b\}}
being its relative pseudo-complement operation.
=== Regular and complemented elements ===
An element x of a Heyting algebra H is called regular if either of the following equivalent conditions hold:
x = ¬¬x.
x = ¬y for some y in H.
The equivalence of these conditions can be restated simply as the identity ¬¬¬x = ¬x, valid for all x in H.
Elements x and y of a Heyting algebra H are called complements to each other if x∧y = 0 and x∨y = 1. If it exists, any such y is unique and must in fact be equal to ¬x. We call an element x complemented if it admits a complement. It is true that if x is complemented, then so is ¬x, and then x and ¬x are complements to each other. However, confusingly, even if x is not complemented, ¬x may nonetheless have a complement (not equal to x). In any Heyting algebra, the elements 0 and 1 are complements to each other. For instance, it is possible that ¬x is 0 for every x different from 0, and 1 if x = 0, in which case 0 and 1 are the only regular elements.
Any complemented element of a Heyting algebra is regular, though the converse is not true in general. In particular, 0 and 1 are always regular.
For any Heyting algebra H, the following conditions are equivalent:
H is a Boolean algebra;
every x in H is regular;
every x in H is complemented.
In this case, the element a→b is equal to ¬a ∨ b.
The regular (respectively complemented) elements of any Heyting algebra H constitute a Boolean algebra Hreg (respectively Hcomp), in which the operations ∧, ¬ and →, as well as the constants 0 and 1, coincide with those of H. In the case of Hcomp, the operation ∨ is also the same, hence Hcomp is a subalgebra of H. In general however, Hreg will not be a subalgebra of H, because its join operation ∨reg may be different from ∨. For x, y ∈ Hreg, we have x ∨reg y = ¬(¬x ∧ ¬y). See below for necessary and sufficient conditions in order for ∨reg to coincide with ∨.
=== The De Morgan laws in a Heyting algebra ===
One of the two De Morgan laws is satisfied in every Heyting algebra, namely
{\displaystyle \forall x,y\in H:\qquad \lnot (x\vee y)=\lnot x\wedge \lnot y.}
However, the other De Morgan law does not always hold. We have instead a weak De Morgan law:
{\displaystyle \forall x,y\in H:\qquad \lnot (x\wedge y)=\lnot \lnot (\lnot x\vee \lnot y).}
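Both laws can be tested on a concrete Heyting algebra. A sketch using the open sets of a three-point topological space (an illustrative choice on which the second De Morgan law fails):

```python
from itertools import product

# Open sets of the topology {∅, {1}, {2}, {1,2}, {1,2,3}} on {1,2,3}:
# a Heyting algebra under ∩ and ∪, with A -> B the largest open C with C ∩ A ⊆ B.
opens = [frozenset(s) for s in ((), (1,), (2,), (1, 2), (1, 2, 3))]
top = frozenset({1, 2, 3})

def imp(A, B):
    # The union of all open C with C ∩ A ⊆ B is itself such a C, hence the largest.
    return frozenset().union(*(C for C in opens if C & A <= B))

neg = lambda A: imp(A, frozenset())

# The first De Morgan law and the weak De Morgan law hold for all pairs:
assert all(neg(A | B) == neg(A) & neg(B) for A, B in product(opens, repeat=2))
assert all(neg(A & B) == neg(neg(neg(A) | neg(B))) for A, B in product(opens, repeat=2))

# The second De Morgan law fails at A = {1}, B = {2}:
A, B = frozenset({1}), frozenset({2})
assert neg(A & B) == top                      # ¬(A ∧ B) = ¬0 = 1
assert neg(A) | neg(B) == frozenset({1, 2})   # ¬A ∨ ¬B = {1,2} ≠ 1
```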
The following statements are equivalent for all Heyting algebras H:
1. H satisfies both De Morgan laws,
2. ¬(x ∧ y) = ¬x ∨ ¬y for all x, y ∈ H,
3. ¬(x ∧ y) = ¬x ∨ ¬y for all regular x, y ∈ H,
4. ¬¬(x ∨ y) = ¬¬x ∨ ¬¬y for all x, y ∈ H,
5. ¬¬(x ∨ y) = x ∨ y for all regular x, y ∈ H,
6. ¬(¬x ∧ ¬y) = x ∨ y for all regular x, y ∈ H,
7. ¬x ∨ ¬¬x = 1 for all x ∈ H.
Condition 2 is the other De Morgan law. Condition 6 says that the join operation ∨reg on the Boolean algebra Hreg of regular elements of H coincides with the operation ∨ of H. Condition 7 states that every regular element is complemented, i.e., Hreg = Hcomp.
We prove the equivalence. Clearly the metaimplications 1 ⇒ 2, 2 ⇒ 3 and 4 ⇒ 5 are trivial. Furthermore, 3 ⇔ 4 and 5 ⇔ 6 result simply from the first De Morgan law and the definition of regular elements. We show that 6 ⇒ 7 by taking ¬x and ¬¬x in place of x and y in 6 and using the identity a ∧ ¬a = 0. Notice that 2 ⇒ 1 follows from the first De Morgan law, and 7 ⇒ 6 results from the fact that the join operation ∨ on the subalgebra Hcomp is just the restriction of ∨ to Hcomp, taking into account the characterizations we have given of conditions 6 and 7. The metaimplication 5 ⇒ 2 is a trivial consequence of the weak De Morgan law, taking ¬x and ¬y in place of x and y in 5.
Heyting algebras satisfying the above properties are related to De Morgan logic in the same way Heyting algebras in general are related to intuitionist logic.
== Heyting algebra morphisms ==
=== Definition ===
Given two Heyting algebras H1 and H2 and a mapping f : H1 → H2, we say that f is a morphism of Heyting algebras if, for any elements x and y in H1, we have:
1. f(0) = 0,
2. f(x ∧ y) = f(x) ∧ f(y),
3. f(x ∨ y) = f(x) ∨ f(y),
4. f(x → y) = f(x) → f(y).
It follows from any of the last three conditions (2, 3, or 4) that f is an increasing function, that is, that f(x) ≤ f(y) whenever x ≤ y.
Assume H1 and H2 are structures with operations →, ∧, ∨ (and possibly ¬) and constants 0 and 1, and f is a surjective mapping from H1 to H2 with properties 1 through 4 above. Then if H1 is a Heyting algebra, so too is H2. This follows from the characterization of Heyting algebras as bounded lattices (thought of as algebraic structures rather than partially ordered sets) with an operation → satisfying certain identities.
=== Properties ===
The identity map f(x) = x from any Heyting algebra to itself is a morphism, and the composite g ∘ f of any two morphisms f and g is a morphism. Hence Heyting algebras form a category.
=== Examples ===
Given a Heyting algebra H and any subalgebra H1, the inclusion mapping i : H1 → H is a morphism.
For any Heyting algebra H, the map x ↦ ¬¬x defines a morphism from H onto the Boolean algebra of its regular elements Hreg. This is not in general a morphism from H to itself, since the join operation of Hreg may be different from that of H.
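For a concrete sketch (an assumed toy computation, not from the article), the map x ↦ ¬¬x can be worked out on the 3-element chain {0, 1/2, 1} used as an example elsewhere in this article; in a chain the relative pseudo-complement is a → b = 1 if a ≤ b, else b, and ¬x = x → 0:

```python
from fractions import Fraction

# The 3-element Heyting algebra {0, 1/2, 1} as a chain.
ZERO, HALF, ONE = Fraction(0), Fraction(1, 2), Fraction(1)
H = [ZERO, HALF, ONE]

def imp(a, b):
    # Relative pseudo-complement in a chain: a -> b = 1 if a <= b, else b.
    return ONE if a <= b else b

def neg(x):
    # Pseudo-complement: not-x = x -> 0.
    return imp(x, ZERO)

# Double negation sends 0 to 0 and both 1/2 and 1 to 1, so the image
# of x |-> not-not-x is the two-element Boolean algebra {0, 1}.
print([(x, neg(neg(x))) for x in H])
```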
== Quotients ==
Let H be a Heyting algebra, and let F ⊆ H. We call F a filter on H if it satisfies the following properties:
1 ∈ F,
if x, y ∈ F, then x ∧ y ∈ F,
if x ∈ F, y ∈ H, and x ≤ y, then y ∈ F.
The intersection of any set of filters on H is again a filter. Therefore, given any subset S of H there is a smallest filter containing S. We call it the filter generated by S. If S is empty, F = {1}. Otherwise, F is equal to the set of x in H such that there exist y1, y2, ..., yn ∈ S with y1 ∧ y2 ∧ ... ∧ yn ≤ x.
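This description of the generated filter can be computed directly in a finite lattice. A minimal Python sketch, using the divisors of 12 ordered by divisibility as an assumed example lattice (so the meet is gcd and the top element is 12):

```python
from itertools import combinations
from math import gcd

# Divisors of 12 ordered by divisibility: x <= y iff x divides y,
# meet(x, y) = gcd(x, y), top element = 12. (Assumed example lattice.)
H = [1, 2, 3, 4, 6, 12]
TOP = 12

def leq(x, y):
    return y % x == 0

def generated_filter(S):
    """Smallest filter containing S: the upward closure of all finite meets."""
    if not S:
        return {TOP}
    S = list(S)
    meets = set()
    for r in range(1, len(S) + 1):
        for combo in combinations(S, r):
            m = combo[0]
            for c in combo[1:]:
                m = gcd(m, c)
            meets.add(m)
    return {x for x in H if any(leq(m, x) for m in meets)}

# The filter generated by {4, 6} is {2, 4, 6, 12}: gcd(4, 6) = 2 is the least meet.
print(generated_filter({4, 6}))
```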
If H is a Heyting algebra and F is a filter on H, we define a relation ~ on H as follows: we write x ~ y whenever x → y and y → x both belong to F. Then ~ is an equivalence relation; we write H/F for the quotient set. There is a unique Heyting algebra structure on H/F such that the canonical surjection pF : H → H/F becomes a Heyting algebra morphism. We call the Heyting algebra H/F the quotient of H by F.
Let S be a subset of a Heyting algebra H and let F be the filter generated by S. Then H/F satisfies the following universal property:
Given any morphism of Heyting algebras f : H → H′ satisfying f(y) = 1 for every y ∈ S, f factors uniquely through the canonical surjection pF : H → H/F. That is, there is a unique morphism f′ : H/F → H′ satisfying f′pF = f. The morphism f′ is said to be induced by f.
Let f : H1 → H2 be a morphism of Heyting algebras. The kernel of f, written ker f, is the set f−1[{1}]. It is a filter on H1. (Care should be taken because this definition, if applied to a morphism of Boolean algebras, is dual to what would be called the kernel of the morphism viewed as a morphism of rings.) By the foregoing, f induces a morphism f′ : H1/(ker f) → H2. It is an isomorphism of H1/(ker f) onto the subalgebra f[H1] of H2.
== Universal constructions ==
=== Heyting algebra of propositional formulas in n variables up to intuitionist equivalence ===
The metaimplication 2 ⇒ 1 in the section "Provable identities" is proved by showing that the result of the following construction is itself a Heyting algebra:
Consider the set L of propositional formulas in the variables A1, A2,..., An.
Endow L with a preorder ≼ by defining F≼G if G is an (intuitionist) logical consequence of F, that is, if G is provable from F. It is immediate that ≼ is a preorder.
Consider the equivalence relation F~G induced by the preorder F≼G. (It is defined by F~G if and only if F≼G and G≼F. In fact, ~ is the relation of (intuitionist) logical equivalence.)
Let H0 be the quotient set L/~. This will be the desired Heyting algebra.
We write [F] for the equivalence class of a formula F. Operations →, ∧, ∨ and ¬ are defined in an obvious way on L. Verify that given formulas F and G, the equivalence classes [F→G], [F∧G], [F∨G] and [¬F] depend only on [F] and [G]. This defines operations →, ∧, ∨ and ¬ on the quotient set H0=L/~. Further define 1 to be the class of provably true statements, and set 0=[⊥].
Verify that H0, together with these operations, is a Heyting algebra. We do this using the axiom-like definition of Heyting algebras. H0 satisfies conditions THEN-1 through FALSE because all formulas of the given forms are axioms of intuitionist logic. MODUS-PONENS follows from the fact that if a formula ⊤→F is provably true, where ⊤ is provably true, then F is provably true (by application of the rule of inference modus ponens). Finally, EQUIV results from the fact that if F→G and G→F are both provably true, then F and G are provable from each other (by application of the rule of inference modus ponens), hence [F]=[G].
As always under the axiom-like definition of Heyting algebras, we define ≤ on H0 by the condition that x≤y if and only if x→y=1. Since, by the deduction theorem, a formula F→G is provably true if and only if G is provable from F, it follows that [F]≤[G] if and only if F≼G. In other words, ≤ is the order relation on L/~ induced by the preorder ≼ on L.
=== Free Heyting algebra on an arbitrary set of generators ===
In fact, the preceding construction can be carried out for any set of variables {Ai : i∈I} (possibly infinite). One obtains in this way the free Heyting algebra on the variables {Ai}, which we will again denote by H0. It is free in the sense that given any Heyting algebra H given together with a family of its elements 〈ai: i∈I 〉, there is a unique morphism f:H0→H satisfying f([Ai])=ai. The uniqueness of f is not difficult to see, and its existence results essentially from the metaimplication 1 ⇒ 2 of the section "Provable identities" above, in the form of its corollary that whenever F and G are provably equivalent formulas, F(〈ai〉)=G(〈ai〉) for any family of elements 〈ai〉 in H.
=== Heyting algebra of formulas equivalent with respect to a theory T ===
Given a set of formulas T in the variables {Ai}, viewed as axioms, the same construction could have been carried out with respect to a relation F≼G defined on L to mean that G is a provable consequence of F and the set of axioms T. Let us denote by HT the Heyting algebra so obtained. Then HT satisfies the same universal property as H0 above, but with respect to Heyting algebras H and families of elements 〈ai〉 satisfying the property that J(〈ai〉)=1 for any axiom J(〈Ai〉) in T. (Let us note that HT, taken with the family of its elements 〈[Ai]〉, itself satisfies this property.) The existence and uniqueness of the morphism is proved the same way as for H0, except that one must modify the metaimplication 1 ⇒ 2 in "Provable identities" so that 1 reads "provably true from T," and 2 reads "any elements a1, a2,..., an in H satisfying the formulas of T."
The Heyting algebra HT that we have just defined can be viewed as a quotient of the free Heyting algebra H0 on the same set of variables, by applying the universal property of H0 with respect to HT, and the family of its elements 〈[Ai]〉.
Every Heyting algebra is isomorphic to one of the form HT. To see this, let H be any Heyting algebra, and let 〈ai: i∈I〉 be a family of elements generating H (for example, any surjective family). Now consider the set T of formulas J(〈Ai〉) in the variables 〈Ai: i∈I〉 such that J(〈ai〉)=1. Then we obtain a morphism f:HT→H by the universal property of HT, which is clearly surjective. It is not difficult to show that f is injective.
=== Comparison to Lindenbaum algebras ===
The constructions we have just given play an entirely analogous role with respect to Heyting algebras to that of Lindenbaum algebras with respect to Boolean algebras. In fact, the Lindenbaum algebra BT in the variables {Ai} with respect to the axioms T is just our HT∪T1, where T1 is the set of all formulas of the form ¬¬F→F, since the additional axioms of T1 are the only ones that need to be added in order to make all classical tautologies provable.
== Heyting algebras as applied to intuitionistic logic ==
If one interprets the axioms of the intuitionistic propositional logic as terms of a Heyting algebra, then they will evaluate to the largest element, 1, in any Heyting algebra under any assignment of values to the formula's variables. For instance, (P∧Q)→P is, by definition of the pseudo-complement, the largest element x such that
P ∧ Q ∧ x ≤ P. This inequality is satisfied for any x, so the largest such x is 1.
Furthermore, the rule of modus ponens allows us to derive the formula Q from the formulas P and P→Q. But in any Heyting algebra, if P has the value 1, and P→Q has the value 1, then it means that
P ∧ 1 ≤ Q, and so 1 ∧ 1 ≤ Q; it can only be that Q has the value 1.
This means that if a formula is deducible from the laws of intuitionistic logic, being derived from its axioms by way of the rule of modus ponens, then it will always have the value 1 in all Heyting algebras under any assignment of values to the formula's variables. However one can construct a Heyting algebra in which the value of Peirce's law is not always 1. Consider the 3-element algebra {0,1/2,1} as given above. If we assign 1/2 to P and 0 to Q, then the value of Peirce's law ((P→Q)→P)→P is 1/2. It follows that Peirce's law cannot be intuitionistically derived. See Curry–Howard isomorphism for the general context of what this implies in type theory.
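The evaluation of Peirce's law in this 3-element algebra can be replayed mechanically. A sketch in Python, using the chain implication a → b = 1 if a ≤ b, else b:

```python
from fractions import Fraction

# The 3-element Heyting algebra {0, 1/2, 1} is a chain, so the relative
# pseudo-complement is: a -> b = 1 if a <= b, else b.
ONE = Fraction(1)

def imp(a, b):
    return ONE if a <= b else b

def peirce(p, q):
    """Value of ((P -> Q) -> P) -> P under the assignment P = p, Q = q."""
    return imp(imp(imp(p, q), p), p)

# With P = 1/2 and Q = 0: P -> Q = 0, then 0 -> P = 1, then 1 -> P = 1/2.
print(peirce(Fraction(1, 2), Fraction(0)))   # 1/2, so Peirce's law is not valid
```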
The converse can be proven as well: if a formula always has the value 1, then it is deducible from the laws of intuitionistic logic, so the intuitionistically valid formulas are exactly those that always have a value of 1. This is similar to the notion that classically valid formulas are those formulas that have a value of 1 in the two-element Boolean algebra under any possible assignment of true and false to the formula's variables—that is, they are formulas that are tautologies in the usual truth-table sense. A Heyting algebra, from the logical standpoint, is then a generalization of the usual system of truth values, and its largest element 1 is analogous to 'true'. The usual two-valued logic system is a special case of a Heyting algebra, and the smallest non-trivial one, in which the only elements of the algebra are 1 (true) and 0 (false).
== Decision problems ==
The problem of whether a given equation holds in every Heyting algebra was shown to be decidable by Saul Kripke in 1965. The precise computational complexity of the problem was established by Richard Statman in 1979, who showed it was PSPACE-complete and hence at least as hard as deciding equations of Boolean algebra (shown coNP-complete in 1971 by Stephen Cook) and conjectured to be considerably harder. The elementary or first-order theory of Heyting algebras is undecidable. It remains open whether the universal Horn theory of Heyting algebras, or word problem, is decidable. Regarding the word problem it is known that Heyting algebras are not locally finite (no Heyting algebra generated by a finite nonempty set is finite), in contrast to Boolean algebras, which are locally finite and whose word problem is decidable.
== Topological representation and duality theory ==
Every Heyting algebra H is naturally isomorphic to a bounded sublattice L of open sets of a topological space X, where the implication
U → V of L is given by the interior of (X ∖ U) ∪ V.
More precisely, X is the spectral space of prime ideals of the bounded lattice H and L is the lattice of open and quasi-compact subsets of X.
More generally, the category of Heyting algebras is dually equivalent to the category of Heyting spaces.
This duality can be seen as restriction of the classical Stone duality of bounded distributive lattices to the (non-full) subcategory of Heyting algebras.
Alternatively, the category of Heyting algebras is dually equivalent to the category of Esakia spaces. This is called Esakia duality.
== Notes ==
== See also ==
Alexandrov topology
Superintuitionistic (aka intermediate) logics
List of Boolean algebra topics
Ockham algebra
== References ==
Rutherford, Daniel Edwin (1965). Introduction to Lattice Theory. Oliver and Boyd. OCLC 224572.
F. Borceux, Handbook of Categorical Algebra 3, In Encyclopedia of Mathematics and its Applications, Vol. 53, Cambridge University Press, 1994. ISBN 0-521-44180-3 OCLC 52238554
G. Gierz, K.H. Hoffmann, K. Keimel, J. D. Lawson, M. Mislove and D. S. Scott, Continuous Lattices and Domains, In Encyclopedia of Mathematics and its Applications, Vol. 93, Cambridge University Press, 2003.
S. Ghilardi. Free Heyting algebras as bi-Heyting algebras, Math. Rep. Acad. Sci. Canada XVI., 6:240–244, 1992.
Heyting, A. (1930), "Die formalen Regeln der intuitionistischen Logik. I, II, III", Sitzungsberichte Akad. Berlin: 42–56, 57–71, 158–169, JFM 56.0823.01
Mac Lane, S., Moerdijk, I. (1994). Sheaves in Geometry and Logic. Universitext. Springer New York. doi:10.1007/978-1-4612-0927-0. ISBN 978-0-387-97710-2.
Dickmann, Max; Schwartz, Niels; Tressl, Marcus (2019). Spectral Spaces. New Mathematical Monographs. Vol. 35. Cambridge: Cambridge University Press. doi:10.1017/9781316543870. ISBN 9781107146723. S2CID 201542298.
== External links ==
Heyting algebra at PlanetMath.
In algebra, an operad algebra is an "algebra" over an operad. It is a generalization of an associative algebra over a commutative ring R, with an operad replacing R.
== Definitions ==
Given an operad O (say, a symmetric sequence in a symmetric monoidal ∞-category C), an algebra over an operad, or O-algebra for short, is, roughly, a left module over O with multiplications parametrized by O.
If O is a topological operad, then one can say an algebra over an operad is an O-monoid object in C. If C is symmetric monoidal, this recovers the usual definition.
Let C be a symmetric monoidal ∞-category with monoidal structure distributive over colimits. If f : O → O′ is a map of operads and, moreover, f is a homotopy equivalence, then the ∞-category of algebras over O in C is equivalent to the ∞-category of algebras over O′ in C.
== See also ==
En-ring
Homotopy Lie algebra
== Notes ==
== References ==
Francis, John. "Derived Algebraic Geometry Over En-Rings" (PDF).
Hinich, Vladimir (1997-02-11). "Homological algebra of homotopy algebras". arXiv:q-alg/9702015.
== External links ==
"operad", ncatlab.org
http://ncatlab.org/nlab/show/algebra+over+an+operad
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
== History ==
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of
y = f(x) as follows: an infinitely small increment α of the independent variable x always produces an infinitely small change f(x + α) − f(x)
of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
== Real functions ==
=== Definition ===
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of
f(x), as x tends to c, is equal to f(c).
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. A function that is continuous on the interval
(−∞, +∞)
(the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function f(x) = √x is continuous on its whole domain, which is the closed interval [0, +∞).
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function
x ↦ 1/x and the tangent function x ↦ tan x.
When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions
x ↦ 1/x and x ↦ sin(1/x)
are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let f : D → ℝ be a function whose domain D is contained in the set ℝ of real numbers.
Some (but not all) possibilities for D are:
D is the whole real line; that is, D = ℝ
D is a closed interval of the form D = [a, b] = {x ∈ ℝ ∣ a ≤ x ≤ b}, where a and b are real numbers
D is an open interval of the form D = (a, b) = {x ∈ ℝ ∣ a < x < b}, where a and b are real numbers
In the case of an open interval, a and b do not belong to D, and the values f(a) and f(b) are not defined, and if they are, they do not matter for continuity on D.
==== Definition in terms of limits of functions ====
The function f is continuous at some point c of its domain if the limit of
f(x), as x approaches c through the domain of f, exists and is equal to f(c).
In mathematical notation, this is written as
lim(x→c) f(x) = f(c).
In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit of that equation has to exist. Third, the value of this limit must equal
f(c).
(Here, we have assumed that the domain of f does not have any isolated points.)
==== Definition in terms of neighborhoods ====
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f(c) as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N₁(f(c)), there is a neighborhood N₂(c) in its domain such that f(x) ∈ N₁(f(c)) whenever x ∈ N₂(c).
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
==== Definition in terms of limits of sequences ====
One can instead require that for any sequence (xₙ)ₙ∈ℕ of points in the domain which converges to c, the corresponding sequence (f(xₙ))ₙ∈ℕ converges to f(c). In mathematical notation,
∀ (xₙ)ₙ∈ℕ ⊂ D: lim(n→∞) xₙ = c ⇒ lim(n→∞) f(xₙ) = f(c).
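A numerical sketch of this sequential criterion (assumed example: f = exp, c = 0, and the sequence xₙ = 1/n):

```python
import math

# For continuous f and any sequence x_n -> c, f(x_n) must converge to f(c).
f, c = math.exp, 0.0
xs = [1.0 / n for n in range(1, 10001)]   # x_n = 1/n converges to c = 0
gap = abs(f(xs[-1]) - f(c))               # distance of f(x_10000) from f(0) = 1
print(gap < 1e-3)   # True
```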
==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ====
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → ℝ as above and an element x₀ of the domain D, f is said to be continuous at the point x₀ when the following holds: For any positive real number ε > 0, however small, there exists some positive real number δ > 0 such that for all x in the domain of f with x₀ − δ < x < x₀ + δ, the value of f(x) satisfies f(x₀) − ε < f(x) < f(x₀) + ε.
Alternatively written, continuity of f : D → ℝ at x₀ ∈ D means that for every ε > 0, there exists a δ > 0 such that for all x ∈ D: |x − x₀| < δ implies |f(x) − f(x₀)| < ε.
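As a numerical sketch (assumed example, not from the article): for f(x) = x² at x₀ = 2, the choice δ = min(1, ε/(2|x₀| + 1)) works, since |x − x₀| < δ ≤ 1 gives |x² − x₀²| = |x − x₀|·|x + x₀| < δ·(2|x₀| + 1) ≤ ε:

```python
def f(x):
    return x * x

def delta_for(x0, eps):
    # A valid delta for f(x) = x^2: if |x - x0| < delta <= 1 then
    # |f(x) - f(x0)| = |x - x0|*|x + x0| < delta*(2|x0| + 1) <= eps.
    return min(1.0, eps / (2 * abs(x0) + 1))

x0, eps = 2.0, 1e-3
d = delta_for(x0, eps)
samples = [x0 - d + 2 * d * i / 1000 for i in range(1, 1000)]   # points in (x0-d, x0+d)
ok = all(abs(f(x) - f(x0)) < eps for x in samples)
print(ok)   # True
```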
More intuitively, we can say that if we want to get all the f(x) values to stay in some small neighborhood around f(x₀), we need to choose a small enough neighborhood for the x values around x₀. If we can do that no matter how small the f(x₀) neighborhood is, then f is continuous at x₀.
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval x₀ − δ < x < x₀ + δ be entirely within the domain D, but Jordan removed that restriction.
==== Definition in terms of control of the remainder ====
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function C : [0, ∞) → [0, ∞] is called a control function if
C is non-decreasing
inf{C(δ) : δ > 0} = 0
A function f : D → ℝ is C-continuous at x₀ if there exists a neighbourhood N(x₀) such that |f(x) − f(x₀)| ≤ C(|x − x₀|) for all x ∈ D ∩ N(x₀).
A function is continuous in x₀ if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions 𝒞, a function is 𝒞-continuous if it is C-continuous for some C ∈ 𝒞.
For example, the Lipschitz, the Hölder continuous functions of exponent α, and the uniformly continuous functions below are defined by the sets of control functions
𝒞_Lipschitz = {C : C(δ) = K|δ|, K > 0}
𝒞_Hölder−α = {C : C(δ) = K|δ|^α, K > 0}
𝒞_uniform cont. = {C : C(0) = 0}
respectively.
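As a quick numerical sketch (assumed example), the sine function is C-continuous for the Lipschitz control function C(δ) = K·δ with K = 1:

```python
import math
import random

def C(delta, K=1.0):
    # A control function from the Lipschitz class: C(delta) = K*|delta|, K > 0.
    return K * delta

random.seed(0)
pairs = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(10000)]
ok = all(abs(math.sin(x) - math.sin(x0)) <= C(abs(x - x0)) for x, x0 in pairs)
print(ok)   # True: |sin x - sin x0| <= |x - x0|
```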
==== Definition using oscillation ====
Continuity can also be defined in terms of oscillation: a function f is continuous at a point x₀ if and only if its oscillation at that point is zero; in symbols, ω_f(x₀) = 0.
A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a G_δ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the ε–δ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε₀ there is no δ that satisfies the ε–δ definition, then the oscillation is at least ε₀, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
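The oscillation at a point can be estimated numerically as sup f − inf f over shrinking neighborhoods sampled on a grid. A sketch (an assumed helper, not from the article) applied to the sign function:

```python
def oscillation(f, x0, radii=(1.0, 0.1, 0.01, 0.001), n=1001):
    # Approximate omega_f(x0): the infimum over neighborhoods of sup f - inf f,
    # estimated here on finite grids of shrinking radius.
    best = float("inf")
    for r in radii:
        vals = [f(x0 - r + 2 * r * i / (n - 1)) for i in range(n)]
        best = min(best, max(vals) - min(vals))
    return best

sign = lambda x: (x > 0) - (x < 0)
print(oscillation(sign, 0.0))   # 2: a jump of size 2, so sgn is discontinuous at 0
print(oscillation(sign, 1.0))   # 0: sgn is continuous at 1
```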
==== Definition using the hyperreals ====
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
(see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
=== Rules for continuity ===
Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules:
Every constant function is continuous
The identity function f(x) = x is continuous
Addition and multiplication: If the functions f and g are continuous on their respective domains D_f and D_g, then their sum f + g and their product f · g are continuous on the intersection D_f ∩ D_g, where f + g and f · g are defined by (f + g)(x) = f(x) + g(x) and (f · g)(x) = f(x) · g(x).
Reciprocal: If the function f is continuous on the domain D_f, then its reciprocal 1/f, defined by (1/f)(x) = 1/f(x), is continuous on the domain D_f ∖ f⁻¹(0), that is, the domain D_f from which the points x such that f(x) = 0 are removed.
Function composition: If the functions f and g are continuous on their respective domains D_f and D_g, then the composition g ∘ f, defined by (g ∘ f)(x) = g(f(x)), is continuous on D_f ∩ f⁻¹(D_g), that is, the part of D_f that is mapped by f inside D_g.
The sine and cosine functions (sin x and cos x) are continuous everywhere.
The exponential function e^x is continuous everywhere.
The natural logarithm ln x is continuous on the domain formed by all positive real numbers {x ∣ x > 0}.
These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by
sinc
(
0
)
=
1
{\displaystyle \operatorname {sinc} (0)=1}
and
sinc
(
x
)
=
sin
x
x
{\displaystyle \operatorname {sinc} (x)={\tfrac {\sin x}{x}}}
for
x
≠
0
{\displaystyle x\neq 0}
. The above rules show immediately that the function is continuous for
x
≠
0
{\displaystyle x\neq 0}
, but, for proving the continuity at
0
{\displaystyle 0}
, one has to prove
lim
x
→
0
sin
x
x
=
1.
{\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=1.}
As this is true, it follows that the sinc function is continuous on all real numbers.
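The limit justifying continuity at 0 can be illustrated numerically (a sketch, not a proof):

```python
import math

def sinc(x):
    # Continuity at 0 requires the separate definition sinc(0) = 1,
    # justified by lim_{x -> 0} sin(x)/x = 1.
    return 1.0 if x == 0 else math.sin(x) / x

# The values approach sinc(0) = 1 as x -> 0; the error behaves like x^2/6.
for k in range(1, 8):
    assert abs(sinc(10**-k) - 1.0) < 10**(-2 * k)
```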
=== Examples of discontinuous functions ===
An example of a discontinuous function is the Heaviside step function
H
{\displaystyle H}
, defined by
H
(
x
)
=
{
1
if
x
≥
0
0
if
x
<
0
{\displaystyle H(x)={\begin{cases}1&{\text{ if }}x\geq 0\\0&{\text{ if }}x<0\end{cases}}}
Pick for instance
ε
=
1
/
2
{\displaystyle \varepsilon =1/2}
. Then there is no
δ
{\displaystyle \delta }
-neighborhood around
x
=
0
{\displaystyle x=0}
, i.e. no open interval
(
−
δ
,
δ
)
{\displaystyle (-\delta ,\;\delta )}
with
δ
>
0
,
{\displaystyle \delta >0,}
that will force all the
H
(
x
)
{\displaystyle H(x)}
values to be within the
ε
{\displaystyle \varepsilon }
-neighborhood of
H
(
0
)
{\displaystyle H(0)}
, i.e. within
(
1
/
2
,
3
/
2
)
{\displaystyle (1/2,\;3/2)}
. Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
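The failure of the ε–δ condition at the jump can be made concrete: for ε = 1/2, every candidate δ is defeated by a point just to the left of 0 (an illustrative sketch):

```python
def H(x):
    # Heaviside step function as defined in the text.
    return 1 if x >= 0 else 0

eps = 0.5
# For any candidate delta, the point x = -delta/2 lies in (-delta, delta),
# yet H(x) = 0 is not within eps of H(0) = 1: the jump defeats every delta.
for delta in [1.0, 0.1, 0.001, 1e-9]:
    x = -delta / 2
    assert abs(x - 0) < delta
    assert abs(H(x) - H(0)) >= eps
```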
Similarly, the signum or sign function
sgn
(
x
)
=
{
1
if
x
>
0
0
if
x
=
0
−
1
if
x
<
0
{\displaystyle \operatorname {sgn}(x)={\begin{cases}\;\;\ 1&{\text{ if }}x>0\\\;\;\ 0&{\text{ if }}x=0\\-1&{\text{ if }}x<0\end{cases}}}
is discontinuous at
x
=
0
{\displaystyle x=0}
but continuous everywhere else. Yet another example: the function
{\displaystyle f(x)={\begin{cases}\sin \left(x^{-2}\right)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}}
is continuous everywhere apart from
x
=
0
{\displaystyle x=0}
.
Besides plausible continuities and discontinuities as above, there are also functions with behavior often described as pathological, for example, Thomae's function,
f
(
x
)
=
{
1
if
x
=
0
1
q
if
x
=
p
q
(in lowest terms) is a rational number
0
if
x
is irrational
.
{\displaystyle f(x)={\begin{cases}1&{\text{ if }}x=0\\{\frac {1}{q}}&{\text{ if }}x={\frac {p}{q}}{\text{(in lowest terms) is a rational number}}\\0&{\text{ if }}x{\text{ is irrational}}.\end{cases}}}
is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
D
(
x
)
=
{
0
if
x
is irrational
(
∈
R
∖
Q
)
1
if
x
is rational
(
∈
Q
)
{\displaystyle D(x)={\begin{cases}0&{\text{ if }}x{\text{ is irrational }}(\in \mathbb {R} \setminus \mathbb {Q} )\\1&{\text{ if }}x{\text{ is rational }}(\in \mathbb {Q} )\end{cases}}}
is nowhere continuous.
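Thomae's function can be implemented exactly on the rationals using exact fractions; floats cannot represent irrationality, so irrational inputs are only indicated by a comment here (an illustrative sketch):

```python
from fractions import Fraction

def thomae(x):
    # Exact evaluation on rationals; Fraction normalizes p/q to lowest
    # terms automatically. Irrational inputs (not representable here)
    # would map to 0.
    if isinstance(x, Fraction):
        return Fraction(1, x.denominator) if x != 0 else Fraction(1)
    return 0  # stand-in for the irrational case

# Rationals with large denominators take small values, which is the
# mechanism behind continuity at every irrational point.
assert thomae(Fraction(1, 2)) == Fraction(1, 2)
assert thomae(Fraction(3, 7)) == Fraction(1, 7)
assert thomae(Fraction(0)) == 1
assert thomae(Fraction(355, 113)) == Fraction(1, 113)
```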
=== Properties ===
==== A useful lemma ====
Let
f
(
x
)
{\displaystyle f(x)}
be a function that is continuous at a point
x
0
,
{\displaystyle x_{0},}
and
y
0
{\displaystyle y_{0}}
be a value such that
f
(
x
0
)
≠
y
0
.
{\displaystyle f\left(x_{0}\right)\neq y_{0}.}
Then
f
(
x
)
≠
y
0
{\displaystyle f(x)\neq y_{0}}
throughout some neighbourhood of
x
0
.
{\displaystyle x_{0}.}
Proof: By the definition of continuity, take
ε
=
|
y
0
−
f
(
x
0
)
|
2
>
0
{\displaystyle \varepsilon ={\frac {|y_{0}-f(x_{0})|}{2}}>0}
, then there exists
δ
>
0
{\displaystyle \delta >0}
such that
|
f
(
x
)
−
f
(
x
0
)
|
<
|
y
0
−
f
(
x
0
)
|
2
whenever
|
x
−
x
0
|
<
δ
{\displaystyle \left|f(x)-f(x_{0})\right|<{\frac {\left|y_{0}-f(x_{0})\right|}{2}}\quad {\text{ whenever }}\quad |x-x_{0}|<\delta }
Suppose there is a point in the neighbourhood
|
x
−
x
0
|
<
δ
{\displaystyle |x-x_{0}|<\delta }
for which
f
(
x
)
=
y
0
;
{\displaystyle f(x)=y_{0};}
then we have the contradiction
|
f
(
x
0
)
−
y
0
|
<
|
f
(
x
0
)
−
y
0
|
2
.
{\displaystyle \left|f(x_{0})-y_{0}\right|<{\frac {\left|f(x_{0})-y_{0}\right|}{2}}.}
==== Intermediate value theorem ====
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval
[
a
,
b
]
,
{\displaystyle [a,b],}
and k is some number between
f
(
a
)
{\displaystyle f(a)}
and
f
(
b
)
,
{\displaystyle f(b),}
then there is some number
c
∈
[
a
,
b
]
,
{\displaystyle c\in [a,b],}
such that
f
(
c
)
=
k
.
{\displaystyle f(c)=k.}
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on
[
a
,
b
]
{\displaystyle [a,b]}
and
f
(
a
)
{\displaystyle f(a)}
and
f
(
b
)
{\displaystyle f(b)}
differ in sign, then, at some point
c
∈
[
a
,
b
]
,
{\displaystyle c\in [a,b],}
f
(
c
)
{\displaystyle f(c)}
must equal zero.
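This sign-change consequence underlies the bisection method, which turns the theorem's existence statement into an algorithm: halving the interval repeatedly traps a zero. A minimal sketch (the function and interval below are illustrative choices):

```python
def bisect(f, a, b, tol=1e-12):
    # Requires f continuous on [a, b] with f(a), f(b) of opposite sign;
    # the intermediate value theorem then guarantees a zero in [a, b].
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:      # sign change persists on [a, m]
            b, fb = m, fm
        else:                 # sign change persists on [m, b]
            a, fa = m, fm
    return (a + b) / 2

# x^2 - 2 changes sign on [1, 2], so a root (sqrt(2)) must exist there.
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-10
```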
==== Extreme value theorem ====
The extreme value theorem states that if a function f is defined on a closed interval
[
a
,
b
]
{\displaystyle [a,b]}
(or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists
c
∈
[
a
,
b
]
{\displaystyle c\in [a,b]}
with
f
(
c
)
≥
f
(
x
)
{\displaystyle f(c)\geq f(x)}
for all
x
∈
[
a
,
b
]
.
{\displaystyle x\in [a,b].}
The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval
(
a
,
b
)
{\displaystyle (a,b)}
(or any set that is not both closed and bounded), as, for example, the continuous function
f
(
x
)
=
1
x
,
{\displaystyle f(x)={\frac {1}{x}},}
defined on the open interval (0,1), does not attain a maximum, being unbounded above.
==== Relation to differentiability and integrability ====
Every differentiable function
f
:
(
a
,
b
)
→
R
{\displaystyle f:(a,b)\to \mathbb {R} }
is continuous, as can be shown. The converse does not hold: for example, the absolute value function
f
(
x
)
=
|
x
|
=
{
x
if
x
≥
0
−
x
if
x
<
0
{\displaystyle f(x)=|x|={\begin{cases}\;\;\ x&{\text{ if }}x\geq 0\\-x&{\text{ if }}x<0\end{cases}}}
is everywhere continuous. However, it is not differentiable at
x
=
0
{\displaystyle x=0}
(but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
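The non-differentiability of the absolute value at 0 shows up numerically in its one-sided difference quotients, which disagree (a sketch with an arbitrary small step):

```python
def abs_val(x):
    # Absolute value, written piecewise as in the text.
    return x if x >= 0 else -x

h = 1e-8
# One-sided difference quotients at 0: +1 from the right, -1 from the
# left, so no single derivative value exists there despite continuity.
right = (abs_val(0 + h) - abs_val(0)) / h
left = (abs_val(0 - h) - abs_val(0)) / (-h)
assert abs(right - 1.0) < 1e-6
assert abs(left - (-1.0)) < 1e-6
```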
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted
C
1
(
(
a
,
b
)
)
.
{\displaystyle C^{1}((a,b)).}
More generally, the set of functions
f
:
Ω
→
R
{\displaystyle f:\Omega \to \mathbb {R} }
(from an open interval (or open subset of
R
{\displaystyle \mathbb {R} }
)
Ω
{\displaystyle \Omega }
to the reals) such that f is
n
{\displaystyle n}
times differentiable and such that the
n
{\displaystyle n}
-th derivative of f is continuous is denoted
C
n
(
Ω
)
.
{\displaystyle C^{n}(\Omega ).}
See differentiability class. In the field of computer graphics, properties related (but not identical) to
C
0
,
C
1
,
C
2
{\displaystyle C^{0},C^{1},C^{2}}
are sometimes called
G
0
{\displaystyle G^{0}}
(continuity of position),
G
1
{\displaystyle G^{1}}
(continuity of tangency), and
G
2
{\displaystyle G^{2}}
(continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function
f
:
[
a
,
b
]
→
R
{\displaystyle f:[a,b]\to \mathbb {R} }
is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
==== Pointwise and uniform limits ====
Given a sequence
f
1
,
f
2
,
…
:
I
→
R
{\displaystyle f_{1},f_{2},\dotsc :I\to \mathbb {R} }
of functions such that the limit
f
(
x
)
:=
lim
n
→
∞
f
n
(
x
)
{\displaystyle f(x):=\lim _{n\to \infty }f_{n}(x)}
exists for all
x
∈
I
,
{\displaystyle x\in I,}
the resulting function
f
(
x
)
{\displaystyle f(x)}
is referred to as the pointwise limit of the sequence of functions
(
f
n
)
n
∈
N
.
{\displaystyle \left(f_{n}\right)_{n\in N}.}
The pointwise limit function need not be continuous, even if all functions
f
n
{\displaystyle f_{n}}
are continuous, as the animation at the right shows. However, f is continuous if all functions
f
n
{\displaystyle f_{n}}
are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
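The classic example of a pointwise limit losing continuity is fₙ(x) = xⁿ on [0, 1]: each fₙ is continuous, but the limit is 0 on [0, 1) and jumps to 1 at x = 1 (sketched numerically with a large but finite n):

```python
# Each f_n is continuous, but the pointwise limit on [0, 1] is
# discontinuous at x = 1: the convergence is not uniform.
def f_n(n, x):
    return x ** n

def pointwise_limit(x, n_large=10_000):
    # Finite-n stand-in for lim_{n -> infinity} f_n(x).
    return f_n(n_large, x)

assert pointwise_limit(0.5) == 0.0       # underflows to the true limit 0
assert pointwise_limit(0.999) < 0.01     # still tends to 0, just slowly
assert pointwise_limit(1.0) == 1.0       # the jump: limit is 1 only at x = 1
```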
=== Directional continuity ===
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number
ε
>
0
{\displaystyle \varepsilon >0}
however small, there exists some number
δ
>
0
{\displaystyle \delta >0}
such that for all x in the domain with
c
<
x
<
c
+
δ
,
{\displaystyle c<x<c+\delta ,}
the value of
f
(
x
)
{\displaystyle f(x)}
will satisfy
|
f
(
x
)
−
f
(
c
)
|
<
ε
.
{\displaystyle |f(x)-f(c)|<\varepsilon .}
This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with
c
−
δ
<
x
<
c
{\displaystyle c-\delta <x<c}
yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
=== Semicontinuity ===
A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any
ε
>
0
,
{\displaystyle \varepsilon >0,}
there exists some number
δ
>
0
{\displaystyle \delta >0}
such that for all x in the domain with
|
x
−
c
|
<
δ
,
{\displaystyle |x-c|<\delta ,}
the value of
f
(
x
)
{\displaystyle f(x)}
satisfies
f
(
x
)
≥
f
(
c
)
−
ϵ
.
{\displaystyle f(x)\geq f(c)-\epsilon .}
The reverse condition is upper semi-continuity.
== Continuous functions between metric spaces ==
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set
X
{\displaystyle X}
equipped with a function (called metric)
d
X
,
{\displaystyle d_{X},}
that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function
d
X
:
X
×
X
→
R
{\displaystyle d_{X}:X\times X\to \mathbb {R} }
that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces
(
X
,
d
X
)
{\displaystyle \left(X,d_{X}\right)}
and
(
Y
,
d
Y
)
{\displaystyle \left(Y,d_{Y}\right)}
and a function
f
:
X
→
Y
{\displaystyle f:X\to Y}
then
f
{\displaystyle f}
is continuous at the point
c
∈
X
{\displaystyle c\in X}
(with respect to the given metrics) if for any positive real number
ε
>
0
,
{\displaystyle \varepsilon >0,}
there exists a positive real number
δ
>
0
{\displaystyle \delta >0}
such that all
x
∈
X
{\displaystyle x\in X}
satisfying
d
X
(
x
,
c
)
<
δ
{\displaystyle d_{X}(x,c)<\delta }
will also satisfy
d
Y
(
f
(
x
)
,
f
(
c
)
)
<
ε
.
{\displaystyle d_{Y}(f(x),f(c))<\varepsilon .}
As in the case of real functions above, this is equivalent to the condition that for every sequence
(
x
n
)
{\displaystyle \left(x_{n}\right)}
in
X
{\displaystyle X}
with limit
lim
x
n
=
c
,
{\displaystyle \lim x_{n}=c,}
we have
lim
f
(
x
n
)
=
f
(
c
)
.
{\displaystyle \lim f\left(x_{n}\right)=f(c).}
The latter condition can be weakened as follows:
f
{\displaystyle f}
is continuous at the point
c
{\displaystyle c}
if and only if for every convergent sequence
(
x
n
)
{\displaystyle \left(x_{n}\right)}
in
X
{\displaystyle X}
with limit
c
{\displaystyle c}
, the sequence
(
f
(
x
n
)
)
{\displaystyle \left(f\left(x_{n}\right)\right)}
is a Cauchy sequence, and
c
{\displaystyle c}
is in the domain of
f
{\displaystyle f}
.
The set of points at which a function between metric spaces is continuous is a
G
δ
{\displaystyle G_{\delta }}
set – this follows from the
ε
−
δ
{\displaystyle \varepsilon -\delta }
definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator
T
:
V
→
W
{\displaystyle T:V\to W}
between normed vector spaces
V
{\displaystyle V}
and
W
{\displaystyle W}
(which are vector spaces equipped with a compatible norm, denoted
‖
x
‖
{\displaystyle \|x\|}
) is continuous if and only if it is bounded, that is, there is a constant
K
{\displaystyle K}
such that
‖
T
(
x
)
‖
≤
K
‖
x
‖
{\displaystyle \|T(x)\|\leq K\|x\|}
for all
x
∈
V
.
{\displaystyle x\in V.}
=== Uniform, Hölder and Lipschitz continuity ===
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way
δ
{\displaystyle \delta }
depends on
ε
{\displaystyle \varepsilon }
and c in the definition above. Intuitively, a function f as above is uniformly continuous if the
δ
{\displaystyle \delta }
does
not depend on the point c. More precisely, it is required that for every real number
ε
>
0
{\displaystyle \varepsilon >0}
there exists
δ
>
0
{\displaystyle \delta >0}
such that for every
c
,
b
∈
X
{\displaystyle c,b\in X}
with
d
X
(
b
,
c
)
<
δ
,
{\displaystyle d_{X}(b,c)<\delta ,}
we have that
d
Y
(
f
(
b
)
,
f
(
c
)
)
<
ε
.
{\displaystyle d_{Y}(f(b),f(c))<\varepsilon .}
Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
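The standard example of a continuous but not uniformly continuous function is f(x) = 1/x on (0, 1): the points 1/n and 1/(n+1) get arbitrarily close while their images stay exactly 1 apart, so no single δ serves every pair (an illustrative sketch):

```python
# f(x) = 1/x is continuous on (0, 1) but not uniformly continuous there.
def f(x):
    return 1 / x

for n in [10, 100, 1000]:
    b, c = 1 / n, 1 / (n + 1)
    assert abs(b - c) < 1 / n**2                 # the inputs squeeze together
    assert abs(abs(f(b) - f(c)) - 1.0) < 1e-9    # the outputs stay 1 apart
```

On a compact domain such as [a, 1] with a > 0 this failure disappears, in line with the compactness remark above.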
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all
b
,
c
∈
X
,
{\displaystyle b,c\in X,}
the inequality
d
Y
(
f
(
b
)
,
f
(
c
)
)
≤
K
⋅
(
d
X
(
b
,
c
)
)
α
{\displaystyle d_{Y}(f(b),f(c))\leq K\cdot (d_{X}(b,c))^{\alpha }}
holds. Any Hölder continuous function is uniformly continuous. The particular case
α
=
1
{\displaystyle \alpha =1}
is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality
d
Y
(
f
(
b
)
,
f
(
c
)
)
≤
K
⋅
d
X
(
b
,
c
)
{\displaystyle d_{Y}(f(b),f(c))\leq K\cdot d_{X}(b,c)}
holds for any
b
,
c
∈
X
.
{\displaystyle b,c\in X.}
The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
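For example, sin is Lipschitz continuous with constant K = 1, since |cos| ≤ 1 bounds its derivative; this can be spot-checked numerically over many random pairs (a sketch, not a proof):

```python
import math
import random

# Spot-check the Lipschitz inequality |sin(b) - sin(c)| <= 1 * |b - c|.
random.seed(0)
for _ in range(1000):
    b = random.uniform(-100, 100)
    c = random.uniform(-100, 100)
    # Small slack for floating-point rounding.
    assert abs(math.sin(b) - math.sin(c)) <= abs(b - c) + 1e-12
```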
== Continuous functions between topological spaces ==
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function
f
:
X
→
Y
{\displaystyle f:X\to Y}
between two topological spaces X and Y is continuous if for every open set
V
⊆
Y
,
{\displaystyle V\subseteq Y,}
the inverse image
f
−
1
(
V
)
=
{
x
∈
X
|
f
(
x
)
∈
V
}
{\displaystyle f^{-1}(V)=\{x\in X\;|\;f(x)\in V\}}
is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology
T
X
{\displaystyle T_{X}}
), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions
f
:
X
→
T
{\displaystyle f:X\to T}
to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
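On finite sets, the open-set definition can be checked exhaustively. The toy checker below (spaces, topologies, and function are illustrative assumptions, not from the text) tests continuity by verifying that every preimage of an open set is open:

```python
# Toy continuity checker for finite topological spaces, using the
# definition: f is continuous iff the preimage of every open set is open.
def preimage(f, V, domain):
    return frozenset(x for x in domain if f[x] in V)

def is_continuous(f, domain, tau_X, tau_Y):
    return all(preimage(f, V, domain) in tau_X for V in tau_Y)

X = {1, 2}
discrete = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
indiscrete = {frozenset(), frozenset({1, 2})}

f = {1: 1, 2: 2}  # the identity map, written as a dict

# From the discrete topology, every map is continuous...
assert is_continuous(f, X, discrete, indiscrete)
# ...but the identity from the indiscrete to the discrete topology is not:
assert not is_continuous(f, X, indiscrete, discrete)
```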
=== Continuity at a point ===
The translation in the language of neighborhoods of the
(
ε
,
δ
)
{\displaystyle (\varepsilon ,\delta )}
-definition of continuity leads to the following definition of the continuity at a point:
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and
f
−
1
(
V
)
{\displaystyle f^{-1}(V)}
is the largest subset U of X such that
f
(
U
)
⊆
V
,
{\displaystyle f(U)\subseteq V,}
this definition may be simplified into:
As an open set is a set that is a neighborhood of all its points, a function
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above
ε
−
δ
{\displaystyle \varepsilon -\delta }
definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given
x
∈
X
,
{\displaystyle x\in X,}
a map
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous at
x
{\displaystyle x}
if and only if whenever
B
{\displaystyle {\mathcal {B}}}
is a filter on
X
{\displaystyle X}
that converges to
x
{\displaystyle x}
in
X
,
{\displaystyle X,}
which is expressed by writing
B
→
x
,
{\displaystyle {\mathcal {B}}\to x,}
then necessarily
f
(
B
)
→
f
(
x
)
{\displaystyle f({\mathcal {B}})\to f(x)}
in
Y
.
{\displaystyle Y.}
If
N
(
x
)
{\displaystyle {\mathcal {N}}(x)}
denotes the neighborhood filter at
x
{\displaystyle x}
then
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous at
x
{\displaystyle x}
if and only if
f
(
N
(
x
)
)
→
f
(
x
)
{\displaystyle f({\mathcal {N}}(x))\to f(x)}
in
Y
.
{\displaystyle Y.}
Moreover, this happens if and only if the prefilter
f
(
N
(
x
)
)
{\displaystyle f({\mathcal {N}}(x))}
is a filter base for the neighborhood filter of
f
(
x
)
{\displaystyle f(x)}
in
Y
.
{\displaystyle Y.}
=== Alternative definitions ===
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
==== Sequences and nets ====
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function
f
:
X
→
Y
{\displaystyle f:X\to Y}
is sequentially continuous if whenever a sequence
(
x
n
)
{\displaystyle \left(x_{n}\right)}
in
X
{\displaystyle X}
converges to a limit
x
,
{\displaystyle x,}
the sequence
(
f
(
x
n
)
)
{\displaystyle \left(f\left(x_{n}\right)\right)}
converges to
f
(
x
)
.
{\displaystyle f(x).}
Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If
X
{\displaystyle X}
is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if
X
{\displaystyle X}
is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable:
==== Closure operator and interior operator definitions ====
In terms of the interior and closure operators, we have the following equivalences,
If we declare that a point
x
{\displaystyle x}
is close to a subset
A
⊆
X
{\displaystyle A\subseteq X}
if
x
∈
cl
X
A
,
{\displaystyle x\in \operatorname {cl} _{X}A,}
then this terminology allows for a plain English description of continuity:
f
{\displaystyle f}
is continuous if and only if for every subset
A
⊆
X
,
{\displaystyle A\subseteq X,}
f
{\displaystyle f}
maps points that are close to
A
{\displaystyle A}
to points that are close to
f
(
A
)
.
{\displaystyle f(A).}
Similarly,
f
{\displaystyle f}
is continuous at a fixed given point
x
∈
X
{\displaystyle x\in X}
if and only if whenever
x
{\displaystyle x}
is close to a subset
A
⊆
X
,
{\displaystyle A\subseteq X,}
then
f
(
x
)
{\displaystyle f(x)}
is close to
f
(
A
)
.
{\displaystyle f(A).}
Instead of specifying topological spaces by their open subsets, any topology on
X
{\displaystyle X}
can alternatively be determined by a closure operator or by an interior operator.
Specifically, the map that sends a subset
A
{\displaystyle A}
of a topological space
X
{\displaystyle X}
to its topological closure
cl
X
A
{\displaystyle \operatorname {cl} _{X}A}
satisfies the Kuratowski closure axioms. Conversely, for any closure operator
A
↦
cl
A
{\displaystyle A\mapsto \operatorname {cl} A}
there exists a unique topology
τ
{\displaystyle \tau }
on
X
{\displaystyle X}
(specifically,
τ
:=
{
X
∖
cl
A
:
A
⊆
X
}
{\displaystyle \tau :=\{X\setminus \operatorname {cl} A:A\subseteq X\}}
) such that for every subset
A
⊆
X
,
{\displaystyle A\subseteq X,}
cl
A
{\displaystyle \operatorname {cl} A}
is equal to the topological closure
cl
(
X
,
τ
)
A
{\displaystyle \operatorname {cl} _{(X,\tau )}A}
of
A
{\displaystyle A}
in
(
X
,
τ
)
.
{\displaystyle (X,\tau ).}
If the sets
X
{\displaystyle X}
and
Y
{\displaystyle Y}
are each associated with closure operators (both denoted by
cl
{\displaystyle \operatorname {cl} }
) then a map
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous if and only if
f
(
cl
A
)
⊆
cl
(
f
(
A
)
)
{\displaystyle f(\operatorname {cl} A)\subseteq \operatorname {cl} (f(A))}
for every subset
A
⊆
X
.
{\displaystyle A\subseteq X.}
Similarly, the map that sends a subset
A
{\displaystyle A}
of
X
{\displaystyle X}
to its topological interior
int
X
A
{\displaystyle \operatorname {int} _{X}A}
defines an interior operator. Conversely, any interior operator
A
↦
int
A
{\displaystyle A\mapsto \operatorname {int} A}
induces a unique topology
τ
{\displaystyle \tau }
on
X
{\displaystyle X}
(specifically,
τ
:=
{
int
A
:
A
⊆
X
}
{\displaystyle \tau :=\{\operatorname {int} A:A\subseteq X\}}
) such that for every
A
⊆
X
,
{\displaystyle A\subseteq X,}
int
A
{\displaystyle \operatorname {int} A}
is equal to the topological interior
int
(
X
,
τ
)
A
{\displaystyle \operatorname {int} _{(X,\tau )}A}
of
A
{\displaystyle A}
in
(
X
,
τ
)
.
{\displaystyle (X,\tau ).}
If the sets
X
{\displaystyle X}
and
Y
{\displaystyle Y}
are each associated with interior operators (both denoted by
int
{\displaystyle \operatorname {int} }
) then a map
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous if and only if
f
−
1
(
int
B
)
⊆
int
(
f
−
1
(
B
)
)
{\displaystyle f^{-1}(\operatorname {int} B)\subseteq \operatorname {int} \left(f^{-1}(B)\right)}
for every subset
B
⊆
Y
.
{\displaystyle B\subseteq Y.}
==== Filters and prefilters ====
Continuity can also be characterized in terms of filters. A function
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous if and only if whenever a filter
B
{\displaystyle {\mathcal {B}}}
on
X
{\displaystyle X}
converges in
X
{\displaystyle X}
to a point
x
∈
X
,
{\displaystyle x\in X,}
then the prefilter
f
(
B
)
{\displaystyle f({\mathcal {B}})}
converges in
Y
{\displaystyle Y}
to
f
(
x
)
.
{\displaystyle f(x).}
This characterization remains true if the word "filter" is replaced by "prefilter."
=== Properties ===
If
f
:
X
→
Y
{\displaystyle f:X\to Y}
and
g
:
Y
→
Z
{\displaystyle g:Y\to Z}
are continuous, then so is the composition
g
∘
f
:
X
→
Z
.
{\displaystyle g\circ f:X\to Z.}
If
f
:
X
→
Y
{\displaystyle f:X\to Y}
is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology
τ
1
{\displaystyle \tau _{1}}
is said to be coarser than another topology
τ
2
{\displaystyle \tau _{2}}
(notation:
τ
1
⊆
τ
2
{\displaystyle \tau _{1}\subseteq \tau _{2}}
) if every open subset with respect to
τ
1
{\displaystyle \tau _{1}}
is also open with respect to
τ
2
.
{\displaystyle \tau _{2}.}
Then, the identity map
id
X
:
(
X
,
τ
2
)
→
(
X
,
τ
1
)
{\displaystyle \operatorname {id} _{X}:\left(X,\tau _{2}\right)\to \left(X,\tau _{1}\right)}
is continuous if and only if
τ
1
⊆
τ
2
{\displaystyle \tau _{1}\subseteq \tau _{2}}
(see also comparison of topologies). More generally, a continuous function
(
X
,
τ
X
)
→
(
Y
,
τ
Y
)
{\displaystyle \left(X,\tau _{X}\right)\to \left(Y,\tau _{Y}\right)}
stays continuous if the topology
τ
Y
{\displaystyle \tau _{Y}}
is replaced by a coarser topology and/or
τ
X
{\displaystyle \tau _{X}}
is replaced by a finer topology.
=== Homeomorphisms ===
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function
f
−
1
{\displaystyle f^{-1}}
need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
=== Defining topologies via continuous functions ===
Given a function
f
:
X
→
S
,
{\displaystyle f:X\to S,}
where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which
f
−
1
(
A
)
{\displaystyle f^{-1}(A)}
is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that
A
=
f
−
1
(
U
)
{\displaystyle A=f^{-1}(U)}
for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
A topology on a set S is uniquely determined by the class of all continuous functions
S
→
X
{\displaystyle S\to X}
into all topological spaces X. Dually, a similar idea can be applied to maps
X
→
S
.
{\displaystyle X\to S.}
== Related notions ==
If
f
:
S
→
Y
{\displaystyle f:S\to Y}
is a continuous function from some subset
S
{\displaystyle S}
of a topological space
X
{\displaystyle X}
then a continuous extension of
f
{\displaystyle f}
to
X
{\displaystyle X}
is any continuous function
F
:
X
→
Y
{\displaystyle F:X\to Y}
such that
F
(
s
)
=
f
(
s
)
{\displaystyle F(s)=f(s)}
for every
s
∈
S
,
{\displaystyle s\in S,}
which is a condition that is often written as
f
=
F
|
S
.
{\displaystyle f=F{\big \vert }_{S}.}
In words, it is any continuous function
F
:
X
→
Y
{\displaystyle F:X\to Y}
that restricts to
f
{\displaystyle f}
on
S
.
{\displaystyle S.}
This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If
f
:
S
→
Y
{\displaystyle f:S\to Y}
is not continuous, then it cannot have a continuous extension. If
Y
{\displaystyle Y}
is a Hausdorff space and
S
{\displaystyle S}
is a dense subset of
X
{\displaystyle X}
then a continuous extension of
f
:
S
→
Y
{\displaystyle f:S\to Y}
to
X
,
{\displaystyle X,}
if one exists, will be unique. The Blumberg theorem states that if
f
:
R
→
R
{\displaystyle f:\mathbb {R} \to \mathbb {R} }
is an arbitrary function then there exists a dense subset
D
{\displaystyle D}
of
R
{\displaystyle \mathbb {R} }
such that the restriction
f
|
D
:
D
→
R
{\displaystyle f{\big \vert }_{D}:D\to \mathbb {R} }
is continuous; in other words, every function
R
→
R
{\displaystyle \mathbb {R} \to \mathbb {R} }
can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function
f
:
X
→
Y
{\displaystyle f:X\to Y}
between particular types of partially ordered sets
X
{\displaystyle X}
and
Y
{\displaystyle Y}
is continuous if for each directed subset
A
{\displaystyle A}
of
X
,
{\displaystyle X,}
we have
sup
f
(
A
)
=
f
(
sup
A
)
.
{\displaystyle \sup f(A)=f(\sup A).}
Here
sup
{\displaystyle \,\sup \,}
is the supremum with respect to the orderings in
X
{\displaystyle X}
and
Y
,
{\displaystyle Y,}
respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor
F
:
C
→
D
{\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}
between two categories is called continuous if it commutes with small limits. That is to say,
lim
←
i
∈
I
F
(
C
i
)
≅
F
(
lim
←
i
∈
I
C
i
)
{\displaystyle \varprojlim _{i\in I}F(C_{i})\cong F\left(\varprojlim _{i\in I}C_{i}\right)}
for any small (that is, indexed by a set
I
,
{\displaystyle I,}
as opposed to a class) diagram of objects in
C
{\displaystyle {\mathcal {C}}}
.
A continuity space is a generalization of metric spaces and posets that uses the concept of quantales and can be used to unify the notions of metric spaces and domains.
In measure theory, a function
f
:
E
→
R
k
{\displaystyle f:E\to \mathbb {R} ^{k}}
defined on a Lebesgue measurable set
E
⊆
R
n
{\displaystyle E\subseteq \mathbb {R} ^{n}}
is called approximately continuous at a point
x
0
∈
E
{\displaystyle x_{0}\in E}
if the approximate limit of
f
{\displaystyle f}
at
x
0
{\displaystyle x_{0}}
exists and equals
f
(
x
0
)
{\displaystyle f(x_{0})}
. This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov-Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
== See also ==
Direction-preserving function - an analog of a continuous function in discrete spaces.
== References ==
== Bibliography ==
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
"Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In mathematics, specifically in functional analysis, a C∗-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties:
A is a topologically closed set in the norm topology of operators.
A is closed under the operation of taking adjoints of operators.
Another important class of non-Hilbert C*-algebras includes the algebra
C0(X) of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space.
C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras that are now known as von Neumann algebras.
Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space.
C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent to which classification is possible, for separable simple nuclear C*-algebras.
== Abstract characterization ==
We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark.
A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map
x ↦ x* for x ∈ A with the following properties:
It is an involution, for every x in A:
x** = (x*)* = x
For all x, y in A:
(x + y)* = x* + y*
(xy)* = y*x*
For every complex number
λ ∈ C and every x in A:
(λx)* = λ̄x*.
For all x in A:
||xx*|| = ||x|| ||x*||.
Remark. The first four identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to:
||xx*|| = ||x||²,
which is sometimes called the B*-identity. For the history behind the names C*- and B*-algebras, see the history section below.
The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:
||x||² = ||x*x|| = sup{ |λ| : x*x − λ1 is not invertible }.
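As an illustration (our own sketch, not part of the article), the C*-identity can be checked numerically in the C*-algebra M(2, C) of 2 × 2 complex matrices with the operator norm; all helper names below are assumptions of this sketch.

```python
import math

def adjoint(m):
    """Conjugate transpose of a 2x2 complex matrix ((a, b), (c, d))."""
    (a, b), (c, d) = m
    return ((a.conjugate(), c.conjugate()),
            (b.conjugate(), d.conjugate()))

def matmul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def op_norm(m):
    """Operator norm of a 2x2 matrix: sqrt of the largest eigenvalue of m*m."""
    (a, b), (_, d) = matmul(adjoint(m), m)   # Hermitian and positive
    a, d = a.real, d.real                     # real diagonal entries
    lam_max = (a + d + math.sqrt((a - d) ** 2 + 4 * abs(b) ** 2)) / 2
    return math.sqrt(lam_max)

x = ((1 + 1j, 2), (0, 3 - 2j))
# The C*-identity ||x x*|| = ||x||^2, up to floating-point rounding:
assert abs(op_norm(matmul(x, adjoint(x))) - op_norm(x) ** 2) < 1e-9
```

The same computation exhibits ||x||² as the spectral radius of x*x, so the norm is indeed determined by the algebraic structure in this example.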
A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if
For x and y in A
π(xy) = π(x)π(y)
For x in A
π(x*) = π(x)*
In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity.
A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic.
== Some history: B*-algebras and C*-algebras ==
The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition:
||xx*|| = ||x||²
for all x in the given B*-algebra. (B*-condition)
This condition automatically implies that the *-involution is isometric, that is,
||x|| = ||x*||. Hence, ||xx*|| = ||x|| ||x*||, and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition ||x|| = ||x*||. For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'.
The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space".
== Structure of C*-algebras ==
C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism.
=== Self-adjoint elements ===
Self-adjoint elements are those of the form
x = x*. The set of elements of a C*-algebra A of the form x*x forms a closed convex cone. This cone is identical to the set of elements of the form xx*. Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of R).
The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted
≥. In this ordering, a self-adjoint element x ∈ A satisfies x ≥ 0 if and only if the spectrum of x is non-negative, if and only if x = s*s for some s ∈ A. Two self-adjoint elements x and y of A satisfy x ≥ y if x − y ≥ 0.
This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction.
=== Quotients and approximate identities ===
Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that
xeλ → x
0 ≤ eλ ≤ eμ ≤ 1 whenever λ ≤ μ.
In case A is separable, A has a sequential approximate identity. More generally, A will have a sequential approximate identity if and only if A contains a strictly positive element, i.e. a positive element h such that hAh is dense in A.
Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra.
Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra.
== Examples ==
=== Finite-dimensional C*-algebras ===
The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, Cn, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type:
Theorem. A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum
A = ⊕_{e ∈ min A} Ae
where min A is the set of minimal nonzero self-adjoint central projections of A.
Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A.
A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science.
An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras.
=== C*-algebras of operators ===
The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed adjoint closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem.
=== C*-algebras of compact operators ===
Let H be a separable infinite-dimensional Hilbert space. The algebra K(H) of compact operators on H is a norm closed subalgebra of B(H). It is also closed under involution; hence it is a C*-algebra.
Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras:
Theorem. If A is a C*-subalgebra of K(H), then there exist Hilbert spaces {Hi}i∈I such that
A ≅ ⊕_{i∈I} K(Hi),
where the (C*-)direct sum consists of elements (Ti) of the Cartesian product Π K(Hi) with ||Ti|| → 0.
Though K(H) does not have an identity element, a sequential approximate identity for K(H) can be developed. To be specific, H is isomorphic to the space of square summable sequences l2; we may assume that H = l2. For each natural number n let Hn be the subspace of sequences of l2 which vanish for indices k ≥ n and let en be the orthogonal projection onto Hn. The sequence {en}n is an approximate identity for K(H).
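As a hedged illustration of this approximate identity (our own sketch, not from the article): for the diagonal compact operator T on l2 with T ek = (1/k) ek, the operator en T − T is diagonal with entries 1/k for k > n and 0 otherwise, so ||en T − T|| = 1/(n + 1) → 0. In code, with exact rational arithmetic:

```python
from fractions import Fraction

# T is the diagonal compact operator with diagonal entries 1/k; e_n projects
# onto the first n coordinates.  e_n T - T is diagonal with entries 0 for
# k <= n and 1/k for k > n, so its operator norm is the tail supremum 1/(n+1).
def tail_norm(n, terms=1000):
    """||e_n T - T||, computed exactly on a large finite truncation."""
    return max(Fraction(1, k) for k in range(n + 1, terms))

norms = [tail_norm(n) for n in (1, 10, 100)]
assert norms == [Fraction(1, 2), Fraction(1, 11), Fraction(1, 101)]
assert norms[0] > norms[1] > norms[2]   # the norms decrease to 0
```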
K(H) is a two-sided closed ideal of B(H). For separable Hilbert spaces, it is the unique ideal. The quotient of B(H) by K(H) is the Calkin algebra.
=== Commutative C*-algebras ===
Let X be a locally compact Hausdorff space. The space
C0(X) of complex-valued continuous functions on X that vanish at infinity (defined in the article on local compactness) forms a commutative C*-algebra under pointwise multiplication and addition. The involution is pointwise conjugation.
C0(X) has a multiplicative unit element if and only if X is compact. As does any C*-algebra, C0(X) has an approximate identity. In the case of C0(X) this is immediate: consider the directed set of compact subsets of X, and for each compact K let fK be a function of compact support which is identically 1 on K. Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such family of functions {fK} is an approximate identity.
The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra
C0(X), where X is the space of characters equipped with the weak* topology. Furthermore, if C0(X) is isomorphic to C0(Y) as C*-algebras, it follows that X and Y are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs.
=== C*-enveloping algebra ===
Given a Banach *-algebra A with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E(A) and *-morphism π from A into E(A) that is universal, that is, every other continuous *-morphism π ' : A → B factors uniquely through π. The algebra E(A) is called the C*-enveloping algebra of the Banach *-algebra A.
Of particular importance is the C*-algebra of a locally compact group G. This is defined as the enveloping C*-algebra of the group algebra of G. The C*-algebra of G provides context for general harmonic analysis of G in the case G is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra.
=== Von Neumann algebras ===
Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology.
The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it.
== Type for C*-algebras ==
A C*-algebra A is of type I if and only if for all non-degenerate representations π of A the von Neumann algebra π(A)″ (that is, the bicommutant of π(A)) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π(A)″ is a factor.
A locally compact group is said to be of type I if and only if its group C*-algebra is type I.
However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non-type I properties.
== C*-algebras and quantum field theory ==
In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x).
This C*-algebra approach is used in the Haag–Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra.
== See also ==
Banach algebra
Banach *-algebra
*-algebra
Hilbert C*-module
Operator K-theory
Operator system, a unital subspace of a C*-algebra that is *-closed.
Gelfand–Naimark–Segal construction
Jordan operator algebra
== Notes ==
== References ==
Arveson, W. (1976), An Invitation to C*-Algebra, Springer-Verlag, ISBN 0-387-90176-0. An excellent introduction to the subject, accessible for those with a knowledge of basic functional analysis.
Connes, Alain (1994), Non-commutative geometry, Gulf Professional, ISBN 0-12-185860-X. This book is widely regarded as a source of new research material, providing much supporting intuition, but it is difficult.
Dixmier, Jacques (1969), Les C*-algèbres et leurs représentations, Gauthier-Villars, ISBN 0-7204-0762-1. This is a somewhat dated reference, but is still considered as a high-quality technical exposition. It is available in English from North Holland press.
Doran, Robert S.; Belfi, Victor A. (1986), Characterizations of C*-algebras: The Gelfand-Naimark Theorems, CRC Press, ISBN 978-0-8247-7569-8.
Emch, G. (1972), Algebraic Methods in Statistical Mechanics and Quantum Field Theory, Wiley-Interscience, ISBN 0-471-23900-3. Mathematically rigorous reference which provides extensive physics background.
A.I. Shtern (2001) [1994], "C*-algebra", Encyclopedia of Mathematics, EMS Press
Sakai, S. (1971), C*-algebras and W*-algebras, Springer, ISBN 3-540-63633-1.
Segal, Irving (1947), "Irreducible representations of operator algebras", Bulletin of the American Mathematical Society, 53 (2): 73–88, doi:10.1090/S0002-9904-1947-08742-5. | Wikipedia/B*-algebra |
In abstract algebra, an alternative algebra is an algebra in which multiplication need not be associative, only alternative. That is, one must have
x(xy) = (xx)y
(yx)x = y(xx)
for all x and y in the algebra.
Every associative algebra is obviously alternative, but so too are some strictly non-associative algebras such as the octonions.
== The associator ==
Alternative algebras are so named because they are the algebras for which the associator is alternating. The associator is a trilinear map given by
[x, y, z] = (xy)z − x(yz).
By definition, a multilinear map is alternating if it vanishes whenever two of its arguments are equal. The left and right alternative identities for an algebra are equivalent to
[x, x, y] = 0
[y, x, x] = 0
Both of these identities together imply that:
[x, y, x] = [x, x, x] + [x, y, x] − [x, x+y, x+y] = [x, x+y, −y] = [x, x, −y] − [x, y, y] = 0
for all x and y. This is equivalent to the flexible identity
(xy)x = x(yx).
The associator of an alternative algebra is therefore alternating. Conversely, any algebra whose associator is alternating is clearly alternative. By symmetry, any algebra which satisfies any two of:
left alternative identity: x(xy) = (xx)y
right alternative identity: (yx)x = y(xx)
flexible identity: (xy)x = x(yx)
is alternative and therefore satisfies all three identities.
An alternating associator is always totally skew-symmetric. That is,
[xσ(1), xσ(2), xσ(3)] = sgn(σ) [x1, x2, x3]
for any permutation σ
. The converse holds so long as the characteristic of the base field is not 2.
== Examples ==
Every associative algebra is alternative.
The octonions form a non-associative alternative algebra, a normed division algebra of dimension 8 over the real numbers.
More generally, any octonion algebra is alternative.
=== Non-examples ===
The sedenions, trigintaduonions, and all higher Cayley–Dickson algebras lose alternativity.
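A sketch (our own code, using the standard Cayley–Dickson doubling (a, b)(c, d) = (ac − d*b, da + bc*), where * is conjugation): doubling the reals three times yields the octonions, which satisfy both alternative laws exactly, while one more doubling yields the sedenions, for which the left alternative law fails.

```python
def add(x, y): return tuple(p + q for p, q in zip(x, y))
def sub(x, y): return tuple(p - q for p, q in zip(x, y))
def neg(x): return tuple(-p for p in x)

def conj(x):
    """Cayley-Dickson conjugate: (a, b)* = (a*, -b)."""
    if len(x) == 1:
        return x
    h = len(x) // 2
    return conj(x[:h]) + neg(x[h:])

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return (x[0] * y[0],)
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return (sub(mul(a, c), mul(conj(d), b))
            + add(mul(d, a), mul(b, conj(c))))

def basis(dim, i):
    return tuple(1 if k == i else 0 for k in range(dim))

def left_alt(x, y):
    return mul(x, mul(x, y)) == mul(mul(x, x), y)

# Octonions (dimension 8, integer coordinates, so all checks are exact):
x = (1, -2, 3, 0, 5, -1, 2, 4)
y = (2, 1, 0, -3, 1, 1, -2, 0)
assert left_alt(x, y)                            # x(xy) = (xx)y
assert mul(mul(y, x), x) == mul(y, mul(x, x))    # (yx)x = y(xx)

# Sedenions (dimension 16): the left alternative law fails for some
# x = e_i + e_j and basis element y = e_k.
violations = [
    (i, j, k)
    for i in range(1, 16) for j in range(i, 16) for k in range(1, 16)
    if not left_alt(add(basis(16, i), basis(16, j)), basis(16, k))
]
assert violations
```

Because the coordinates are integers, the octonion checks are exact rather than approximate; the sedenion search works because a failure of [x, x, y] = 0 must already show up on sums of two basis elements by bilinearity.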
== Properties ==
Artin's theorem states that in an alternative algebra the subalgebra generated by any two elements is associative. Conversely, any algebra for which this is true is clearly alternative. It follows that expressions involving only two variables can be written unambiguously without parentheses in an alternative algebra. A generalization of Artin's theorem states that whenever three elements
x, y, z in an alternative algebra associate (i.e., [x, y, z] = 0), the subalgebra generated by those elements is associative.
A corollary of Artin's theorem is that alternative algebras are power-associative, that is, the subalgebra generated by a single element is associative. The converse need not hold: the sedenions are power-associative but not alternative.
The Moufang identities
a(x(ay)) = (axa)y
((xa)y)a = x(aya)
(ax)(ya) = a(xy)a
hold in any alternative algebra.
In a unital alternative algebra, multiplicative inverses are unique whenever they exist. Moreover, for any invertible element
x and all y one has y = x−1(xy). This is equivalent to saying the associator [x−1, x, y] vanishes for all such x and y.
If
x and y are invertible then xy is also invertible with inverse (xy)−1 = y−1x−1
. The set of all invertible elements is therefore closed under multiplication and forms a Moufang loop. This loop of units in an alternative ring or algebra is analogous to the group of units in an associative ring or algebra.
Kleinfeld's theorem states that any simple non-associative alternative ring is a generalized octonion algebra over its center.
The structure theory of alternative rings is presented in the book Rings That Are Nearly Associative by Zhevlakov, Slin'ko, Shestakov, and Shirshov.
== Occurrence ==
The projective plane over any alternative division ring is a Moufang plane.
Every composition algebra is an alternative algebra, as shown by Guy Roos in 2008: A composition algebra A over a field K has a norm n that is a multiplicative homomorphism:
n(a × b) = n(a) × n(b)
connecting (A, ×) and (K, ×).
Define the form ( _ : _ ): A × A → K by
(a : b) = n(a + b) − n(a) − n(b).
Then the trace of a is given by (a : 1) and the conjugate by a* = (a : 1)e − a, where e is the basis element for 1. A series of exercises proves that a composition algebra is always an alternative algebra.
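A minimal check in the simplest composition algebra, A = C over K = R with n(a) = |a|² (a worked example of our own, not from the article):

```python
def n(a):
    """The norm form on C: n(a) = a * conj(a) = |a|^2."""
    return (a * a.conjugate()).real

def form(a, b):
    """The associated bilinear form (a : b) = n(a + b) - n(a) - n(b)."""
    return n(a + b) - n(a) - n(b)

a, b = 3 + 4j, 1 - 2j
assert n(a * b) == n(a) * n(b)               # the norm is multiplicative
assert form(a, 1) == 2 * a.real              # the trace of a is (a : 1)
assert form(a, 1) * 1 - a == a.conjugate()   # a* = (a : 1)e - a, with e = 1
```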
== See also ==
Algebra over a field
Maltsev algebra
Zorn ring
== References ==
== Sources ==
Schafer, Richard D. (1995). An Introduction to Nonassociative Algebras. New York: Dover Publications. ISBN 0-486-68813-5. Zbl 0145.25601.
Zhevlakov, K.A.; Slin'ko, A.M.; Shestakov, I.P.; Shirshov, A.I. (1982) [1978]. Rings That Are Nearly Associative. Academic Press. ISBN 0-12-779850-1. MR 0518614. Zbl 0487.17001.
== External links ==
Zhevlakov, K.A. (2001) [1994], "Alternative rings and algebras", Encyclopedia of Mathematics, EMS Press | Wikipedia/Alternative_algebra |
In mathematics, the composition operator
takes two functions,
f and g, and returns a new function h(x) := (g ∘ f)(x) = g(f(x))
. Thus, the function g is applied after applying f to x.
(g ∘ f)
is pronounced "the composition of g and f".
Reverse composition applies the operations in the opposite order, applying f first and g second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g.
The composition of functions is a special case of the composition of relations, sometimes also denoted by
∘. As a result, all properties of composition of relations are true of composition of functions, such as associativity.
== Examples ==
Composition of functions on a finite set: If f = {(1, 1), (2, 3), (3, 1), (4, 2)}, and g = {(1, 2), (2, 3), (3, 1), (4, 2)}, then g ∘ f = {(1, 2), (2, 1), (3, 2), (4, 3)}, as shown in the figure.
Composition of functions on an infinite set: If f: R → R (where R is the set of all real numbers) is given by f(x) = 2x + 4 and g: R → R is given by g(x) = x3, then (g ∘ f)(x) = g(f(x)) = (2x + 4)3 and (f ∘ g)(x) = f(g(x)) = 2x3 + 4.
If an airplane's altitude at time t is a(t), and the air pressure at altitude x is p(x), then (p ∘ a)(t) is the pressure around the plane at time t.
Functions defined on finite sets which change the order of their elements, such as permutations, can be composed on the same set, this being composition of permutations.
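The first (finite-set) example above can be checked directly by storing the functions as dictionaries; the names below are our own:

```python
f = {1: 1, 2: 3, 3: 1, 4: 2}
g = {1: 2, 2: 3, 3: 1, 4: 2}

# g ∘ f: apply f first, then g
g_after_f = {x: g[f[x]] for x in f}
assert g_after_f == {1: 2, 2: 1, 3: 2, 4: 3}
```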
== Properties ==
The composition of functions is always associative—a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted.
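Associativity can be spot-checked with a small compose helper (a sketch; compose is our own name):

```python
def compose(g, f):
    """Return g ∘ f, i.e. x ↦ g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x ** 2

# f ∘ (g ∘ h) agrees with (f ∘ g) ∘ h everywhere we test:
for x in range(-5, 6):
    assert compose(f, compose(g, h))(x) == compose(compose(f, g), h)(x)
```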
In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter.
Moreover, it is often convenient to tacitly restrict the domain of f, such that f produces only values in the domain of g. For example, the composition g ∘ f of the functions f : R → (−∞,+9] defined by f(x) = 9 − x2 and g : [0,+∞) → R defined by
g(x) = √x
can be defined on the interval [−3,+3].
The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The picture shows another example.
The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that (f ∘ g)−1 = g−1∘ f−1.
Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula.
Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative).
== Composition monoids ==
Suppose one has two (or more) functions f: X → X, g: X → X having the same domain and codomain; these are often called transformations. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is the de Rham curve. The set of all functions f: X → X is called the full transformation semigroup or symmetric semigroup on X. (One can actually define two semigroups depending how one defines the semigroup operation as the left or right composition of functions.)
If the given transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group (also known as a permutation group); and one says that the group is generated by these functions.
The set of all bijective functions f: X → X (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism).
In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup.
== Functional powers ==
If Y ⊆ X, then
f : X → Y may compose with itself; this is sometimes denoted as f 2. That is:
f 2(x) = (f ∘ f)(x) = f(f(x)).
More generally, for any natural number n ≥ 2, the nth functional power can be defined inductively by f n = f ∘ f n−1 = f n−1 ∘ f, a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel. Repeated composition of such a function with itself is called function iteration.
By convention, f 0 is defined as the identity map on f 's domain, idX.
If Y = X and f: X → X admits an inverse function f −1, negative functional powers f −n are defined for n > 0 as the negated power of the inverse function: f −n = (f −1)n.
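A minimal sketch of functional powers by iteration (power is a hypothetical helper of ours, not standard library code):

```python
def power(f, n):
    """Return the n-th functional power f ∘ f ∘ ... ∘ f (n >= 0)."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f = lambda x: 2 * x + 1
assert power(f, 0)(5) == 5            # f^0 is the identity
assert power(f, 3)(0) == f(f(f(0)))   # f^3 = f ∘ f ∘ f
```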
Note: If f takes its values in a ring (in particular for real or complex-valued f ), there is a risk of confusion, as f n could also stand for the n-fold product of f, e.g. f 2(x) = f(x) · f(x). For trigonometric functions, usually the latter is meant, at least for positive exponents. For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions:
sin2(x) = sin(x) · sin(x).
However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan−1 = arctan ≠ 1/tan.
In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f 1/2.
More generally, when gn = f has a unique solution for some natural number n > 0, then f m/n can be defined as gm.
Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems.
To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f[n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead.
== Alternative notations ==
Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f.
During the mid-20th century, some mathematicians adopted postfix notation, writing xf for f(x) and (xf)g for g(f(x)). This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence.
Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this, thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition. Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation).
== Composition operator ==
Given a function g, the composition operator Cg is defined as that operator which maps functions to functions as
Cg f = f ∘ g.
Composition operators are studied in the field of operator theory.
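As a sketch, the composition operator can be written as a higher-order function (all names are our own):

```python
def C(g):
    """The composition operator C_g: maps f to f ∘ g."""
    return lambda f: (lambda x: f(g(x)))

g = lambda x: x + 1
square_shifted = C(g)(lambda x: x * x)   # x ↦ (x + 1)^2
assert square_shifted(3) == 16
```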
== In programming languages ==
Function composition appears in one form or another in numerous programming languages.
== Multivariate functions ==
Partial composition is possible for multivariate functions. The function resulting when some argument xi of the function f is replaced by the function g is called a composition of f and g in some computer engineering contexts, and is denoted f |xi = g
f |xi=g = f(x1, ..., xi−1, g(x1, x2, ..., xn), xi+1, ..., xn).
When g is a simple constant b, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor.
{\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).}
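A minimal Python sketch of this partial valuation (the helper name `restrict` and the example function are ours, chosen for illustration):

```python
def restrict(f, i, b):
    """Partial valuation f|_{x_i = b}: pin argument i of f to the constant b."""
    def h(*args):
        args = list(args)
        args.insert(i, b)     # re-insert the pinned constant at position i
        return f(*args)
    return h

def majority(x, y, z):
    return (x and y) or (x and z) or (y and z)

# Co-factor of majority with y (index 1) fixed to True reduces to "x or z"
m1 = restrict(majority, 1, True)
assert m1(False, True) is True
assert m1(False, False) is False
```

For Boolean functions this is exactly the co-factor used in Shannon expansion.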
In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. Given f, a n-ary function, and n m-ary functions g1, ..., gn, the composition of f with g1, ..., gn, is the m-ary function
{\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).}
This is sometimes called the generalized composite or superposition of f with g1, ..., gn. The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here g1, ..., gn can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.
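Generalized composition is short to express in Python; this sketch (the helper name `superpose` is ours) feeds the same argument tuple to every inner function:

```python
def superpose(f, *gs):
    """Generalized composition: h(x1..xm) = f(g1(x1..xm), ..., gn(x1..xm))."""
    return lambda *xs: f(*(g(*xs) for g in gs))

add = lambda a, b: a + b
mul = lambda a, b: a * b

# h(x, y) = add(mul(x, y), mul(y, x)) = 2xy
h = superpose(add, mul, lambda x, y: mul(y, x))
assert h(3, 4) == 24
```

Setting all but one of the inner functions to projections recovers the single-argument partial composition described above.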
A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities. The notion of commutation also finds an interesting generalization in the multivariate case; a function f of arity n is said to commute with a function g of arity m if f is a homomorphism preserving g, and vice versa, that is:
{\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).}
A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic.
== Generalizations ==
Composition can be generalized to arbitrary binary relations.
If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition amounts to
{\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}.}
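With relations represented as sets of pairs, this definition translates directly into Python (the helper name `compose_relations` is ours):

```python
def compose_relations(R, S):
    """R o S = {(x, z) : there is a y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R = {(1, 'a'), (2, 'b')}
S = {('a', 'X'), ('b', 'Y'), ('c', 'Z')}
assert compose_relations(R, S) == {(1, 'X'), (2, 'Y')}
```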
Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle R∘S has been used for the infix notation of composition of relations, as well as functions. When used to represent composition of functions
{\displaystyle (g\circ f)(x)\ =\ g(f(x))}
however, the text sequence is reversed to illustrate the different operation sequences accordingly.
The composition is defined in the same way for partial functions and Cayley's theorem has its analogue called the Wagner–Preston theorem.
The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition. The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula (f ∘ g)−1 = (g−1 ∘ f −1) applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories. The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions.
. . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics.
- Saunders Mac Lane, Mathematics: Form and Function
== Typography ==
The composition symbol ∘ is encoded as U+2218 ∘ RING OPERATOR (∘, ∘); see the Degree symbol article for similar-appearing Unicode characters. In TeX, it is written \circ.
== See also ==
Cobweb plot – a graphical technique for functional composition
Combinatory logic
Composition ring, a formal axiomatization of the composition operation
Flow (mathematics)
Function composition (computer science)
Function of random variable, distribution of a function of a random variable
Functional decomposition
Functional square root
Functional equation
Higher-order function
Infinite compositions of analytic functions
Iterated function
Lambda calculus
== Notes ==
== References ==
== External links ==
"Composite function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Composition of Functions" by Bruce Atwood, the Wolfram Demonstrations Project, 2007.
In mathematics, particularly abstract algebra, a binary operation • on a set is flexible if it satisfies the flexible identity:
{\displaystyle a\bullet \left(b\bullet a\right)=\left(a\bullet b\right)\bullet a}
for any two elements a and b of the set. A magma (that is, a set equipped with a binary operation) is flexible if the binary operation with which it is equipped is flexible. Similarly, a nonassociative algebra is flexible if its multiplication operator is flexible.
Every commutative or associative operation is flexible, so flexibility becomes important for binary operations that are neither commutative nor associative, e.g. for the multiplication of sedenions, which are not even alternative.
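A concrete check of the flexible identity: the Jordan product A∘B = (AB + BA)/2 on 2×2 matrices is commutative, hence flexible, yet not associative. This Python sketch (plain nested lists, no external libraries; the matrices are our own illustrative choices) verifies both facts on an example:

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def jordan(A, B):
    """Jordan product A o B = (AB + BA)/2: commutative, hence flexible,
    but not associative in general."""
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[(AB[i][j] + BA[i][j]) / 2 for j in range(2)] for i in range(2)]

A = [[1, 0], [0, 0]]
B = [[0, 1], [1, 0]]

# Flexible identity: a o (b o a) == (a o b) o a
assert jordan(A, jordan(B, A)) == jordan(jordan(A, B), A)
# ...even though the product is not associative:
assert jordan(A, jordan(A, B)) != jordan(jordan(A, A), B)
```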
In 1954, Richard D. Schafer examined the algebras generated by the Cayley–Dickson process over a field and showed that they satisfy the flexible identity.
== Examples ==
Besides associative algebras, the following classes of nonassociative algebras are flexible:
Alternative algebras
Lie algebras
Jordan algebras (which are commutative)
Okubo algebras
In the world of magmas, there is only a binary multiplication operation, with no addition or scaling from a base ring or field as in algebras. In this setting, alternative and commutative magmas are all flexible, since the alternative and commutative laws each imply flexibility. This includes many important classes of magmas: all groups, semigroups and Moufang loops are flexible.
The sedenions and trigintaduonions, and all algebras constructed from these by iterating the Cayley–Dickson construction, are also flexible.
== See also ==
Zorn ring
Maltsev algebra
== References ==
Schafer, Richard D. (1995) [1966]. An introduction to non-associative algebras. Dover Publications. ISBN 0-486-68813-5. Zbl 0145.25601.
In mathematics, differential algebra is, broadly speaking, the area of mathematics consisting in the study of differential equations and differential operators as algebraic objects in view of deriving properties of differential equations and operators without computing the solutions, similarly as polynomial algebras are used for the study of algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.
More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations.
A natural example of a differential field is the field of rational functions in one variable over the complex numbers,
{\displaystyle \mathbb {C} (t),}
where the derivation is differentiation with respect to {\displaystyle t.}
More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.
== History ==
Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. His efforts led to an initial paper Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations and 2 books, Differential Equations From The Algebraic Standpoint and Differential Algebra. Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups.
== Differential rings ==
=== Definition ===
A derivation {\textstyle \partial } on a ring {\textstyle R} is a function {\displaystyle \partial :R\to R} such that
{\displaystyle \partial (r_{1}+r_{2})=\partial r_{1}+\partial r_{2}}
and
{\displaystyle \partial (r_{1}r_{2})=(\partial r_{1})r_{2}+r_{1}(\partial r_{2})\quad } (Leibniz product rule),
for every {\displaystyle r_{1}} and {\displaystyle r_{2}} in {\displaystyle R.}
A derivation is linear over the integers since these identities imply {\displaystyle \partial (0)=\partial (1)=0} and {\displaystyle \partial (-r)=-\partial (r).}
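The two derivation axioms can be checked mechanically for the formal derivative on a polynomial ring; this Python sketch represents a polynomial as a dict mapping exponents to coefficients (the helper names are ours):

```python
def d(p):
    """Formal derivative of a polynomial {exponent: coefficient}."""
    return {e - 1: e * c for e, c in p.items() if e > 0}

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

p = {2: 1, 0: 3}       # t^2 + 3
q = {1: 2}             # 2t

assert d(add(p, q)) == add(d(p), d(q))                    # additivity
assert d(mul(p, q)) == add(mul(d(p), q), mul(p, d(q)))    # Leibniz product rule
```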
A differential ring is a commutative ring {\displaystyle R} equipped with one or more derivations that commute pairwise; that is,
{\displaystyle \partial _{1}(\partial _{2}(r))=\partial _{2}(\partial _{1}(r))}
for every pair of derivations and every {\displaystyle r\in R.}
When there is only one derivation, one often speaks of an ordinary differential ring; otherwise, one speaks of a partial differential ring.
A differential field is a differential ring that is also a field. A differential algebra {\displaystyle A} over a differential field {\displaystyle K} is a differential ring that contains {\displaystyle K} as a subring such that the restriction to {\displaystyle K} of the derivations of {\displaystyle A} equals the derivations of {\displaystyle K.} (A more general definition is given below, which covers the case where {\displaystyle K} is not a field, and is essentially equivalent when {\displaystyle K} is a field.)
A Witt algebra is a differential ring that contains the field {\displaystyle \mathbb {Q} } of the rational numbers. Equivalently, this is a differential algebra over {\displaystyle \mathbb {Q} ,} since {\displaystyle \mathbb {Q} } can be considered as a differential field on which every derivation is the zero function.
The constants of a differential ring are the elements {\displaystyle r} such that {\displaystyle \partial r=0} for every derivation {\displaystyle \partial .} The constants of a differential ring form a subring and the constants of a differential field form a subfield. This meaning of "constant" generalizes the concept of a constant function, and must not be confused with the common meaning of a constant.
=== Basic formulas ===
In the following identities, {\displaystyle \delta } is a derivation of a differential ring {\displaystyle R.}
If {\displaystyle r\in R} and {\displaystyle c} is a constant in {\displaystyle R} (that is, {\displaystyle \delta c=0}), then {\displaystyle \delta (cr)=c\delta (r).}
If {\displaystyle r\in R} and {\displaystyle u} is a unit in {\displaystyle R,} then
{\displaystyle \delta \left({\frac {r}{u}}\right)={\frac {\delta (r)u-r\delta (u)}{u^{2}}}.}
If {\displaystyle n} is a nonnegative integer and {\displaystyle r\in R,} then {\displaystyle \delta (r^{n})=nr^{n-1}\delta (r).}
If {\displaystyle u_{1},\ldots ,u_{n}} are units in {\displaystyle R,} and {\displaystyle e_{1},\ldots ,e_{n}} are integers, one has the logarithmic derivative identity:
{\displaystyle {\frac {\delta (u_{1}^{e_{1}}\ldots u_{n}^{e_{n}})}{u_{1}^{e_{1}}\ldots u_{n}^{e_{n}}}}=e_{1}{\frac {\delta (u_{1})}{u_{1}}}+\dots +e_{n}{\frac {\delta (u_{n})}{u_{n}}}.}
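The power rule above can be verified for the formal derivative on a polynomial ring; a self-contained Python sketch with polynomials as exponent-to-coefficient dicts (helper names are ours):

```python
def d(p):
    """Formal derivative of a polynomial {exponent: coefficient}."""
    return {e - 1: e * c for e, c in p.items() if e > 0}

def mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def power(p, n):
    r = {0: 1}
    for _ in range(n):
        r = mul(r, p)
    return r

def scale(p, k):
    return {e: k * c for e, c in p.items() if k * c != 0}

r = {1: 1, 0: 2}   # t + 2
n = 4
# delta(r^n) = n * r^(n-1) * delta(r)
assert d(power(r, n)) == mul(scale(power(r, n - 1), n), d(r))
```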
=== Higher-order derivations ===
A derivation operator or higher-order derivation is the composition of several derivations. As the derivations of a differential ring are supposed to commute, the order of the derivations does not matter, and a derivation operator may be written as
{\displaystyle \delta _{1}^{e_{1}}\circ \cdots \circ \delta _{n}^{e_{n}},}
where {\displaystyle \delta _{1},\ldots ,\delta _{n}} are the derivations under consideration, {\displaystyle e_{1},\ldots ,e_{n}} are nonnegative integers, and the exponent of a derivation denotes the number of times this derivation is composed in the operator.
The sum {\displaystyle o=e_{1}+\cdots +e_{n}} is called the order of derivation. If {\displaystyle o=1} the derivation operator is one of the original derivations. If {\displaystyle o=0}, one has the identity function, which is generally considered as the unique derivation operator of order zero. With these conventions, the derivation operators form a free commutative monoid on the set of derivations under consideration.
A derivative of an element {\displaystyle x} of a differential ring is the application of a derivation operator to {\displaystyle x,} that is, with the above notation, {\displaystyle \delta _{1}^{e_{1}}\circ \cdots \circ \delta _{n}^{e_{n}}(x).}
A proper derivative is a derivative of positive order.
=== Differential ideals ===
A differential ideal {\displaystyle I} of a differential ring {\displaystyle R} is an ideal of the ring {\displaystyle R} that is closed (stable) under the derivations of the ring; that is, {\textstyle \partial x\in I} for every derivation {\displaystyle \partial } and every {\displaystyle x\in I.}
A differential ideal is said to be proper if it is not the whole ring. For avoiding confusion, an ideal that is not a differential ideal is sometimes called an algebraic ideal.
The radical of a differential ideal is the same as its radical as an algebraic ideal, that is, the set of the ring elements that have a power in the ideal. The radical of a differential ideal is also a differential ideal. A radical or perfect differential ideal is a differential ideal that equals its radical. A prime differential ideal is a differential ideal that is prime in the usual sense; that is, if a product belongs to the ideal, at least one of the factors belongs to the ideal. A prime differential ideal is always a radical differential ideal.
A discovery of Ritt is that, although the classical theory of algebraic ideals does not work for differential ideals, a large part of it can be extended to radical differential ideals, and this makes them fundamental in differential algebra.
The intersection of any family of differential ideals is a differential ideal, and the intersection of any family of radical differential ideals is a radical differential ideal.
It follows that, given a subset {\displaystyle S} of a differential ring, there are three ideals generated by it, which are the intersections of, respectively, all algebraic ideals, all differential ideals, and all radical differential ideals that contain it.
The algebraic ideal generated by {\displaystyle S} is the set of finite linear combinations of elements of {\displaystyle S,} and is commonly denoted as {\displaystyle (S)} or {\displaystyle \langle S\rangle .}
The differential ideal generated by {\displaystyle S} is the set of the finite linear combinations of elements of {\displaystyle S} and of the derivatives of any order of these elements; it is commonly denoted as {\displaystyle [S].} When {\displaystyle S} is finite, {\displaystyle [S]} is generally not finitely generated as an algebraic ideal.
The radical differential ideal generated by {\displaystyle S} is commonly denoted as {\displaystyle \{S\}.} There is no known way to characterize its elements in a similar way as for the two other cases.
== Differential polynomials ==
A differential polynomial over a differential field {\displaystyle K} is a formalization of the concept of differential equation such that the known functions appearing in the equation belong to {\displaystyle K,} and the indeterminates are symbols for the unknown functions.
So, let {\displaystyle K} be a differential field, which is typically (but not necessarily) a field of rational fractions {\displaystyle K(X)=K(x_{1},\ldots ,x_{n})} (fractions of multivariate polynomials), equipped with derivations {\displaystyle \partial _{i}} such that {\displaystyle \partial _{i}x_{i}=1} and {\displaystyle \partial _{i}x_{j}=0} if {\displaystyle i\neq j} (the usual partial derivatives).
For defining the ring {\textstyle K\{Y\}=K\{y_{1},\ldots ,y_{n}\}} of differential polynomials over {\displaystyle K} with indeterminates in {\displaystyle Y=\{y_{1},\ldots ,y_{n}\}} with derivations {\displaystyle \partial _{1},\ldots ,\partial _{n},} one introduces an infinity of new indeterminates of the form {\displaystyle \Delta y_{i},} where {\displaystyle \Delta } is any derivation operator of positive order. With this notation, {\displaystyle K\{Y\}} is the set of polynomials in all these indeterminates, with the natural derivations (each polynomial involves only a finite number of indeterminates). In particular, if {\displaystyle n=1,} one has
{\displaystyle K\{y\}=K\left[y,\partial y,\partial ^{2}y,\partial ^{3}y,\ldots \right].}
Even when {\displaystyle n=1,} a ring of differential polynomials is not Noetherian. This makes the theory of this generalization of polynomial rings difficult. However, two facts allow such a generalization.
Firstly, a finite number of differential polynomials involves together only a finite number of indeterminates. It follows that every property of polynomials that involves a finite number of polynomials remains true for differential polynomials. In particular, greatest common divisors exist, and a ring of differential polynomials is a unique factorization domain.
The second fact is that, if the field {\displaystyle K} contains the field of rational numbers, the rings of differential polynomials over {\displaystyle K} satisfy the ascending chain condition on radical differential ideals. This theorem of Ritt's is implied by its generalization, sometimes called the Ritt–Raudenbush basis theorem, which asserts that if {\displaystyle R} is a Ritt algebra (that is, a differential ring containing the field of rational numbers) that satisfies the ascending chain condition on radical differential ideals, then the ring of differential polynomials {\displaystyle R\{y\}} satisfies the same property (one passes from the univariate to the multivariate case by applying the theorem iteratively).
This Noetherian property implies that, in a ring of differential polynomials, every radical differential ideal I is finitely generated as a radical differential ideal; this means that there exists a finite set S of differential polynomials such that I is the smallest radical differential ideal containing S. This allows representing a radical differential ideal by such a finite set of generators, and computing with these ideals. However, some usual computations of the algebraic case cannot be extended. In particular no algorithm is known for testing membership of an element in a radical differential ideal or the equality of two radical differential ideals.
Another consequence of the Noetherian property is that a radical differential ideal can be uniquely expressed as the intersection of a finite number of prime differential ideals, called essential prime components of the ideal.
== Elimination methods ==
Elimination methods are algorithms that preferentially eliminate a specified set of derivatives from a set of differential equations, commonly done to better understand and solve sets of differential equations.
Categories of elimination methods include characteristic set methods, differential Gröbner bases methods and resultant based methods.
Common operations used in elimination algorithms include 1) ranking derivatives, polynomials, and polynomial sets, 2) identifying a polynomial's leading derivative, initial and separant, 3) polynomial reduction, and 4) creating special polynomial sets.
=== Ranking derivatives ===
The ranking of derivatives is a total order and an admissible order, defined as:
{\textstyle \forall p\in \Theta Y,\ \forall \theta _{\mu }\in \Theta :\theta _{\mu }p>p.}
{\textstyle \forall p,q\in \Theta Y,\ \forall \theta _{\mu }\in \Theta :p\geq q\Rightarrow \theta _{\mu }p\geq \theta _{\mu }q.}
Each derivative has an integer tuple, and a monomial order ranks the derivative by ranking the derivative's integer tuple. The integer tuple identifies the differential indeterminate, the derivative's multi-index and may identify the derivative's order. Types of ranking include:
Orderly ranking:
{\displaystyle \forall y_{i},y_{j}\in Y,\ \forall \theta _{\mu },\theta _{\nu }\in \Theta \ :\ \operatorname {ord} (\theta _{\mu })\geq \operatorname {ord} (\theta _{\nu })\Rightarrow \theta _{\mu }y_{i}\geq \theta _{\nu }y_{j}}
Elimination ranking:
{\displaystyle \forall y_{i},y_{j}\in Y,\ \forall \theta _{\mu },\theta _{\nu }\in \Theta \ :\ y_{i}\geq y_{j}\Rightarrow \theta _{\mu }y_{i}\geq \theta _{\nu }y_{j}}
In this example, the integer tuple identifies the differential indeterminate and the derivative's multi-index, and the lexicographic monomial order, {\textstyle \geq _{\text{lex}}}, determines the derivative's rank.
{\displaystyle \eta (\delta _{1}^{e_{1}}\circ \cdots \circ \delta _{n}^{e_{n}}(y_{j}))=(j,e_{1},\ldots ,e_{n}).}
{\displaystyle \eta (\theta _{\mu }y_{j})\geq _{\text{lex}}\eta (\theta _{\nu }y_{k})\Rightarrow \theta _{\mu }y_{j}\geq \theta _{\nu }y_{k}.}
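Since Python compares tuples lexicographically, this tuple encoding gives a working ranking with no extra machinery; a sketch (the encoding function `eta` mirrors the map above, and the particular tuple layout is our illustrative choice):

```python
def eta(j, exponents):
    """Integer tuple for the derivative d1^e1 o ... o dn^en (y_j)."""
    return (j,) + tuple(exponents)

# Python's built-in tuple comparison IS lexicographic order,
# so comparing tuples directly ranks the derivatives.
assert eta(1, (2, 0)) > eta(1, (1, 5))    # same indeterminate: e1 decides
assert eta(2, (0, 0)) > eta(1, (9, 9))    # indeterminate index dominates
```

Putting the indeterminate index first, as here, yields an elimination-style ranking; putting the total order first instead would give an orderly ranking.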
=== Leading derivative, initial and separant ===
This is the standard polynomial form:
{\displaystyle p=a_{d}\cdot u_{p}^{d}+a_{d-1}\cdot u_{p}^{d-1}+\cdots +a_{1}\cdot u_{p}+a_{0}.}
Leader or leading derivative is the polynomial's highest ranked derivative: {\displaystyle u_{p}}.
Coefficients {\displaystyle a_{d},\ldots ,a_{0}} do not contain the leading derivative {\textstyle u_{p}}.
Degree of polynomial is the leading derivative's greatest exponent: {\displaystyle \deg _{u_{p}}(p)=d}.
Initial is the coefficient: {\displaystyle I_{p}=a_{d}}.
Rank is the leading derivative raised to the polynomial's degree: {\displaystyle u_{p}^{d}}.
Separant is the derivative: {\displaystyle S_{p}={\frac {\partial p}{\partial u_{p}}}}.
Separant set is {\displaystyle S_{A}=\{S_{p}\mid p\in A\}}, initial set is {\displaystyle I_{A}=\{I_{p}\mid p\in A\}} and combined set is {\textstyle H_{A}=S_{A}\cup I_{A}}.
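With a differential polynomial written as a coefficient list in its leader (coefficients from lowest to highest power, a representation we choose for illustration), the degree, initial, and separant fall out directly:

```python
def degree(coeffs):
    """coeffs[k] is the coefficient of u_p**k; leading coefficient nonzero."""
    return len(coeffs) - 1

def initial(coeffs):
    """I_p: coefficient of the highest power of the leader."""
    return coeffs[-1]

def separant(coeffs):
    """S_p = dp/du_p, again as a coefficient list in u_p."""
    return [k * c for k, c in enumerate(coeffs)][1:]

# p = 3*u**2 + 5*u + 7, written in its leading derivative u = u_p
p = [7, 5, 3]
assert degree(p) == 2
assert initial(p) == 3
assert separant(p) == [5, 6]    # S_p = 6u + 5
```

Here the coefficients 3, 5, 7 stand in for expressions in lower-ranked derivatives; only the dependence on the leader matters for these three notions.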
=== Reduction ===
A partially reduced (partial normal form) polynomial {\textstyle q} with respect to a polynomial {\textstyle p} indicates that these polynomials are non-ground field elements, {\textstyle p,q\in {\mathcal {K}}\{Y\}\setminus {\mathcal {K}}}, and that {\displaystyle q} contains no proper derivative of {\displaystyle u_{p}}.
A partially reduced polynomial {\textstyle q} with respect to a polynomial {\textstyle p} becomes a reduced (normal form) polynomial {\textstyle q} with respect to {\textstyle p} if the degree of {\textstyle u_{p}} in {\textstyle q} is less than the degree of {\textstyle u_{p}} in {\textstyle p}.
An autoreduced polynomial set has every polynomial reduced with respect to every other polynomial of the set. Every autoreduced set is finite. An autoreduced set is triangular meaning each polynomial element has a distinct leading derivative.
Ritt's reduction algorithm identifies integers {\textstyle i_{A_{k}},s_{A_{k}}} and transforms a differential polynomial {\textstyle f} using pseudodivision to a lower or equally ranked remainder polynomial {\textstyle f_{\text{red}}} that is reduced with respect to the autoreduced polynomial set {\textstyle A}. The algorithm's first step partially reduces the input polynomial and the algorithm's second step fully reduces the polynomial. The formula for reduction is:
{\displaystyle f_{\text{red}}\equiv \prod _{A_{k}\in A}I_{A_{k}}^{i_{A_{k}}}\cdot S_{A_{k}}^{s_{A_{k}}}\cdot f{\pmod {[A]}}{\text{ with }}i_{A_{k}},s_{A_{k}}\in \mathbb {N} .}
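The pseudodivision primitive underlying this reduction avoids fractions by multiplying through by powers of the divisor's initial. A sketch for ordinary univariate polynomials over the integers (coefficient lists, lowest degree first; full differential reduction additionally handles derivatives and separants, which this sketch omits):

```python
def prem(f, g):
    """Pseudo-remainder of f by g over the integers.
    Polynomials are coefficient lists, lowest degree first; g must be nonzero."""
    f = list(f)
    dg, cg = len(g) - 1, g[-1]
    while len(f) - 1 >= dg:
        cf, shift = f[-1], len(f) - 1 - dg
        f = [cg * c for c in f]            # scale f by the initial of g
        for i, c in enumerate(g):          # subtract cf * x**shift * g
            f[shift + i] -= cf * c
        while f and f[-1] == 0:            # drop the cancelled leading terms
            f.pop()
    return f

# f = x**3 + x + 1, g = 2x + 1: prem = 2**3 * f evaluated mod g = 3
assert prem([1, 1, 0, 1], [1, 2]) == [3]
# exact division leaves an empty (zero) remainder
assert prem([-1, 0, 1], [-1, 1]) == []
```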
=== Ranking polynomial sets ===
Set {\textstyle A} is a differential chain if the rank of the leading derivatives is {\textstyle u_{A_{1}}<\dots <u_{A_{m}}} and {\textstyle \forall i,\ A_{i}} is reduced with respect to {\textstyle A_{i+1}}.
Autoreduced sets {\textstyle A} and {\textstyle B} each contain ranked polynomial elements. This procedure ranks two autoreduced sets by comparing pairs of identically indexed polynomials from both autoreduced sets.
{\displaystyle A_{1}<\cdots <A_{m}\in A} and {\displaystyle B_{1}<\cdots <B_{n}\in B} and {\displaystyle i,j,k\in \mathbb {N} .}
{\displaystyle \operatorname {rank} A<\operatorname {rank} B} if there is a {\displaystyle k\leq \operatorname {minimum} (m,n)} such that {\displaystyle A_{i}=B_{i}} for {\textstyle 1\leq i<k} and {\displaystyle A_{k}<B_{k}}.
{\displaystyle \operatorname {rank} A<\operatorname {rank} B} if {\displaystyle n<m} and {\displaystyle A_{i}=B_{i}} for {\displaystyle 1\leq i\leq n}.
{\displaystyle \operatorname {rank} A=\operatorname {rank} B} if {\displaystyle n=m} and {\displaystyle A_{i}=B_{i}} for {\displaystyle 1\leq i\leq n}.
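The three comparison rules fit in a few lines once each polynomial is abstracted to a comparable rank value; a Python sketch (the helper name `rank_less` is ours, and each set is given as a list of ranks, lowest first):

```python
def rank_less(A, B):
    """rank A < rank B for autoreduced sets given as sorted lists of
    polynomial ranks (any comparable values)."""
    for a, b in zip(A, B):
        if a != b:
            return a < b        # first disagreement decides
    # all compared pairs equal: the longer set (which extends the shorter
    # one further) has the LOWER rank
    return len(B) < len(A)

assert rank_less([1, 3], [1, 4])        # decided at the first disagreement
assert rank_less([1, 3, 5], [1, 3])     # B is a prefix of A, so rank A < rank B
assert not rank_less([1, 3], [1, 3])    # equal ranks
```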
=== Polynomial sets ===
A characteristic set {\textstyle C} is the lowest ranked autoreduced subset among all the ideal's autoreduced subsets whose subset polynomial separants are non-members of the ideal {\textstyle {\mathcal {I}}}.
The delta polynomial applies to a polynomial pair {\textstyle p,q} whose leaders share a common derivative, {\textstyle \theta _{\alpha }u_{p}=\theta _{\beta }u_{q}}. The least common derivative operator for the polynomial pair's leading derivatives is {\textstyle \theta _{pq}}, and the delta polynomial is:
{\displaystyle \operatorname {\Delta -poly} (p,q)=S_{q}\cdot {\frac {\theta _{pq}p}{\theta _{p}}}-S_{p}\cdot {\frac {\theta _{pq}q}{\theta _{q}}}}
A coherent set is a polynomial set that reduces its delta polynomial pairs to zero.
=== Regular system and regular ideal ===
A regular system {\textstyle \Omega } contains an autoreduced and coherent set of differential equations {\textstyle A} and an inequation set {\textstyle H_{\Omega }\supseteq H_{A}} with set {\textstyle H_{\Omega }} reduced with respect to the equation set.
Regular differential ideal {\textstyle {\mathcal {I}}_{\text{dif}}} and regular algebraic ideal {\textstyle {\mathcal {I}}_{\text{alg}}} are saturation ideals that arise from a regular system. Lazard's lemma states that the regular differential and regular algebraic ideals are radical ideals.
Regular differential ideal: {\textstyle {\mathcal {I}}_{\text{dif}}=[A]:H_{\Omega }^{\infty }.}
Regular algebraic ideal: {\textstyle {\mathcal {I}}_{\text{alg}}=(A):H_{\Omega }^{\infty }.}
=== Rosenfeld–Gröbner algorithm ===
The Rosenfeld–Gröbner algorithm decomposes the radical differential ideal as a finite intersection of regular radical differential ideals. These regular differential radical ideals, represented by characteristic sets, are not necessarily prime ideals and the representation is not necessarily minimal.
The membership problem is to determine if a differential polynomial {\textstyle p} is a member of an ideal generated from a set of differential polynomials {\textstyle S}. The Rosenfeld–Gröbner algorithm generates sets of Gröbner bases. The algorithm determines that a polynomial is a member of the ideal if and only if the partially reduced remainder polynomial is a member of the algebraic ideal generated by the Gröbner bases.
The Rosenfeld–Gröbner algorithm facilitates creating Taylor series expansions of solutions to the differential equations.
== Examples ==
=== Differential fields ===
Example 1: {\textstyle (\operatorname {Mer} (\operatorname {f} (y),\partial _{y}))} is the differential meromorphic function field with a single standard derivation.
Example 2: {\textstyle (\mathbb {C} \{y\},p(y)\cdot \partial _{y})} is a differential field with a linear differential operator as the derivation, for any polynomial {\displaystyle p(y)}.
=== Derivation ===
Define {\textstyle E^{a}(p(y))=p(y+a)} as the shift operator {\textstyle E^{a}} for polynomial {\textstyle p(y)}.
A shift-invariant operator {\textstyle T} commutes with the shift operator: {\textstyle E^{a}\circ T=T\circ E^{a}}.
The Pincherle derivative, a derivation of shift-invariant operator {\textstyle T}, is {\textstyle T^{\prime }=T\circ y-y\circ T}.
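The Pincherle derivative can be computed concretely for operators on polynomials; this Python sketch (coefficient lists, lowest degree first; helper names are ours) checks the classical fact that the Pincherle derivative of d/dy is the identity operator:

```python
def D(p):
    """Formal derivative on polynomials (coefficient lists, lowest first)."""
    return [k * c for k, c in enumerate(p)][1:]

def times_y(p):
    """Multiplication-by-y operator."""
    return [0] + list(p)

def pincherle(T):
    """Pincherle derivative T' = T o y - y o T of an operator on polynomials."""
    def Tp(p):
        a, b = T(times_y(p)), times_y(T(p))
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        return [u - v for u, v in zip(a, b)]
    return Tp

p = [1, 2, 3]                  # 1 + 2y + 3y^2
# D(y*p) - y*D(p) = p, so the Pincherle derivative of d/dy is the identity:
assert pincherle(D)(p) == p
```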
=== Constants ===
Ring of integers is
(
Z
.
δ
)
{\displaystyle (\mathbb {Z} .\delta )}
, and every integer is a constant.
The derivation of 1 is zero.
δ
(
1
)
=
δ
(
1
⋅
1
)
=
δ
(
1
)
⋅
1
+
1
⋅
δ
(
1
)
=
2
⋅
δ
(
1
)
⇒
δ
(
1
)
=
0
{\textstyle \delta (1)=\delta (1\cdot 1)=\delta (1)\cdot 1+1\cdot \delta (1)=2\cdot \delta (1)\Rightarrow \delta (1)=0}
.
Also,
δ
(
m
+
1
)
=
δ
(
m
)
+
δ
(
1
)
=
δ
(
m
)
⇒
δ
(
m
+
1
)
=
δ
(
m
)
{\displaystyle \delta (m+1)=\delta (m)+\delta (1)=\delta (m)\Rightarrow \delta (m+1)=\delta (m)}
.
By induction,
δ
(
1
)
=
0
∧
δ
(
m
+
1
)
=
δ
(
m
)
⇒
∀
m
∈
Z
,
δ
(
m
)
=
0
{\displaystyle \delta (1)=0\ \wedge \ \delta (m+1)=\delta (m)\Rightarrow \forall \ m\in \mathbb {Z} ,\ \delta (m)=0}
.
Field of rational numbers is
(
Q
.
δ
)
{\displaystyle (\mathbb {Q} .\delta )}
, and every rational number is a constant.
Every rational number is a quotient of integers.
∀
r
∈
Q
,
∃
a
∈
Z
,
b
∈
Z
/
{
0
}
,
r
=
a
b
{\displaystyle \forall r\in \mathbb {Q} ,\ \exists \ a\in \mathbb {Z} ,\ b\in \mathbb {Z} /\{0\},\ r={\frac {a}{b}}}
Apply the derivation formula for quotients, recognizing that derivations of integers are zero: {\displaystyle \delta (r)=\delta \left({\frac {a}{b}}\right)={\frac {\delta (a)\cdot b-a\cdot \delta (b)}{b^{2}}}=0}.
=== Differential subring ===
Constants form the subring of constants {\textstyle (\mathbb {C} ,\partial _{y})\subset (\mathbb {C} \{y\},\partial _{y})}.
=== Differential ideal ===
Element {\textstyle \exp(y)} simply generates the differential ideal {\textstyle [\exp(y)]} in the differential ring {\textstyle (\mathbb {C} \{y,\exp(y)\},\partial _{y})}.
=== Algebra over a differential ring ===
Any ring with identity is a {\textstyle \mathbb {Z} }-algebra; thus a differential ring is a {\textstyle \mathbb {Z} }-algebra.
If ring {\textstyle {\mathcal {R}}} is a subring of the center of unital ring {\textstyle {\mathcal {M}}}, then {\textstyle {\mathcal {M}}} is an {\textstyle {\mathcal {R}}}-algebra. Thus, a differential ring is an algebra over its differential subring. This is the natural structure of an algebra over its subring.
=== Special and normal polynomials ===
Ring {\textstyle (\mathbb {Q} \{y,z\},\partial _{y})} has irreducible polynomials, {\textstyle p} (normal, squarefree) and {\textstyle q} (special, ideal generator):
{\textstyle \partial _{y}(y)=1,\ \partial _{y}(z)=1+z^{2},\ z=\tan(y)}
{\textstyle p(y)=1+y^{2},\ \partial _{y}(p)=2\cdot y,\ \gcd(p,\partial _{y}(p))=1}
{\textstyle q(z)=1+z^{2},\ \partial _{y}(q)=2\cdot z\cdot (1+z^{2}),\ \gcd(q,\partial _{y}(q))=q}
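The squarefree test gcd(p, ∂(p)) can be checked mechanically. The sketch below (an illustration, not from the source) implements a monic polynomial gcd over ℚ with exact fractions; p = 1 + y² is coprime to its derivative 2y, while q = 1 + z² divides its derivative 2z(1 + z²):

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of polynomial a modulo b (coefficient lists, low → high degree)."""
    a = list(a)
    while len(a) >= len(b) and any(a):
        f = a[-1] / b[-1]                 # cancel the leading coefficient
        for i in range(1, len(b) + 1):
            a[-i] -= f * b[-i]
        while a and a[-1] == 0:
            a.pop()
    return a or [Fraction(0)]

def poly_gcd(a, b):
    """Monic gcd via the Euclidean algorithm."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        a, b = b, poly_rem(a, b)
    return [c / a[-1] for c in a]

# p(y) = 1 + y², ∂y(p) = 2y: gcd is 1, so p is squarefree (normal)
print(poly_gcd([1, 0, 1], [0, 2]))        # → [Fraction(1, 1)], i.e. gcd = 1
# q(z) = 1 + z², ∂y(q) = 2z(1 + z²): gcd is q itself, so q is special
print(poly_gcd([1, 0, 1], [0, 2, 0, 2]))  # → coefficients of 1 + z²
```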
=== Polynomials ===
==== Ranking ====
Ring {\textstyle (\mathbb {Q} \{y_{1},y_{2}\},\delta )} has derivatives {\textstyle \delta (y_{1})=y_{1}^{\prime }} and {\textstyle \delta (y_{2})=y_{2}^{\prime }}. Map each derivative to an integer tuple: {\textstyle \eta (\delta ^{(i_{2})}(y_{i_{1}}))=(i_{1},i_{2})}.
Rank derivatives and integer tuples: {\textstyle y_{2}^{\prime \prime }\ (2,2)>y_{2}^{\prime }\ (2,1)>y_{2}\ (2,0)>y_{1}^{\prime \prime }\ (1,2)>y_{1}^{\prime }\ (1,1)>y_{1}\ (1,0)}.
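Since this ranking is just descending lexicographic comparison of the tuples η(·), it can be realized with ordinary tuple comparison (a small illustration, with ad-hoc string names for the derivatives):

```python
# η maps the derivative δ^(i2)(y_{i1}) to the tuple (i1, i2)
eta = {"y2''": (2, 2), "y2'": (2, 1), "y2": (2, 0),
       "y1''": (1, 2), "y1'": (1, 1), "y1": (1, 0)}

# ranking = descending lexicographic order on the tuples
ranked = sorted(eta, key=eta.get, reverse=True)
print(" > ".join(ranked))   # → y2'' > y2' > y2 > y1'' > y1' > y1
```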
==== Leading derivative and initial ====
The leading derivatives and initials are:
{\textstyle p={\color {Blue}(y_{1}+y_{1}^{\prime })}\cdot ({\color {Red}y_{2}^{\prime \prime }})^{2}+3\cdot y_{1}^{2}\cdot {\color {Red}y_{2}^{\prime \prime }}+(y_{1}^{\prime })^{2}}
{\textstyle q={\color {Blue}(y_{1}+3\cdot y_{1}^{\prime })}\cdot {\color {Red}y_{2}^{\prime \prime }}+y_{1}\cdot y_{2}^{\prime }+(y_{1}^{\prime })^{2}}
{\textstyle r={\color {Blue}(y_{1}+3)}\cdot ({\color {Red}y_{1}^{\prime \prime }})^{2}+y_{1}^{2}\cdot {\color {Red}y_{1}^{\prime \prime }}+2\cdot y_{1}}
==== Separants ====
{\textstyle S_{p}=2\cdot (y_{1}+y_{1}^{\prime })\cdot y_{2}^{\prime \prime }+3\cdot y_{1}^{2}}
{\textstyle S_{q}=y_{1}+3\cdot y_{1}^{\prime }}
{\textstyle S_{r}=2\cdot (y_{1}+3)\cdot y_{1}^{\prime \prime }+y_{1}^{2}}
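Initial and separant can be read off mechanically once a polynomial is written in its leading derivative. A sketch (an illustration, not from the source) for r, with v standing for the leading derivative y1'' and inner coefficients stored as polynomials in y1:

```python
# r = 2·y1 + y1²·v + (y1 + 3)·v², with v = y1'' the leading derivative.
# Outer list: degree in v; inner lists: polynomials in y1 (low → high degree).
r = [[0, 2],        # 2·y1        (v⁰ term)
     [0, 0, 1],     # y1²         (v¹ term)
     [3, 1]]        # y1 + 3      (v² term)

initial = r[-1]     # leading coefficient with respect to v
separant = [[k * c for c in coeff] for k, coeff in enumerate(r)][1:]   # ∂r/∂v

print(initial)      # → [3, 1], i.e. y1 + 3
print(separant)     # → [[0, 0, 1], [6, 2]], i.e. y1² + 2·(y1 + 3)·v
```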
==== Autoreduced sets ====
Autoreduced sets are {\textstyle \{p,r\}} and {\textstyle \{q,r\}}. Each set is triangular with a distinct polynomial leading derivative.
The non-autoreduced set {\textstyle \{p,q\}} contains only partially reduced {\textstyle p} with respect to {\textstyle q}; this set is non-triangular because the polynomials have the same leading derivative.
== Applications ==
=== Symbolic integration ===
Symbolic integration uses algorithms involving polynomials and their derivatives, such as Hermite reduction, the Czichowski algorithm, the Lazard–Rioboo–Trager algorithm, the Horowitz–Ostrogradsky algorithm, squarefree factorization, and splitting factorization into special and normal polynomials.
=== Differential equations ===
Differential algebra can determine if a set of differential polynomial equations has a solution. A total order ranking may identify algebraic constraints. An elimination ranking may determine if one or a selected group of independent variables can express the differential equations. Using triangular decomposition and elimination order, it may be possible to solve the differential equations one differential indeterminate at a time in a step-wise method. Another approach is to create a class of differential equations with a known solution form; matching a differential equation to its class identifies the equation's solution. Methods are available to facilitate the numerical integration of a differential-algebraic system of equations.
In a study of non-linear dynamical systems with chaos, researchers used differential elimination to reduce differential equations to ordinary differential equations involving a single state variable. They were successful in most cases, and this facilitated developing approximate solutions, efficiently evaluating chaos, and constructing Lyapunov functions. Researchers have applied differential elimination to understanding cellular biology, compartmental biochemical models, parameter estimation and quasi-steady state approximation (QSSA) for biochemical reactions. Using differential Gröbner bases, researchers have investigated non-classical symmetry properties of non-linear differential equations. Other applications include control theory, model theory, and algebraic geometry. Differential algebra also applies to differential-difference equations.
== Algebras with derivations ==
=== Differential graded vector space ===
A {\textstyle \mathbb {Z} }-graded vector space {\textstyle V_{\bullet }} is a collection of vector spaces {\textstyle V_{m}} with integer degree {\textstyle |v|=m} for {\textstyle v\in V_{m}}. A direct sum can represent this graded vector space: {\displaystyle V_{\bullet }=\bigoplus _{m\in \mathbb {Z} }V_{m}}
A differential graded vector space, or chain complex, is a graded vector space {\textstyle V_{\bullet }} with a differential map or boundary map {\textstyle d_{m}:V_{m}\to V_{m-1}} satisfying {\displaystyle d_{m}\circ d_{m+1}=0}.
A cochain complex is a graded vector space {\textstyle V^{\bullet }} with a differential map or coboundary map {\textstyle d_{m}:V_{m}\to V_{m+1}} satisfying {\displaystyle d_{m+1}\circ d_{m}=0}.
=== Differential graded algebra ===
A differential graded algebra is a graded algebra {\textstyle A} with a linear derivation {\textstyle d:A\to A} satisfying {\displaystyle d\circ d=0} and the graded Leibniz product rule.
Graded Leibniz product rule: {\displaystyle \forall a,b\in A,\ d(a\cdot b)=d(a)\cdot b+(-1)^{|a|}\cdot a\cdot d(b)} with {\displaystyle |a|} the degree of vector {\displaystyle a}.
=== Lie algebra ===
A Lie algebra is a finite-dimensional real or complex vector space {\textstyle {\mathfrak {g}}} with a bilinear bracket operator {\textstyle [\cdot ,\cdot ]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} satisfying skew symmetry and the Jacobi identity.
Skew symmetry: {\displaystyle [X,Y]=-[Y,X]}
Jacobi identity: {\displaystyle [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0} for all {\displaystyle X,Y,Z\in {\mathfrak {g}}}.
The adjoint operator, {\textstyle \operatorname {ad} _{X}(Y)=[Y,X]}, is a derivation of the bracket because the adjoint's effect on the binary bracket operation is analogous to the derivation's effect on the binary product operation. This is the inner derivation determined by {\textstyle X}.
{\displaystyle \operatorname {ad} _{X}([Y,Z])=[\operatorname {ad} _{X}(Y),Z]+[Y,\operatorname {ad} _{X}(Z)]}
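This derivation property can be spot-checked with 2×2 matrices, whose commutator makes them a Lie algebra (an illustrative sketch; the identity holds for either sign convention of ad):

```python
N = 2
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]
def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(N)] for i in range(N)]
def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(N)] for i in range(N)]
def br(A, B):                       # the commutator bracket [A, B] = AB − BA
    return sub(mul(A, B), mul(B, A))

X = [[1, 2], [0, 3]]
Y = [[0, 1], [1, 0]]
Z = [[2, 0], [5, 1]]
ad = br                             # here ad_X(Y) = [X, Y]; the convention [Y, X] works equally

lhs = ad(X, br(Y, Z))
rhs = add(br(ad(X, Y), Z), br(Y, ad(X, Z)))
print(lhs == rhs)                   # → True: ad_X is a derivation of the bracket
```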
The universal enveloping algebra {\textstyle U({\mathfrak {g}})} of Lie algebra {\textstyle {\mathfrak {g}}} is a maximal associative algebra with identity, generated by the elements of {\textstyle {\mathfrak {g}}} and containing products defined by the bracket operation. Maximal means that a linear homomorphism maps the universal algebra to any other algebra that otherwise has these properties. The adjoint operator is a derivation following the Leibniz product rule.
Product in {\displaystyle U({\mathfrak {g}})}: {\displaystyle X\cdot Y-Y\cdot X=[X,Y]}
Leibniz product rule: {\displaystyle \operatorname {ad} _{X}(Y\cdot Z)=\operatorname {ad} _{X}(Y)\cdot Z+Y\cdot \operatorname {ad} _{X}(Z)} for all {\displaystyle X,Y,Z\in U({\mathfrak {g}})}.
=== Weyl algebra ===
The Weyl algebra is an algebra {\textstyle A_{n}(K)} over a ring {\textstyle K[p_{1},q_{1},\dots ,p_{n},q_{n}]} with a specific noncommutative product: {\displaystyle p_{i}\cdot q_{i}-q_{i}\cdot p_{i}=1,\ i\in \{1,\dots ,n\}}.
All other indeterminate products are commutative for {\textstyle i,j\in \{1,\dots ,n\}}: {\displaystyle p_{i}\cdot q_{j}-q_{j}\cdot p_{i}=0{\text{ if }}i\neq j,\ p_{i}\cdot p_{j}-p_{j}\cdot p_{i}=0,\ q_{i}\cdot q_{j}-q_{j}\cdot q_{i}=0}.
A Weyl algebra can represent the derivations for a commutative ring's polynomials {\textstyle f\in K[y_{1},\ldots ,y_{n}]}. The Weyl algebra's elements are endomorphisms, the elements {\textstyle p_{1},\ldots ,p_{n}} function as standard derivations, and map compositions generate linear differential operators. The D-module is a related approach for understanding differential operators. The endomorphisms are: {\displaystyle q_{j}(y_{k})=y_{j}\cdot y_{k},\ q_{j}(c)=c\cdot y_{j}{\text{ with }}c\in K,\ p_{j}(y_{j})=1,\ p_{j}(y_{k})=0{\text{ if }}j\neq k,\ p_{j}(c)=0{\text{ with }}c\in K}
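These endomorphisms can be modeled directly on polynomials in two variables (an illustrative sketch, with polynomials stored as exponent-tuple → coefficient dictionaries); the commutator [p1, q1] acts as the identity while the mixed commutator [p1, q2] vanishes:

```python
# Polynomials in y1, y2 as {(e1, e2): coefficient} dictionaries.
def p_op(i, f):     # p_i acts as the standard derivation ∂/∂y_i
    out = {}
    for e, c in f.items():
        if e[i]:
            e2 = list(e); e2[i] -= 1
            out[tuple(e2)] = out.get(tuple(e2), 0) + c * e[i]
    return out

def q_op(i, f):     # q_i acts as multiplication by y_i
    return {tuple(v + (1 if j == i else 0) for j, v in enumerate(e)): c
            for e, c in f.items()}

def sub(f, g):      # difference of polynomials, dropping zero terms
    out = dict(f)
    for e, c in g.items():
        out[e] = out.get(e, 0) - c
    return {e: c for e, c in out.items() if c}

f = {(1, 2): 3, (0, 0): 5}   # 3·y1·y2² + 5
print(sub(p_op(0, q_op(0, f)), q_op(0, p_op(0, f))))   # → f itself: [p1, q1] = 1
print(sub(p_op(0, q_op(1, f)), q_op(1, p_op(0, f))))   # → {}: [p1, q2] = 0
```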
=== Pseudodifferential operator ring ===
The associative, possibly noncommutative ring {\textstyle A} has derivation {\textstyle d:A\to A}.
The pseudo-differential operator ring {\textstyle A((\partial ^{-1}))} is a left {\textstyle A}-module containing ring elements {\textstyle L}: {\displaystyle a_{i}\in A,\ i,i_{\min }\in \mathbb {N} ,\ |i_{\min }|>0\ :\ L=\sum _{i\geq i_{\min }}^{n}a_{i}\cdot \partial ^{i}}
The derivative operator is {\textstyle d(a)=\partial \circ a-a\circ \partial }.
The binomial coefficient is {\displaystyle {\Bigl (}{i \atop k}{\Bigr )}}.
Pseudo-differential operator multiplication is: {\displaystyle \sum _{i\geq i_{\min }}^{n}a_{i}\cdot \partial ^{i}\cdot \sum _{j\geq j_{\min }}^{m}b_{j}\cdot \partial ^{j}=\sum _{i,j;k\geq 0}{\Bigl (}{i \atop k}{\Bigr )}\cdot a_{i}\cdot d^{k}(b_{j})\cdot \partial ^{i+j-k}}
== Open problems ==
The Ritt problem asks whether there is an algorithm that determines if one prime differential ideal contains a second prime differential ideal, when characteristic sets identify both ideals.
The Kolchin catenary conjecture states that, given a {\textstyle d>0} dimensional irreducible differential algebraic variety {\textstyle V} and an arbitrary point {\textstyle p\in V}, a long gap chain of irreducible differential algebraic subvarieties occurs from {\textstyle p} to {\textstyle V}.
The Jacobi bound conjecture concerns the upper bound for the order of a differential variety's irreducible component. The polynomials' orders determine a Jacobi number, and the conjecture is that the Jacobi number determines this bound.
== See also ==
Arithmetic derivative – Function defined on integers in number theory
Difference algebra
Differential algebraic geometry
Differential calculus over commutative algebras – part of commutative algebra
Differential Galois theory – Study of Galois symmetry groups of differential fields
Differentially closed field
Differential graded algebra – Algebraic structure in homological algebra
D-module – Module over a sheaf of differential operators
Hardy field – Mathematical concept
Kähler differential – Differential form in commutative algebra
Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions
Picard–Vessiot theory – Study of differential field extensions induced by linear differential equations
Kolchin's problems
== Citations ==
== References ==
== External links ==
David Marker's home page has several online surveys discussing differential fields.
In order theory, a field of mathematics, an incidence algebra is an associative algebra, defined for every locally finite partially ordered set
and commutative ring with unity. Subalgebras called reduced incidence algebras give a natural construction of various types of generating functions used in combinatorics and number theory.
== Definition ==
A locally finite poset is one in which every closed interval
[a, b] = {x : a ≤ x ≤ b}
is finite.
The members of the incidence algebra are the functions f assigning to each nonempty interval [a, b] a scalar f(a, b), which is taken from the ring of scalars, a commutative ring with unity. On this underlying set one defines addition and scalar multiplication pointwise, and "multiplication" in the incidence algebra is a convolution defined by
{\displaystyle (f*g)(a,b)=\sum _{a~\leq ~x~\leq ~b}f(a,x)g(x,b).}
An incidence algebra is finite-dimensional if and only if the underlying poset is finite.
=== Related concepts ===
An incidence algebra is analogous to a group algebra; indeed, both the group algebra and the incidence algebra are special cases of a category algebra, defined analogously; groups and posets being special kinds of categories.
==== Upper-triangular matrices ====
Consider the case of a partial order ≤ over any n-element set S. We enumerate S as s1, …, sn, and in such a way that the enumeration is compatible with the order ≤ on S, that is, si ≤ sj implies i ≤ j, which is always possible.
Then, functions f as above, from intervals to scalars, can be thought of as matrices Aij, where Aij = f(si, sj) whenever i ≤ j, and Aij = 0 otherwise. Since we arranged S in a way consistent with the usual order on the indices of the matrices, they will appear as upper-triangular matrices with a prescribed zero-pattern determined by the incomparable elements in S under ≤.
The incidence algebra of ≤ is then isomorphic to the algebra of upper-triangular matrices with this prescribed zero-pattern and arbitrary (including possibly zero) scalar entries everywhere else, with the operations being ordinary matrix addition, scaling and multiplication.
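A small sketch (illustrative, using the divisor poset {1, 2, 3, 6}) showing that convolution in the incidence algebra matches multiplication of the corresponding upper-triangular matrices:

```python
P = [1, 2, 3, 6]                      # already enumerated compatibly with divisibility
leq = lambda a, b: b % a == 0

def to_matrix(f):                     # A_ij = f(s_i, s_j) when s_i ≤ s_j, else 0
    return [[f(a, b) if leq(a, b) else 0 for b in P] for a in P]

def matmul(A, B):
    n = len(P)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def conv(f, g):                       # incidence-algebra convolution
    return lambda a, b: sum(f(a, x) * g(x, b) for x in P if leq(a, x) and leq(x, b))

zeta = lambda a, b: 1
# convolution corresponds to ordinary matrix multiplication
print(to_matrix(conv(zeta, zeta)) == matmul(to_matrix(zeta), to_matrix(zeta)))   # → True
```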
== Special elements ==
The multiplicative identity element of the incidence algebra is the delta function, defined by
{\displaystyle \delta (a,b)={\begin{cases}1&{\text{if }}a=b,\\0&{\text{if }}a\neq b.\end{cases}}}
The zeta function of an incidence algebra is the constant function ζ(a, b) = 1 for every nonempty interval [a, b]. Multiplying by ζ is analogous to integration.
One can show that ζ is invertible in the incidence algebra (with respect to the convolution defined above). (Generally, a member h of the incidence algebra is invertible if and only if h(x, x) is invertible for every x.) The multiplicative inverse of the zeta function is the Möbius function μ(a, b); every value of μ(a, b) is an integral multiple of 1 in the base ring.
The Möbius function can also be defined inductively by the following relation:
{\displaystyle \mu (x,y)={\begin{cases}{}\qquad 1&{\text{if }}x=y\\[6pt]\displaystyle -\!\!\!\!\sum _{z\,:\,x\,\leq \,z\,<\,y}\mu (x,z)&{\text{for }}x<y\\{}\qquad 0&{\text{otherwise }}.\end{cases}}}
Multiplying by μ is analogous to differentiation, and is called Möbius inversion.
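The inductive definition translates directly into code. A sketch (illustrative, on the divisors of 12 ordered by divisibility) recovers the classical values μ(6) = 1 and μ(12) = 0 and confirms that ζ ∗ μ is the delta function:

```python
from itertools import product

P = [1, 2, 3, 4, 6, 12]               # divisors of 12 under divisibility
leq = lambda a, b: b % a == 0

def conv(f, g):                       # (f*g)(a,b) = Σ_{a≤x≤b} f(a,x)·g(x,b)
    return lambda a, b: sum(f(a, x) * g(x, b) for x in P if leq(a, x) and leq(x, b))

delta = lambda a, b: 1 if a == b else 0
zeta = lambda a, b: 1

def mu(a, b):                         # the inductive definition above
    if a == b:
        return 1
    return -sum(mu(a, z) for z in P if leq(a, z) and leq(z, b) and z != b)

print(mu(1, 6), mu(1, 12))            # → 1 0
print(all(conv(zeta, mu)(a, b) == delta(a, b)
          for a, b in product(P, P) if leq(a, b)))   # → True: μ is the inverse of ζ
```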
The square of the zeta function gives the number of elements in an interval:
{\displaystyle \zeta ^{2}(x,y)=\sum _{z\in [x,y]}\zeta (x,z)\,\zeta (z,y)=\sum _{z\in [x,y]}1=\#[x,y].}
== Examples ==
Positive integers ordered by divisibility
The convolution associated to the incidence algebra for intervals [1, n] becomes the Dirichlet convolution, hence the Möbius function is μ(a, b) = μ(b/a), where the second "μ" is the classical Möbius function introduced into number theory in the 19th century.
Finite subsets of some set E, ordered by inclusion
The Möbius function is {\displaystyle \mu (S,T)=(-1)^{\left|T\smallsetminus S\right|}}
whenever S and T are finite subsets of E with S ⊆ T, and Möbius inversion is called the principle of inclusion-exclusion.
Geometrically, this is a hypercube: {\displaystyle 2^{E}=\{0,1\}^{E}.}
Natural numbers with their usual order
The Möbius function is {\displaystyle \mu (x,y)={\begin{cases}1&{\text{if }}y=x,\\-1&{\text{if }}y=x+1,\\0&{\text{if }}y>x+1,\end{cases}}}
and Möbius inversion is called the (backwards) difference operator.
Geometrically, this corresponds to the discrete number line.
The convolution of functions in the incidence algebra corresponds to multiplication of formal power series: see the discussion of reduced incidence algebras below. The Möbius function corresponds to the sequence (1, −1, 0, 0, 0, ... ) of coefficients of the formal power series 1 − t, and the zeta function corresponds to the sequence of coefficients (1, 1, 1, 1, ...) of the formal power series
{\displaystyle (1-t)^{-1}=1+t+t^{2}+t^{3}+\cdots }, which is its inverse. The delta function in this incidence algebra similarly corresponds to the formal power series 1.
Finite sub-multisets of some multiset E, ordered by inclusion
The above three examples can be unified and generalized by considering a multiset E, and finite sub-multisets S and T of E. The Möbius function is
{\displaystyle \mu (S,T)={\begin{cases}0&{\text{if }}T\smallsetminus S{\text{ is a proper multiset (has repeated elements)}}\\(-1)^{\left|T\smallsetminus S\right|}&{\text{if }}T\smallsetminus S{\text{ is a set (has no repeated elements)}}.\end{cases}}}
This generalizes the positive integers ordered by divisibility by a positive integer corresponding to its multiset of prime factors with multiplicity, e.g., 12 corresponds to the multiset
{\displaystyle \{2,2,3\}.}
This generalizes the natural numbers with their usual order by a natural number corresponding to a multiset of one underlying element and cardinality equal to that number, e.g., 3 corresponds to the multiset
{\displaystyle \{1,1,1\}.}
Subgroups of a finite p-group G, ordered by inclusion
The Möbius function is {\displaystyle \mu _{G}(H_{1},H_{2})=(-1)^{k}p^{\binom {k}{2}}} if {\displaystyle H_{1}} is a normal subgroup of {\displaystyle H_{2}} and {\displaystyle H_{2}/H_{1}\cong (\mathbb {Z} /p\mathbb {Z} )^{k}}, and it is 0 otherwise. This is a theorem of Weisner (1935).
Partitions of a set
Partially order the set of all partitions of a finite set by saying σ ≤ τ if σ is a finer partition than τ. In particular, let τ have t blocks which respectively split into s1, ..., st finer blocks of σ, which has a total of s = s1 + ⋅⋅⋅ + st blocks. Then the Möbius function is:
{\displaystyle \mu (\sigma ,\tau )=(-1)^{s-t}(s_{1}{-}1)!\dots (s_{t}{-}1)!}
== Euler characteristic ==
A poset is bounded if it has smallest and largest elements, which we call 0 and 1 respectively (not to be confused with the 0 and 1 of the ring of scalars). The Euler characteristic of a bounded finite poset is μ(0,1). The reason for this terminology is the following: If P has a 0 and 1, then μ(0,1) is the reduced Euler characteristic of the simplicial complex whose faces are chains in P \ {0, 1}. This can be shown using Philip Hall's theorem, relating the value of μ(0,1) to the number of chains of length i.
== Reduced incidence algebras ==
The reduced incidence algebra consists of functions which assign the same value to any two intervals which are equivalent in an appropriate sense, usually meaning isomorphic as posets. This is a subalgebra of the incidence algebra, and it clearly contains the incidence algebra's identity element and zeta function. Any element of the reduced incidence algebra that is invertible in the larger incidence algebra has its inverse in the reduced incidence algebra. Thus the Möbius function is also in the reduced incidence algebra.
Reduced incidence algebras were introduced by Doubilet, Rota, and Stanley to give a natural construction of various rings of generating functions.
=== Natural numbers and ordinary generating functions ===
For the poset {\displaystyle (\mathbb {N} ,\leq ),} the reduced incidence algebra consists of functions {\displaystyle f(a,b)} invariant under translation, {\displaystyle f(a+k,b+k)=f(a,b)} for all {\displaystyle k\geq 0,} so as to have the same value on isomorphic intervals [a+k, b+k] and [a, b]. Let t denote the function with t(a, a+1) = 1 and t(a, b) = 0 otherwise, a kind of invariant delta function on isomorphism classes of intervals. Its powers in the incidence algebra are the other invariant delta functions t^n(a, a+n) = 1 and t^n(x, y) = 0 otherwise. These form a basis for the reduced incidence algebra, and we may write any invariant function as {\displaystyle \textstyle f=\sum _{n\geq 0}f(0,n)t^{n}}. This notation makes clear the isomorphism between the reduced incidence algebra and the ring of formal power series {\displaystyle R[[t]]} over the scalars R, also known as the ring of ordinary generating functions. We may write the zeta function as
{\displaystyle \zeta =1+t+t^{2}+\cdots ={\tfrac {1}{1-t}},} the reciprocal of the Möbius function {\displaystyle \mu =1-t.}
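Under this isomorphism, convolution of translation-invariant functions becomes the Cauchy product of their coefficient sequences (a small sketch, not from the source): ζ ∗ μ corresponds to (1 + t + t² + ⋯)(1 − t) = 1.

```python
# Translation-invariant f ↔ sequence f_n = f(0, n); convolution ↔ Cauchy product.
def conv(f, g):
    n_max = min(len(f), len(g))
    return [sum(f[k] * g[n - k] for k in range(n + 1)) for n in range(n_max)]

zeta = [1] * 6              # 1 + t + t² + ⋯ (truncated)
mu = [1, -1, 0, 0, 0, 0]    # 1 − t
print(conv(zeta, mu))       # → [1, 0, 0, 0, 0, 0], the delta function / power series 1
```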
=== Subset poset and exponential generating functions ===
For the Boolean poset of finite subsets {\displaystyle S\subset \{1,2,3,\ldots \}} ordered by inclusion {\displaystyle S\subset T}, the reduced incidence algebra consists of invariant functions {\displaystyle f(S,T),} defined to have the same value on isomorphic intervals [S,T] and [S′,T′] with |T \ S| = |T′ \ S′|. Again, let t denote the invariant delta function with t(S,T) = 1 for |T \ S| = 1 and t(S,T) = 0 otherwise. Its powers are:
{\displaystyle t^{n}(S,T)=\,\sum t(T_{0},T_{1})\,t(T_{1},T_{2})\dots t(T_{n-1},T_{n})=\left\{{\begin{array}{cl}n!&{\text{if}}\,\,|T\smallsetminus S|=n\\0&{\text{otherwise,}}\end{array}}\right.}
where the sum is over all chains {\displaystyle S=T_{0}\subset T_{1}\subset \cdots \subset T_{n}=T,} and the only non-zero terms occur for saturated chains with {\displaystyle |T_{i}\smallsetminus T_{i-1}|=1;} since these correspond to permutations of n, we get the unique non-zero value n!. Thus, the invariant delta functions are the divided powers {\displaystyle {\tfrac {t^{n}}{n!}},}
and we may write any invariant function as
{\displaystyle \textstyle f=\sum _{n\geq 0}f(\emptyset ,[n]){\frac {t^{n}}{n!}},}
where [n] = {1, . . . , n}. This gives a natural isomorphism between the reduced incidence algebra and the ring of exponential generating functions. The zeta function is
{\displaystyle \textstyle \zeta =\sum _{n\geq 0}{\frac {t^{n}}{n!}}=\exp(t),}
with Möbius function:
{\displaystyle \mu ={\frac {1}{\zeta }}=\exp(-t)=\sum _{n\geq 0}(-1)^{n}{\frac {t^{n}}{n!}}.}
Indeed, this computation with formal power series proves that
{\displaystyle \mu (S,T)=(-1)^{|T\smallsetminus S|}.}
Many combinatorial counting sequences involving subsets or labeled objects can be interpreted in terms of the reduced incidence algebra, and computed using exponential generating functions.
=== Divisor poset and Dirichlet series ===
Consider the poset D of positive integers ordered by divisibility, denoted by
{\displaystyle a\,|\,b.}
The reduced incidence algebra consists of functions {\displaystyle f(a,b)} that are invariant under multiplication: {\displaystyle f(ka,kb)=f(a,b)} for all {\displaystyle k\geq 1.}
(This multiplicative equivalence of intervals is a much stronger relation than poset isomorphism; e.g., for primes p, the two-element intervals [1,p] are all inequivalent.) For an invariant function, f(a,b) depends only on b/a, so a natural basis consists of invariant delta functions
{\displaystyle \delta _{n}} defined by {\displaystyle \delta _{n}(a,b)=1} if b/a = n and 0 otherwise; then any invariant function can be written {\displaystyle \textstyle f=\sum _{n\geq 1}f(1,n)\,\delta _{n}.}
The product of two invariant delta functions is:
{\displaystyle (\delta _{n}\delta _{m})(a,b)=\sum _{a|c|b}\delta _{n}(a,c)\,\delta _{m}(c,b)=\delta _{nm}(a,b),}
since the only non-zero term comes from c = na and b = mc = nma. Thus, we get an isomorphism from the reduced incidence algebra to the ring of formal Dirichlet series by sending
{\displaystyle \delta _{n}} to {\displaystyle n^{-s}\!,} so that f corresponds to {\textstyle \sum _{n\geq 1}{\frac {f(1,n)}{n^{s}}}.}
The incidence algebra zeta function ζD(a,b) = 1 corresponds to the classical Riemann zeta function
{\displaystyle \zeta (s)=\textstyle \sum _{n\geq 1}{\frac {1}{n^{s}}},}
having reciprocal
{\textstyle {\frac {1}{\zeta (s)}}=\sum _{n\geq 1}{\frac {\mu (n)}{n^{s}}},}
where
{\displaystyle \mu (n)=\mu _{D}(1,n)}
is the classical Möbius function of number theory. Many other arithmetic functions arise naturally within the reduced incidence algebra, and equivalently in terms of Dirichlet series. For example, the divisor function
{\displaystyle \sigma _{0}(n)} is the square of the zeta function, {\displaystyle \sigma _{0}(n)=\zeta ^{2}\!(1,n),} a special case of the above result that {\displaystyle \zeta ^{2}\!(x,y)} gives the number of elements in the interval [x,y]; equivalently, {\textstyle \zeta (s)^{2}=\sum _{n\geq 1}{\frac {\sigma _{0}(n)}{n^{s}}}.}
The product structure of the divisor poset facilitates the computation of its Möbius function. Unique factorization into primes implies D is isomorphic to an infinite Cartesian product
{\displaystyle \mathbb {N} \times \mathbb {N} \times \dots }, with the order given by coordinatewise comparison: {\displaystyle n=p_{1}^{e_{1}}p_{2}^{e_{2}}\dots }, where {\displaystyle p_{k}} is the kth prime, corresponds to its sequence of exponents {\displaystyle (e_{1},e_{2},\dots ).}
Now the Möbius function of D is the product of the Möbius functions for the factor posets, computed above, giving the classical formula:
{\displaystyle \mu (n)=\mu _{D}(1,n)=\prod _{k\geq 1}\mu _{\mathbb {N} }(0,e_{k})\,=\,\left\{{\begin{array}{cl}(-1)^{d}&{\text{for }}n{\text{ squarefree with }}d{\text{ prime factors}}\\0&{\text{otherwise.}}\end{array}}\right.}
The product structure also explains the classical Euler product for the zeta function. The zeta function of D corresponds to a Cartesian product of zeta functions of the factors, computed above as
{\textstyle {\frac {1}{1-t}},} so that {\textstyle \zeta _{D}\cong \prod _{k\geq 1}\!{\frac {1}{1-t}},} where the right side is a Cartesian product. Applying the isomorphism which sends t in the kth factor to {\displaystyle p_{k}^{-s}}, we obtain the usual Euler product.
== See also ==
Graph algebra
Incidence coalgebra
Path algebra
== Literature ==
Incidence algebras of locally finite posets were treated in a number of papers of Gian-Carlo Rota beginning in 1964, and by many later combinatorialists. Rota's 1964 paper was:
Rota, Gian-Carlo (1964), "On the Foundations of Combinatorial Theory I: Theory of Möbius Functions", Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 2 (4): 340–368, doi:10.1007/BF00531932, S2CID 121334025
N. Jacobson, Basic Algebra. I, W. H. Freeman and Co., 1974. See section 8.6 for a treatment of Mobius functions on posets
== Further reading ==
Spiegel, Eugene; O'Donnell, Christopher J. (1997), Incidence algebras, Pure and Applied Mathematics, vol. 206, Marcel Dekker, ISBN 0-8247-0036-8
In the theory of algebras over a field, mutation is a construction of a new binary operation related to the multiplication of the algebra. In specific cases the resulting algebra may be referred to as a homotope or an isotope of the original.
== Definitions ==
Let A be an algebra over a field F with multiplication (not assumed to be associative) denoted by juxtaposition. For an element a of A, define the left a-homotope
{\displaystyle A(a)} to be the algebra with multiplication {\displaystyle x*y=(xa)y.\,}
Similarly define the left (a,b) mutation
{\displaystyle A(a,b)} by {\displaystyle x*y=(xa)y-(yb)x.\,}
Right homotope and mutation are defined analogously. Since the right (p,q) mutation of A is the left (−q, −p) mutation of the opposite algebra to A, it suffices to study left mutations.
If A is a unital algebra and a is invertible, we refer to the homotope as the isotope by a.
== Properties ==
If A is associative then so is any homotope of A, and any mutation of A is Lie-admissible.
If A is alternative then so is any homotope of A, and any mutation of A is Malcev-admissible.
Any isotope of a Hurwitz algebra is isomorphic to the original.
A homotope of a Bernstein algebra by an element of non-zero weight is again a Bernstein algebra.
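The Lie-admissibility in the first property can be illustrated numerically. The sketch below (an illustration with names of its own choosing, not part of the theory above) forms a left (a, b)-mutation product in the associative algebra of 2×2 real matrices and checks that its commutator satisfies the Jacobi identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Work in the associative algebra of 2x2 real matrices.
a, b = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

def mut(x, y):
    """Left (a, b)-mutation product: x*y = (xa)y - (yb)x."""
    return (x @ a) @ y - (y @ b) @ x

def bracket(x, y):
    """Commutator of the mutation product."""
    return mut(x, y) - mut(y, x)

# Lie-admissibility: the commutator should satisfy the Jacobi identity.
x, y, z = (rng.standard_normal((2, 2)) for _ in range(3))
jac = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
       + bracket(z, bracket(x, y)))
print(np.allclose(jac, 0))  # → True
```

Here the bracket reduces to x(a+b)y − y(a+b)x, the commutator of an associative homotope product, which is why the Jacobi identity holds exactly (up to floating point).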
== Jordan algebras ==
A Jordan algebra is a commutative algebra satisfying the Jordan identity
{\displaystyle (xy)(xx)=x(y(xx))}. The Jordan triple product is defined by
{\displaystyle \{a,b,c\}=(ab)c+(cb)a-(ac)b.}
For y in A the mutation or homotope Ay is defined as the vector space A with multiplication
{\displaystyle a\circ b=\{a,y,b\},}
and if y is invertible this is referred to as an isotope. A homotope of a Jordan algebra is again a Jordan algebra: isotopy defines an equivalence relation. If y is nuclear then the isotope by y is isomorphic to the original.
== References ==
Elduque, Alberto; Myung, Hyo Chyl (1994). Mutations of Alternative Algebras. Mathematics and Its Applications. Vol. 278. Springer-Verlag. ISBN 0792327357.
Jacobson, Nathan (1996). Finite-dimensional division algebras over fields. Berlin: Springer-Verlag. ISBN 3-540-57029-2. Zbl 0874.16002.
Koecher, Max (1999) [1962]. Krieg, Aloys; Walcher, Sebastian (eds.). The Minnesota Notes on Jordan Algebras and Their Applications. Lecture Notes in Mathematics. Vol. 1710 (reprint ed.). Springer-Verlag. ISBN 3-540-66360-6. Zbl 1072.17513.
McCrimmon, Kevin (2004). A taste of Jordan algebras. Universitext. Berlin, New York: Springer-Verlag. doi:10.1007/b97489. ISBN 0-387-95447-3. MR 2014924.
Okubo, Susumu (1995). Introduction to Octonion and Other Non-Associative Algebras in Physics. Montroll Memorial Lecture Series in Mathematical Physics. Cambridge University Press. ISBN 0-521-47215-6. MR 1356224. Archived from the original on 2012-11-16. Retrieved 2014-02-04.
In algebra, Zariski's lemma, proved by Oscar Zariski (1947), states that, if a field K is finitely generated as an associative algebra over another field k, then K is a finite field extension of k (that is, it is also finitely generated as a vector space).
An important application of the lemma is a proof of the weak form of Hilbert's Nullstellensatz: if I is a proper ideal of
{\displaystyle k[t_{1},...,t_{n}]} (k an algebraically closed field), then I has a zero; i.e., there is a point x in {\displaystyle k^{n}} such that {\displaystyle f(x)=0} for all f in I. (Proof: replacing I by a maximal ideal {\displaystyle {\mathfrak {m}}}, we can assume {\displaystyle I={\mathfrak {m}}} is maximal. Let {\displaystyle A=k[t_{1},...,t_{n}]} and {\displaystyle \phi :A\to A/{\mathfrak {m}}} be the natural surjection. By the lemma {\displaystyle A/{\mathfrak {m}}} is a finite extension. Since k is algebraically closed that extension must be k. Then for any {\displaystyle f\in {\mathfrak {m}}}, {\displaystyle f(\phi (t_{1}),\cdots ,\phi (t_{n}))=\phi (f(t_{1},\cdots ,t_{n}))=0}; that is to say, {\displaystyle x=(\phi (t_{1}),\cdots ,\phi (t_{n}))} is a zero of {\displaystyle {\mathfrak {m}}}.)
The lemma may also be understood from the following perspective. In general, a ring R is a Jacobson ring if and only if every finitely generated R-algebra that is a field is finite over R. Thus, the lemma follows from the fact that a field is a Jacobson ring.
== Proofs ==
Two direct proofs are given in Atiyah–MacDonald; one is due to Zariski, and the other uses the Artin–Tate lemma. For Zariski's original proof, see the original paper. Another direct proof in the language of Jacobson rings is given below. The lemma is also a consequence of the Noether normalization lemma. Indeed, by the normalization lemma, K is a finite module over the polynomial ring {\displaystyle k[x_{1},\ldots ,x_{d}]} where {\displaystyle x_{1},\ldots ,x_{d}} are elements of K that are algebraically independent over k. But since K has Krull dimension zero and since an integral ring extension (e.g., a finite ring extension) preserves Krull dimensions, the polynomial ring must have dimension zero; i.e., {\displaystyle d=0}.
The following characterization of a Jacobson ring contains Zariski's lemma as a special case. Recall that a ring is a Jacobson ring if every prime ideal is an intersection of maximal ideals. (When A is a field, A is a Jacobson ring and the theorem below is precisely Zariski's lemma.)
Proof: 2. {\displaystyle \Rightarrow } 1.: Let {\displaystyle {\mathfrak {p}}} be a prime ideal of A and set {\displaystyle B=A/{\mathfrak {p}}}. We need to show the Jacobson radical of B is zero. To that end, let f be a nonzero element of B. Let {\displaystyle {\mathfrak {m}}} be a maximal ideal of the localization {\displaystyle B[f^{-1}]}. Then {\displaystyle B[f^{-1}]/{\mathfrak {m}}} is a field that is a finitely generated A-algebra and so is finite over A by assumption; thus it is finite over {\displaystyle B=A/{\mathfrak {p}}} and so is finite over the subring {\displaystyle B/{\mathfrak {q}}} where {\displaystyle {\mathfrak {q}}={\mathfrak {m}}\cap B}. By integrality, {\displaystyle {\mathfrak {q}}} is a maximal ideal not containing f.
1. {\displaystyle \Rightarrow } 2.: Since a factor ring of a Jacobson ring is Jacobson, we can assume B contains A as a subring. Then the assertion is a consequence of the next algebraic fact:
(*) Let {\displaystyle B\supseteq A} be integral domains such that B is finitely generated as an A-algebra. Then there exists a nonzero a in A such that every ring homomorphism {\displaystyle \phi :A\to K}, K an algebraically closed field, with {\displaystyle \phi (a)\neq 0} extends to {\displaystyle {\widetilde {\phi }}:B\to K}.
Indeed, choose a maximal ideal {\displaystyle {\mathfrak {m}}} of A not containing a. Writing K for some algebraic closure of {\displaystyle A/{\mathfrak {m}}}, the canonical map {\displaystyle \phi :A\to A/{\mathfrak {m}}\hookrightarrow K} extends to {\displaystyle {\widetilde {\phi }}:B\to K}. Since B is a field, {\displaystyle {\widetilde {\phi }}} is injective and so B is algebraic (thus finite algebraic) over {\displaystyle A/{\mathfrak {m}}}. We now prove (*). If B contains an element that is transcendental over A, then it contains a polynomial ring over A to which φ extends (without a requirement on a) and so we can assume B is algebraic over A (by Zorn's lemma, say). Let {\displaystyle x_{1},\dots ,x_{r}} be the generators of B as an A-algebra. Then each {\displaystyle x_{i}} satisfies the relation {\displaystyle a_{i0}x_{i}^{n}+a_{i1}x_{i}^{n-1}+\dots +a_{in}=0,\,\,a_{ij}\in A} where n depends on i and {\displaystyle a_{i0}\neq 0}. Set {\displaystyle a=a_{10}a_{20}\dots a_{r0}}. Then {\displaystyle B[a^{-1}]} is integral over {\displaystyle A[a^{-1}]}. Now given {\displaystyle \phi :A\to K}, we first extend it to {\displaystyle {\widetilde {\phi }}:A[a^{-1}]\to K} by setting {\displaystyle {\widetilde {\phi }}(a^{-1})=\phi (a)^{-1}}. Next, let {\displaystyle {\mathfrak {m}}=\operatorname {ker} {\widetilde {\phi }}}. By integrality, {\displaystyle {\mathfrak {m}}={\mathfrak {n}}\cap A[a^{-1}]} for some maximal ideal {\displaystyle {\mathfrak {n}}} of {\displaystyle B[a^{-1}]}. Then {\displaystyle {\widetilde {\phi }}:A[a^{-1}]\to A[a^{-1}]/{\mathfrak {m}}\to K} extends to {\displaystyle B[a^{-1}]\to B[a^{-1}]/{\mathfrak {n}}\to K}. Restrict the last map to B to finish the proof. {\displaystyle \square }
== Notes ==
== Sources ==
In functional analysis, a branch of mathematics, an operator algebra is an algebra of continuous linear operators on a topological vector space, with the multiplication given by the composition of mappings.
The results obtained in the study of operator algebras are often phrased in algebraic terms, while the techniques used are often highly analytic. Although the study of operator algebras is usually classified as a branch of functional analysis, it has direct applications to representation theory, differential geometry, quantum statistical mechanics, quantum information, and quantum field theory.
== Overview ==
Operator algebras can be used to study arbitrary sets of operators with little algebraic relation simultaneously. From this point of view, operator algebras can be regarded as a generalization of spectral theory of a single operator. In general, operator algebras are non-commutative rings.
An operator algebra is typically required to be closed in a specified operator topology inside the whole algebra of continuous linear operators. In particular, it is a set of operators with both algebraic and topological closure properties. In some disciplines such properties are axiomatized and algebras with certain topological structure become the subject of the research.
Though algebras of operators are studied in various contexts (for example, algebras of pseudo-differential operators acting on spaces of distributions), the term operator algebra is usually used in reference to algebras of bounded operators on a Banach space or, even more specifically, to algebras of operators on a separable Hilbert space endowed with the operator norm topology.
In the case of operators on a Hilbert space, the Hermitian adjoint map on operators gives a natural involution, which provides an additional algebraic structure that can be imposed on the algebra. In this context, the best studied examples are self-adjoint operator algebras, meaning that they are closed under taking adjoints. These include C*-algebras, von Neumann algebras, and AW*-algebras. C*-algebras can be easily characterized abstractly by a condition relating the norm, involution and multiplication. Such abstractly defined C*-algebras can be identified with a certain closed subalgebra of the algebra of continuous linear operators on a suitable Hilbert space. A similar result holds for von Neumann algebras.
Commutative self-adjoint operator algebras can be regarded as the algebra of complex-valued continuous functions on a locally compact space, or that of measurable functions on a standard measurable space. Thus, general operator algebras are often regarded as noncommutative generalizations of these algebras, or as capturing the structure of the base space on which the functions are defined. This point of view is elaborated as the philosophy of noncommutative geometry, which tries to study various non-classical and/or pathological objects by means of noncommutative operator algebras.
Examples of operator algebras that are not self-adjoint include:
nest algebras,
many commutative subspace lattice algebras,
many limit algebras.
== See also ==
Banach algebra – Particular kind of algebraic structure
Matrix mechanics – Formulation of quantum mechanics
Topologies on the set of operators on a Hilbert space
Vertex operator algebra – Algebra used in 2D conformal field theories and string theory
== References ==
== Further reading ==
Blackadar, Bruce (2005). Operator Algebras: Theory of C*-Algebras and von Neumann Algebras. Encyclopaedia of Mathematical Sciences. Springer-Verlag. ISBN 3-540-28486-9.
M. Takesaki, Theory of Operator Algebras I, Springer, 2001.
In idempotent analysis, the tropical semiring is a semiring of extended real numbers with the operations of minimum (or maximum) and addition replacing the usual ("classical") operations of addition and multiplication, respectively.
The tropical semiring has various applications (see tropical analysis), and forms the basis of tropical geometry. The name tropical is a reference to the Hungarian-born computer scientist Imre Simon, so named because he lived and worked in Brazil.
== Definition ==
The min tropical semiring (or min-plus semiring or min-plus algebra) is the semiring ({\displaystyle \mathbb {R} \cup \{+\infty \}}, {\displaystyle \oplus }, {\displaystyle \otimes }), with the operations:
{\displaystyle x\oplus y=\min\{x,y\},}
{\displaystyle x\otimes y=x+y.}
The operations {\displaystyle \oplus } and {\displaystyle \otimes } are referred to as tropical addition and tropical multiplication respectively. The identity element for {\displaystyle \oplus } is {\displaystyle +\infty }, and the identity element for {\displaystyle \otimes } is 0.
Similarly, the max tropical semiring (or max-plus semiring or max-plus algebra or Arctic semiring) is the semiring ({\displaystyle \mathbb {R} \cup \{-\infty \}}, {\displaystyle \oplus }, {\displaystyle \otimes }), with operations:
{\displaystyle x\oplus y=\max\{x,y\},}
{\displaystyle x\otimes y=x+y.}
The identity element for {\displaystyle \oplus } is {\displaystyle -\infty }, and the identity element for {\displaystyle \otimes } is 0.
The two semirings are isomorphic under negation {\displaystyle x\mapsto -x}, and generally one of these is chosen and referred to simply as the tropical semiring. Conventions differ between authors and subfields: some use the min convention, some use the max convention.
The two tropical semirings are the limit ("tropicalization", "dequantization") of the log semiring as the base goes to infinity ({\displaystyle b\to \infty }, giving the max-plus semiring) or to zero ({\displaystyle b\to 0}, giving the min-plus semiring).
Tropical addition is idempotent, thus a tropical semiring is an example of an idempotent semiring.
A tropical semiring is also referred to as a tropical algebra, though this should not be confused with an associative algebra over a tropical semiring.
Tropical exponentiation is defined in the usual way as iterated tropical products.
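The definitions above can be made concrete with tropical matrix multiplication: replacing sums by minima and products by sums in the usual matrix product computes shortest-path lengths in a weighted graph. The following is a minimal Python sketch (an illustration, not an optimized implementation; names are this sketch's own):

```python
import numpy as np

INF = np.inf  # additive identity of the min-plus semiring

def trop_matmul(A, B):
    """Matrix product over the min-plus semiring:
    C[i, j] = min_k (A[i, k] + B[k, j])."""
    n, m, p = A.shape[0], A.shape[1], B.shape[1]
    C = np.full((n, p), INF)  # start from the tropical additive identity
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] = min(C[i, j], A[i, k] + B[k, j])
    return C

# Edge weights of a small directed graph: entry (i, j) is the weight of
# the edge i -> j, INF if absent, 0 on the diagonal.
W = np.array([[0.0, 3.0, INF],
              [INF, 0.0, 1.0],
              [7.0, INF, 0.0]])

# The k-th tropical power of W holds shortest-path lengths using <= k edges.
W2 = trop_matmul(W, W)
print(W2[0, 2])  # 0 -> 1 -> 2 costs 3 + 1 → 4.0
```

Iterating the tropical power until it stabilizes recovers the all-pairs shortest-path matrix, which is one standard application of the min convention.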
== Valued fields ==
The tropical semiring operations model how valuations behave under addition and multiplication in a valued field. A real-valued field
{\displaystyle K} is a field equipped with a function {\displaystyle v:K\to \mathbb {R} \cup \{\infty \}} which satisfies the following properties for all {\displaystyle a}, {\displaystyle b} in {\displaystyle K}:
{\displaystyle v(a)=\infty } if and only if {\displaystyle a=0,}
{\displaystyle v(ab)=v(a)+v(b)=v(a)\otimes v(b),}
{\displaystyle v(a+b)\geq \min\{v(a),v(b)\}=v(a)\oplus v(b),} with equality if {\displaystyle v(a)\neq v(b).}
Therefore the valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together.
Some common valued fields:
{\displaystyle \mathbb {Q} } or {\displaystyle \mathbb {C} } with the trivial valuation, {\displaystyle v(a)=0} for all {\displaystyle a\neq 0},
{\displaystyle \mathbb {Q} } or its extensions with the p-adic valuation, {\displaystyle v(p^{n}a/b)=n} for {\displaystyle a} and {\displaystyle b} coprime to {\displaystyle p},
the field of formal Laurent series {\displaystyle K((t))} (integer powers), or the field of Puiseux series {\displaystyle K\{\{t\}\}}, or the field of Hahn series, with valuation returning the smallest exponent of {\displaystyle t} appearing in the series.
== References ==
In mathematics, specifically in abstract algebra, power associativity is a property of a binary operation that is a weak form of associativity.
== Definition ==
An algebra (or more generally a magma) is said to be power-associative if the subalgebra generated by any element is associative. Concretely, this means that if an operation {\displaystyle *} is applied to an element {\displaystyle x} several times, it does not matter in which order the operations are carried out; for instance, {\displaystyle x*(x*(x*x))=(x*(x*x))*x=(x*x)*(x*x)}.
== Examples and properties ==
Every associative algebra is power-associative, but so are all other alternative algebras (like the octonions, which are non-associative) and even non-alternative flexible algebras like the sedenions, trigintaduonions, and Okubo algebras. Any algebra whose elements are idempotent is also power-associative.
Exponentiation to the power of any positive integer can be defined consistently whenever multiplication is power-associative. For example, there is no need to distinguish whether x3 should be defined as (xx)x or as x(xx), since these are equal. Exponentiation to the power of zero can also be defined if the operation has an identity element, so the existence of identity elements is useful in power-associative contexts.
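For a magma given by a finite multiplication table, whether powers are well defined can be checked by brute force over all parenthesizations. The sketch below (helper names are this example's own) collects every value the n-fold product of an element can take:

```python
def power_values(mul, x, n):
    """All values of the n-fold product of x with itself, over every
    parenthesization; power-associativity at x means a single value."""
    if n == 1:
        return {x}
    vals = set()
    for i in range(1, n):  # split the product into a left and right part
        for a in power_values(mul, x, i):
            for b in power_values(mul, x, n - i):
                vals.add(mul(a, b))
    return vals

add3 = lambda a, b: (a + b) % 3   # associative, hence power-associative
sub3 = lambda a, b: (a - b) % 3   # not power-associative

print(power_values(add3, 1, 4))  # → {1}: x^4 is well defined
print(power_values(sub3, 1, 3))  # → {1, 2}: (x*x)*x differs from x*(x*x)
```

The first example is associative and therefore trivially power-associative; the second shows how a single ambiguous power already rules power-associativity out.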
Over a field of characteristic 0, an algebra is power-associative if and only if it satisfies
{\displaystyle [x,x,x]=0} and {\displaystyle [x^{2},x,x]=0}, where {\displaystyle [x,y,z]:=(xy)z-x(yz)} is the associator (Albert 1948).
Over an infinite field of prime characteristic
{\displaystyle p>0} there is no finite set of identities that characterizes power-associativity, but there are infinite independent sets, as described by Gainov (1970):
For {\displaystyle p=2}: {\displaystyle [x,x^{2},x]=0} and {\displaystyle [x^{n-2},x,x]=0} for {\displaystyle n=3,2^{k}} ({\displaystyle k=2,3,\dots })
For {\displaystyle p=3}: {\displaystyle [x^{n-2},x,x]=0} for {\displaystyle n=4,5,3^{k}} ({\displaystyle k=1,2,\dots })
For {\displaystyle p=5}: {\displaystyle [x^{n-2},x,x]=0} for {\displaystyle n=3,4,6,5^{k}} ({\displaystyle k=1,2,\dots })
For {\displaystyle p>5}: {\displaystyle [x^{n-2},x,x]=0} for {\displaystyle n=3,4,p^{k}} ({\displaystyle k=1,2,\dots })
A substitution law holds for real power-associative algebras with unit, which basically asserts that multiplication of polynomials works as expected. For f a real polynomial in x, and for any a in such an algebra define f(a) to be the element of the algebra resulting from the obvious substitution of a into f. Then for any two such polynomials f and g, we have that (fg)(a) = f(a)g(a).
== See also ==
Alternativity
== References ==
Albert, A. Adrian (1948). "Power-associative rings". Transactions of the American Mathematical Society. 64 (3): 552–593. doi:10.2307/1990399. ISSN 0002-9947. JSTOR 1990399. MR 0027750. Zbl 0033.15402.
Gainov, A. T. (1970). "Power-associative algebras over a finite-characteristic field". Algebra and Logic. 9 (1): 5–19. doi:10.1007/BF02219846. ISSN 0002-9947. MR 0281764. Zbl 0208.04001.
Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by Jacques Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. Zbl 0955.16001.
Okubo, Susumu (1995). Introduction to octonion and other non-associative algebras in physics. Montroll Memorial Lecture Series in Mathematical Physics. Vol. 2. Cambridge University Press. p. 17. ISBN 0-521-01792-0. MR 1356224. Zbl 0841.17001.
Schafer, R. D. (1995) [1966]. An introduction to non-associative algebras. Dover. pp. 128–148. ISBN 0-486-68813-5.
In mathematics, an injective function (also known as injection, or one-to-one function) is a function f that maps distinct elements of its domain to distinct elements of its codomain; that is, x1 ≠ x2 implies f(x1) ≠ f(x2) (equivalently by contraposition, f(x1) = f(x2) implies x1 = x2). In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions: functions such that each element in the codomain is the image of exactly one element in the domain.
A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and, in particular, for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem, not a definition, that the two notions are equivalent for algebraic structures; see Homomorphism § Monomorphism for more details.
A function {\displaystyle f} that is not injective is sometimes called many-to-one.
== Definition ==
Let {\displaystyle f} be a function whose domain is a set {\displaystyle X.} The function {\displaystyle f} is said to be injective provided that for all {\displaystyle a} and {\displaystyle b} in {\displaystyle X,} if {\displaystyle f(a)=f(b),} then {\displaystyle a=b}; that is, {\displaystyle f(a)=f(b)} implies {\displaystyle a=b.} Equivalently, the contrapositive statement: if {\displaystyle a\neq b,} then {\displaystyle f(a)\neq f(b).}
Symbolically,
{\displaystyle \forall a,b\in X,\;\;f(a)=f(b)\Rightarrow a=b,}
which is logically equivalent to the contrapositive,
{\displaystyle \forall a,b\in X,\;\;a\neq b\Rightarrow f(a)\neq f(b).}
An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example, {\displaystyle f:A\rightarrowtail B} or {\displaystyle f:A\hookrightarrow B}), although some authors specifically reserve ↪ for an inclusion map.
== Examples ==
For visual examples, readers are directed to the gallery section.
For any set {\displaystyle X} and any subset {\displaystyle S\subseteq X,} the inclusion map {\displaystyle S\to X} (which sends any element {\displaystyle s\in S} to itself) is injective. In particular, the identity function {\displaystyle X\to X} is always injective (and in fact bijective).
If the domain of a function is the empty set, then the function is the empty function, which is injective.
If the domain of a function has one element (that is, it is a singleton set), then the function is always injective.
The function {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by {\displaystyle f(x)=2x+1} is injective.
The function {\displaystyle g:\mathbb {R} \to \mathbb {R} } defined by {\displaystyle g(x)=x^{2}} is not injective, because (for example) {\displaystyle g(1)=1=g(-1).} However, if {\displaystyle g} is redefined so that its domain is the non-negative real numbers [0,+∞), then {\displaystyle g} is injective.
The exponential function {\displaystyle \exp :\mathbb {R} \to \mathbb {R} } defined by {\displaystyle \exp(x)=e^{x}} is injective (but not surjective, as no real value maps to a negative number).
The natural logarithm function {\displaystyle \ln :(0,\infty )\to \mathbb {R} } defined by {\displaystyle x\mapsto \ln x} is injective.
The function {\displaystyle g:\mathbb {R} \to \mathbb {R} } defined by {\displaystyle g(x)=x^{n}-x} is not injective, since, for example, {\displaystyle g(0)=g(1)=0.}
More generally, when {\displaystyle X} and {\displaystyle Y} are both the real line {\displaystyle \mathbb {R} ,} then an injective function {\displaystyle f:\mathbb {R} \to \mathbb {R} } is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test.
== Injections can be undone ==
Functions with left inverses are always injections. That is, given {\displaystyle f:X\to Y,} if there is a function {\displaystyle g:Y\to X} such that for every {\displaystyle x\in X}, {\displaystyle g(f(x))=x}, then {\displaystyle f} is injective. The proof is that {\displaystyle f(a)=f(b)\rightarrow g(f(a))=g(f(b))\rightarrow a=b.} In this case, {\displaystyle g} is called a retraction of {\displaystyle f.} Conversely, {\displaystyle f} is called a section of {\displaystyle g.}
Conversely, every injection {\displaystyle f} with a non-empty domain has a left inverse {\displaystyle g}. It can be defined by choosing an element {\displaystyle a} in the domain of {\displaystyle f} and setting {\displaystyle g(y)} to the unique element of the pre-image {\displaystyle f^{-1}[y]} (if it is non-empty) or to {\displaystyle a} (otherwise).
The left inverse {\displaystyle g} is not necessarily an inverse of {\displaystyle f,} because the composition in the other order, {\displaystyle f\circ g,} may differ from the identity on {\displaystyle Y.} In other words, an injective function can be "reversed" by a left inverse, but is not necessarily invertible, which requires that the function is bijective.
== Injections may be made invertible ==
In fact, to turn an injective function {\displaystyle f:X\to Y} into a bijective (hence invertible) function, it suffices to replace its codomain {\displaystyle Y} by its actual image {\displaystyle J=f(X).} That is, let {\displaystyle g:X\to J} such that {\displaystyle g(x)=f(x)} for all {\displaystyle x\in X}; then {\displaystyle g} is bijective. Indeed, {\displaystyle f} can be factored as {\displaystyle \operatorname {In} _{J,Y}\circ g,} where {\displaystyle \operatorname {In} _{J,Y}} is the inclusion function from {\displaystyle J} into {\displaystyle Y.}
More generally, injective partial functions are called partial bijections.
== Other properties ==
If {\displaystyle f} and {\displaystyle g} are both injective then {\displaystyle f\circ g} is injective.
If {\displaystyle g\circ f} is injective, then {\displaystyle f} is injective (but {\displaystyle g} need not be).
{\displaystyle f:X\to Y} is injective if and only if, given any functions {\displaystyle g,} {\displaystyle h:W\to X,} whenever {\displaystyle f\circ g=f\circ h,} then {\displaystyle g=h.} In other words, injective functions are precisely the monomorphisms in the category Set of sets.
If {\displaystyle f:X\to Y} is injective and {\displaystyle A} is a subset of {\displaystyle X,} then {\displaystyle f^{-1}(f(A))=A.} Thus, {\displaystyle A} can be recovered from its image {\displaystyle f(A).}
If {\displaystyle f:X\to Y} is injective and {\displaystyle A} and {\displaystyle B} are both subsets of {\displaystyle X,} then {\displaystyle f(A\cap B)=f(A)\cap f(B).}
Every function {\displaystyle h:W\to Y} can be decomposed as {\displaystyle h=f\circ g} for a suitable injection {\displaystyle f} and surjection {\displaystyle g.} This decomposition is unique up to isomorphism, and {\displaystyle f} may be thought of as the inclusion function of the range {\displaystyle h(W)} of {\displaystyle h} as a subset of the codomain {\displaystyle Y} of {\displaystyle h.}
If {\displaystyle f:X\to Y} is an injective function, then {\displaystyle Y} has at least as many elements as {\displaystyle X,} in the sense of cardinal numbers. In particular, if, in addition, there is an injection from {\displaystyle Y} to {\displaystyle X,} then {\displaystyle X} and {\displaystyle Y} have the same cardinal number. (This is known as the Cantor–Bernstein–Schroeder theorem.)
If both {\displaystyle X} and {\displaystyle Y} are finite with the same number of elements, then {\displaystyle f:X\to Y} is injective if and only if {\displaystyle f} is surjective (in which case {\displaystyle f} is bijective).
An injective function which is a homomorphism between two algebraic structures is an embedding.
Unlike surjectivity, which is a relation between the graph of a function and its codomain, injectivity is a property of the graph of the function alone; that is, whether a function {\displaystyle f} is injective can be decided by only considering the graph (and not the codomain) of {\displaystyle f.}
== Proving that functions are injective ==
A proof that a function {\displaystyle f} is injective depends on how the function is presented and what properties the function holds. For functions that are given by some formula there is a basic idea. We use the definition of injectivity, namely that if {\displaystyle f(x)=f(y),} then {\displaystyle x=y.} Here is an example: {\displaystyle f(x)=2x+3.}
Proof: Let {\displaystyle f:X\to Y.} Suppose {\displaystyle f(x)=f(y).} So {\displaystyle 2x+3=2y+3} implies {\displaystyle 2x=2y,} which implies {\displaystyle x=y.} Therefore, it follows from the definition that {\displaystyle f} is injective.
There are multiple other methods of proving that a function is injective. For example, in calculus if {\displaystyle f} is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, if {\displaystyle f} is a linear transformation it is sufficient to show that the kernel of {\displaystyle f} contains only the zero vector. If {\displaystyle f} is a function with finite domain it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list.
A graphical approach for a real-valued function {\displaystyle f} of a real variable {\displaystyle x} is the horizontal line test. If every horizontal line intersects the curve of {\displaystyle f(x)} in at most one point, then {\displaystyle f} is injective or one-to-one.
== Gallery ==
== See also ==
Bijection, injection and surjection – Properties of mathematical functions
Injective metric space – Type of metric space
Monotonic function – Order-preserving mathematical function
Univalent function – Mathematical concept
== Notes ==
== References ==
Bartle, Robert G. (1976), The Elements of Real Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-05464-1, p. 17 ff.
Halmos, Paul R. (1974), Naive Set Theory, New York: Springer, ISBN 978-0-387-90092-6, p. 38 ff.
== External links ==
Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms.
Khan Academy – Surjective (onto) and Injective (one-to-one) functions: Introduction to surjective and injective functions
In mathematics, the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its base field. It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension.
For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. We say V is finite-dimensional if the dimension of V is finite, and infinite-dimensional if its dimension is infinite.
The dimension of the vector space V over the field F can be written as
{\displaystyle \dim _{F}(V)}
or as
{\displaystyle [V:F],}
read "dimension of V over F". When F can be inferred from context, dim(V) is typically written.
== Examples ==
The vector space R^3 has
{\displaystyle \left\{{\begin{pmatrix}1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\1\\0\end{pmatrix}},{\begin{pmatrix}0\\0\\1\end{pmatrix}}\right\}}
as a standard basis, and therefore
{\displaystyle \dim _{\mathbb {R} }(\mathbb {R} ^{3})=3.}
More generally,
{\displaystyle \dim _{\mathbb {R} }(\mathbb {R} ^{n})=n,}
and even more generally,
{\displaystyle \dim _{F}(F^{n})=n}
for any field F.
The complex numbers C are both a real and complex vector space; we have
{\displaystyle \dim _{\mathbb {R} }(\mathbb {C} )=2}
and
{\displaystyle \dim _{\mathbb {C} }(\mathbb {C} )=1.}
So the dimension depends on the base field.
The only vector space with dimension 0 is {0}, the vector space consisting only of its zero element.
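Such dimension computations can be checked mechanically: the dimension of the span of finitely many vectors in Q^n equals the rank of the matrix they form, which Gaussian elimination computes. A minimal sketch over the rationals; the function name is illustrative:

```python
from fractions import Fraction

def dim_span(vectors):
    """Dimension of the span of the given vectors over Q, via Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    dim, col = 0, 0
    n = len(rows[0]) if rows else 0
    while dim < len(rows) and col < n:
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(dim, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[dim], rows[pivot] = rows[pivot], rows[dim]
        for r in range(dim + 1, len(rows)):
            factor = rows[r][col] / rows[dim][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[dim])]
        dim, col = dim + 1, col + 1
    return dim

print(dim_span([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # 3: the standard basis of R^3
print(dim_span([(1, 2, 3), (2, 4, 6)]))             # 1: the second vector is dependent
```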
== Properties ==
If W is a linear subspace of V, then
{\displaystyle \dim(W)\leq \dim(V).}
To show that two finite-dimensional vector spaces are equal, the following criterion can be used: if V is a finite-dimensional vector space and W is a linear subspace of V with
{\displaystyle \dim(W)=\dim(V),}
then W = V.
The space R^n has the standard basis
{\displaystyle \left\{e_{1},\ldots ,e_{n}\right\},}
where e_i is the i-th column of the corresponding identity matrix. Therefore, R^n has dimension n.
Any two finite-dimensional vector spaces over F with the same dimension are isomorphic. Any bijective map between their bases can be uniquely extended to a bijective linear map between the vector spaces. If B is some set, a vector space with dimension |B| over F can be constructed as follows: take the set F(B) of all functions f : B → F such that f(b) = 0 for all but finitely many b in B. These functions can be added and multiplied with elements of F to obtain the desired F-vector space.
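The construction just described can be modeled concretely: represent a finitely supported function B → F as a dictionary holding only the finitely many nonzero values. A minimal Python sketch with F = Q; all names are illustrative:

```python
from fractions import Fraction

def add(f, g):
    """Pointwise sum of two finitely supported functions B -> F (as dicts)."""
    h = dict(f)
    for b, v in g.items():
        h[b] = h.get(b, Fraction(0)) + v
        if h[b] == 0:
            del h[b]          # keep the support finite and minimal
    return h

def scale(c, f):
    """Scalar multiple c*f."""
    return {b: c * v for b, v in f.items()} if c != 0 else {}

# the basis vector e_b is the function that is 1 at b and 0 elsewhere
e = lambda b: {b: Fraction(1)}

v = add(scale(Fraction(2), e("apple")), e("pear"))
print(v)  # {'apple': Fraction(2, 1), 'pear': Fraction(1, 1)}
```

The basis vectors e_b are linearly independent and span this space, so its dimension is |B|, as claimed.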
An important result about dimensions is given by the rank–nullity theorem for linear maps.
If F/K is a field extension, then F is in particular a vector space over K. Furthermore, every F-vector space V is also a K-vector space. The dimensions are related by the formula
{\displaystyle \dim _{K}(V)=\dim _{K}(F)\dim _{F}(V).}
In particular, every complex vector space of dimension n is a real vector space of dimension 2n.
Some formulae relate the dimension of a vector space with the cardinality of the base field and the cardinality of the space itself.
If V is a vector space over a field F and if the dimension of V is denoted by dim V, then:
If dim V is finite, then
{\displaystyle |V|=|F|^{\dim V}.}
If dim V is infinite, then
{\displaystyle |V|=\max(|F|,\dim V).}
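The finite case of the first formula can be checked by brute-force enumeration; this sketch lists the vectors of F^n over the two-element field F = GF(2), so that |V| = 2^n (names are illustrative):

```python
from itertools import product

def gf2_space(n):
    """All vectors of F^n over GF(2); |V| should equal |F|**dim(V) = 2**n."""
    return list(product((0, 1), repeat=n))

for n in range(5):
    V = gf2_space(n)
    assert len(V) == 2 ** n   # |V| = |F|^dim V with |F| = 2

print(len(gf2_space(3)))  # 8 vectors in (GF(2))^3
```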
== Generalizations ==
A vector space can be seen as a particular case of a matroid, and in the latter there is a well-defined notion of dimension. The length of a module and the rank of an abelian group both have several properties similar to the dimension of vector spaces.
The Krull dimension of a commutative ring, named after Wolfgang Krull (1899–1971), is defined to be the maximal number of strict inclusions in an increasing chain of prime ideals in the ring.
=== Trace ===
The dimension of a vector space may alternatively be characterized as the trace of the identity operator. For instance,
{\displaystyle \operatorname {tr} \ \operatorname {id} _{\mathbb {R} ^{2}}=\operatorname {tr} \left({\begin{smallmatrix}1&0\\0&1\end{smallmatrix}}\right)=1+1=2.}
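In the finite-dimensional case this characterization is easy to check directly; a small sketch (helper names are illustrative):

```python
def trace(m):
    """Trace of a square matrix given as a list of rows."""
    return sum(m[i][i] for i in range(len(m)))

def identity(n):
    """The n x n identity matrix."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

print(trace(identity(5)))  # 5: tr(id_V) recovers dim V
```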
This appears to be a circular definition, but it allows useful generalizations.
Firstly, it allows for a definition of a notion of dimension when one has a trace but no natural sense of basis. For example, one may have an algebra A with maps η : K → A (the inclusion of scalars, called the unit) and a map ε : A → K (corresponding to trace, called the counit). The composition ε ∘ η : K → K is a scalar (being a linear operator on a 1-dimensional space); it corresponds to the "trace of the identity" and gives a notion of dimension for an abstract algebra. In practice, in bialgebras, this map is required to be the identity, which can be obtained by normalizing the counit by dividing by dimension (ε := (1/n) tr), so in these cases the normalizing constant corresponds to dimension.
Alternatively, it may be possible to take the trace of operators on an infinite-dimensional space; in this case a (finite) trace is defined, even though no (finite) dimension exists, and gives a notion of "dimension of the operator". These fall under the rubric of "trace class operators" on a Hilbert space, or more generally nuclear operators on a Banach space.
A subtler generalization is to consider the trace of a family of operators as a kind of "twisted" dimension. This occurs significantly in representation theory, where the character of a representation is the trace of the representation, hence a scalar-valued function on a group χ : G → K, whose value on the identity 1 ∈ G is the dimension of the representation, as a representation sends the identity in the group to the identity matrix:
{\displaystyle \chi (1_{G})=\operatorname {tr} \ I_{V}=\dim V.}
The other values χ(g) of the character can be viewed as "twisted" dimensions, and one can find analogs or generalizations of statements about dimensions in statements about characters or representations. A sophisticated example of this occurs in the theory of monstrous moonshine: the j-invariant is the graded dimension of an infinite-dimensional graded representation of the monster group, and replacing the dimension with the character gives the McKay–Thompson series for each element of the Monster group.
== See also ==
Fractal dimension – Ratio providing a statistical index of complexity variation with scale
Krull dimension – In mathematics, dimension of a ring
Matroid rank – Maximum size of an independent set of the matroid
Rank (linear algebra) – Dimension of the column space of a matrix
Topological dimension – Topologically invariant definition of the dimension of a space, also called Lebesgue covering dimension
== Notes ==
== References ==
== Sources ==
Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
== External links ==
MIT Linear Algebra Lecture on Independence, Basis, and Dimension by Gilbert Strang at MIT OpenCourseWare
In mathematics, specifically in functional analysis, a C∗-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties:
A is a topologically closed set in the norm topology of operators.
A is closed under the operation of taking adjoints of operators.
Another important class of non-Hilbert C*-algebras includes the algebra C₀(X) of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space.
C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras that are now known as von Neumann algebras.
Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space.
C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent to which classification is possible, for separable simple nuclear C*-algebras.
== Abstract characterization ==
We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark.
A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map x ↦ x* for x ∈ A with the following properties:
It is an involution: for every x in A,
{\displaystyle x^{**}=(x^{*})^{*}=x}
For all x, y in A:
{\displaystyle (x+y)^{*}=x^{*}+y^{*}}
{\displaystyle (xy)^{*}=y^{*}x^{*}}
For every complex number λ ∈ C and every x in A:
{\displaystyle (\lambda x)^{*}={\overline {\lambda }}x^{*}.}
For all x in A:
{\displaystyle \|xx^{*}\|=\|x\|\|x^{*}\|.}
Remark. The first four identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to:
{\displaystyle \|xx^{*}\|=\|x\|^{2},}
which is sometimes called the B*-identity. For history behind the names C*- and B*-algebras, see the history section below.
The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:
{\displaystyle \|x\|^{2}=\|x^{*}x\|=\sup\{|\lambda |:x^{*}x-\lambda \,1{\text{ is not invertible}}\}.}
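In the concrete C*-algebra of 2×2 complex matrices with the operator norm, the C*-identity can be verified numerically: ‖x‖ is the square root of the largest eigenvalue of x*x, which for a 2×2 Hermitian matrix follows from the quadratic formula. A sketch with an arbitrary example matrix; all names are illustrative:

```python
import math

def adjoint(a):
    """Conjugate transpose of a 2x2 complex matrix [[a,b],[c,d]]."""
    return [[a[0][0].conjugate(), a[1][0].conjugate()],
            [a[0][1].conjugate(), a[1][1].conjugate()]]

def mul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def op_norm(a):
    """Operator norm: sqrt of the largest eigenvalue of a*a (2x2 Hermitian case)."""
    b = mul(adjoint(a), a)
    tr = (b[0][0] + b[1][1]).real
    det = (b[0][0] * b[1][1] - b[0][1] * b[1][0]).real
    lam_max = (tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2
    return math.sqrt(lam_max)

x = [[1 + 2j, 0], [3, 1j]]
lhs = op_norm(mul(x, adjoint(x)))   # ||x x*||
rhs = op_norm(x) ** 2               # ||x||^2
print(abs(lhs - rhs) < 1e-9)        # True: the C*-identity holds
```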
A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if
for x and y in A,
{\displaystyle \pi (xy)=\pi (x)\pi (y)}
for x in A,
{\displaystyle \pi (x^{*})=\pi (x)^{*}}
In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity.
A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic.
== Some history: B*-algebras and C*-algebras ==
The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition:
{\displaystyle \lVert xx^{*}\rVert =\lVert x\rVert ^{2}}
for all x in the given B*-algebra. (B*-condition)
This condition automatically implies that the *-involution is isometric, that is, ‖x‖ = ‖x*‖. Hence ‖xx*‖ = ‖x‖‖x*‖, and therefore a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition ‖x‖ = ‖x*‖. For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'.
The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space".
== Structure of C*-algebras ==
C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism.
=== Self-adjoint elements ===
Self-adjoint elements are those of the form x = x*. The set of elements of a C*-algebra A of the form x*x forms a closed convex cone. This cone is identical to the set of elements of the form xx*. Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of R).
The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted ≥. In this ordering, a self-adjoint element x ∈ A satisfies x ≥ 0 if and only if the spectrum of x is non-negative, if and only if x = s*s for some s ∈ A. Two self-adjoint elements x and y of A satisfy x ≥ y if x − y ≥ 0.
This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction.
=== Quotients and approximate identities ===
Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that
{\displaystyle xe_{\lambda }\rightarrow x}
{\displaystyle 0\leq e_{\lambda }\leq e_{\mu }\leq 1\quad {\mbox{ whenever }}\lambda \leq \mu .}
In case A is separable, A has a sequential approximate identity. More generally, A will have a sequential approximate identity if and only if A contains a strictly positive element, i.e. a positive element h such that hAh is dense in A.
Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra.
Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra.
== Examples ==
=== Finite-dimensional C*-algebras ===
The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, Cn, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type:
Theorem. A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum
{\displaystyle A=\bigoplus _{e\in \min A}Ae}
where min A is the set of minimal nonzero self-adjoint central projections of A.
Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A.
A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science.
An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras.
=== C*-algebras of operators ===
The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed adjoint closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem.
=== C*-algebras of compact operators ===
Let H be a separable infinite-dimensional Hilbert space. The algebra K(H) of compact operators on H is a norm closed subalgebra of B(H). It is also closed under involution; hence it is a C*-algebra.
Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras:
Theorem. If A is a C*-subalgebra of K(H), then there exist Hilbert spaces {Hi}i∈I such that
{\displaystyle A\cong \bigoplus _{i\in I}K(H_{i}),}
where the (C*-)direct sum consists of elements (Ti) of the Cartesian product Π K(Hi) with ||Ti|| → 0.
Though K(H) does not have an identity element, a sequential approximate identity for K(H) can be developed. To be specific, H is isomorphic to the space of square summable sequences l2; we may assume that H = l2. For each natural number n let Hn be the subspace of sequences of l2 which vanish for indices k ≥ n and let en be the orthogonal projection onto Hn. The sequence {en}n is an approximate identity for K(H).
K(H) is a two-sided closed ideal of B(H). For separable Hilbert spaces, it is the unique ideal. The quotient of B(H) by K(H) is the Calkin algebra.
=== Commutative C*-algebras ===
Let X be a locally compact Hausdorff space. The space C₀(X) of complex-valued continuous functions on X that vanish at infinity (defined in the article on local compactness) forms a commutative C*-algebra under pointwise multiplication and addition. The involution is pointwise conjugation.
C₀(X) has a multiplicative unit element if and only if X is compact. As does any C*-algebra, C₀(X) has an approximate identity. In the case of C₀(X) this is immediate: consider the directed set of compact subsets of X, and for each compact K let f_K be a function of compact support which is identically 1 on K. Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such family of functions {f_K} is an approximate identity.
The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra C₀(X), where X is the space of characters equipped with the weak* topology. Furthermore, if C₀(X) is isomorphic to C₀(Y) as C*-algebras, it follows that X and Y are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs.
=== C*-enveloping algebra ===
Given a Banach *-algebra A with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E(A) and *-morphism π from A into E(A) that is universal, that is, every other continuous *-morphism π ' : A → B factors uniquely through π. The algebra E(A) is called the C*-enveloping algebra of the Banach *-algebra A.
Of particular importance is the C*-algebra of a locally compact group G. This is defined as the enveloping C*-algebra of the group algebra of G. The C*-algebra of G provides context for general harmonic analysis of G in the case G is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra.
=== Von Neumann algebras ===
Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology.
The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it.
== Type for C*-algebras ==
A C*-algebra A is of type I if and only if for all non-degenerate representations π of A the von Neumann algebra π(A)″ (that is, the bicommutant of π(A)) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π(A)″ is a factor.
A locally compact group is said to be of type I if and only if its group C*-algebra is type I.
However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties.
== C*-algebras and quantum field theory ==
In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x).
This C*-algebra approach is used in the Haag–Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra.
== See also ==
Banach algebra
Banach *-algebra
*-algebra
Hilbert C*-module
Operator K-theory
Operator system, a unital subspace of a C*-algebra that is *-closed.
Gelfand–Naimark–Segal construction
Jordan operator algebra
== Notes ==
== References ==
Arveson, W. (1976), An Invitation to C*-Algebra, Springer-Verlag, ISBN 0-387-90176-0. An excellent introduction to the subject, accessible for those with a knowledge of basic functional analysis.
Connes, Alain (1994), Non-commutative geometry, Gulf Professional, ISBN 0-12-185860-X. This book is widely regarded as a source of new research material, providing much supporting intuition, but it is difficult.
Dixmier, Jacques (1969), Les C*-algèbres et leurs représentations, Gauthier-Villars, ISBN 0-7204-0762-1. This is a somewhat dated reference, but is still considered as a high-quality technical exposition. It is available in English from North Holland press.
Doran, Robert S.; Belfi, Victor A. (1986), Characterizations of C*-algebras: The Gelfand-Naimark Theorems, CRC Press, ISBN 978-0-8247-7569-8.
Emch, G. (1972), Algebraic Methods in Statistical Mechanics and Quantum Field Theory, Wiley-Interscience, ISBN 0-471-23900-3. Mathematically rigorous reference which provides extensive physics background.
A.I. Shtern (2001) [1994], "C*-algebra", Encyclopedia of Mathematics, EMS Press
Sakai, S. (1971), C*-algebras and W*-algebras, Springer, ISBN 3-540-63633-1.
Segal, Irving (1947), "Irreducible representations of operator algebras", Bulletin of the American Mathematical Society, 53 (2): 73–88, doi:10.1090/S0002-9904-1947-08742-5.
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then
{\displaystyle x_{1}=x_{0}-{\frac {f(x_{0})}{f'(x_{0})}}}
is a better approximation of the root than x0. Geometrically, (x1, 0) is the x-intercept of the tangent of the graph of f at (x0, f(x0)): that is, the improved guess, x1, is the unique root of the linear approximation of f at the initial guess, x0. The process is repeated as
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}}
until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.
== Description ==
The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by its tangent line near the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can be iterated.
The best linear approximation to an arbitrary differentiable function f near the point x = x_n is the tangent line to the curve, with equation
{\displaystyle f(x)\approx f(x_{n})+f'(x_{n})(x-x_{n}).}
The root of this linear function, the place where it intercepts the x-axis, can be taken as a closer approximate root x_{n+1}:
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.}
The process can be started with any arbitrary initial guess x₀, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge if f′(x₀) ≠ 0. Furthermore, for a root of multiplicity 1, the convergence is at least quadratic (see Rate of convergence) in some sufficiently small neighbourhood of the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in § Analysis below.
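The iteration described above takes only a few lines of code; a minimal sketch, where the tolerance, iteration cap, and names are illustrative choices rather than part of the method:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# root of x^2 - 2: converges to sqrt(2) in a handful of steps from x0 = 1
r = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(r)  # ≈ 1.41421356...
```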
Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f or its derivatives are computationally expensive to evaluate.
== History ==
In the Old Babylonian period (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method, described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found in Heron of Alexandria's Metrica (1st–2nd century CE), so it is often called Heron's method. Jamshīd al-Kāshī used a method algebraically equivalent to Newton's method to solve x^P − N = 0 and so find P-th roots of N; a similar method was also found in Trigonometria Britannica, published by Henry Briggs in 1633.
The method first appeared roughly in Isaac Newton's work in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case.
Newton may have derived his method from a similar, less precise method by the mathematician François Viète; however, the two methods are not the same. The essence of Viète's own method can be found in the work of the mathematician Sharaf al-Din al-Tusi.
The Japanese mathematician Seki Kōwa used a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing.
Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.
Arthur Cayley in 1879 in The Newton–Fourier imaginary problem was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.
== Practical considerations ==
Newton's method is a powerful technique—if the derivative of the function at the root is nonzero, then the convergence is at least quadratic: as the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method.
=== Difficulty in calculating the derivative of a function ===
Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation would result in something like the secant method whose convergence is slower than that of Newton's method.
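Replacing the derivative by the slope through the two most recent iterates, as described above, yields the secant method; a minimal sketch (names and tolerances are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's iteration with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # slope through the last two points
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            return x1
    raise RuntimeError("did not converge")

r = secant(lambda x: x * x - 2, 1.0, 2.0)
print(abs(r - 2 ** 0.5) < 1e-9)  # True; slower than Newton, but no derivative needed
```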
=== Failure of the method to converge to the root ===
It is important to review the proof of quadratic convergence of Newton's method before implementing it, and in particular the assumptions made in that proof. When the method fails to converge, it is because those assumptions are not met.
For example, in some cases, if the first derivative is not well behaved in the neighborhood of a particular root, then it is possible that Newton's method will fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by using the same method.
In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root finding method.
=== Slow convergence for roots of multiplicity greater than 1 ===
If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together then it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity m of the root is known, the following modified algorithm preserves the quadratic convergence rate:
{\displaystyle x_{n+1}=x_{n}-m{\frac {f(x_{n})}{f'(x_{n})}}.}
This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity m of the root is not known, it is possible to estimate m after carrying out one or two iterations, and then use that value to increase the rate of convergence.
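The effect of the multiplicity factor can be sketched in Python; here f(x) = (x − 2)²(x + 1) has a double root at 2, so m = 2 (all names are illustrative):

```python
def newton_steps(f, df, x, n, m=1):
    """Newton iteration with an optional multiplicity factor m."""
    for _ in range(n):
        if f(x) == 0.0:          # already exactly at a root
            break
        x -= m * f(x) / df(x)
    return x

f  = lambda x: (x - 2) ** 2 * (x + 1)   # double root at x = 2
df = lambda x: (x - 2) * (3 * x)        # derivative, factored

plain    = newton_steps(f, df, 3.0, 10)        # linear convergence
modified = newton_steps(f, df, 3.0, 10, m=2)   # quadratic convergence
```

After ten steps the plain iteration is still only a few digits into the root (linear convergence with ratio about 1/2), while the modified iteration has reached machine precision.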
If the multiplicity m of the root is finite then g(x) = f(x)/f′(x) will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of g(x) recovers quadratic convergence in many cases although it generally involves the second derivative of f(x). In a particularly simple case, if f(x) = xm then g(x) = x/m and Newton's method finds the root in a single iteration with
{\displaystyle x_{n+1}=x_{n}-{\frac {g(x_{n})}{g'(x_{n})}}=x_{n}-{\frac {\;{\frac {x_{n}}{m}}\;}{\frac {1}{m}}}=0\,.}
=== Slow convergence ===
The function f(x) = x2 has a root at 0. Since f is continuously differentiable at its root, the theory guarantees that Newton's method as initialized sufficiently close to the root will converge. However, since the derivative f ′ is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}={\frac {1}{2}}x_{n}.}
It is visible from this that Newton's method could be initialized anywhere and converge to zero, but at only a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved.
The function f(x) = x + x4/3 also has a root at 0, where it is continuously differentiable. Although the first derivative f ′ is nonzero at the root, the second derivative f ′′ is nonexistent there, so that quadratic convergence cannot be guaranteed. In fact the Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}={\frac {x_{n}^{4/3}}{3+4x_{n}^{1/3}}}\approx x_{n}\cdot {\frac {x_{n}^{1/3}}{3}}.}
From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in the following tables, the left of which shows Newton's method applied to the above f(x) = x + x4/3 and the right of which shows Newton's method applied to f(x) = x + x2. The quadratic convergence in iteration shown on the right is illustrated by the orders of magnitude in the distance from the iterate to the true root (0,1,2,3,5,10,20,39,...) being approximately doubled from row to row. While the convergence on the left is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0,1,2,4,5,7,10,13,...).
The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the function f(x) = x20 − 1 has a root at 1. Since f ′(1) ≠ 0 and f is smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root.
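These iterate counts can be reproduced with a short Python sketch (variable names are ours):

```python
f  = lambda x: x ** 20 - 1
df = lambda x: 20 * x ** 19

x = 0.5
history = [x]
for _ in range(210):
    x -= f(x) / df(x)
    history.append(x)
# history[1] is roughly 26214; the iterates then shrink by a factor
# of about 19/20 per step until the quadratic phase takes over near 1.
```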
=== Convergence dependent on initialization ===
The function f(x) = x(1 + x2)−1/2 has a root at 0. The Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}(1+x_{n}^{2})^{-1/2}}{(1+x_{n}^{2})^{-3/2}}}=-x_{n}^{3}.}
From this, it can be seen that there are three possible phenomena for a Newton iteration. If initialized strictly between ±1, the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at 1 or −1, the Newton iteration will oscillate endlessly between ±1; if initialized anywhere else, the Newton iteration will diverge. This same trichotomy occurs for f(x) = arctan x.
In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the function f(x) = x(x2 − 1)(x − 3)e−(x − 1)2/2 has roots at −1, 0, 1, and 3. If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges to ∞; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges to −∞; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to 1. This kind of subtle dependence on initialization is not uncommon; it is frequently studied in the complex plane in the form of the Newton fractal.
=== Divergence even when initialization is close to the root ===
Consider the problem of finding a root of f(x) = x1/3. The Newton iteration is
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{1/3}}{{\frac {1}{3}}x_{n}^{-2/3}}}=-2x_{n}.}
Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this case f is not differentiable at its root.
In the above example, failure of convergence is reflected by the failure of f(xn) to get closer to zero as n increases, as well as by the fact that successive iterates are growing further and further apart. However, the function f(x) = x1/3e−x2 also has a root at 0. The Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}\left(1-{\frac {3}{1-6x_{n}^{2}}}\right).}
In this example, where again f is not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with both xn + 1 − xn and f(xn) converging to zero. This is seen in the following table showing the iterates with initialization 1:
Although the convergence of xn + 1 − xn in this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness of xn + 1 − xn and f(xn) might falsely identify a root.
=== Oscillatory behavior ===
It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a function f to oscillate between 0 and 1, it is only necessary that the tangent line to f at 0 intersects the x-axis at 1 and that the tangent line to f at 1 intersects the x-axis at 0. This is the case, for example, if f(x) = x3 − 2x + 2. For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 will asymptotically oscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root of f approximately equal to −1.76929.
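The 2-cycle is easy to observe numerically; starting exactly at 0, the tangent-line construction sends the iterate to 1 and back (a minimal Python sketch):

```python
f  = lambda x: x ** 3 - 2 * x + 2
df = lambda x: 3 * x ** 2 - 2

x = 0.0
orbit = []
for _ in range(6):
    x -= f(x) / df(x)
    orbit.append(x)
# Starting from 0 the iterates cycle exactly: 1, 0, 1, 0, ...
```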
=== Undefinedness of Newton's method ===
In some cases, it is not even possible to perform the Newton iteration. For example, if f(x) = x2 − 1, then the Newton iteration is defined by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{2}-1}{2x_{n}}}={\frac {x_{n}^{2}+1}{2x_{n}}}.}
So Newton's method cannot be initialized at 0, since this would make x1 undefined. Geometrically, this is because the tangent line to f at 0 is horizontal (i.e. f ′(0) = 0), never intersecting the x-axis.
Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued.
If f has an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration. For example, the natural logarithm function f(x) = ln x has a root at 1, and is defined only for positive x. Newton's iteration in this case is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}(1-\ln x_{n}).}
So if the iteration is initialized at e, the next iterate is 0; if the iteration is initialized at a value larger than e, then the next iterate is negative. In either case, the method cannot be continued.
== Analysis ==
Suppose that the function f has a zero at α, i.e., f(α) = 0, and f is differentiable in a neighborhood of α.
If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence (xn) will converge to α.
If f is continuously differentiable, its derivative is nonzero at α, and it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:
{\displaystyle \Delta x_{i+1}={\frac {f''(\alpha )}{2f'(\alpha )}}\left(\Delta x_{i}\right)^{2}+O\left(\Delta x_{i}\right)^{3}\,,}
where
{\displaystyle \Delta x_{i}\triangleq x_{i}-\alpha \,.}
If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f′(α) = 0 and f″(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate 1/2. Alternatively, if f′(α) = 0 and f′(x) ≠ 0 for x ≠ α, x in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ Cr(U), then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly.
However, even linear convergence is not guaranteed in pathological situations.
In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood U+ of α, if f is twice differentiable in U+ and if f′ ≠ 0, f · f″ > 0 in U+, then, for each x0 in U+ the sequence xk is monotonically decreasing to α.
=== Proof of quadratic convergence for Newton's iterative method ===
According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about xn is:
{\displaystyle f(\alpha )=f(x_{n})+f'(x_{n})(\alpha -x_{n})+R_{1}\,}  (1)
where the Lagrange form of the Taylor series expansion remainder is
{\displaystyle R_{1}={\frac {1}{2!}}f''(\xi _{n})\left(\alpha -x_{n}\right)^{2}\,,}
where ξn is in between xn and α.
Since α is the root, (1) becomes:
{\displaystyle 0=f(x_{n})+f'(x_{n})(\alpha -x_{n})+{\frac {1}{2}}f''(\xi _{n})(\alpha -x_{n})^{2}\,}  (2)
Dividing equation (2) by f′(xn) and rearranging gives
{\displaystyle {\frac {f(x_{n})}{f'(x_{n})}}+\left(\alpha -x_{n}\right)={\frac {-f''(\xi _{n})}{2f'(x_{n})}}\left(\alpha -x_{n}\right)^{2}}  (3)
Remembering that xn + 1 is defined by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}\,,}  (4)
one finds that
{\displaystyle \underbrace {\alpha -x_{n+1}} _{\varepsilon _{n+1}}={\frac {-f''(\xi _{n})}{2f'(x_{n})}}{(\,\underbrace {\alpha -x_{n}} _{\varepsilon _{n}}\,)}^{2}\,.}  (5)
That is,
{\displaystyle \varepsilon _{n+1}={\frac {-f''(\xi _{n})}{2f'(x_{n})}}\,\varepsilon _{n}^{2}\,.}
Taking the absolute value of both sides gives
{\displaystyle |\varepsilon _{n+1}|={\frac {\left|f''(\xi _{n})\right|}{2\left|f'(x_{n})\right|}}\,\varepsilon _{n}^{2}\,.}  (6)
Equation (6) shows that the order of convergence is at least quadratic if the following conditions are satisfied:
f′(x) ≠ 0; for all x ∈ I, where I is the interval [α − |ε0|, α + |ε0|];
f″(x) is continuous, for all x ∈ I;
M |ε0| < 1
where M is given by
{\displaystyle M={\frac {1}{2}}\left(\sup _{x\in I}\vert f''(x)\vert \right)\left(\sup _{x\in I}{\frac {1}{\vert f'(x)\vert }}\right).\,}
If these conditions hold,
{\displaystyle \vert \varepsilon _{n+1}\vert \leq M\cdot \varepsilon _{n}^{2}\,.}
=== Fourier conditions ===
Suppose that f(x) is a concave function on an interval, which is strictly increasing. If it is negative at the left endpoint and positive at the right endpoint, the intermediate value theorem guarantees that there is a zero ζ of f somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration xi starting at the left endpoint is monotonically increasing and convergent, necessarily to ζ.
Joseph Fourier introduced a modification of Newton's method starting at the right endpoint:
{\displaystyle y_{i+1}=y_{i}-{\frac {f(y_{i})}{f'(x_{i})}}.}
This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of yi must also be the zero ζ.
So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the location of the iteration from the left and the location of the iteration from the right. If f is twice continuously differentiable, it can be proved using Taylor's theorem that
{\displaystyle \lim _{i\to \infty }{\frac {y_{i+1}-x_{i+1}}{(y_{i}-x_{i})^{2}}}=-{\frac {1}{2}}{\frac {f''(\zeta )}{f'(\zeta )}},}
showing that this difference in locations converges quadratically to zero.
All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of monotonicity and concavity are more subtle to formulate. In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of f. However, in this generalization, Newton's iteration is modified so as to be based on Taylor polynomials rather than the tangent line. In the case of concavity, this modification coincides with the standard Newton method.
=== Error for n>1 variables ===
If we seek the root of a single function
{\displaystyle f:\mathbf {R} ^{n}\to \mathbf {R} }
then the error
{\displaystyle \epsilon _{n}=x_{n}-\alpha }
is a vector such that its components obey
{\displaystyle \epsilon _{k}^{(n+1)}={\frac {1}{2}}(\epsilon ^{(n)})^{T}Q_{k}\epsilon ^{(n)}+O(\|\epsilon ^{(n)}\|^{3})}
where
{\displaystyle Q_{k}}
is a quadratic form:
{\displaystyle (Q_{k})_{i,j}=\sum _{\ell }((D^{2}f)^{-1})_{i,\ell }{\frac {\partial ^{3}f}{\partial x_{j}\partial x_{k}\partial x_{\ell }}}}
evaluated at the root α (where D2f is the Hessian matrix of second derivatives).
== Examples ==
=== Use of Newton's method to compute square roots ===
Newton's method is one of many known methods of computing square roots. Given a positive number a, the problem of finding a number x such that x2 = a is equivalent to finding a root of the function f(x) = x2 − a. The Newton iteration defined by this function is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{2}-a}{2x_{n}}}={\frac {1}{2}}\left(x_{n}+{\frac {a}{x_{n}}}\right).}
This happens to coincide with the "Babylonian" method of finding square roots, which consists of replacing an approximate root xn by the arithmetic mean of xn and a⁄xn. By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basic arithmetic operations.
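A minimal Python sketch of the Babylonian iteration (the function name is ours):

```python
def babylonian_sqrt(a, x=1.0, steps=10):
    """Newton's method for f(x) = x^2 - a: replace x by the
    arithmetic mean of x and a/x."""
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

root = babylonian_sqrt(612)   # converges to 24.7386...
```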
The following three tables show examples of the result of this computation for finding the square root of 612, with the iteration initialized at the values of 1, 10, and −20. Each row in a "xn" column is obtained by applying the preceding formula to the entry above it, for instance
{\displaystyle 306.5={\frac {1}{2}}\left(1+{\frac {612}{1}}\right).}
The correct digits are underlined. It is seen that with only a few iterations one can obtain a solution accurate to many decimal places. The first table shows that this is true even if the Newton iteration were initialized by the very inaccurate guess of 1.
When computing any nonzero square root, the first derivative of f is nonzero at the root, and f is a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence. This is reflected in the above tables by the fact that once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration.
=== Solution of cos(x) = x3 using Newton's method ===
Consider the problem of finding the positive number x with cos x = x3. We can rephrase that as finding the zero of f(x) = cos(x) − x3. We have f′(x) = −sin(x) − 3x2. Since cos(x) ≤ 1 for all x and x3 > 1 for x > 1, we know that our solution lies between 0 and 1.
A starting value of 0 will lead to an undefined result which illustrates the importance of using a starting point close to the solution. For example, with an initial guess x0 = 0.5, the sequence given by Newton's method is:
{\displaystyle {\begin{matrix}x_{1}&=&x_{0}-{\dfrac {f(x_{0})}{f'(x_{0})}}&=&0.5-{\dfrac {\cos 0.5-0.5^{3}}{-\sin 0.5-3\times 0.5^{2}}}&=&1.112\,141\,637\,097\dots \\x_{2}&=&x_{1}-{\dfrac {f(x_{1})}{f'(x_{1})}}&=&\vdots &=&{\underline {0.}}909\,672\,693\,736\dots \\x_{3}&=&\vdots &=&\vdots &=&{\underline {0.86}}7\,263\,818\,209\dots \\x_{4}&=&\vdots &=&\vdots &=&{\underline {0.865\,47}}7\,135\,298\dots \\x_{5}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,1}}11\dots \\x_{6}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,102}}\dots \end{matrix}}}
The correct digits are underlined in the above example. In particular, x6 is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for x3) to 5 and 10, illustrating the quadratic convergence.
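The computation above can be reproduced with a few lines of Python:

```python
import math

f  = lambda x: math.cos(x) - x ** 3
df = lambda x: -math.sin(x) - 3 * x ** 2

x = 0.5
for _ in range(10):
    x -= f(x) / df(x)
# x converges to 0.865474033102...
```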
== Multidimensional formulations ==
=== Systems of equations ===
==== k variables, k functions ====
One may also use Newton's method to solve systems of k equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions
{\displaystyle f:\mathbb {R} ^{k}\to \mathbb {R} .}
This is equivalent to finding the zeroes of a single vector-valued function
{\displaystyle F:\mathbb {R} ^{k}\to \mathbb {R} ^{k}.}
In the formulation given above, the scalars xn are replaced by vectors xn and instead of dividing the function f(xn) by its derivative f′(xn) one instead has to left multiply the function F(xn) by the inverse of its k × k Jacobian matrix JF(xn). This results in the expression
{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-J_{F}(\mathbf {x} _{n})^{-1}F(\mathbf {x} _{n}).}
or, by solving the system of linear equations
{\displaystyle J_{F}(\mathbf {x} _{n})(\mathbf {x} _{n+1}-\mathbf {x} _{n})=-F(\mathbf {x} _{n})}
for the unknown xn + 1 − xn.
==== k variables, m equations, with m > k ====
The k-dimensional variant of Newton's method can be used to solve systems of greater than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix J+ = (JTJ)−1JT instead of the inverse of J. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information.
==== Example ====
For example, the following set of equations needs to be solved for the vector of points [x1, x2], given the vector of known values [2, 3].
{\displaystyle {\begin{array}{lcr}5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})&=\quad 2\\e^{2\ x_{1}-x_{2}}+4\ x_{2}&=\quad 3\end{array}}}
The function vector F(Xk), the Jacobian matrix J(Xk) for iteration k, and the vector of known values Y are defined below.
{\displaystyle {\begin{aligned}~&F(X_{k})~=~{\begin{bmatrix}{\begin{aligned}~&f_{1}(X_{k})\\~&f_{2}(X_{k})\end{aligned}}\end{bmatrix}}~=~{\begin{bmatrix}{\begin{aligned}~&5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})\\~&e^{2\ x_{1}-x_{2}}+4\ x_{2}\end{aligned}}\end{bmatrix}}_{k}\\~&J(X_{k})={\begin{bmatrix}~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{2}}}}~\\~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{2}}}}~\end{bmatrix}}_{k}~=~{\begin{bmatrix}{\begin{aligned}~&10\ x_{1}+x_{2}^{2}\ ,&&2\ x_{1}\ x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})\\~&2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4\end{aligned}}\end{bmatrix}}_{k}\\~&Y={\begin{bmatrix}~2~\\~3~\end{bmatrix}}\end{aligned}}}
Note that F(Xk) could have been rewritten to absorb Y, and thus eliminate Y from the equations. The equations to solve at each iteration are
{\displaystyle {\begin{aligned}{\begin{bmatrix}{\begin{aligned}~&~10\ x_{1}+x_{2}^{2}\ ,&&2x_{1}x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})~\\~&~2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4~\end{aligned}}\end{bmatrix}}_{k}{\begin{bmatrix}~c_{1}~\\~c_{2}~\end{bmatrix}}_{k+1}={\begin{bmatrix}~5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})-2~\\~e^{2\ x_{1}-x_{2}}+4\ x_{2}-3~\end{bmatrix}}_{k}\end{aligned}}}
and
{\displaystyle X_{k+1}~=~X_{k}-C_{k+1}}
The iterations should be repeated until
{\displaystyle \ {\Bigg [}\sum _{i=1}^{i=2}{\Bigl |}f(x_{i})_{k}-(y_{i})_{k}{\Bigr |}{\Bigg ]}<E\ ,}
where E is a value small enough to meet the application's accuracy requirements.
If the vector X0 is initially chosen to be [1, 1], that is, x1 = 1 and x2 = 1, and E is chosen to be 1 × 10−3, then the example converges after four iterations to a value of X4 = [0.567297, −0.309442].
==== Iterations ====
The following iterations were made during the course of the solution.
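The worked example can be reproduced in Python; the 2 × 2 linear solve is done by Cramer's rule, and the known values Y are absorbed into the residual F (all names are illustrative):

```python
import math

def F(x1, x2):
    """Residuals of the example system, with the known values absorbed."""
    return (5 * x1**2 + x1 * x2**2 + math.sin(2 * x2) ** 2 - 2,
            math.exp(2 * x1 - x2) + 4 * x2 - 3)

def J(x1, x2):
    """Jacobian of F as a 2x2 matrix (tuple of rows)."""
    e = math.exp(2 * x1 - x2)
    return ((10 * x1 + x2**2,
             2 * x1 * x2 + 4 * math.sin(2 * x2) * math.cos(2 * x2)),
            (2 * e, -e + 4))

x1, x2 = 1.0, 1.0
for _ in range(10):
    f1, f2 = F(x1, x2)
    (a, b), (c, d) = J(x1, x2)
    det = a * d - b * c
    # Solve J * C = F by Cramer's rule, then step X <- X - C.
    x1 -= (f1 * d - b * f2) / det
    x2 -= (a * f2 - f1 * c) / det
```

The iterates converge to approximately (0.567297, −0.309442), matching the result stated above.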
=== Complex functions ===
When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundaries of the basins of attraction are fractals.
In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example, if one uses a real initial condition to seek a root of x2 + 1, all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length.
Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3. Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method is certain to converge from at least one of them.
=== In a Banach space ===
Another generalization is Newton's method to find a root of a functional F defined in a Banach space. In this case the formulation is
{\displaystyle X_{n+1}=X_{n}-{\bigl (}F'(X_{n}){\bigr )}^{-1}F(X_{n}),\,}
where F′(Xn) is the Fréchet derivative computed at Xn. One needs the Fréchet derivative to be boundedly invertible at each Xn in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem.
==== Nash–Moser iteration ====
In the 1950s, John Nash developed a version of Newton's method to apply to the problem of constructing isometric embeddings of general Riemannian manifolds in Euclidean space. The loss of derivatives problem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction of smoothing operators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving an implicit function theorem for isometric embeddings. In the 1960s, Jürgen Moser showed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly in celestial mechanics. Since then, a number of mathematicians, including Mikhael Gromov and Richard Hamilton, have found generalized abstract versions of the Nash–Moser theory. In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certain Fréchet spaces.
== Modifications ==
=== Quasi-Newton methods ===
When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used.
=== Chebyshev's third-order method ===
Since higher-order Taylor expansions offer more accurate local approximations of a function f, it is reasonable to ask why Newton's method relies only on the first-order (tangent line) approximation. In the 19th century, the Russian mathematician Pafnuty Chebyshev explored this idea by developing a variant of Newton's method that used quadratic approximations, yielding cubic convergence.
=== Over p-adic numbers ===
In p-adic analysis, the standard method to show a polynomial equation in one variable has a p-adic root is Hensel's lemma, which uses the recursion from Newton's method on the p-adic numbers. Because of the more stable behavior of addition and multiplication in the p-adic numbers compared to the real numbers (specifically, the unit ball in the p-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line.
=== q-analog ===
Newton's method can be generalized with the q-analog of the usual derivative.
=== Modified Newton methods ===
==== Maehly's procedure ====
A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found N solutions of
f(x) = 0, then the next root can be found by applying Newton's method to the next equation:
{\displaystyle F(x)={\frac {f(x)}{\prod _{i=1}^{N}(x-x_{i})}}=0.}
This method is applied to obtain zeros of the Bessel function of the second kind.
==== Hirano's modified Newton method ====
Hirano's modified Newton method is a modification that preserves the convergence of Newton's method while avoiding instability. It was developed to solve complex polynomials.
==== Interval Newton's method ====
Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; see Wilkinson's polynomial).
Consider f ∈ C1(X), where X is a real interval, and suppose that we have an interval extension F′ of f′, meaning that F′ takes as input an interval Y ⊆ X and outputs an interval F′(Y) such that:
{\displaystyle {\begin{aligned}F'([y,y])&=\{f'(y)\}\\[5pt]F'(Y)&\supseteq \{f'(y)\mid y\in Y\}.\end{aligned}}}
We also assume that 0 ∉ F′(X), so in particular f has at most one root in X.
We then define the interval Newton operator by:
{\displaystyle N(Y)=m-{\frac {f(m)}{F'(Y)}}=\left\{\left.m-{\frac {f(m)}{z}}~\right|~z\in F'(Y)\right\}}
where m ∈ Y. Note that the hypothesis on F′ implies that N(Y) is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence:
{\displaystyle {\begin{aligned}X_{0}&=X\\X_{k+1}&=N(X_{k})\cap X_{k}.\end{aligned}}}
The mean value theorem ensures that if there is a root of f in Xk, then it is also in Xk + 1. Moreover, the hypothesis on F′ ensures that Xk + 1 is at most half the size of Xk when m is the midpoint of Y, so this sequence converges towards [x*, x*], where x* is the root of f in X.
If F′(X) strictly contains 0, the use of extended interval division produces a union of two intervals for N(X); multiple roots are therefore automatically separated and bounded.
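A minimal sketch of the interval iteration for f(x) = x² − 2 on X = [1, 2], with the interval extension F′(Y) = [2·lo, 2·hi] written out by hand. Since there is no outward rounding, this is illustrative rather than a rigorous enclosure:

```python
def interval_newton_sqrt2(lo, hi, iterations=6):
    """Interval Newton for f(x) = x^2 - 2 on [lo, hi]; assumes 0 not in [2*lo, 2*hi]."""
    for _ in range(iterations):
        m = 0.5 * (lo + hi)
        fm = m * m - 2.0
        dlo, dhi = 2.0 * lo, 2.0 * hi       # interval extension of f'(x) = 2x
        # N(Y) = m - f(m)/F'(Y); z -> m - fm/z is monotone over a positive interval,
        # so the extremes occur at the endpoints
        candidates = [m - fm / dlo, m - fm / dhi]
        nlo, nhi = min(candidates), max(candidates)
        # intersect with the previous interval: X_{k+1} = N(X_k) ∩ X_k
        lo, hi = max(lo, nlo), min(hi, nhi)
    return lo, hi

lo, hi = interval_newton_sqrt2(1.0, 2.0)
```

The width of the enclosure shrinks rapidly; √2 remains inside it at every step (up to floating-point rounding).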
== Applications ==
=== Minimization and maximization problems ===
Newton's method can be used to find a minimum or maximum of a function f(x). The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:
{\displaystyle x_{n+1}=x_{n}-{\frac {f'(x_{n})}{f''(x_{n})}}.}
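For instance, applying this iteration to f(x) = x⁴ − 3x² (an illustrative choice; f′(x) = 4x³ − 6x, f″(x) = 12x² − 6) locates the local minimum at x = √(3/2):

```python
def newton_minimize(f_prime, f_double_prime, x0, tol=1e-12, max_iter=100):
    """Apply Newton's method to f' to find a stationary point of f."""
    x = x0
    for _ in range(max_iter):
        step = f_prime(x) / f_double_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^4 - 3x^2 has a local minimum at x = sqrt(3/2)
x_min = newton_minimize(lambda x: 4*x**3 - 6*x, lambda x: 12*x**2 - 6, x0=1.0)
```

As always with this approach, the iteration only finds a stationary point; the sign of f″ there distinguishes a minimum from a maximum.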
=== Multiplicative inverses of numbers and power series ===
An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number a, using only multiplication and subtraction, that is to say the number x such that 1/x = a. We can rephrase that as finding the zero of f(x) = 1/x − a. We have f′(x) = −1/x².
Newton's iteration is
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}+{\frac {{\frac {1}{x_{n}}}-a}{\frac {1}{x_{n}^{2}}}}=x_{n}(2-ax_{n}).}
Therefore, Newton's iteration needs only two multiplications and one subtraction.
This method is also very efficient to compute the multiplicative inverse of a power series.
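The iteration x_{n+1} = x_n(2 − a·x_n) can be sketched directly; for convergence the initial guess must lie in (0, 2/a):

```python
def reciprocal(a, x0, iterations=10):
    """Newton-Raphson division: approximate 1/a using only multiply and subtract."""
    x = x0
    for _ in range(iterations):
        x = x * (2.0 - a * x)  # the relative error is squared at every step
    return x

approx = reciprocal(3.0, 0.3)
```

Because the error satisfies e_{n+1} = e_n², a handful of iterations suffices for full double precision.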
=== Solving transcendental equations ===
Many transcendental equations can be solved to arbitrary precision by using Newton's method. For example, finding the argument of a cumulative distribution function, such as that of a Normal distribution, that matches a known probability generally involves integral functions with no closed-form solution. However, the derivatives needed to solve such equations numerically with Newton's method are generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse Normal cumulative distribution.
=== Numerical verification for solutions of nonlinear equations ===
A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates.
== Code ==
The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime.
The initial guess will be x0 = 1 and the function will be f(x) = x² − 2 so that f′(x) = 2x.
Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if f′(xn) ≈ 0, since otherwise a large amount of error could be introduced.
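The listing itself is absent from this copy of the article; the following sketch matches the description above (same names x0, x1, epsilon; the exact original listing may differ):

```python
def newtons_method(f, f_prime, x0, epsilon=1e-10, tolerance=1e-10, max_iterations=100):
    """Newton's method; returns an approximate root of f, or None on failure."""
    for _ in range(max_iterations):
        y = f(x0)
        yprime = f_prime(x0)
        if abs(yprime) < epsilon:
            # denominator too close to zero: the division would amplify error
            return None
        x1 = x0 - y / yprime  # Newton's iteration
        if abs(x1 - x0) <= tolerance:
            return x1  # converged
        x0 = x1
    return None  # did not converge within max_iterations

root = newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

For the stated example the iteration converges to √2 in a handful of steps.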
== See also ==
== Notes ==
== References ==
Gil, A.; Segura, J.; Temme, N. M. (2007). Numerical methods for special functions. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-634-4.
Süli, Endre; Mayers, David (2003). An Introduction to Numerical Analysis. Cambridge University Press. ISBN 0-521-00794-1.
== Further reading ==
Kendall E. Atkinson: An Introduction to Numerical Analysis, John Wiley & Sons Inc., ISBN 0-471-62489-6 (1989).
Tjalling J. Ypma: "Historical development of the Newton–Raphson method", SIAM Review, vol. 37, no. 4 (1995), pp. 531–551. doi:10.1137/1037125.
Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical optimization: Theoretical and practical aspects. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. pp. xiv+490. doi:10.1007/978-3-540-35447-5. ISBN 3-540-35445-X. MR 2265882.
P. Deuflhard: Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer Berlin (Series in Computational Mathematics, Vol. 35) (2004). ISBN 3-540-21099-7.
C. T. Kelley: Solving Nonlinear Equations with Newton's Method, SIAM (Fundamentals of Algorithms, 1) (2003). ISBN 0-89871-546-6.
J. M. Ortega, and W. C. Rheinboldt: Iterative Solution of Nonlinear Equations in Several Variables, SIAM (Classics in Applied Mathematics) (2000). ISBN 0-89871-461-3.
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Chapter 9. Root Finding and Nonlinear Sets of Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge Univ. Press. ISBN 978-0-521-88068-8. See especially Sections 9.4, 9.6, and 9.7.
Avriel, Mordecai (1976). Nonlinear Programming: Analysis and Methods. Prentice Hall. pp. 216–221. ISBN 0-13-623603-0.
== External links ==
"Newton method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Newton's Method". MathWorld.
Newton's method, Citizendium.
Mathews, J., The Accelerated and Modified Newton Methods, Course notes.
Wu, X., Roots of Equations, Course notes.
In mathematics and, more specifically, in theory of equations, the principal form of an irreducible polynomial of degree at least three is a polynomial of the same degree n without terms of degrees n−1 and n−2, such that each root of either polynomial is a rational function of a root of the other polynomial.
The principal form of a polynomial can be found by applying a suitable Tschirnhaus transformation to the given polynomial.
== Definition ==
Let
f
(
x
)
=
x
n
+
a
1
x
n
−
1
+
⋯
+
a
n
−
1
x
+
a
n
{\displaystyle f(x)=x^{n}+a_{1}x^{n-1}+\cdots +a_{n-1}x+a_{n}}
be an irreducible polynomial of degree at least three.
Its principal form is a polynomial
{\displaystyle g(y)=y^{n}+b_{3}y^{n-3}+\cdots +b_{n-1}y+b_{n},}
together with a Tschirnhaus transformation of degree two
{\displaystyle \varphi (x)=x^{2}+\alpha x+\beta }
such that, if r is a root of f, then φ(r) is a root of g.
Expressing that g has no terms in y^(n−1) and y^(n−2) leads to a system of two equations in α and β, one of degree one and one of degree two. In general, this system has two solutions, giving two principal forms involving a square root. One passes from one principal form to the other by changing the sign of the square root.
== Cubic case ==
=== Tschirnhaus transformation with three clues ===
A Tschirnhaus transformation turns one polynomial into another polynomial of the same degree in a new variable. The relation between the new variable and the old one is here called the Tschirnhaus key. This key is itself a polynomial whose coefficients must satisfy special criteria; to determine them, a separate system of equations in several unknowns has to be solved. The individual equations of that system are the clues collected in the tables of the following sections:
This is the given cubic equation:
{\displaystyle x^{3}-ax^{2}+bx-c=0}
The following quadratic equation system has to be solved:
Solving it yields the Tschirnhaus transformation:
{\displaystyle (x^{2}+ux+v)^{3}-w=0}
The solutions of this system, that is, the expressions of u, v and w in terms of a, b and c, can be found by substitution. For instance, the first of the three equations can be solved for the unknown v, and the result inserted into the second equation, which produces a quadratic equation in u alone. Once one unknown is determined, the remaining unknowns follow by back-substitution. With all coefficients known, the Tschirnhaus key and the polynomial resulting from the transformation can be written down, completing the Tschirnhaus transformation.
=== Cubic calculation examples ===
The quadratic radical components of the coefficients are identical to the square root terms appearing in Cardano's formula; therefore the cubic Tschirnhaus transformation can even be used to derive the general Cardano formula itself.
Plastic constant:
Supergolden constant:
Tribonacci constant:
=== Cardano formula ===
Solving the mentioned system of three clues directly leads to the Cardano formula for this case:
{\displaystyle x^{3}-ax^{2}+bx-c=0}
{\displaystyle x={\tfrac {1}{3}}a+{\tfrac {1}{3}}{\bigl [}a^{3}-{\tfrac {9}{2}}ab+{\tfrac {27}{2}}c-{\sqrt {{\bigl (}a^{3}-{\tfrac {9}{2}}ab+{\tfrac {27}{2}}c{\bigr )}^{2}-{\bigl (}a^{2}-3b{\bigr )}^{3}}}\,{\bigr ]}^{1/3}}
{\displaystyle +{\tfrac {1}{3}}{\bigl [}a^{3}-{\tfrac {9}{2}}ab+{\tfrac {27}{2}}c+{\sqrt {{\bigl (}a^{3}-{\tfrac {9}{2}}ab+{\tfrac {27}{2}}c{\bigr )}^{2}-{\bigl (}a^{2}-3b{\bigr )}^{3}}}\,{\bigr ]}^{1/3}}
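The formula can be checked numerically. The sketch below evaluates it with principal complex branches (via `cmath`); note that the branch choice selects one particular root, and for the sample cubic below it yields the real root 3:

```python
import cmath

def cardano_root(a, b, c):
    """One root of x^3 - a*x^2 + b*x - c = 0, following the formula above."""
    A = a**3 - 4.5 * a * b + 13.5 * c   # a^3 - (9/2)ab + (27/2)c
    B = a**2 - 3.0 * b                  # a^2 - 3b
    s = cmath.sqrt(A * A - B**3)
    # sum of the two cube-root terms, principal branches
    return a / 3 + ((A - s) ** (1 / 3) + (A + s) ** (1 / 3)) / 3

x = cardano_root(6.0, 11.0, 6.0)  # x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
```

Here A² − B³ = −27 < 0 (the casus irreducibilis), so the intermediate values are complex even though the root returned is real.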
== Quartic case ==
=== Tschirnhaus transformation with four clues ===
This is the given quartic equation:
{\displaystyle x^{4}-ax^{3}+bx^{2}-cx+d=0}
Now the following quadratic equation system has to be solved:
Solving it yields the Tschirnhaus transformation:
{\displaystyle (x^{2}+tx+u)^{4}-v(x^{2}+tx+u)+w=0}
=== Quartic calculation examples ===
The Tschirnhaus transformation of the equation for the Tetranacci constant contains only rational coefficients:
{\displaystyle x^{4}-x^{3}-x^{2}-x-1=0}
{\displaystyle y=x^{2}-3x}
{\displaystyle y^{4}-11y-41=0}
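This transformation can be verified numerically: computing the Tetranacci constant with Newton's method and applying the key y = x² − 3x should annihilate y⁴ − 11y − 41. A quick sketch:

```python
# Tetranacci constant: the real root of x^4 - x^3 - x^2 - x - 1 near 1.93
x = 1.9
for _ in range(50):
    f = x**4 - x**3 - x**2 - x - 1
    fp = 4 * x**3 - 3 * x**2 - 2 * x - 1
    x -= f / fp  # Newton's iteration

y = x * x - 3 * x              # the quadratic Tschirnhaus key from above
residual = y**4 - 11 * y - 41  # should vanish up to rounding
```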
In this way the following expression for the Tetranacci constant can be derived:
{\displaystyle x^{2}-3x=({\tfrac {41}{3}})^{1/4}{\sqrt {\sinh {\bigl [}{\tfrac {1}{3}}\operatorname {arsinh} ({\tfrac {363}{26896}}{\sqrt {123}}){\bigr ]}}}-}
{\displaystyle -({\tfrac {41}{3}})^{1/4}{\bigl \{}{\tfrac {11}{4}}({\tfrac {3}{41}})^{3/4}{\sqrt {\operatorname {csch} {\bigl [}{\tfrac {1}{3}}\operatorname {arsinh} ({\tfrac {363}{26896}}{\sqrt {123}}){\bigr ]}}}-\sinh {\bigl [}{\tfrac {1}{3}}\operatorname {arsinh} ({\tfrac {363}{26896}}{\sqrt {123}}){\bigr ]}{\bigr \}}^{1/2}}
The following calculation example, however, does contain a square root in its Tschirnhaus transformation:
{\displaystyle x^{4}+x^{3}+x^{2}-x-1=0}
{\displaystyle y=x^{2}+{\tfrac {1}{5}}(19+4{\sqrt {21}})x+{\tfrac {1}{5}}(6+{\sqrt {21}})}
{\displaystyle y^{4}-{\tfrac {1}{125}}(38267+8272{\sqrt {21}})y-{\tfrac {1}{625}}(101277{\sqrt {21}}+463072)=0}
=== Special form of the quartic ===
In the following we solve a special equation pattern that is easily solvable by using elliptic functions:
{\displaystyle x^{4}-6x^{2}-8{\sqrt {S^{2}+1}}\,x-3=0}
{\displaystyle Q=q{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {arsinh} (S){\bigr ]}{\bigr \}}=q{\bigl [}S\div ({\sqrt {S^{2}+1}}+1){\bigr ]}}
{\displaystyle x={\frac {3\,\vartheta _{01}(Q^{3})^{2}}{\vartheta _{01}(Q)^{2}}}}
The elliptic nome and the Jacobi theta function used here are defined as follows:
{\displaystyle q(\varepsilon )=\exp {\bigl [}-\pi K({\sqrt {1-\varepsilon ^{2}}})\div K(\varepsilon ){\bigr ]}}
{\displaystyle \vartheta _{01}(r)=\sum _{k=-\infty }^{\infty }(-1)^{k}r^{k^{2}}=\prod _{n=1}^{\infty }(1-r^{2n})(1-r^{2n-1})^{2}}
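Both representations of ϑ01 are easy to evaluate numerically, and checking them against each other is a useful sanity test. A sketch (the truncation limits are arbitrary):

```python
def theta01_sum(r, terms=40):
    """Series form: sum over k of (-1)^k * r^(k^2)."""
    return sum((-1) ** k * r ** (k * k) for k in range(-terms, terms + 1))

def theta01_product(r, terms=40):
    """Product form: prod over n of (1 - r^(2n)) * (1 - r^(2n-1))^2."""
    p = 1.0
    for n in range(1, terms + 1):
        p *= (1 - r ** (2 * n)) * (1 - r ** (2 * n - 1)) ** 2
    return p

s, p = theta01_sum(0.3), theta01_product(0.3)
```

For |r| < 1 both truncations converge extremely fast, since the series terms decay like r^(k²).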
Computation rule for the mentioned theta quotient:
{\displaystyle {\frac {3\,\vartheta _{01}\{q[\kappa ^{3}\div ({\sqrt {\kappa ^{6}+1}}+1)]^{3}\}^{2}}{\vartheta _{01}\{q[\kappa ^{3}\div ({\sqrt {\kappa ^{6}+1}}+1)]\}^{2}}}={\sqrt {2{\sqrt {\kappa ^{4}-\kappa ^{2}+1}}-\kappa ^{2}+2}}+{\sqrt {\kappa ^{2}+1}}}
This Jacobi theta function is used to solve that equation.
Now we create a Tschirnhaus transformation on that:
{\displaystyle x^{4}-6x^{2}-8{\sqrt {S^{2}+1}}\,x-3=0}
{\displaystyle y=x^{2}-2({\sqrt {S^{2}+1}}-S)x-3}
{\displaystyle y^{4}+64\,S^{2}(4S^{2}+1-4S{\sqrt {S^{2}+1}})y-384\,S^{3}({\sqrt {S^{2}+1}}-S)=0}
=== Elliptic solving of principal quartics ===
Given principal quartic equation:
{\displaystyle x^{4}+\psi x-\omega =0}
If this equation pattern is given, the modulus tangent duplication value S can be determined in this way:
{\displaystyle \psi ^{4}{\bigl [}384\,S^{3}({\sqrt {S^{2}+1}}-S){\bigr ]}^{3}=\omega ^{3}{\bigl [}64\,S^{2}(4S^{2}+1-4S{\sqrt {S^{2}+1}}){\bigr ]}^{4}}
The solution of this formula is always expressible in biquadratic radicals of ψ and ω, which makes it a useful tool for solving principal quartic equations.
{\displaystyle Q=\exp {\bigl \langle }-\pi K{\bigl \{}\operatorname {sech} {\bigl [}{\tfrac {1}{2}}\operatorname {arsinh} (S){\bigr ]}{\bigr \}}\div K{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {arsinh} (S){\bigr ]}{\bigr \}}{\bigr \rangle }=}
{\displaystyle =q{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {arsinh} (S){\bigr ]}{\bigr \}}=q{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {artanh} (S\div {\sqrt {S^{2}+1}}){\bigr ]}{\bigr \}}}
The equation can then be solved as follows:
{\displaystyle x={\frac {\omega [64\,S^{2}(4S^{2}+1-4S{\sqrt {S^{2}+1}})]}{\psi [384\,S^{3}({\sqrt {S^{2}+1}}-S)]}}{\biggl [}{\frac {9\,\vartheta _{01}(Q^{3})^{4}}{\vartheta _{01}(Q)^{4}}}-2({\sqrt {S^{2}+1}}-S){\frac {3\,\vartheta _{01}(Q^{3})^{2}}{\vartheta _{01}(Q)^{2}}}-3{\biggr ]}}
=== Calculation examples with elliptic solutions ===
This solving pattern shall now be used to solve some principal quartic equations:
First calculation example:
{\displaystyle x^{4}+x-1=0}
{\displaystyle Q=q{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {artanh} ({\tfrac {31}{100}}+{\tfrac {1}{300}}{\sqrt {849}}-{\tfrac {1}{300}}{\sqrt {386{\sqrt {849}}-1902}}){\bigr ]}{\bigr \}}}
{\displaystyle x={\frac {4}{{\sqrt {2{\sqrt {849}}+18}}-6}}{\biggl [}{\frac {9\,\vartheta _{01}(Q^{3})^{4}}{\vartheta _{01}(Q)^{4}}}-{\frac {1}{4}}{\sqrt {32+2{\sqrt {6{\sqrt {849}}-54}}}}\,{\frac {3\,\vartheta _{01}(Q^{3})^{2}}{\vartheta _{01}(Q)^{2}}}-3{\biggr ]}}
Second calculation example:
{\displaystyle x^{4}+2x-1=0}
{\displaystyle Q=q{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {artanh} ({\tfrac {1}{10}}+{\tfrac {1}{30}}{\sqrt {129}}-{\tfrac {1}{30}}{\sqrt {26{\sqrt {129}}-102}}){\bigr ]}{\bigr \}}}
{\displaystyle x={\frac {2}{{\sqrt {2{\sqrt {129}}+18}}-6}}{\biggl [}{\frac {9\,\vartheta _{01}(Q^{3})^{4}}{\vartheta _{01}(Q)^{4}}}-{\frac {1}{2}}{\sqrt {8+2{\sqrt {6{\sqrt {129}}-54}}}}\,{\frac {3\,\vartheta _{01}(Q^{3})^{2}}{\vartheta _{01}(Q)^{2}}}-3{\biggr ]}}
Third calculation example:
{\displaystyle x^{4}+5x-3=0}
{\displaystyle Q=q{\bigl \{}\tanh {\bigl [}{\tfrac {1}{2}}\operatorname {artanh} ({\tfrac {239}{5092}}+{\tfrac {75}{5092}}{\sqrt {881}}-{\tfrac {5}{5092}}{\sqrt {11618{\sqrt {881}}-112750}}){\bigr ]}{\bigr \}}}
{\displaystyle x={\frac {4}{{\sqrt {2{\sqrt {881}}+50}}-10}}{\biggl [}{\frac {9\,\vartheta _{01}(Q^{3})^{4}}{\vartheta _{01}(Q)^{4}}}-{\frac {1}{4}}{\sqrt {32+10{\sqrt {2{\sqrt {881}}-50}}}}\,{\frac {3\,\vartheta _{01}(Q^{3})^{2}}{\vartheta _{01}(Q)^{2}}}-3{\biggr ]}}
== Quintic case ==
=== Synthesis advice for the quadratic Tschirnhaus key ===
This is the given quintic equation:
{\displaystyle x^{5}-ax^{4}+bx^{3}-cx^{2}+dx-e=0}
That quadratic equation system leads to the coefficients of the quadratic Tschirnhaus key:
The Tschirnhaus transformation is then carried out by polynomial division:
{\displaystyle (x^{2}+sx+t)^{5}-u(x^{2}+sx+t)^{2}+v(x^{2}+sx+t)-w=0}
=== Calculation examples ===
This is the first example:
{\displaystyle x^{5}-x^{4}-x^{2}-1=0}
{\displaystyle y=x^{2}-{\tfrac {1}{4}}(19-{\sqrt {265}})x-{\tfrac {1}{20}}({\sqrt {265}}-15)}
{\displaystyle y^{5}+{\tfrac {1}{80}}(24455-1501{\sqrt {265}})y^{2}-{\tfrac {1}{160}}(5789{\sqrt {265}}-93879)y-{\tfrac {1}{4000}}(5393003{\sqrt {265}}-87785025)=0}
And this is the second example:
{\displaystyle x^{5}+x^{4}+x^{3}+x^{2}-1=0}
{\displaystyle y=x^{2}+{\tfrac {1}{3}}({\sqrt {30}}-3)x+{\tfrac {1}{15}}{\sqrt {30}}}
{\displaystyle y^{5}-{\tfrac {1}{45}}(465-61{\sqrt {30}})y^{2}+{\tfrac {2}{45}}(1616-289{\sqrt {30}})y-{\tfrac {1}{1125}}(33758{\sqrt {30}}-183825)=0}
=== Solving the principal quintic via Adamchik and Jeffrey transformation ===
The mathematicians Victor Adamchik and David Jeffrey showed how to solve every principal quintic equation in their paper Polynomial Transformations of Tschirnhaus, Bring and Jerrard. They solved the principal form by transforming it into the Bring–Jerrard form. Their method relies on the construction of a quartic Tschirnhaus transformation key: a polynomial in the unknown variable y of the given principal equation that produces the unknown variable z of the transformed Bring–Jerrard equation.
To construct this Tschirnhaus key, they separated out the coefficient of the key's linear term, obtaining a system in which all other coefficients are expressible in quadratic radicals, while the linear-term coefficient itself requires solving only one further cubic equation.
In their essay they constructed the quartic Tschirnhaus key in this way:
To carry out the transformation, Adamchik and Jeffrey constructed an equation system that generates the coefficients of the cubic, quadratic and absolute terms of the Tschirnhaus key. Following their paper, these coefficients are found by combining the expressions for the quartic and cubic terms of the final Bring–Jerrard form, which must vanish, since that is how the Bring–Jerrard form is defined.
Combining these two vanishing coefficients gives an equation system for the unknown Tschirnhaus key coefficients, which can be simplified by substituting the equations of the paper into each other. This leads to the following simplified system in two unknown key coefficients:
Following Adamchik and Jeffrey, this two-unknown system is obtained by substituting the vanishing quartic coefficient of the Bring–Jerrard form into the vanishing cubic coefficient and eliminating all terms in the linear and absolute key coefficients, in other words, all γ and δ terms. This determines the red cubic-term coefficient and the green quadratic-term coefficient of the Tschirnhaus key. The vanishing quartic coefficient of the Bring–Jerrard form is exactly:
{\displaystyle 3u{\color {crimson}\alpha }+5{\color {blue}\delta }-4v=0}
Solving the zero-valued quartic coefficient of the Bring–Jerrard final form leads directly to the blue-colored absolute term coefficient of the Tschirnhaus key.
{\displaystyle {\color {blue}\delta }={\frac {4}{5}}v-{\frac {3}{5}}u{\color {crimson}\alpha }}
To obtain the orange-colored linear term coefficient of the Tschirnhaus key, the zero-valued quadratic coefficient of the Bring–Jerrard final form must be solved for that coefficient, which is done by solving this cubic equation:
{\displaystyle u{\color {orange}\gamma }^{3}+(5w{\color {crimson}\alpha }-4v{\color {green}\beta }+3u^{2}){\color {orange}\gamma }^{2}+(uv{\color {crimson}\alpha }^{2}+6uw{\color {crimson}\alpha }+5w{\color {green}\beta }^{2}-8uv{\color {green}\beta }+3u^{3}+vw){\color {orange}\gamma }+}
{\displaystyle +u^{3}{\color {crimson}\alpha }^{3}+vw{\color {crimson}\alpha }^{3}-2u^{2}{\color {green}\beta }^{3}+2uv{\color {crimson}\alpha }{\color {green}\beta }^{2}-2u^{2}{\color {crimson}\alpha }^{2}{\color {blue}\delta }+10{\color {blue}\delta }^{3}-}
{\displaystyle -4u^{2}v{\color {crimson}\alpha }^{2}+vw{\color {crimson}\alpha }{\color {green}\beta }+3uv{\color {crimson}\alpha }{\color {blue}\delta }+2u^{2}w{\color {crimson}\alpha }+2u^{2}v{\color {green}\beta }-v^{2}{\color {blue}\delta }+u^{4}-4v^{3}+10uvw=0}
The solution of that system is then entered into the key shown above to obtain the final form:
{\displaystyle z=y^{4}+{\color {crimson}\alpha }y^{3}+{\color {green}\beta }y^{2}+{\color {orange}\gamma }y+{\color {blue}\delta }}
{\displaystyle z^{5}+\lambda z-\mu =0}
The coefficients λ and μ of the Bring–Jerrard final form can be found by dividing z^5 by the initial principal polynomial and reading off the remainder. The result is a Bring–Jerrard equation that contains only the quintic, the linear and the absolute terms.
=== Examples of solving the principal form ===
By the Abel–Ruffini theorem, the following equations cannot be solved in elementary expressions, but they can be reduced to the Bring–Jerrard form using only cubic radicals, as demonstrated here. For the given principal quintics, the coefficients of the cubic, quadratic and absolute terms of the quartic Tschirnhaus key are determined following the pattern shown above. A polynomial division of the fifth power of the quartic Tschirnhaus key, followed by analysis of the remainder, then determines the remaining coefficients. In this way the solutions of the following principal quintic equations can be computed:
Here is a further example of that algorithm:
=== Clues for creating the Moduli and Nomes ===
This Bring–Jerrard equation can be solved by an elliptic Jacobi theta quotient that contains the fifth powers and the fifth roots of the corresponding elliptic nome in the theta function terms.
To do this, the following elliptic modulus (numeric eccentricity), its Pythagorean counterpart and the corresponding elliptic nome should be used in relation to λ and μ, following the paper Sulla risoluzione delle equazioni del quinto grado by Charles Hermite and Francesco Brioschi, specifically the recipe on page 258:
{\displaystyle f={\frac {5\mu }{4\lambda }}{\bigl (}{\frac {5}{\lambda }}{\bigr )}^{1/4}}
These are the elliptic moduli and thus the numeric eccentricities:
The abbreviations ctlh and tlh denote the hyperbolic lemniscatic functions, and aclh denotes the hyperbolic lemniscate areacosine.
== Literature ==
Victor Adamchik, David Jeffrey: "Polynomial Transformations of Tschirnhaus, Bring and Jerrard", ACM SIGSAM Bulletin, Vol. 37, No. 3, September 2003
F. Brioschi: "Sulla risoluzione delle equazioni del quinto grado: Hermite — Sur la résolution de l'équation du cinquième degré, Comptes rendus, N. 11, Mars 1858". 1 December 1858. doi:10.1007/bf03197334
Bruce and King, Beyond the Quartic Equation, Birkhäuser, 1996.
== References ==
In numerical analysis, the Weierstrass method or Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations. In other words, the method can be used to solve numerically the equation f(x)=0, where f is a given polynomial, which can be taken to be scaled so that the leading coefficient is 1.
== Explanation ==
This explanation considers equations of degree four. It is easily generalized to other degrees.
Let the polynomial f be defined by
{\displaystyle f(x)=x^{4}+ax^{3}+bx^{2}+cx+d}
for all x. The known numbers a, b, c, d are the coefficients.
Let the (potentially complex) numbers P, Q, R, S be the roots of this polynomial f. Then
{\displaystyle f(x)=(x-P)(x-Q)(x-R)(x-S)}
for all x. One can isolate the value P from this equation:
{\displaystyle P=x-{\frac {f(x)}{(x-Q)(x-R)(x-S)}}.}
So if used as a fixed-point iteration
{\displaystyle x_{1}:=x_{0}-{\frac {f(x_{0})}{(x_{0}-Q)(x_{0}-R)(x_{0}-S)}},}
it is strongly stable in that every initial point x0 ≠ Q, R, S delivers after one iteration the root P = x1. Furthermore, if one replaces the zeros Q, R and S by approximations q ≈ Q, r ≈ R, s ≈ S, such that q, r, s are not equal to P, then P is still a fixed point of the perturbed fixed-point iteration
{\displaystyle x_{k+1}:=x_{k}-{\frac {f(x_{k})}{(x_{k}-q)(x_{k}-r)(x_{k}-s)}},}
since
{\displaystyle P-{\frac {f(P)}{(P-q)(P-r)(P-s)}}=P-0=P.}
Note that the denominator is still different from zero. This fixed-point iteration is a contraction mapping for x around P.
The clue to the method now is to combine the fixed-point iteration for P with similar iterations for Q, R, S into a simultaneous iteration for all roots.
Initialize p, q, r, s:
p0 := (0.4 + 0.9i)^0,
q0 := (0.4 + 0.9i)^1,
r0 := (0.4 + 0.9i)^2,
s0 := (0.4 + 0.9i)^3.
There is nothing special about choosing 0.4 + 0.9i except that it is neither a real number nor a root of unity.
Make the substitutions for n = 1, 2, 3, ...:
{\displaystyle p_{n}=p_{n-1}-{\frac {f(p_{n-1})}{(p_{n-1}-q_{n-1})(p_{n-1}-r_{n-1})(p_{n-1}-s_{n-1})}},}
{\displaystyle q_{n}=q_{n-1}-{\frac {f(q_{n-1})}{(q_{n-1}-p_{n})(q_{n-1}-r_{n-1})(q_{n-1}-s_{n-1})}},}
{\displaystyle r_{n}=r_{n-1}-{\frac {f(r_{n-1})}{(r_{n-1}-p_{n})(r_{n-1}-q_{n})(r_{n-1}-s_{n-1})}},}
{\displaystyle s_{n}=s_{n-1}-{\frac {f(s_{n-1})}{(s_{n-1}-p_{n})(s_{n-1}-q_{n})(s_{n-1}-r_{n})}}.}
Re-iterate until the numbers p, q, r, s essentially stop changing relative to the desired precision. They then have the values P, Q, R, S in some order and in the chosen precision. So the problem is solved.
Note that complex number arithmetic must be used, and that the roots are found simultaneously rather than one at a time.
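A compact sketch of the method for an arbitrary monic polynomial (Gauss–Seidel style: each updated root is used immediately within the same sweep; names and tolerances are illustrative):

```python
def durand_kerner(coeffs, max_iter=100, tol=1e-12):
    """Find all roots of a monic polynomial simultaneously.

    coeffs: [a_{n-1}, ..., a_0] for x^n + a_{n-1} x^{n-1} + ... + a_0.
    """
    n = len(coeffs)

    def f(x):
        # Horner evaluation of the monic polynomial
        result = 1.0
        for c in coeffs:
            result = result * x + c
        return result

    # standard initialization: powers of 0.4 + 0.9i
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        converged = True
        for i in range(n):
            denom = 1.0
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            delta = f(roots[i]) / denom
            roots[i] -= delta  # updated value is reused for the remaining i's
            if abs(delta) > tol:
                converged = False
        if converged:
            break
    return roots

roots = durand_kerner([-3.0, 3.0, -5.0])  # x^3 - 3x^2 + 3x - 5 = (x-1)^3 - 4
```

The sample polynomial is the one solved in the Example section below; its real root is 1 + ∛4 and the three roots sum to 3.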
== Variations ==
This iteration procedure, like the Gauss–Seidel method for linear equations, computes one number at a time based on the already computed numbers. A variant of this procedure, like the Jacobi method, computes a vector of root approximations at a time. Both variants are effective root-finding algorithms.
One could also choose the initial values for p, q, r, s by some other procedure, even randomly, but in a way that
they are inside some not-too-large circle containing also the roots of f(x), e.g. the circle around the origin with radius
{\displaystyle 1+\max {\big (}|a|,|b|,|c|,|d|{\big )}}
, (where 1, a, b, c, d are the coefficients of f(x))
and that
they are not too close to each other,
which may increasingly become a concern as the degree of the polynomial increases.
If the coefficients are real and the polynomial has odd degree, then it must have at least one real root. To find this, use a real value of p0 as the initial guess and make q0 and r0, etc., complex conjugate pairs. Then the iteration will preserve these properties; that is, pn will always be real, and qn and rn, etc., will always be conjugate. In this way, the pn will converge to a real root P. Alternatively, make all of the initial guesses real; they will remain so.
== Example ==
This example is from the reference Jacoby (1992). The equation solved is x³ − 3x² + 3x − 5 = 0. The first 4 iterations move p, q, r seemingly chaotically, but then the roots are located to 1 decimal. After iteration number 5 we have 4 correct decimals, and iteration number 6 confirms that the computed roots are fixed. This general behaviour is characteristic of the method. Also notice that, in this example, the roots are used as soon as they are computed in each iteration; that is, the computation of each column uses the values from the previously computed columns.
Note that the equation has one real root and one pair of complex conjugate roots, and that the sum of the roots is 3.
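The iteration on this example can be sketched in a few lines of Python. This is an illustrative implementation, not taken from the reference: the starting values, tolerance, and iteration cap are arbitrary choices, and it uses the simultaneous (Jacobi-like) update from the Variations section rather than the in-place update of the table above.

```python
# Durand–Kerner sketch for the example equation x^3 - 3x^2 + 3x - 5 = 0.
# Each sweep applies the correction z_k <- z_k - f(z_k) / prod_{j != k} (z_k - z_j).
def durand_kerner(coeffs, tol=1e-12, max_iter=500):
    """coeffs lists [c_{n-1}, ..., c_0] of a monic polynomial of degree n."""
    n = len(coeffs)

    def f(x):
        y = 1.0  # leading coefficient of the monic polynomial
        for c in coeffs:
            y = y * x + c
        return y

    # starting values: powers of a number that is neither real nor a root of unity
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        new_z = []
        for k in range(n):
            w = f(z[k])
            for j in range(n):
                if j != k:
                    w /= z[k] - z[j]
            new_z.append(z[k] - w)
        done = max(abs(a - b) for a, b in zip(new_z, z)) < tol
        z = new_z
        if done:
            break
    return z

roots = durand_kerner([-3.0, 3.0, -5.0])  # x^3 - 3x^2 + 3x - 5
```

As stated above, the computed roots consist of one (numerically) real root and one complex conjugate pair, and their sum is 3.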
== Derivation of the method via Newton's method ==
For every n-tuple of complex numbers, there is exactly one monic polynomial of degree n that has them as its zeros (keeping multiplicities). This polynomial is given by multiplying all the corresponding linear factors, that is
{\displaystyle g_{\vec {z}}(X)=(X-z_{1})\cdots (X-z_{n}).}
This polynomial has coefficients that depend on the prescribed zeros,
{\displaystyle g_{\vec {z}}(X)=X^{n}+g_{n-1}({\vec {z}})X^{n-1}+\cdots +g_{0}({\vec {z}}).}
Those coefficients are, up to a sign, the elementary symmetric polynomials
{\displaystyle \alpha _{1}({\vec {z}}),\dots ,\alpha _{n}({\vec {z}})}
of degrees 1,...,n.
To find all the roots of a given polynomial
{\displaystyle f(X)=X^{n}+c_{n-1}X^{n-1}+\cdots +c_{0}}
with coefficient vector
{\displaystyle (c_{n-1},\dots ,c_{0})}
simultaneously is now the same as finding a solution vector to Vieta's system
{\displaystyle {\begin{matrix}c_{0}&=&g_{0}({\vec {z}})&=&(-1)^{n}\alpha _{n}({\vec {z}})&=&(-1)^{n}z_{1}\cdots z_{n}\\c_{1}&=&g_{1}({\vec {z}})&=&(-1)^{n-1}\alpha _{n-1}({\vec {z}})\\&\vdots &\\c_{n-1}&=&g_{n-1}({\vec {z}})&=&-\alpha _{1}({\vec {z}})&=&-(z_{1}+z_{2}+\cdots +z_{n}).\end{matrix}}}
The Durand–Kerner method is obtained as the multidimensional Newton's method applied to this system. It is algebraically more convenient to treat those identities of coefficients as the identity of the corresponding polynomials,
{\displaystyle g_{\vec {z}}(X)=f(X)}. In Newton's method one looks, given some initial vector {\displaystyle {\vec {z}}}, for an increment vector {\displaystyle {\vec {w}}} such that
{\displaystyle g_{{\vec {z}}+{\vec {w}}}(X)=f(X)}
is satisfied up to second and higher order terms in the increment. For this one solves the identity
{\displaystyle f(X)-g_{\vec {z}}(X)=\sum _{k=1}^{n}{\frac {\partial g_{\vec {z}}(X)}{\partial z_{k}}}w_{k}=-\sum _{k=1}^{n}w_{k}\prod _{j\neq k}(X-z_{j}).}
If the numbers {\displaystyle z_{1},\dots ,z_{n}} are pairwise different, then the polynomials in the terms of the right hand side form a basis of the n-dimensional space {\displaystyle \mathbb {C} [X]_{n-1}} of polynomials with maximal degree n − 1. Thus a solution {\displaystyle {\vec {w}}} to the increment equation exists in this case. The coordinates of the increment {\displaystyle {\vec {w}}} are simply obtained by evaluating the increment equation
{\displaystyle -\sum _{k=1}^{n}w_{k}\prod _{j\neq k}(X-z_{j})=f(X)-\prod _{j=1}^{n}(X-z_{j})}
at the points {\displaystyle X=z_{k}}, which results in
{\displaystyle -w_{k}\prod _{j\neq k}(z_{k}-z_{j})=-w_{k}g_{\vec {z}}'(z_{k})=f(z_{k})}, that is
{\displaystyle w_{k}=-{\frac {f(z_{k})}{\prod _{j\neq k}(z_{k}-z_{j})}}.}
== Root inclusion via Gerschgorin's circles ==
In the quotient ring (algebra) of residue classes modulo ƒ(X), the multiplication by X defines an endomorphism that has the zeros of ƒ(X) as eigenvalues with the corresponding multiplicities. Choosing a basis, the multiplication operator is represented by its coefficient matrix A, the companion matrix of ƒ(X) for this basis.
Since every polynomial can be reduced modulo ƒ(X) to a polynomial of degree n − 1 or lower, the space of residue classes can be identified with the space of polynomials of degree bounded by n − 1. A problem-specific basis can be taken from Lagrange interpolation as the set of n polynomials
{\displaystyle b_{k}(X)=\prod _{1\leq j\leq n,\;j\neq k}(X-z_{j}),\quad k=1,\dots ,n,}
where {\displaystyle z_{1},\dots ,z_{n}\in \mathbb {C} } are pairwise different complex numbers. Note that the kernel functions for the Lagrange interpolation are
{\displaystyle L_{k}(X)={\frac {b_{k}(X)}{b_{k}(z_{k})}}}.
For the multiplication operator applied to the basis polynomials one obtains from the Lagrange interpolation
{\displaystyle X\cdot b_{k}(X)=z_{k}\cdot b_{k}(X)+\sum _{j=1}^{n}w_{j}\cdot b_{j}(X){\pmod {f(X)}},}
where
{\displaystyle w_{j}=-{\frac {f(z_{j})}{b_{j}(z_{j})}}}
are again the Weierstrass updates.
The companion matrix of ƒ(X) is therefore
{\displaystyle A=\mathrm {diag} (z_{1},\dots ,z_{n})+{\begin{pmatrix}1\\\vdots \\1\end{pmatrix}}\cdot \left(w_{1},\dots ,w_{n}\right).}
From the transposed matrix case of the Gershgorin circle theorem it follows that all eigenvalues of A, that is, all roots of ƒ(X), are contained in the union of the disks
{\displaystyle D(a_{k,k},r_{k})}
with a radius
{\displaystyle r_{k}=\sum _{j\neq k}{\big |}a_{j,k}{\big |}}.
Here one has
{\displaystyle a_{k,k}=z_{k}+w_{k}}
, so the centers are the next iterates of the Weierstrass iteration, and radii
{\displaystyle r_{k}=(n-1)\left|w_{k}\right|}
that are multiples of the Weierstrass updates. If the roots of ƒ(X) are all well isolated (relative to the computational precision) and the points
{\displaystyle z_{1},\dots ,z_{n}\in \mathbb {C} }
are sufficiently close approximations to these roots, then all the disks will become disjoint, so each one contains exactly one zero. The midpoints of the circles will be better approximations of the zeros.
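A small sketch (illustrative, with hypothetical names, not taken from the references) that computes these disk centers z_k + w_k and radii (n − 1)|w_k| from a set of approximations:

```python
# Gerschgorin-style inclusion disks from the Weierstrass updates:
# the union of the disks D(z_k + w_k, (n - 1)|w_k|) contains all roots
# of the monic polynomial f.
def inclusion_disks(coeffs, z):
    """coeffs lists [c_{n-1}, ..., c_0] of a monic f; z lists the approximations."""
    n = len(z)

    def f(x):
        y = 1.0
        for c in coeffs:
            y = y * x + c
        return y

    disks = []
    for k in range(n):
        w = -f(z[k])
        for j in range(n):
            if j != k:
                w /= z[k] - z[j]  # w is now the Weierstrass update w_k
        disks.append((z[k] + w, (n - 1) * abs(w)))
    return disks

# f(x) = x^2 - 1 with rough approximations of the roots +-1
disks = inclusion_disks([0.0, -1.0], [0.9 + 0j, -1.1 + 0j])
```

For these well-separated approximations the two disks are disjoint, so each contains exactly one root, and the disk centers are the next Weierstrass iterates.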
Every conjugate matrix {\displaystyle TAT^{-1}} of A is also a companion matrix of ƒ(X). Choosing T as a diagonal matrix leaves the structure of A invariant. The root close to {\displaystyle z_{k}} is contained in any isolated circle with center {\displaystyle z_{k}} regardless of T. Choosing the optimal diagonal matrix T for every index results in better estimates (see ref. Petkovic et al. 1995).
== Convergence results ==
The connection between the Taylor series expansion and Newton's method suggests that the distance from {\displaystyle z_{k}+w_{k}} to the corresponding root is of the order {\displaystyle O{\big (}|w_{k}|^{2}{\big )}}, if the root is well isolated from nearby roots and the approximation is sufficiently close to the root. So after the approximation is close, Newton's method converges quadratically; that is, the error is squared with every step (which will greatly reduce the error once it is less than 1). In the case of the Durand–Kerner method, convergence is quadratic if the vector
{\displaystyle {\vec {z}}=(z_{1},\dots ,z_{n})}
is close to some permutation of the vector of the roots of f.
For the conclusion of linear convergence there is a more specific result (see ref. Petkovic et al. 1995). If the initial vector {\displaystyle {\vec {z}}} and its vector of Weierstrass updates {\displaystyle {\vec {w}}=(w_{1},\dots ,w_{n})} satisfy the inequality
{\displaystyle \max _{1\leq k\leq n}|w_{k}|\leq {\frac {1}{5n}}\min _{1\leq j<k\leq n}|z_{k}-z_{j}|,}
then this inequality also holds for all iterates, all inclusion disks
{\displaystyle D{\big (}z_{k}+w_{k},(n-1)|w_{k}|{\big )}}
are disjoint, and linear convergence with a contraction factor of 1/2 holds. Further, the inclusion disks can in this case be chosen as
{\displaystyle D\left(z_{k}+w_{k},{\tfrac {1}{4}}|w_{k}|\right),\quad k=1,\dots ,n,}
each containing exactly one zero of f.
== Failing general convergence ==
The Weierstrass / Durand–Kerner method is not generally convergent: in other words, it is not true that for every polynomial, the set of initial vectors that eventually converges to roots is open and dense. In fact, there are open sets of polynomials that have open sets of initial vectors that converge to periodic cycles other than roots (see Reinke et al.).
== References ==
Weierstraß, Karl (1891). "Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen". Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin. Archived from the original on 2013-11-02. Retrieved 2013-10-31.
Durand, E. (1960). "Equations du type F(x) = 0: Racines d'un polynome". In Masson; et al. (eds.). Solutions Numériques des Equations Algébriques. Vol. 1.
Kerner, Immo O. (1966). "Ein Gesamtschrittverfahren zur Berechnung der Nullstellen von Polynomen". Numerische Mathematik. 8 (3): 290–294. doi:10.1007/BF02162564. S2CID 115307022.
Prešić, Marica (1980). "A convergence theorem for a method for simultaneous determination of all zeros of a polynomial" (PDF). Publications de l'Institut Mathématique. Nouvelle Série. 28 (42): 158–168.
Petkovic, M.S., Carstensen, C. and Trajkovic, M. (1995). "Weierstrass formula and zero-finding methods". Numerische Mathematik. 69 (3): 353–372. CiteSeerX 10.1.1.53.7516. doi:10.1007/s002110050097. S2CID 18594004.{{cite journal}}: CS1 maint: multiple names: authors list (link)
Bo Jacoby, Nulpunkter for polynomier, CAE-nyt (a periodical for Dansk CAE Gruppe [Danish CAE Group]), 1988.
Agnethe Knudsen, Numeriske Metoder (lecture notes), Københavns Teknikum.
Bo Jacoby, Numerisk løsning af ligninger, Bygningsstatiske meddelelser (Published by Danish Society for Structural Science and Engineering) volume 63 no. 3–4, 1992, pp. 83–105.
Gourdon, Xavier (1996). Combinatoire, Algorithmique et Geometrie des Polynomes. Paris: École Polytechnique. Archived from the original on 2006-10-28. Retrieved 2006-08-22.
Victor Pan (May 2002): Univariate Polynomial Root-Finding with Lower Computational Precision and Higher Convergence Rates. Tech-Report, City University of New York
Neumaier, Arnold (2003). "Enclosing clusters of zeros of polynomials". Journal of Computational and Applied Mathematics. 156 (2): 389–401. Bibcode:2003JCoAM.156..389N. doi:10.1016/S0377-0427(03)00380-7.
Jan Verschelde, The method of Weierstrass (also known as the Durand–Kerner method), 2003.
Bernhard Reinke, Dierk Schleicher, and Michael Stoll, "The Weierstrass root finder is not generally convergent", 2020.
== External links ==
Ada Generic_Roots using the Durand–Kerner Method (archive) — an open-source implementation in Ada
Polynomial Roots — an open-source implementation in Java
Roots Extraction from Polynomials : The Durand–Kerner Method — contains a Java applet demonstration
In geometry and complex analysis, a Möbius transformation of the complex plane is a rational function of the form
{\displaystyle f(z)={\frac {az+b}{cz+d}}}
of one complex variable z; here the coefficients a, b, c, d are complex numbers satisfying ad − bc ≠ 0.
Geometrically, a Möbius transformation can be obtained by first applying the inverse stereographic projection from the plane to the unit sphere, moving and rotating the sphere to a new location and orientation in space, and then applying a stereographic projection to map from the sphere back to the plane. These transformations preserve angles, map every straight line to a line or circle, and map every circle to a line or circle.
The Möbius transformations are the projective transformations of the complex projective line. They form a group called the Möbius group, which is the projective linear group PGL(2, C). Together with its subgroups, it has numerous applications in mathematics and physics.
Möbius geometries and their transformations generalize this case to any number of dimensions over other fields.
Möbius transformations are named in honor of August Ferdinand Möbius; they are an example of homographies, linear fractional transformations, bilinear transformations, and spin transformations (in relativity theory).
== Overview ==
Möbius transformations are defined on the extended complex plane
{\displaystyle {\widehat {\mathbb {C} }}=\mathbb {C} \cup \{\infty \}}
(i.e., the complex plane augmented by the point at infinity).
Stereographic projection identifies {\displaystyle {\widehat {\mathbb {C} }}} with a sphere, which is then called the Riemann sphere; alternatively, {\displaystyle {\widehat {\mathbb {C} }}} can be thought of as the complex projective line {\displaystyle \mathbb {C} \mathbb {P} ^{1}}. The Möbius transformations are exactly the bijective conformal maps from the Riemann sphere to itself, i.e., the automorphisms of the Riemann sphere as a complex manifold; alternatively, they are the automorphisms of {\displaystyle \mathbb {C} \mathbb {P} ^{1}} as an algebraic variety. Therefore, the set of all Möbius transformations forms a group under composition. This group is called the Möbius group, and is sometimes denoted {\displaystyle \operatorname {Aut} ({\widehat {\mathbb {C} }})}.
The Möbius group is isomorphic to the group of orientation-preserving isometries of hyperbolic 3-space and therefore plays an important role when studying hyperbolic 3-manifolds.
In physics, the identity component of the Lorentz group acts on the celestial sphere in the same way that the Möbius group acts on the Riemann sphere. In fact, these two groups are isomorphic. An observer who accelerates to relativistic velocities will see the pattern of constellations as seen near the Earth continuously transform according to infinitesimal Möbius transformations. This observation is often taken as the starting point of twistor theory.
Certain subgroups of the Möbius group form the automorphism groups of the other simply-connected Riemann surfaces (the complex plane and the hyperbolic plane). As such, Möbius transformations play an important role in the theory of Riemann surfaces. The fundamental group of every Riemann surface is a discrete subgroup of the Möbius group (see Fuchsian group and Kleinian group). A particularly important discrete subgroup of the Möbius group is the modular group; it is central to the theory of many fractals, modular forms, elliptic curves and Pellian equations.
Möbius transformations can be more generally defined in spaces of dimension n > 2 as the bijective conformal orientation-preserving maps from the n-sphere to the n-sphere. Such a transformation is the most general form of conformal mapping of a domain. According to Liouville's theorem a Möbius transformation can be expressed as a composition of translations, similarities, orthogonal transformations and inversions.
== Definition ==
The general form of a Möbius transformation is given by
{\displaystyle f(z)={\frac {az+b}{cz+d}},}
where a, b, c, d are any complex numbers that satisfy ad − bc ≠ 0.
In case c ≠ 0, this definition is extended to the whole Riemann sphere by defining
{\displaystyle {\begin{aligned}f\left({\frac {-d}{c}}\right)&=\infty ,\\f(\infty )&={\frac {a}{c}}.\end{aligned}}}
If c = 0, we define
{\displaystyle f(\infty )=\infty .}
Thus a Möbius transformation is always a bijective holomorphic function from the Riemann sphere to the Riemann sphere.
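This extended definition can be transcribed directly into code. A minimal sketch, in which the single point at infinity is modeled by a floating-point infinity (an implementation convenience, not part of the definition):

```python
INF = complex("inf")  # stand-in for the point at infinity of the Riemann sphere

def mobius(a, b, c, d, z):
    """Evaluate f(z) = (az + b)/(cz + d) on the Riemann sphere."""
    if a * d - b * c == 0:
        raise ValueError("ad - bc must be nonzero")
    if z == INF:
        return a / c if c != 0 else INF  # f(inf) = a/c, or inf when c = 0
    denom = c * z + d
    if denom == 0:                       # z = -d/c is sent to infinity
        return INF
    return (a * z + b) / denom
```

For example, z ↦ 1/z (a = d = 0, b = c = 1) exchanges 0 and ∞.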
The set of all Möbius transformations forms a group under composition. This group can be given the structure of a complex manifold in such a way that composition and inversion are holomorphic maps. The Möbius group is then a complex Lie group. The Möbius group is usually denoted
{\displaystyle \operatorname {Aut} ({\widehat {\mathbb {C} }})}
as it is the automorphism group of the Riemann sphere.
If ad = bc, the rational function defined above is a constant (unless c = d = 0, when it is undefined):
{\displaystyle {\frac {az+b}{cz+d}}={\frac {a}{c}}={\frac {b}{d}},}
where a fraction with a zero denominator is ignored. A constant function is not bijective and is thus not considered a Möbius transformation.
An alternative definition is given as the kernel of the Schwarzian derivative.
== Fixed points ==
Every non-identity Möbius transformation has two fixed points
{\displaystyle \gamma _{1},\gamma _{2}}
on the Riemann sphere. The fixed points are counted here with multiplicity; the parabolic transformations are those where the fixed points coincide. Either or both of these fixed points may be the point at infinity.
=== Determining the fixed points ===
The fixed points of the transformation
{\displaystyle f(z)={\frac {az+b}{cz+d}}}
are obtained by solving the fixed point equation f(γ) = γ. For c ≠ 0, this has two roots obtained by expanding this equation to
{\displaystyle c\gamma ^{2}-(a-d)\gamma -b=0\ ,}
and applying the quadratic formula. The roots are
{\displaystyle \gamma _{1,2}={\frac {(a-d)\pm {\sqrt {(a-d)^{2}+4bc}}}{2c}}={\frac {(a-d)\pm {\sqrt {\Delta }}}{2c}}}
with discriminant
{\displaystyle \Delta =(\operatorname {tr} {\mathfrak {H}})^{2}-4\det {\mathfrak {H}}=(a+d)^{2}-4(ad-bc),}
where the matrix
{\displaystyle {\mathfrak {H}}={\begin{pmatrix}a&b\\c&d\end{pmatrix}}}
represents the transformation.
Parabolic transforms have coinciding fixed points because the discriminant is zero. For c nonzero and nonzero discriminant the transform is elliptic or hyperbolic.
When c = 0, the quadratic equation degenerates into a linear equation and the transform is linear. This corresponds to the situation that one of the fixed points is the point at infinity. When a ≠ d the second fixed point is finite and is given by
{\displaystyle \gamma =-{\frac {b}{a-d}}.}
In this case the transformation will be a simple transformation composed of translations, rotations, and dilations:
{\displaystyle z\mapsto \alpha z+\beta .}
If c = 0 and a = d, then both fixed points are at infinity, and the Möbius transformation corresponds to a pure translation:
{\displaystyle z\mapsto z+\beta .}
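The case analysis above translates directly into a short routine. A minimal sketch, again modeling the point at infinity by a floating-point infinity as an implementation convenience:

```python
import cmath

INF = complex("inf")  # stand-in for the point at infinity

def fixed_points(a, b, c, d):
    """Fixed points of z -> (az + b)/(cz + d), counted with multiplicity."""
    if c != 0:
        # solve c*g^2 - (a - d)*g - b = 0 by the quadratic formula
        root = cmath.sqrt((a - d) ** 2 + 4 * b * c)
        return ((a - d + root) / (2 * c), (a - d - root) / (2 * c))
    if a != d:
        # linear case: one fixed point at infinity, the other at -b/(a - d)
        return (INF, -b / (a - d))
    # c = 0 and a = d: a pure translation, both fixed points at infinity
    return (INF, INF)
```

For instance, z ↦ 1/z yields the fixed points ±1, and z ↦ z + β yields a double fixed point at ∞.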
=== Topological proof ===
Topologically, the fact that (non-identity) Möbius transformations fix 2 points (with multiplicity) corresponds to the Euler characteristic of the sphere being 2:
{\displaystyle \chi ({\hat {\mathbb {C} }})=2.}
Firstly, the projective linear group PGL(2, K) is sharply 3-transitive – for any two ordered triples of distinct points, there is a unique map that takes one triple to the other, just as for Möbius transforms, and by the same algebraic proof (essentially dimension counting, as the group is 3-dimensional). Thus any map that fixes at least 3 points is the identity.
Next, one can see by identifying the Möbius group with
{\displaystyle \mathrm {PGL} (2,\mathbb {C} )}
that any Möbius transformation is homotopic to the identity. Indeed, any member of the general linear group can be reduced to the identity map by Gauss–Jordan elimination; this shows that the projective linear group is path-connected, providing a homotopy to the identity map. The Lefschetz–Hopf theorem states that the sum of the indices (in this context, multiplicity) of the fixed points of a map with finitely many fixed points equals the Lefschetz number of the map, which in this case is the trace of the identity map on homology groups, which is simply the Euler characteristic.
By contrast, the projective linear group of the real projective line, PGL(2, R) need not fix any points – for example
{\displaystyle (1+x)/(1-x)}
has no (real) fixed points: as a complex transformation it fixes ±i – while the map 2x fixes the two points of 0 and ∞. This corresponds to the fact that the Euler characteristic of the circle (real projective line) is 0, and thus the Lefschetz fixed-point theorem says only that it must fix at least 0 points, but possibly more.
=== Normal form ===
Möbius transformations are also sometimes written in terms of their fixed points in so-called normal form. We first treat the non-parabolic case, for which there are two distinct fixed points.
Non-parabolic case:
Every non-parabolic transformation is conjugate to a dilation/rotation, i.e., a transformation of the form
{\displaystyle z\mapsto kz}
(k ∈ C) with fixed points at 0 and ∞. To see this define a map
{\displaystyle g(z)={\frac {z-\gamma _{1}}{z-\gamma _{2}}}}
which sends the points (γ1, γ2) to (0, ∞). Here we assume that γ1 and γ2 are distinct and finite. If one of them is already at infinity then g can be modified so as to fix infinity and send the other point to 0.
If f has distinct fixed points (γ1, γ2) then the transformation {\displaystyle gfg^{-1}} has fixed points at 0 and ∞ and is therefore a dilation: {\displaystyle gfg^{-1}(z)=kz}. The fixed point equation for the transformation f can then be written
{\displaystyle {\frac {f(z)-\gamma _{1}}{f(z)-\gamma _{2}}}=k{\frac {z-\gamma _{1}}{z-\gamma _{2}}}.}
Solving for f gives (in matrix form):
{\displaystyle {\mathfrak {H}}(k;\gamma _{1},\gamma _{2})={\begin{pmatrix}\gamma _{1}-k\gamma _{2}&(k-1)\gamma _{1}\gamma _{2}\\1-k&k\gamma _{1}-\gamma _{2}\end{pmatrix}}}
or, if one of the fixed points is at infinity:
{\displaystyle {\mathfrak {H}}(k;\gamma ,\infty )={\begin{pmatrix}k&(1-k)\gamma \\0&1\end{pmatrix}}.}
From the above expressions one can calculate the derivatives of f at the fixed points:
{\displaystyle f'(\gamma _{1})=k} and {\displaystyle f'(\gamma _{2})=1/k.}
Observe that, given an ordering of the fixed points, we can distinguish one of the multipliers (k) of f as the characteristic constant of f. Reversing the order of the fixed points is equivalent to taking the inverse multiplier for the characteristic constant:
{\displaystyle {\mathfrak {H}}(k;\gamma _{1},\gamma _{2})={\mathfrak {H}}(1/k;\gamma _{2},\gamma _{1}).}
For loxodromic transformations, whenever |k| > 1, one says that γ1 is the repulsive fixed point, and γ2 is the attractive fixed point. For |k| < 1, the roles are reversed.
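These derivative identities are easy to verify numerically. A sketch for arbitrary sample coefficients, using f′(z) = (ad − bc)/(cz + d)², which follows from the quotient rule:

```python
import cmath

def multipliers(a, b, c, d):
    """f'(gamma_1) and f'(gamma_2) for f(z) = (az + b)/(cz + d) with c != 0."""
    def fprime(z):
        # derivative of (az + b)/(cz + d) by the quotient rule
        return (a * d - b * c) / (c * z + d) ** 2

    root = cmath.sqrt((a - d) ** 2 + 4 * b * c)
    gamma1 = (a - d + root) / (2 * c)
    gamma2 = (a - d - root) / (2 * c)
    return fprime(gamma1), fprime(gamma2)

k1, k2 = multipliers(3, 1, 1, 2)  # sample coefficients, ad - bc = 5
```

The two multipliers come out mutually reciprocal, k1·k2 = 1, matching f′(γ1) = k and f′(γ2) = 1/k.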
Parabolic case:
In the parabolic case there is only one fixed point γ. The transformation sending that point to ∞ is
{\displaystyle g(z)={\frac {1}{z-\gamma }}}
or the identity if γ is already at infinity. The transformation {\displaystyle gfg^{-1}} fixes infinity and is therefore a translation: {\displaystyle gfg^{-1}(z)=z+\beta \,.}
Here, β is called the translation length. The fixed point formula for a parabolic transformation is then
{\displaystyle {\frac {1}{f(z)-\gamma }}={\frac {1}{z-\gamma }}+\beta .}
Solving for f (in matrix form) gives
{\displaystyle {\mathfrak {H}}(\beta ;\gamma )={\begin{pmatrix}1+\gamma \beta &-\beta \gamma ^{2}\\\beta &1-\gamma \beta \end{pmatrix}}}
Note that
{\displaystyle \det {\mathfrak {H}}(\beta ;\gamma )=|{\mathfrak {H}}(\beta ;\gamma )|=\det {\begin{pmatrix}1+\gamma \beta &-\beta \gamma ^{2}\\\beta &1-\gamma \beta \end{pmatrix}}=1-\gamma ^{2}\beta ^{2}+\gamma ^{2}\beta ^{2}=1}
If γ = ∞:
{\displaystyle {\mathfrak {H}}(\beta ;\infty )={\begin{pmatrix}1&\beta \\0&1\end{pmatrix}}}
Note that β is not the characteristic constant of f, which is always 1 for a parabolic transformation. From the above expressions one can calculate:
{\displaystyle f'(\gamma )=1.}
== Poles of the transformation ==
The point {\textstyle z_{\infty }=-{\frac {d}{c}}} is called the pole of {\displaystyle {\mathfrak {H}}}; it is that point which is transformed to the point at infinity under {\displaystyle {\mathfrak {H}}}.
The inverse pole {\textstyle Z_{\infty }={\frac {a}{c}}} is that point to which the point at infinity is transformed. The point midway between the two poles is always the same as the point midway between the two fixed points:
{\displaystyle \gamma _{1}+\gamma _{2}=z_{\infty }+Z_{\infty }.}
These four points are the vertices of a parallelogram which is sometimes called the characteristic parallelogram of the transformation.
A transform {\displaystyle {\mathfrak {H}}} can be specified with two fixed points γ1, γ2 and the pole {\displaystyle z_{\infty }}.
{\displaystyle {\mathfrak {H}}={\begin{pmatrix}Z_{\infty }&-\gamma _{1}\gamma _{2}\\1&-z_{\infty }\end{pmatrix}},\;\;Z_{\infty }=\gamma _{1}+\gamma _{2}-z_{\infty }.}
This allows us to derive a formula for conversion between k and {\displaystyle z_{\infty }} given {\displaystyle \gamma _{1},\gamma _{2}}:
{\displaystyle z_{\infty }={\frac {k\gamma _{1}-\gamma _{2}}{1-k}}}
{\displaystyle k={\frac {\gamma _{2}-z_{\infty }}{\gamma _{1}-z_{\infty }}}={\frac {Z_{\infty }-\gamma _{1}}{Z_{\infty }-\gamma _{2}}}={\frac {a-c\gamma _{1}}{a-c\gamma _{2}}},}
which reduces down to
{\displaystyle k={\frac {(a+d)+{\sqrt {(a-d)^{2}+4bc}}}{(a+d)-{\sqrt {(a-d)^{2}+4bc}}}}.}
The last expression coincides with one of the (mutually reciprocal) eigenvalue ratios {\textstyle {\frac {\lambda _{1}}{\lambda _{2}}}} of {\displaystyle {\mathfrak {H}}} (compare the discussion in the preceding section about the characteristic constant of a transformation). Its characteristic polynomial is equal to
{\displaystyle \det(\lambda I_{2}-{\mathfrak {H}})=\lambda ^{2}-\operatorname {tr} {\mathfrak {H}}\,\lambda +\det {\mathfrak {H}}=\lambda ^{2}-(a+d)\lambda +(ad-bc)}
which has roots
{\displaystyle \lambda _{i}={\frac {(a+d)\pm {\sqrt {(a-d)^{2}+4bc}}}{2}}={\frac {(a+d)\pm {\sqrt {(a+d)^{2}-4(ad-bc)}}}{2}}=c\gamma _{i}+d\,.}
== Simple Möbius transformations and composition ==
A Möbius transformation can be composed as a sequence of simple transformations.
The following simple transformations are also Möbius transformations:
{\displaystyle f(z)=z+b\quad (a=1,c=0,d=1)}
is a translation.
{\displaystyle f(z)=az\quad (b=0,c=0,d=1)}
is a combination of a homothety (uniform scaling) and a rotation. If {\displaystyle |a|=1} then it is a rotation, if {\displaystyle a\in \mathbb {R} } then it is a homothety.
{\displaystyle f(z)=1/z\quad (a=0,b=1,c=1,d=0)}
(inversion and reflection with respect to the real axis)
=== Composition of simple transformations ===
If {\displaystyle c\neq 0}, let:
{\displaystyle f_{1}(z)=z+d/c\quad }
(translation by d/c)
{\displaystyle f_{2}(z)=1/z\quad }
(inversion and reflection with respect to the real axis)
{\displaystyle f_{3}(z)={\frac {bc-ad}{c^{2}}}z\quad }
(homothety and rotation)
{\displaystyle f_{4}(z)=z+a/c\quad }
(translation by a/c)
Then these functions can be composed, showing that, if
{\displaystyle f(z)={\frac {az+b}{cz+d}},}
one has
{\displaystyle f=f_{4}\circ f_{3}\circ f_{2}\circ f_{1}.}
In other terms, one has
{\displaystyle {\frac {az+b}{cz+d}}={\frac {a}{c}}+{\frac {e}{z+{\frac {d}{c}}}},}
with
{\displaystyle e={\frac {bc-ad}{c^{2}}}.}
This decomposition makes many properties of the Möbius transformation obvious.
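The decomposition can be checked numerically; the coefficients and sample points below are arbitrary choices with c ≠ 0 and ad − bc ≠ 0:

```python
# Check f = f4 ∘ f3 ∘ f2 ∘ f1 for sample coefficients.
a, b, c, d = 2 + 1j, -1, 3, 1 - 2j  # ad - bc = 7 - 3j != 0

def f(z):  return (a * z + b) / (c * z + d)
def f1(z): return z + d / c                       # translation by d/c
def f2(z): return 1 / z                           # inversion
def f3(z): return ((b * c - a * d) / c ** 2) * z  # homothety and rotation
def f4(z): return z + a / c                       # translation by a/c

for z in (0.7 - 0.4j, 2 + 3j):
    assert abs(f4(f3(f2(f1(z)))) - f(z)) < 1e-10
```

Composing the four maps symbolically reproduces (az + b)/(cz + d), since a/c + (bc − ad)/(c(cz + d)) = (az + b)/(cz + d).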
== Elementary properties ==
A Möbius transformation is equivalent to a sequence of simpler transformations, and this decomposition makes many of its properties evident.
=== Formula for the inverse transformation ===
The existence of the inverse Möbius transformation and its explicit formula are easily derived by the composition of the inverse functions of the simpler transformations. That is, define functions g1, g2, g3, g4 such that each gi is the inverse of fi. Then the composition
{\displaystyle g_{1}\circ g_{2}\circ g_{3}\circ g_{4}(z)=f^{-1}(z)={\frac {dz-b}{-cz+a}}}
gives a formula for the inverse.
=== Preservation of angles and generalized circles ===
From this decomposition, we see that Möbius transformations carry over all non-trivial properties of circle inversion. For example, the preservation of angles is reduced to proving that circle inversion preserves angles since the other types of transformations are dilations and isometries (translation, reflection, rotation), which trivially preserve angles.
Furthermore, Möbius transformations map generalized circles to generalized circles since circle inversion has this property. A generalized circle is either a circle or a line, the latter being considered as a circle through the point at infinity. Note that a Möbius transformation does not necessarily map circles to circles and lines to lines: it can mix the two. Even if it maps a circle to another circle, it does not necessarily map the first circle's center to the second circle's center.
=== Cross-ratio preservation ===
Cross-ratios are invariant under Möbius transformations. That is, if a Möbius transformation maps four distinct points
{\displaystyle z_{1},z_{2},z_{3},z_{4}} to four distinct points {\displaystyle w_{1},w_{2},w_{3},w_{4}} respectively, then
{\displaystyle {\frac {(z_{1}-z_{3})(z_{2}-z_{4})}{(z_{2}-z_{3})(z_{1}-z_{4})}}={\frac {(w_{1}-w_{3})(w_{2}-w_{4})}{(w_{2}-w_{3})(w_{1}-w_{4})}}.}
If one of the points {\displaystyle z_{1},z_{2},z_{3},z_{4}} is the point at infinity, then the cross-ratio has to be defined by taking the appropriate limit; e.g. the cross-ratio of {\displaystyle z_{1},z_{2},z_{3},\infty } is
{\displaystyle {\frac {(z_{1}-z_{3})}{(z_{2}-z_{3})}}.}
The cross ratio of four different points is real if and only if there is a line or a circle passing through them. This is another way to show that Möbius transformations preserve generalized circles.
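Invariance of the cross-ratio can likewise be checked numerically; the points and coefficients below are arbitrary sample values:

```python
def cross_ratio(z1, z2, z3, z4):
    """The cross-ratio in the convention used above."""
    return ((z1 - z3) * (z2 - z4)) / ((z2 - z3) * (z1 - z4))

def f(z, a=1, b=2j, c=1 - 1j, d=3):  # sample Möbius map, ad - bc = 1 - 2j
    return (a * z + b) / (c * z + d)

zs = (0, 1, 2 + 1j, -3j)
ws = tuple(f(z) for z in zs)
assert abs(cross_ratio(*zs) - cross_ratio(*ws)) < 1e-10
```

Up to floating-point rounding, the cross-ratio of the images agrees with that of the original points.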
=== Conjugation ===
Two points z1 and z2 are conjugate with respect to a generalized circle C, if, given a generalized circle D passing through z1 and z2 and cutting C in two points a and b, (z1, z2; a, b) are in harmonic cross-ratio (i.e. their cross ratio is −1). This property does not depend on the choice of the circle D. This property is also sometimes referred to as being symmetric with respect to a line or circle.
Two points z, z∗ are conjugate with respect to a line, if they are symmetric with respect to the line. Two points are conjugate with respect to a circle if they are exchanged by the inversion with respect to this circle.
The point z∗ is conjugate to z when L is the line through the point z0 with direction eiθ. This can be explicitly given as
{\displaystyle z^{*}=e^{2i\theta }\,{\overline {z-z_{0}}}+z_{0}.}
The point z∗ is conjugate to z when C is the circle of a radius r, centered about z0. This can be explicitly given as
{\displaystyle z^{*}={\frac {r^{2}}{\overline {z-z_{0}}}}+z_{0}.}
Since Möbius transformations preserve generalized circles and cross-ratios, they also preserve conjugation.
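The circle formula above is exactly inversion in the circle, and its two defining properties can be checked numerically; the helper name `conjugate_point` and the sample values below are ours:

```python
# Conjugate (inverse) point with respect to a circle of radius r centered
# at z0, per the formula z* = r^2 / conj(z - z0) + z0.

def conjugate_point(z, z0, r):
    return r**2 / (z - z0).conjugate() + z0

z0, r = 1 + 1j, 2.0
z = 2.5 + 2j
z_star = conjugate_point(z, z0, r)

# Defining properties of circle inversion: |z - z0| * |z* - z0| = r^2,
# and z, z* lie on the same ray from the center.
assert abs(abs(z - z0) * abs(z_star - z0) - r**2) < 1e-12
ratio = (z_star - z0) / (z - z0)
assert abs(ratio.imag) < 1e-12 and ratio.real > 0
```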
== Projective matrix representations ==
=== Isomorphism between the Möbius group and PGL(2, C) ===
The natural action of PGL(2, C) on the complex projective line CP1 is exactly the natural action of the Möbius group on the Riemann sphere.
==== Correspondence between the complex projective line and the Riemann sphere ====
Here, the projective line CP1 and the Riemann sphere are identified as follows:
{\displaystyle [z_{1}:z_{2}]\ \thicksim {\frac {z_{1}}{z_{2}}}.}
Here [z1:z2] are homogeneous coordinates on CP1; the point [1:0] corresponds to the point ∞ of the Riemann sphere. By using homogeneous coordinates, many calculations involving Möbius transformations can be simplified, since no case distinctions dealing with ∞ are required.
==== Action of PGL(2, C) on the complex projective line ====
Every invertible complex 2×2 matrix
{\displaystyle {\mathfrak {H}}={\begin{pmatrix}a&b\\c&d\end{pmatrix}}}
acts on the projective line as
{\displaystyle z=[z_{1}:z_{2}]\mapsto w=[w_{1}:w_{2}],}
where
{\displaystyle {\begin{pmatrix}w_{1}\\w_{2}\end{pmatrix}}={\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}z_{1}\\z_{2}\end{pmatrix}}={\begin{pmatrix}az_{1}+bz_{2}\\cz_{1}+dz_{2}\end{pmatrix}}.}
The result is therefore
{\displaystyle w=[w_{1}:w_{2}]=[az_{1}+bz_{2}:cz_{1}+dz_{2}]}
which, using the above identification, corresponds to the following point on the Riemann sphere:
{\displaystyle w=[az_{1}+bz_{2}:cz_{1}+dz_{2}]\thicksim {\frac {az_{1}+bz_{2}}{cz_{1}+dz_{2}}}={\frac {a{\frac {z_{1}}{z_{2}}}+b}{c{\frac {z_{1}}{z_{2}}}+d}}.}
==== Equivalence with a Möbius transformation on the Riemann sphere ====
Since the above matrix is invertible if and only if its determinant ad − bc is not zero, this induces an identification of the action of the group of Möbius transformations with the action of PGL(2, C) on the complex projective line. In this identification, the above matrix
{\displaystyle {\mathfrak {H}}}
corresponds to the Möbius transformation
{\displaystyle z\mapsto {\frac {az+b}{cz+d}}.}
This identification is a group isomorphism, since multiplying {\displaystyle {\mathfrak {H}}} by a nonzero scalar {\displaystyle \lambda } does not change the element of PGL(2, C), and, as this multiplication multiplies all matrix entries by {\displaystyle \lambda ,} it does not change the corresponding Möbius transformation.
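The point of the homogeneous-coordinate formulation is that the matrix action and the fractional linear formula agree after dehomogenizing. A minimal numerical sketch (the sample coefficients are ours):

```python
# Acting by a 2x2 matrix on homogeneous coordinates [z1 : z2] and then
# dehomogenizing gives the same result as z -> (az + b)/(cz + d).

a, b, c, d = 1 + 2j, -1, 0.5j, 3     # arbitrary, with ad - bc != 0
assert a * d - b * c != 0

z = 0.7 - 1.3j
z1, z2 = z, 1                        # a homogeneous lift of z
w1, w2 = a * z1 + b * z2, c * z1 + d * z2   # matrix action

# Dehomogenize and compare with the fractional linear formula.
assert abs(w1 / w2 - (a * z + b) / (c * z + d)) < 1e-12
```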
=== Other groups ===
For any field K, one can similarly identify the group PGL(2, K) of the projective linear automorphisms with the group of fractional linear transformations. This is widely used; for example in the study of homographies of the real line and its applications in optics.
If one divides {\displaystyle {\mathfrak {H}}} by a square root of its determinant, one gets a matrix of determinant one. This induces a surjective group homomorphism from the special linear group SL(2, C) to PGL(2, C), with {\displaystyle \pm I} as its kernel.
This allows showing that the Möbius group is a 3-dimensional complex Lie group (or a 6-dimensional real Lie group), which is semisimple and non-compact, and that SL(2, C) is a double cover of PSL(2, C). Since SL(2, C) is simply connected, it is the universal cover of the Möbius group, and the fundamental group of the Möbius group is Z2.
=== Specifying a transformation by three points ===
Given a set of three distinct points {\displaystyle z_{1},z_{2},z_{3}} on the Riemann sphere and a second set of distinct points {\displaystyle w_{1},w_{2},w_{3}}, there exists precisely one Möbius transformation {\displaystyle f(z)} with {\displaystyle f(z_{j})=w_{j}} for {\displaystyle j=1,2,3}. (In other words: the action of the Möbius group on the Riemann sphere is sharply 3-transitive.) There are several ways to determine {\displaystyle f(z)} from the given sets of points.
==== Mapping first to 0, 1, ∞ ====
It is easy to check that the Möbius transformation
{\displaystyle f_{1}(z)={\frac {(z-z_{1})(z_{2}-z_{3})}{(z-z_{3})(z_{2}-z_{1})}}}
with matrix
{\displaystyle {\mathfrak {H}}_{1}={\begin{pmatrix}z_{2}-z_{3}&-z_{1}(z_{2}-z_{3})\\z_{2}-z_{1}&-z_{3}(z_{2}-z_{1})\end{pmatrix}}}
maps {\displaystyle z_{1},z_{2}{\text{ and }}z_{3}} to {\displaystyle 0,1,\ {\text{and}}\ \infty }, respectively. If one of the {\displaystyle z_{j}} is {\displaystyle \infty }, then the proper formula for {\displaystyle {\mathfrak {H}}_{1}} is obtained from the above one by first dividing all entries by {\displaystyle z_{j}} and then taking the limit {\displaystyle z_{j}\to \infty }.
If {\displaystyle {\mathfrak {H}}_{2}} is similarly defined to map {\displaystyle w_{1},w_{2},w_{3}} to {\displaystyle 0,1,\ {\text{and}}\ \infty ,} then the matrix {\displaystyle {\mathfrak {H}}} which maps {\displaystyle z_{1,2,3}} to {\displaystyle w_{1,2,3}} becomes
{\displaystyle {\mathfrak {H}}={\mathfrak {H}}_{2}^{-1}{\mathfrak {H}}_{1}.}
The stabilizer of {\displaystyle \{0,1,\infty \}} (as an unordered set) is a subgroup known as the anharmonic group.
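The two-step construction above can be checked numerically. In this sketch (numpy and the helper names `h_matrix` and `apply` are ours), each factor matrix follows the formula for {\displaystyle {\mathfrak {H}}_{1}} given in the text:

```python
import numpy as np

# Construct the Möbius transformation sending z1,z2,z3 to w1,w2,w3 as
# H = H2^{-1} H1, where H1 maps z1,z2,z3 to 0, 1, infinity (and H2
# does the same for the w's).

def h_matrix(p1, p2, p3):
    # Matrix of the map q -> (q - p1)(p2 - p3) / ((q - p3)(p2 - p1)).
    return np.array([[p2 - p3, -p1 * (p2 - p3)],
                     [p2 - p1, -p3 * (p2 - p1)]])

def apply(H, z):
    (a, b), (c, d) = H
    return (a * z + b) / (c * z + d)

zs = [0, 1j, 2]
ws = [1, 3 - 1j, -1j]

H = np.linalg.inv(h_matrix(*ws)) @ h_matrix(*zs)
for z, w in zip(zs, ws):
    assert abs(apply(H, z) - w) < 1e-9
```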
==== Explicit determinant formula ====
The equation
{\displaystyle w={\frac {az+b}{cz+d}}}
is equivalent to the equation of a standard hyperbola
{\displaystyle cwz-az+dw-b=0}
in the {\displaystyle (z,w)}-plane. The problem of constructing a Möbius transformation {\displaystyle {\mathfrak {H}}(z)} mapping a triple {\displaystyle (z_{1},z_{2},z_{3})} to another triple {\displaystyle (w_{1},w_{2},w_{3})} is thus equivalent to finding the coefficients {\displaystyle a,b,c,d} of the hyperbola passing through the points {\displaystyle (z_{i},w_{i})}. An explicit equation can be found by evaluating the determinant
{\displaystyle {\begin{vmatrix}zw&z&w&1\\z_{1}w_{1}&z_{1}&w_{1}&1\\z_{2}w_{2}&z_{2}&w_{2}&1\\z_{3}w_{3}&z_{3}&w_{3}&1\end{vmatrix}}\,}
by means of a Laplace expansion along the first row, resulting in explicit formulae,
{\displaystyle {\begin{aligned}a&=z_{1}w_{1}(w_{2}-w_{3})+z_{2}w_{2}(w_{3}-w_{1})+z_{3}w_{3}(w_{1}-w_{2}),\\[5mu]b&=z_{1}w_{1}(z_{2}w_{3}-z_{3}w_{2})+z_{2}w_{2}(z_{3}w_{1}-z_{1}w_{3})+z_{3}w_{3}(z_{1}w_{2}-z_{2}w_{1}),\\[5mu]c&=w_{1}(z_{3}-z_{2})+w_{2}(z_{1}-z_{3})+w_{3}(z_{2}-z_{1}),\\[5mu]d&=z_{1}w_{1}(z_{2}-z_{3})+z_{2}w_{2}(z_{3}-z_{1})+z_{3}w_{3}(z_{1}-z_{2})\end{aligned}}}
for the coefficients {\displaystyle a,b,c,d} of the representing matrix {\displaystyle {\mathfrak {H}}}. The constructed matrix {\displaystyle {\mathfrak {H}}} has determinant equal to
{\displaystyle (z_{1}-z_{2})(z_{1}-z_{3})(z_{2}-z_{3})(w_{1}-w_{2})(w_{1}-w_{3})(w_{2}-w_{3})}
, which does not vanish if the {\displaystyle z_{j}} and the {\displaystyle w_{j}} are pairwise distinct, so the Möbius transformation is well-defined. If one of the points {\displaystyle z_{j}} or {\displaystyle w_{j}} is {\displaystyle \infty }, then we first divide all four determinants by this variable and then take the limit as the variable approaches {\displaystyle \infty }.
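The four coefficient formulas can be transcribed directly and checked against the interpolation conditions; this sketch (the helper name `coeffs` and the sample triples are ours) does exactly that:

```python
# Compute a, b, c, d from the explicit Laplace-expansion formulas and
# check that the resulting map sends each z_i to w_i.

def coeffs(z1, z2, z3, w1, w2, w3):
    a = z1*w1*(w2 - w3) + z2*w2*(w3 - w1) + z3*w3*(w1 - w2)
    b = z1*w1*(z2*w3 - z3*w2) + z2*w2*(z3*w1 - z1*w3) + z3*w3*(z1*w2 - z2*w1)
    c = w1*(z3 - z2) + w2*(z1 - z3) + w3*(z2 - z1)
    d = z1*w1*(z2 - z3) + z2*w2*(z3 - z1) + z3*w3*(z1 - z2)
    return a, b, c, d

zs = [1, 1j, -1]
ws = [0, 2, 1 + 1j]
a, b, c, d = coeffs(*zs, *ws)
for z, w in zip(zs, ws):
    assert abs((a*z + b) / (c*z + d) - w) < 1e-9
```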
== Subgroups of the Möbius group ==
If we require the coefficients {\displaystyle a,b,c,d} of a Möbius transformation to be real numbers with {\displaystyle ad-bc=1}, we obtain a subgroup of the Möbius group denoted as PSL(2, R). This is the group of those Möbius transformations that map the upper half-plane H = {x + iy : y > 0} to itself, and is equal to the group of all biholomorphic (or equivalently: bijective, conformal and orientation-preserving) maps H → H. If a proper metric is introduced, the upper half-plane becomes a model of the hyperbolic plane H2, the Poincaré half-plane model, and PSL(2, R) is the group of all orientation-preserving isometries of H2 in this model.
The subgroup of all Möbius transformations that map the open disk D = {z : |z| < 1} to itself consists of all transformations of the form
{\displaystyle f(z)=e^{i\phi }{\frac {z+b}{{\bar {b}}z+1}}}
with {\displaystyle \phi } ∈ R, b ∈ C and |b| < 1. This is equal to the group of all biholomorphic (or equivalently: bijective, angle-preserving and orientation-preserving) maps D → D. By introducing a suitable metric, the open disk turns into another model of the hyperbolic plane, the Poincaré disk model, and this group is the group of all orientation-preserving isometries of H2 in this model.
Since both of the above subgroups serve as isometry groups of H2, they are isomorphic. A concrete isomorphism is given by conjugation with the transformation
{\displaystyle f(z)={\frac {z+i}{iz+1}}}
which bijectively maps the open unit disk to the upper half plane.
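That this conjugating map really carries the disk model onto the half-plane model can be spot-checked numerically (the sample points are ours; the pole of the map at z = i is avoided by offsetting the boundary angles):

```python
import cmath

# The map f(z) = (z + i)/(iz + 1) should send the open unit disk onto
# the upper half-plane, and the unit circle onto the real axis.

def f(z):
    return (z + 1j) / (1j * z + 1)

for k in range(8):
    theta = 2 * cmath.pi * (k + 0.5) / 8   # avoid the pole at z = i
    z = 0.9 * cmath.exp(1j * theta)        # inside the disk
    assert f(z).imag > 0                   # lands in the upper half-plane
    b = cmath.exp(1j * theta)              # on the unit circle
    assert abs(f(b).imag) < 1e-12          # lands on the real axis
```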
Alternatively, consider an open disk with radius r, centered at r i. The Poincaré disk model in this disk becomes identical to the upper-half-plane model as r approaches ∞.
A maximal compact subgroup of the Möbius group {\displaystyle {\mathcal {M}}} is given by (Tóth 2002)
{\displaystyle {\mathcal {M}}_{0}:=\left\{z\mapsto {\frac {uz-{\bar {v}}}{vz+{\bar {u}}}}:|u|^{2}+|v|^{2}=1\right\},}
and corresponds under the isomorphism {\displaystyle {\mathcal {M}}\cong \operatorname {PSL} (2,\mathbb {C} )}
to the projective special unitary group PSU(2, C) which is isomorphic to the special orthogonal group SO(3) of rotations in three dimensions, and can be interpreted as rotations of the Riemann sphere. Every finite subgroup is conjugate into this maximal compact group, and thus these correspond exactly to the polyhedral groups, the point groups in three dimensions.
Icosahedral groups of Möbius transformations were used by Felix Klein to give an analytic solution to the quintic equation in (Klein 1913); a modern exposition is given in (Tóth 2002).
If we require the coefficients a, b, c, d of a Möbius transformation to be integers with ad − bc = 1, we obtain the modular group PSL(2, Z), a discrete subgroup of PSL(2, R) important in the study of lattices in the complex plane, elliptic functions and elliptic curves. The discrete subgroups of PSL(2, R) are known as Fuchsian groups; they are important in the study of Riemann surfaces.
== Classification ==
In the following discussion we will always assume that the representing matrix {\displaystyle {\mathfrak {H}}} is normalized such that {\displaystyle \det {\mathfrak {H}}=ad-bc=1}.
Non-identity Möbius transformations are commonly classified into four types, parabolic, elliptic, hyperbolic and loxodromic, with the hyperbolic ones being a subclass of the loxodromic ones. The classification has both algebraic and geometric significance. Geometrically, the different types result in different transformations of the complex plane, as the figures below illustrate.
The four types can be distinguished by looking at the trace {\displaystyle \operatorname {tr} {\mathfrak {H}}=a+d}. The trace is invariant under conjugation, that is,
{\displaystyle \operatorname {tr} \,{\mathfrak {GHG}}^{-1}=\operatorname {tr} \,{\mathfrak {H}},}
and so every member of a conjugacy class will have the same trace. Every Möbius transformation can be written such that its representing matrix {\displaystyle {\mathfrak {H}}} has determinant one (by multiplying the entries with a suitable scalar). Two Möbius transformations {\displaystyle {\mathfrak {H}},{\mathfrak {H}}'} (both not equal to the identity transform) with {\displaystyle \det {\mathfrak {H}}=\det {\mathfrak {H}}'=1} are conjugate if and only if
{\displaystyle \operatorname {tr} ^{2}{\mathfrak {H}}=\operatorname {tr} ^{2}{\mathfrak {H}}'.}
=== Parabolic transforms ===
A non-identity Möbius transformation defined by a matrix {\displaystyle {\mathfrak {H}}} of determinant one is said to be parabolic if
{\displaystyle \operatorname {tr} ^{2}{\mathfrak {H}}=(a+d)^{2}=4}
(so the trace is plus or minus 2; either can occur for a given transformation since {\displaystyle {\mathfrak {H}}} is determined only up to sign). In fact one of the choices for {\displaystyle {\mathfrak {H}}} has the same characteristic polynomial X2 − 2X + 1 as the identity matrix, and is therefore unipotent. A Möbius transform is parabolic if and only if it has exactly one fixed point in the extended complex plane {\displaystyle {\widehat {\mathbb {C} }}=\mathbb {C} \cup \{\infty \}}, which happens if and only if it can be defined by a matrix conjugate to
{\displaystyle {\begin{pmatrix}1&1\\0&1\end{pmatrix}}}
which describes a translation in the complex plane.
The set of all parabolic Möbius transformations with a given fixed point in {\displaystyle {\widehat {\mathbb {C} }}}, together with the identity, forms a subgroup isomorphic to the group of matrices
{\displaystyle \left\{{\begin{pmatrix}1&b\\0&1\end{pmatrix}}\mid b\in \mathbb {C} \right\};}
this is an example of the unipotent radical of a Borel subgroup (of the Möbius group, or of SL(2, C) for the matrix group; the notion is defined for any reductive Lie group).
=== Characteristic constant ===
All non-parabolic transformations have two fixed points and are defined by a matrix conjugate to
{\displaystyle {\begin{pmatrix}\lambda &0\\0&\lambda ^{-1}\end{pmatrix}}}
with the complex number λ not equal to 0, 1 or −1, corresponding to a dilation/rotation through multiplication by the complex number k = λ2, called the characteristic constant or multiplier of the transformation.
=== Elliptic transforms ===
The transformation is said to be elliptic if it can be represented by a matrix {\displaystyle {\mathfrak {H}}} of determinant 1 such that
{\displaystyle 0\leq \operatorname {tr} ^{2}{\mathfrak {H}}<4.}
A transform is elliptic if and only if |λ| = 1 and λ ≠ ±1. Writing
{\displaystyle \lambda =e^{i\alpha }}
, an elliptic transform is conjugate to
{\displaystyle {\begin{pmatrix}\cos \alpha &-\sin \alpha \\\sin \alpha &\cos \alpha \end{pmatrix}}}
with α real.
For any {\displaystyle {\mathfrak {H}}} with characteristic constant k, the characteristic constant of {\displaystyle {\mathfrak {H}}^{n}} is kn. Thus, all Möbius transformations of finite order are elliptic transformations, namely exactly those where λ is a root of unity, or, equivalently, where α is a rational multiple of π. The simplest such fractional multiple is α = π/2, which is also the unique case with {\displaystyle \operatorname {tr} {\mathfrak {H}}=0}; this case is also called a circular transform, and corresponds geometrically to rotation by 180° about two fixed points. This class is represented in matrix form as:
{\displaystyle {\begin{pmatrix}0&-1\\1&0\end{pmatrix}}.}
There are 3 representatives fixing {0, 1, ∞}, which are the three transpositions in the symmetry group of these 3 points: {\displaystyle 1/z,} which fixes 1 and swaps 0 with ∞ (rotation by 180° about the points 1 and −1); {\displaystyle 1-z}, which fixes ∞ and swaps 0 with 1 (rotation by 180° about the points 1/2 and ∞); and {\displaystyle z/(z-1)}, which fixes 0 and swaps 1 with ∞ (rotation by 180° about the points 0 and 2).
=== Hyperbolic transforms ===
The transform is said to be hyperbolic if it can be represented by a matrix {\displaystyle {\mathfrak {H}}} whose trace is real with
{\displaystyle \operatorname {tr} ^{2}{\mathfrak {H}}>4.}
A transform is hyperbolic if and only if λ is real and λ ≠ ±1.
=== Loxodromic transforms ===
The transform is said to be loxodromic if {\displaystyle \operatorname {tr} ^{2}{\mathfrak {H}}} is not in [0, 4]. A transformation is loxodromic if and only if {\displaystyle |\lambda |\neq 1}.
Historically, navigation by loxodrome or rhumb line refers to a path of constant bearing; the resulting path is a logarithmic spiral, similar in shape to the transformations of the complex plane that a loxodromic Möbius transformation makes. See the geometric figures below.
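The trace criteria above translate directly into a small classifier for a normalized matrix; this sketch is ours (the tolerance handling for "real" and "equal to 4" is a numerical choice, not part of the theory):

```python
# Classify a normalized (det = 1) Möbius transformation by tr^2:
# parabolic (tr^2 = 4), elliptic (0 <= tr^2 < 4), hyperbolic (real
# tr^2 > 4), loxodromic (tr^2 not in [0, 4], including non-real).

def classify(a, b, c, d, tol=1e-12):
    det = a * d - b * c
    assert abs(det - 1) < tol, "matrix must be normalized to det = 1"
    t2 = (a + d) ** 2
    if abs(t2.imag) < tol:
        x = t2.real
        if abs(x - 4) < tol:
            return "parabolic"
        if 0 <= x < 4:
            return "elliptic"
        if x > 4:
            return "hyperbolic"
    return "loxodromic"

assert classify(1, 1, 0, 1) == "parabolic"       # z -> z + 1
assert classify(0, -1, 1, 0) == "elliptic"       # z -> -1/z, trace 0
assert classify(2, 0, 0, 0.5) == "hyperbolic"    # z -> 4z
assert classify(1 + 1j, 0, 0, 1 / (1 + 1j)) == "loxodromic"
```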
=== General classification ===
=== The real case and a note on terminology ===
Over the real numbers (if the coefficients must be real), there are no non-hyperbolic loxodromic transformations, and the classification is into elliptic, parabolic, and hyperbolic, as for real conics. The terminology is due to considering half the absolute value of the trace, |tr|/2, as the eccentricity of the transformation – division by 2 corrects for the dimension, so the identity has eccentricity 1 (tr/n is sometimes used as an alternative for the trace for this reason), and absolute value corrects for the trace only being defined up to a factor of ±1 due to working in PSL. Alternatively one may use half the trace squared as a proxy for the eccentricity squared, as was done above; these classifications (but not the exact eccentricity values, since squaring and absolute values are different) agree for real traces but not complex traces. The same terminology is used for the classification of elements of SL(2, R) (the 2-fold cover), and analogous classifications are used elsewhere. Loxodromic transformations are an essentially complex phenomenon, and correspond to complex eccentricities.
== Geometric interpretation of the characteristic constant ==
The following picture depicts (after stereographic transformation from the sphere to the plane) the two fixed points of a Möbius transformation in the non-parabolic case:
The characteristic constant can be expressed in terms of its logarithm:
{\displaystyle e^{\rho +\alpha i}=k.}
When expressed in this way, the real number ρ becomes an expansion factor. It indicates how repulsive the fixed point γ1 is, and how attractive γ2 is. The real number α is a rotation factor, indicating to what extent the transform rotates the plane anti-clockwise about γ1 and clockwise about γ2.
=== Elliptic transformations ===
If ρ = 0, then the fixed points are neither attractive nor repulsive but indifferent, and the transformation is said to be elliptic. These transformations tend to move all points in circles around the two fixed points. If one of the fixed points is at infinity, this is equivalent to doing an affine rotation around a point.
If we take the one-parameter subgroup generated by any elliptic Möbius transformation, we obtain a continuous transformation, such that every transformation in the subgroup fixes the same two points. All other points flow along a family of circles which is nested between the two fixed points on the Riemann sphere. In general, the two fixed points can be any two distinct points.
This has an important physical interpretation.
Imagine that some observer rotates with constant angular velocity about some axis. Then we can take the two fixed points to be the North and South poles of the celestial sphere. The appearance of the night sky is now transformed continuously in exactly the manner described by the one-parameter subgroup of elliptic transformations sharing the fixed points 0, ∞, and with the number α corresponding to the constant angular velocity of our observer.
Here are some figures illustrating the effect of an elliptic Möbius transformation on the Riemann sphere (after stereographic projection to the plane):
These pictures illustrate the effect of a single Möbius transformation. The one-parameter subgroup which it generates continuously moves points along the family of circular arcs suggested by the pictures.
=== Hyperbolic transformations ===
If α is zero (or a multiple of 2π), then the transformation is said to be hyperbolic. These transformations tend to move points along circular paths from one fixed point toward the other.
If we take the one-parameter subgroup generated by any hyperbolic Möbius transformation, we obtain a continuous transformation, such that every transformation in the subgroup fixes the same two points. All other points flow along a certain family of circular arcs away from the first fixed point and toward the second fixed point. In general, the two fixed points may be any two distinct points on the Riemann sphere.
This too has an important physical interpretation. Imagine that an observer accelerates (with constant magnitude of acceleration) in the direction of the North pole on his celestial sphere. Then the appearance of the night sky is transformed in exactly the manner described by the one-parameter subgroup of hyperbolic transformations sharing the fixed points 0, ∞, with the real number ρ corresponding to the magnitude of his acceleration vector. The stars seem to move along longitudes, away from the South pole toward the North pole. (The longitudes appear as circular arcs under stereographic projection from the sphere to the plane.)
Here are some figures illustrating the effect of a hyperbolic Möbius transformation on the Riemann sphere (after stereographic projection to the plane):
These pictures resemble the field lines of a positive and a negative electrical charge located at the fixed points, because the circular flow lines subtend a constant angle between the two fixed points.
=== Loxodromic transformations ===
If both ρ and α are nonzero, then the transformation is said to be loxodromic. These transformations tend to move all points in S-shaped paths from one fixed point to the other.
The word "loxodrome" is from the Greek: "λοξος (loxos), slanting + δρόμος (dromos), course". When sailing on a constant bearing – if you maintain a heading of (say) north-east, you will eventually wind up sailing around the north pole in a logarithmic spiral. On the Mercator projection such a course is a straight line, as the north and south poles project to infinity. The angle that the loxodrome subtends relative to the lines of longitude (i.e. its slope, the "tightness" of the spiral) is the argument of k. Of course, Möbius transformations may have their two fixed points anywhere, not just at the north and south poles. But any loxodromic transformation will be conjugate to a transform that moves all points along such loxodromes.
If we take the one-parameter subgroup generated by any loxodromic Möbius transformation, we obtain a continuous transformation, such that every transformation in the subgroup fixes the same two points. All other points flow along a certain family of curves, away from the first fixed point and toward the second fixed point. Unlike the hyperbolic case, these curves are not circular arcs, but certain curves which under stereographic projection from the sphere to the plane appear as spiral curves which twist counterclockwise infinitely often around one fixed point and twist clockwise infinitely often around the other fixed point. In general, the two fixed points may be any two distinct points on the Riemann sphere.
You can probably guess the physical interpretation in the case when the two fixed points are 0, ∞: an observer who is both rotating (with constant angular velocity) about some axis and moving along the same axis, will see the appearance of the night sky transform according to the one-parameter subgroup of loxodromic transformations with fixed points 0, ∞, and with ρ, α determined respectively by the magnitude of the actual linear and angular velocities.
=== Stereographic projection ===
These images show Möbius transformations stereographically projected onto the Riemann sphere. Note in particular that when projected onto a sphere, the special case of a fixed point at infinity looks no different from having the fixed points in an arbitrary location.
== Iterating a transformation ==
If a transformation {\displaystyle {\mathfrak {H}}} has fixed points γ1, γ2, and characteristic constant k, then {\displaystyle {\mathfrak {H}}'={\mathfrak {H}}^{n}} will have {\displaystyle \gamma _{1}'=\gamma _{1},\gamma _{2}'=\gamma _{2},k'=k^{n}}.
This can be used to iterate a transformation, or to animate one by breaking it up into steps.
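For a diagonalizable example this is easy to verify with matrix powers; in this sketch (numpy and the helper `fixed_points` are ours) the characteristic constant is computed as the ratio of the eigenvalues sorted by magnitude:

```python
import numpy as np

# Iterating a transformation: H^n has the same fixed points, and its
# characteristic constant is k^n.

H = np.array([[2.0, 1.0], [1.0, 1.0]])     # det = 1, tr^2 = 9: hyperbolic
Hn = np.linalg.matrix_power(H, 3)

def fixed_points(M):
    (a, b), (c, d) = M
    # Fixed points of z -> (az + b)/(cz + d) solve c z^2 + (d - a) z - b = 0.
    return sorted(np.roots([c, d - a, -b]))

fp, fpn = fixed_points(H), fixed_points(Hn)
assert all(abs(x - y) < 1e-9 for x, y in zip(fp, fpn))

# Characteristic constant: ratio of eigenvalues, larger over smaller.
lam = sorted(np.linalg.eigvals(H), key=abs)
lam_n = sorted(np.linalg.eigvals(Hn), key=abs)
k, kn = lam[1] / lam[0], lam_n[1] / lam_n[0]
assert abs(kn - k**3) < 1e-6
```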
These images show three points (red, blue and black) continuously iterated under transformations with various characteristic constants.
And these images demonstrate what happens when you transform a circle under hyperbolic, elliptic, and loxodromic transforms. In the elliptic and loxodromic images, the value of α is 1/10.
== Higher dimensions ==
In higher dimensions, a Möbius transformation is a homeomorphism of {\displaystyle {\overline {\mathbb {R} ^{n}}}}, the one-point compactification of {\displaystyle \mathbb {R} ^{n}}
, which is a finite composition of inversions in spheres and reflections in hyperplanes. Liouville's theorem in conformal geometry states that in dimension at least three, all conformal transformations are Möbius transformations. Every Möbius transformation can be put in the form
{\displaystyle f(x)=b+{\frac {\alpha A(x-a)}{|x-a|^{\varepsilon }}},}
where {\displaystyle a,b\in \mathbb {R} ^{n}}, {\displaystyle \alpha \in \mathbb {R} }, {\displaystyle A} is an orthogonal matrix, and {\displaystyle \varepsilon } is 0 or 2. The group of Möbius transformations is also called the Möbius group.
The orientation-preserving Möbius transformations form the connected component of the identity in the Möbius group. In dimension n = 2, the orientation-preserving Möbius transformations are exactly the maps of the Riemann sphere covered here. The orientation-reversing ones are obtained from these by complex conjugation.
The domain of Möbius transformations, i.e. {\displaystyle {\overline {\mathbb {R} ^{n}}}}, is homeomorphic to the n-dimensional sphere {\displaystyle S^{n}}. The canonical isomorphism between these two spaces is the Cayley transform, which is itself a Möbius transformation of {\displaystyle {\overline {\mathbb {R} ^{n+1}}}}. This identification means that Möbius transformations can also be thought of as conformal isomorphisms of {\displaystyle S^{n}}. The n-sphere, together with the action of the Möbius group, is a geometric structure (in the sense of Klein's Erlangen program) called Möbius geometry.
== Applications ==
=== Lorentz transformation ===
An isomorphism of the Möbius group with the Lorentz group was noted by several authors: based on previous work of Felix Klein (1893, 1897) on automorphic functions related to hyperbolic geometry and Möbius geometry, Gustav Herglotz (1909) showed that hyperbolic motions (i.e. isometric automorphisms of a hyperbolic space) transforming the unit sphere into itself correspond to Lorentz transformations, by which Herglotz was able to classify the one-parameter Lorentz transformations into loxodromic, elliptic, hyperbolic, and parabolic groups. Other authors include Emil Artin (1957), H. S. M. Coxeter (1965), Roger Penrose and Wolfgang Rindler (1984), Tristan Needham (1997), and W. M. Olivia (2002).
Minkowski space is the four-dimensional real coordinate space R4, the space of ordered quadruples (x0, x1, x2, x3) of real numbers, together with a quadratic form
{\displaystyle Q(x_{0},x_{1},x_{2},x_{3})=x_{0}^{2}-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}.}
Borrowing terminology from special relativity, points with Q > 0 are considered timelike; in addition, if x0 > 0, then the point is called future-pointing. Points with Q < 0 are called spacelike. The null cone S consists of those points where Q = 0; the future null cone N+ are those points on the null cone with x0 > 0. The celestial sphere is then identified with the collection of rays in N+ whose initial point is the origin of R4. The collection of linear transformations on R4 with positive determinant preserving the quadratic form Q and preserving the time direction form the restricted Lorentz group SO+(1, 3).
In connection with the geometry of the celestial sphere, the group of transformations SO+(1, 3) is identified with the group PSL(2, C) of Möbius transformations of the sphere. To each (x0, x1, x2, x3) ∈ R4, associate the hermitian matrix
{\displaystyle X={\begin{bmatrix}x_{0}+x_{1}&x_{2}+ix_{3}\\x_{2}-ix_{3}&x_{0}-x_{1}\end{bmatrix}}.}
The determinant of the matrix X is equal to Q(x0, x1, x2, x3). The special linear group acts on the space of such matrices via
{\displaystyle X\mapsto AXA^{*}\qquad (1)}
for each A ∈ SL(2, C), and this action of SL(2, C) preserves the determinant of X because det A = 1. Since the determinant of X is identified with the quadratic form Q, SL(2, C) acts by Lorentz transformations. On dimensional grounds, SL(2, C) covers a neighborhood of the identity of SO(1, 3). Since SL(2, C) is connected, it covers the entire restricted Lorentz group SO+(1, 3). Furthermore, since the kernel of the action (1) is the subgroup {±I}, passing to the quotient group gives the group isomorphism
{\displaystyle \operatorname {PSL} (2,\mathbb {C} )\cong \operatorname {SO} ^{+}(1,3).}
Focusing attention now on the case when (x0, x1, x2, x3) is null, the matrix X has zero determinant, and therefore splits as the outer product of a complex two-vector ξ with its complex conjugate:
The two-component vector ξ is acted upon by SL(2, C) in a manner compatible with (1). It is now clear that the kernel of the representation of SL(2, C) on hermitian matrices is {±I}.
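Both the determinant identity det X = Q and its preservation under conjugation by an SL(2, C) element can be checked numerically; in this sketch (numpy and the sample values are ours) the action is written as X ↦ A X A†, the standard realization of the action referred to above:

```python
import numpy as np

# The hermitian matrix X built from (x0, x1, x2, x3) satisfies
# det X = Q = x0^2 - x1^2 - x2^2 - x3^2, and conjugation by any
# A in SL(2, C) preserves this determinant.

x0, x1, x2, x3 = 2.0, 0.5, -1.0, 0.25
X = np.array([[x0 + x1, x2 + 1j * x3],
              [x2 - 1j * x3, x0 - x1]])

Q = x0**2 - x1**2 - x2**2 - x3**2
assert abs(np.linalg.det(X) - Q) < 1e-12

A = np.array([[1, 1j], [0, 1]])        # det A = 1, so A is in SL(2, C)
Y = A @ X @ A.conj().T                 # the action X -> A X A†
assert abs(np.linalg.det(Y) - Q) < 1e-12
```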
The action of PSL(2, C) on the celestial sphere may also be described geometrically using stereographic projection. Consider first the hyperplane in R4 given by x0 = 1. The celestial sphere may be identified with the sphere S+ of intersection of the hyperplane with the future null cone N+. The stereographic projection from the north pole (1, 0, 0, 1) of this sphere onto the plane x3 = 0 takes a point with coordinates (1, x1, x2, x3) with
{\displaystyle x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1}
to the point
{\displaystyle \left(1,{\frac {x_{1}}{1-x_{3}}},{\frac {x_{2}}{1-x_{3}}},0\right).}
Introducing the complex coordinate
{\displaystyle \zeta ={\frac {x_{1}+ix_{2}}{1-x_{3}}},}
the inverse stereographic projection gives the following formula for a point (x1, x2, x3) on S+:
The action of SO+(1, 3) on the points of N+ does not preserve the hyperplane S+, but acting on points in S+ and then rescaling so that the result is again in S+ gives an action of SO+(1, 3) on the sphere which goes over to an action on the complex variable ζ. In fact, this action is by fractional linear transformations, although this is not easily seen from this representation of the celestial sphere. Conversely, any fractional linear transformation of the ζ variable goes over to a unique Lorentz transformation on N+, possibly after a suitable (uniquely determined) rescaling.
A more invariant description of the stereographic projection which allows the action to be more clearly seen is to consider the variable ζ = z:w as a ratio of a pair of homogeneous coordinates for the complex projective line CP1. The stereographic projection goes over to a transformation from C2 − {0} to N+ which is homogeneous of degree two with respect to real scalings
which agrees with (4) upon restriction to scales in which
{\displaystyle z{\bar {z}}+w{\bar {w}}=1.}
The components of (5) are precisely those obtained from the outer product
{\displaystyle {\begin{bmatrix}x_{0}+x_{1}&x_{2}+ix_{3}\\x_{2}-ix_{3}&x_{0}-x_{1}\end{bmatrix}}=2{\begin{bmatrix}z\\w\end{bmatrix}}{\begin{bmatrix}{\bar {z}}&{\bar {w}}\end{bmatrix}}.}
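This outer-product identity can be verified numerically: the rank-one Hermitian matrix built from any spinor (z, w) yields a null 4-vector. A small sketch (not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
z, w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# The rank-one Hermitian matrix 2 * (z, w)^T (conj(z), conj(w)).
spinor = np.array([z, w])
X = 2 * np.outer(spinor, spinor.conj())

# Read off the 4-vector from the matrix entries.
x0 = (X[0, 0] + X[1, 1]).real / 2
x1 = (X[0, 0] - X[1, 1]).real / 2
x2, x3 = X[0, 1].real, X[0, 1].imag

# The vector is null: Q = x0^2 - x1^2 - x2^2 - x3^2 = 0, which equals
# det X since X has rank one.
assert np.isclose(x0**2 - x1**2 - x2**2 - x3**2, 0)
assert np.isclose(np.linalg.det(X), 0)
```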
In summary, the action of the restricted Lorentz group SO+(1,3) agrees with that of the Möbius group PSL(2, C). This motivates the following definition. In dimension n ≥ 2, the Möbius group Möb(n) is the group of all orientation-preserving conformal isometries of the round sphere Sn to itself. By realizing the conformal sphere as the space of future-pointing rays of the null cone in the Minkowski space R1,n+1, there is an isomorphism of Möb(n) with the restricted Lorentz group SO+(1,n+1) of Lorentz transformations with positive determinant, preserving the direction of time.
Coxeter began instead with the equivalent quadratic form
{\displaystyle Q(x_{1},x_{2},x_{3},x_{4})=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}-x_{4}^{2}.}
He identified the Lorentz group with transformations for which {x | Q(x) = −1} is stable. Then he interpreted the x's as homogeneous coordinates and {x | Q(x) = 0}, the null cone, as the Cayley absolute for a hyperbolic space of points {x | Q(x) < 0}. Next, Coxeter introduced the variables
{\displaystyle \xi ={\frac {x_{1}}{x_{4}}},\ \eta ={\frac {x_{2}}{x_{4}}},\ \zeta ={\frac {x_{3}}{x_{4}}}}
so that the Lorentz-invariant quadric corresponds to the sphere
{\displaystyle \xi ^{2}+\eta ^{2}+\zeta ^{2}=1.}
Coxeter notes that Felix Klein also wrote of this correspondence, applying stereographic projection from (0, 0, 1) to the complex plane
{\textstyle z={\frac {\xi +i\eta }{1-\zeta }}.}
Coxeter used the fact that circles of the inversive plane represent planes of hyperbolic space, and the general homography is the product of inversions in two or four circles, corresponding to the general hyperbolic displacement which is the product of inversions in two or four planes.
=== Hyperbolic space ===
As seen above, the Möbius group PSL(2, C) acts on Minkowski space as the group of those isometries that preserve the origin, the orientation of space and the direction of time. Restricting to the points where Q = 1 in the positive light cone, which form a model of hyperbolic 3-space H3, we see that the Möbius group acts on H3 as a group of orientation-preserving isometries. In fact, the Möbius group is equal to the group of orientation-preserving isometries of hyperbolic 3-space. If we use the Poincaré ball model, identifying the unit ball in R3 with H3, then we can think of the Riemann sphere as the "conformal boundary" of H3. Every orientation-preserving isometry of H3 gives rise to a Möbius transformation on the Riemann sphere and vice versa.
== See also ==
== Notes ==
== References ==
Specific
General
== Further reading ==
Lawson, M. V. (1998). "The Möbius Inverse Monoid". Journal of Algebra. 200 (2): 428. doi:10.1006/jabr.1997.7242.
== External links ==
"Quasi-conformal mapping", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Conformal maps gallery
Weisstein, Eric W. "Linear Fractional Transformation". MathWorld. | Wikipedia/Möbius_transformation |
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class; "degeneracy" is the condition of being a degenerate case.
The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a "line segment".
Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate.
For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if needed) in each specific situation.
A degenerate case thus has special features which makes it non-generic, or a special case. However, not all non-generic or special cases are degenerate. For example, right triangles, isosceles triangles and equilateral triangles are non-generic and non-degenerate. In fact, degenerate cases often correspond to singularities, either in the object or in some configuration space. For example, a conic section is degenerate if and only if it has singular points (e.g., point, line, intersecting lines).
== In geometry ==
=== Conic section ===
A degenerate conic is a conic section (a second-degree plane curve, defined by a polynomial equation of degree two) that fails to be an irreducible curve.
A point is a degenerate circle, namely one with radius 0.
The line is a degenerate case of a parabola if the parabola resides on a tangent plane. In inversive geometry, a line is a degenerate case of a circle, with infinite radius.
Two parallel lines also form a degenerate parabola.
A line segment can be viewed as a degenerate case of an ellipse in which the semiminor axis goes to zero, the foci go to the endpoints, and the eccentricity goes to one.
A circle can be thought of as a degenerate ellipse, as the eccentricity approaches 0 and the foci merge.
An ellipse can also degenerate into a single point.
A hyperbola can degenerate into two lines crossing at a point, through a family of hyperbolae having those lines as common asymptotes.
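A convenient computational test (a standard fact, stated here as a supplement): a conic Ax² + Bxy + Cy² + Dx + Ey + F = 0 is degenerate exactly when its associated symmetric 3×3 matrix is singular. A minimal sketch:

```python
import numpy as np

def conic_matrix(A, B, C, D, E, F):
    """Symmetric matrix of the conic A x^2 + B xy + C y^2 + D x + E y + F = 0;
    the conic is degenerate exactly when this matrix has determinant zero."""
    return np.array([[A,   B/2, D/2],
                     [B/2, C,   E/2],
                     [D/2, E/2, F  ]])

# xy = 0 is the pair of coordinate axes: a degenerate conic (determinant 0).
assert np.isclose(np.linalg.det(conic_matrix(0, 1, 0, 0, 0, 0)), 0)
# x^2 + y^2 - 1 = 0 is a circle: non-degenerate (nonzero determinant).
assert not np.isclose(np.linalg.det(conic_matrix(1, 0, 1, 0, 0, -1)), 0)
```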
=== Triangle ===
A degenerate triangle is a "flat" triangle in the sense that it is contained in a line segment. It has thus collinear vertices and zero area. If the three vertices are all distinct, it has two 0° angles and one 180° angle. If two vertices are equal, it has one 0° angle and two undefined angles. If all three vertices are equal, all three angles are undefined.
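The zero-area characterization is easy to check with the shoelace formula; collinear vertices give exactly zero:

```python
def triangle_area(a, b, c):
    """Unsigned area via the shoelace (cross product) formula;
    a degenerate (collinear) triangle yields 0."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2

print(triangle_area((0, 0), (4, 0), (0, 3)))  # 6.0 (ordinary triangle)
print(triangle_area((0, 0), (1, 1), (3, 3)))  # 0.0 (degenerate: collinear)
```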
=== Rectangle ===
A rectangle with one pair of opposite sides of length zero degenerates to a line segment, with zero area. If both of the rectangle's pairs of opposite sides have length zero, the rectangle degenerates to a point.
=== Hyperrectangle ===
A hyperrectangle is the n-dimensional analog of a rectangle. If any of its sides along the n axes has length zero, it degenerates to a lower-dimensional hyperrectangle, all the way down to a point if the sides aligned with every axis have length zero.
=== Convex polygon ===
A convex polygon is degenerate if at least two consecutive sides coincide at least partially, or at least one side has zero length, or at least one angle is 180°. Thus a degenerate convex polygon of n sides looks like a polygon with fewer sides. In the case of triangles, this definition coincides with the one that has been given above.
=== Convex polyhedron ===
A convex polyhedron is degenerate if either two adjacent facets are coplanar or two edges are aligned. In the case of a tetrahedron, this is equivalent to saying that all of its vertices lie in the same plane, giving it a volume of zero.
=== Standard torus ===
In contexts where self-intersection is allowed, a double-covered sphere is a degenerate standard torus where the axis of revolution passes through the center of the generating circle, rather than outside it.
A torus degenerates to a circle when its minor radius goes to 0.
=== Sphere ===
When the radius of a sphere goes to zero, the resulting degenerate sphere of zero volume is a point.
=== Other ===
See general position for other examples.
== Elsewhere ==
A set containing a single point is a degenerate continuum.
Objects such as the digon and monogon can be viewed as degenerate cases of polygons: valid in a general abstract mathematical sense, but not part of the original Euclidean conception of polygons.
A random variable which can only take one value has a degenerate distribution; if that value is the real number 0, then its probability density is the Dirac delta function.
A root of a polynomial is sometimes said to be degenerate if it is a multiple root, since generically the n roots of an nth degree polynomial are all distinct. This usage carries over to eigenproblems: a degenerate eigenvalue is a multiple root of the characteristic polynomial.
In quantum mechanics, any such multiplicity in the eigenvalues of the Hamiltonian operator gives rise to degenerate energy levels. Usually any such degeneracy indicates some underlying symmetry in the system.
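As a small numerical illustration (a sketch, not from the source): a Hamiltonian that is invariant under rotations in a plane has a repeated eigenvalue, which also appears as a multiple root of its characteristic polynomial:

```python
import numpy as np

# A symmetric matrix invariant under rotations in the xy-plane; that
# symmetry forces a doubly degenerate eigenvalue.
H = np.diag([2.0, 2.0, 5.0])
vals = np.linalg.eigvalsh(H)
print(vals)  # [2. 2. 5.] -- the eigenvalue 2 is doubly degenerate

# The same degeneracy, seen as a multiple root of det(H - t I) = (2-t)^2 (5-t).
coeffs = np.poly(H)            # characteristic polynomial coefficients
roots = np.roots(coeffs)
print(np.sort(roots.real))     # approximately [2. 2. 5.]
```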
== See also ==
Degeneracy (graph theory)
Degenerate form
Trivial (mathematics)
Pathological (mathematics)
Vacuous truth
== References == | Wikipedia/Degenerate_case |
Noncommutative algebraic geometry is a branch of mathematics, and more specifically a direction in noncommutative geometry, that studies the geometric properties of formal duals of non-commutative algebraic objects such as rings as well as geometric objects derived from them (e.g. by gluing along localizations or taking noncommutative stack quotients).
For example, noncommutative algebraic geometry is supposed to extend a notion of an algebraic scheme by suitable gluing of spectra of noncommutative rings; depending on how literally and how generally this aim (and a notion of spectrum) is understood in the noncommutative setting, this has been achieved at various levels of success. The noncommutative ring generalizes here a commutative ring of regular functions on a commutative scheme. Functions on usual spaces in the traditional (commutative) algebraic geometry have a product defined by pointwise multiplication; as the values of these functions commute, the functions also commute: a times b equals b times a. It is remarkable that viewing noncommutative associative algebras as algebras of functions on a "noncommutative" would-be space is a far-reaching geometric intuition, though it formally looks like a fallacy.
Much of the motivation for noncommutative geometry, and in particular for the noncommutative algebraic geometry, is from physics; especially from quantum physics, where the algebras of observables are indeed viewed as noncommutative analogues of functions, hence having the ability to observe their geometric aspects is desirable.
One of the values of the field is that it also provides new techniques to study objects in commutative algebraic geometry such as Brauer groups.
The methods of noncommutative algebraic geometry are analogs of the methods of commutative algebraic geometry, but frequently the foundations are different. Local behavior in commutative algebraic geometry is captured by commutative algebra and especially the study of local rings. These do not have a ring-theoretic analogue in the noncommutative setting; though in a categorical setup one can talk about stacks of local categories of quasicoherent sheaves over noncommutative spectra. Global properties such as those arising from homological algebra and K-theory more frequently carry over to the noncommutative setting.
== History ==
=== Classical approach: the issue of non-commutative localization ===
Commutative algebraic geometry begins by constructing the spectrum of a ring. The points of the algebraic variety (or more generally, scheme) are the prime ideals of the ring, and the functions on the algebraic variety are the elements of the ring. A noncommutative ring, however, may not have any proper non-zero two-sided prime ideals. For instance, this is true of the Weyl algebra of polynomial differential operators on affine space: The Weyl algebra is a simple ring. Therefore, one can for instance attempt to replace a prime spectrum by a primitive spectrum: there are also the theory of non-commutative localization as well as descent theory. This works to some extent: for instance, Dixmier's enveloping algebras may be thought of as working out non-commutative algebraic geometry for the primitive spectrum of an enveloping algebra of a Lie algebra. Another work in a similar spirit is Michael Artin’s notes titled “noncommutative rings”, which in part is an attempt to study representation theory from a non-commutative-geometry point of view. The key insight to both approaches is that irreducible representations, or at least primitive ideals, can be thought of as “non-commutative points”.
=== Modern viewpoint using categories of sheaves ===
As it turned out, starting from, say, primitive spectra, it was not easy to develop a workable sheaf theory. One might imagine this difficulty is because of a sort of quantum phenomenon: points in a space can influence points far away (and in fact, it is not appropriate to treat points individually and view a space as a mere collection of the points).
Due to the above, one accepts a paradigm implicit in Pierre Gabriel's thesis and partly justified by the Gabriel–Rosenberg reconstruction theorem (after Pierre Gabriel and Alexander L. Rosenberg) that a commutative scheme can be reconstructed, up to isomorphism of schemes, solely from the abelian category of quasicoherent sheaves on the scheme. Alexander Grothendieck taught that to do geometry one does not need a space, it is enough to have a category of sheaves on that would-be space; this idea has been transmitted to noncommutative algebra by Yuri Manin. There are also weaker reconstruction theorems from the derived categories of (quasi)coherent sheaves, motivating the derived noncommutative algebraic geometry (see just below).
=== Derived algebraic geometry ===
Perhaps the most recent approach is through the deformation theory, placing non-commutative algebraic geometry in the realm of derived algebraic geometry.
As a motivating example, consider the one-dimensional Weyl algebra over the complex numbers C. This is the quotient of the free ring C<x, y> by the relation
xy - yx = 1.
This ring represents the polynomial differential operators in a single variable x; y stands in for the differential operator ∂x. This ring fits into a one-parameter family given by the relations xy - yx = α. When α is not zero, then this relation determines a ring isomorphic to the Weyl algebra. When α is zero, however, the relation is the commutativity relation for x and y, and the resulting quotient ring is the polynomial ring in two variables, C[x, y]. Geometrically, the polynomial ring in two variables represents the two-dimensional affine space A2, so the existence of this one-parameter family says that affine space admits non-commutative deformations to the space determined by the Weyl algebra. This deformation is related to the symbol of a differential operator and that A2 is the cotangent bundle of the affine line. (Studying the Weyl algebra can lead to information about affine space: The Dixmier conjecture about the Weyl algebra is equivalent to the Jacobian conjecture about affine space.)
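The defining relation of the Weyl algebra can be seen concretely by letting the generators act on polynomials: the commutator of differentiation and multiplication by x acts as the identity. (Which of xy − yx or yx − xy equals 1 depends on how the abstract product is matched to operator composition; the sketch below uses "apply the right factor first".)

```python
# Polynomials are represented as coefficient lists [c0, c1, c2, ...]
# standing for c0 + c1 t + c2 t^2 + ...
def mul_x(p):
    """The operator 'multiply by x': shifts coefficients up one degree."""
    return [0] + p

def ddx(p):
    """The operator d/dx on coefficient lists."""
    return [i * c for i, c in enumerate(p)][1:]

p = [3, 1, 4, 1, 5]            # an arbitrary test polynomial

# (d/dx ∘ x  -  x ∘ d/dx) applied to p gives back p: the commutator of
# the two generators acts as the identity operator.
lhs = ddx(mul_x(p))
rhs = mul_x(ddx(p))
commutator = [a - b for a, b in zip(lhs, rhs + [0])]
print(commutator == p)  # True
```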
In this line of the approach, the notion of operad, a set or space of operations, becomes prominent: in the introduction to (Francis 2008), Francis writes:
We begin the study of certain less commutative algebraic geometries. … algebraic geometry over {\displaystyle {\mathcal {E}}_{n}}-rings can be thought of as interpolating between some derived theories of noncommutative and commutative algebraic geometries. As n increases, these {\displaystyle {\mathcal {E}}_{n}}-algebras converge to the derived algebraic geometry of Toën-Vezzosi and Lurie.
== Proj of a noncommutative ring ==
One of the basic constructions in commutative algebraic geometry is the Proj construction of a graded commutative ring. This construction builds a projective algebraic variety together with a very ample line bundle whose homogeneous coordinate ring is the original ring. Building the underlying topological space of the variety requires localizing the ring, but building sheaves on that space does not. By a theorem of Jean-Pierre Serre, quasi-coherent sheaves on Proj of a graded ring are the same as graded modules over the ring up to finite dimensional factors. The philosophy of topos theory promoted by Alexander Grothendieck says that the category of sheaves on a space can serve as the space itself. Consequently, in non-commutative algebraic geometry one often defines Proj in the following fashion: Let R be a graded C-algebra, and let Mod-R denote the category of graded right R-modules. Let F denote the subcategory of Mod-R consisting of all modules of finite length. Proj R is defined to be the quotient of the abelian category Mod-R by F. Equivalently, it is a localization of Mod-R in which two modules become isomorphic if, after taking their direct sums with appropriately chosen objects of F, they are isomorphic in Mod-R.
This approach leads to a theory of non-commutative projective geometry. A non-commutative smooth projective curve turns out to be a smooth commutative curve, but for singular curves or smooth higher-dimensional spaces, the non-commutative setting allows new objects.
== See also ==
Derived noncommutative algebraic geometry
Q-category
quasi-free algebra
== Notes ==
== References ==
M. Artin, J. J. Zhang, Noncommutative projective schemes, Advances in Mathematics 109 (1994), no. 2, 228–287, doi.
Yuri I. Manin, Quantum groups and non-commutative geometry, CRM, Montreal 1988.
Yuri I Manin, Topics in noncommutative geometry, 176 pp. Princeton 1991.
A. Bondal, M. van den Bergh, Generators and representability of functors in commutative and noncommutative geometry, Moscow Mathematical Journal 3 (2003), no. 1, 1–36.
A. Bondal, D. Orlov, Reconstruction of a variety from the derived category and groups of autoequivalences, Compositio Mathematica 125 (2001), 327–344 doi
Francis, John (2008), Derived algebraic geometry over {\displaystyle {\mathcal {E}}_{n}}-rings (PDF) (Ph.D. thesis), Massachusetts Institute of Technology, MR 2717524, ProQuest 304382161
O. A. Laudal, Noncommutative algebraic geometry, Rev. Mat. Iberoamericana 19, n. 2 (2003), 509--580; euclid.
Fred Van Oystaeyen, Alain Verschoren, Non-commutative algebraic geometry, Springer Lect. Notes in Math. 887, 1981.
Fred van Oystaeyen, Algebraic geometry for associative algebras, Marcel Dekker 2000. vi+287 pp.
A. L. Rosenberg, Noncommutative algebraic geometry and representations of quantized algebras, MIA 330, Kluwer Academic Publishers Group, Dordrecht, 1995. xii+315 pp. ISBN 0-7923-3575-9
M. Kontsevich, A. Rosenberg, Noncommutative smooth spaces, The Gelfand Mathematical Seminars, 1996--1999, 85--108, Gelfand Math. Sem., Birkhäuser, Boston 2000; arXiv:math/9812158
A. L. Rosenberg, Noncommutative schemes, Compositio Mathematica 112 (1998) 93--125, doi; Underlying spaces of noncommutative schemes, preprint MPIM2003-111, dvi, ps; MSRI lecture Noncommutative schemes and spaces (Feb 2000): video
Pierre Gabriel, Des catégories abéliennes, Bulletin de la Société Mathématique de France 90 (1962), p. 323-448, numdam
Zoran Škoda, Some equivariant constructions in noncommutative algebraic geometry, Georgian Mathematical Journal 16 (2009), No. 1, 183--202, arXiv:0811.4770.
Dmitri Orlov, Quasi-coherent sheaves in commutative and non-commutative geometry, Izv. RAN. Ser. Mat., 2003, vol. 67, issue 3, 119–138 (MPI preprint version dvi, ps)
M. Kapranov, Noncommutative geometry based on commutator expansions, Journal für die reine und angewandte Mathematik 505 (1998), 73-118, math.AG/9802041.
== Further reading ==
A. Bondal, D. Orlov, Semi-orthogonal decomposition for algebraic varieties_, PreprintMPI/95–15, alg-geom/9506006
Tomasz Maszczyk, Noncommutative geometry through monoidal categories, math.QA/0611806
S. Mahanta, On some approaches towards non-commutative algebraic geometry, math.QA/0501166
Ludmil Katzarkov, Maxim Kontsevich, Tony Pantev, Hodge theoretic aspects of mirror symmetry, arxiv/0806.0107
Dmitri Kaledin, Tokyo lectures "Homological methods in non-commutative geometry", pdf, TeX; and (similar but different) Seoul lectures
== External links ==
MathOverflow, Theories of Noncommutative Geometry
noncommutative algebraic geometry at the nLab
equivariant noncommutative algebraic geometry at the nLab
noncommutative scheme at the nLab
Kapranov's noncommutative geometry at the nLab | Wikipedia/Noncommutative_algebraic_geometry |
In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.
== Group theory ==
The commutator of two elements, g and h, of a group G, is the element
[g, h] = g−1h−1gh.
This element is equal to the group's identity if and only if g and h commute (that is, if and only if gh = hg).
The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group.
The definition of the commutator above is used throughout this article, but many group theorists define the commutator as
[g, h] = ghg−1h−1.
Using the first definition, this can be expressed as [g−1, h−1].
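A small sketch (not from the source) computing commutators in the symmetric group S3, where permutations are represented as tuples:

```python
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def commutator(g, h):
    """[g, h] = g^-1 h^-1 g h, the convention used in this article."""
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

e = (0, 1, 2)
g = (1, 0, 2)   # the transposition swapping 0 and 1
h = (0, 2, 1)   # the transposition swapping 1 and 2

print(commutator(g, h) == e)  # False: g and h do not commute
print(commutator(g, e) == e)  # True: everything commutes with the identity
```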
=== Identities (group theory) ===
Commutator identities are an important tool in group theory. The expression ax denotes the conjugate of a by x, defined as x−1ax.
1. {\displaystyle x^{y}=x^{-1}[x,y].}
2. {\displaystyle [y,x]=[x,y]^{-1}.}
3. {\displaystyle [x,zy]=[x,y]\cdot [x,z]^{y}} and {\displaystyle [xz,y]=[x,y]^{z}\cdot [z,y].}
4. {\displaystyle \left[x,y^{-1}\right]=[y,x]^{y^{-1}}} and {\displaystyle \left[x^{-1},y\right]=[y,x]^{x^{-1}}.}
5. {\displaystyle \left[\left[x,y^{-1}\right],z\right]^{y}\cdot \left[\left[y,z^{-1}\right],x\right]^{z}\cdot \left[\left[z,x^{-1}\right],y\right]^{x}=1} and {\displaystyle \left[\left[x,y\right],z^{x}\right]\cdot \left[[z,x],y^{z}\right]\cdot \left[[y,z],x^{y}\right]=1.}
Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section).
N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as xax−1. This is often written
x
a
{\displaystyle {}^{x}a}
. Similar identities hold for these conventions.
Many identities that are true modulo certain subgroups are also used. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well:
{\displaystyle (xy)^{2}=x^{2}y^{2}[y,x][[y,x],y].}
If the derived subgroup is central, then
{\displaystyle (xy)^{n}=x^{n}y^{n}[y,x]^{\binom {n}{2}}.}
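This identity can be checked in the Heisenberg group of upper unitriangular 3×3 matrices, whose derived subgroup is central. A sketch (not from the source):

```python
import numpy as np

def comm(g, h):
    """[g, h] = g^-1 h^-1 g h, the article's convention."""
    return np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h

def power(g, n):
    return np.linalg.matrix_power(g, n)

# Generators of the Heisenberg group; [y, x] lands in the center.
x = np.array([[1., 1., 0.], [0., 1., 0.], [0., 0., 1.]])
y = np.array([[1., 0., 0.], [0., 1., 1.], [0., 0., 1.]])

n = 5
lhs = power(x @ y, n)
rhs = power(x, n) @ power(y, n) @ power(comm(y, x), n * (n - 1) // 2)
assert np.allclose(lhs, rhs)   # (xy)^n = x^n y^n [y,x]^(n choose 2)
```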
== Ring theory ==
Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associative algebra) is defined differently by
{\displaystyle [a,b]=ab-ba.}
The commutator is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.
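For square matrices the bracket [A, B] = AB − BA satisfies the Lie algebra axioms directly; a quick numerical check (a sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = rng.standard_normal((3, 4, 4))   # three random 4x4 matrices

def comm(X, Y):
    return X @ Y - Y @ X

# Anticommutativity and the Jacobi identity hold for matrices.
assert np.allclose(comm(A, B), -comm(B, A))
assert np.allclose(
    comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B)),
    np.zeros((4, 4)))
```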
The anticommutator of two elements a and b of a ring or associative algebra is defined by
{\displaystyle \{a,b\}=ab+ba.}
Sometimes {\displaystyle [a,b]_{+}} is used to denote the anticommutator, while {\displaystyle [a,b]_{-}} is then used for the commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics.
The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned.
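The Pauli matrices give the standard worked example: distinct Pauli matrices anticommute and each squares to the identity, {σi, σj} = 2δij I, which is exactly the Clifford-algebra relation. A sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def anticomm(X, Y):
    return X @ Y + Y @ X

assert np.allclose(anticomm(sx, sy), 0)        # distinct Paulis anticommute
assert np.allclose(anticomm(sy, sz), 0)
assert np.allclose(anticomm(sx, sx), 2 * I2)   # each squares to the identity
```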
=== Identities (ring theory) ===
The commutator has the following properties:
==== Lie-algebra identities ====
1. {\displaystyle [A+B,C]=[A,C]+[B,C]}
2. {\displaystyle [A,A]=0}
3. {\displaystyle [A,B]=-[B,A]}
4. {\displaystyle [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0}
Relation (3) is called anticommutativity, while (4) is the Jacobi identity.
==== Additional identities ====
1. {\displaystyle [A,BC]=[A,B]C+B[A,C]}
2. {\displaystyle [A,BCD]=[A,B]CD+B[A,C]D+BC[A,D]}
3. {\displaystyle [A,BCDE]=[A,B]CDE+B[A,C]DE+BC[A,D]E+BCD[A,E]}
4. {\displaystyle [AB,C]=A[B,C]+[A,C]B}
5. {\displaystyle [ABC,D]=AB[C,D]+A[B,D]C+[A,D]BC}
6. {\displaystyle [ABCD,E]=ABC[D,E]+AB[C,E]D+A[B,E]CD+[A,E]BCD}
7. {\displaystyle [A,B+C]=[A,B]+[A,C]}
8. {\displaystyle [A+B,C+D]=[A,C]+[A,D]+[B,C]+[B,D]}
9. {\displaystyle [AB,CD]=A[B,C]D+[A,C]BD+CA[B,D]+C[A,D]B=A[B,C]D+AC[B,D]+[A,C]DB+C[A,D]B}
10. {\displaystyle [[A,C],[B,D]]=[[[A,B],C],D]+[[[B,C],D],A]+[[[C,D],A],B]+[[[D,A],B],C]}
If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map {\displaystyle \operatorname {ad} _{A}:R\rightarrow R} given by {\displaystyle \operatorname {ad} _{A}(B)=[A,B]}. In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.
From identity (9), one finds that the commutator of integer powers of ring elements is:
{\displaystyle [A^{N},B^{M}]=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}A^{n}B^{m}[A,B]B^{M-m-1}A^{N-n-1}=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}B^{m}A^{n}[A,B]A^{N-n-1}B^{M-m-1}}
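This expansion, in the form [A^N, B^M] = Σ A^n B^m [A,B] B^(M−m−1) A^(N−n−1), follows by iterating the Leibniz rules and can be checked numerically with random matrices (a sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.standard_normal((2, 3, 3))
N, M = 3, 4

def mp(X, k):
    return np.linalg.matrix_power(X, k)

lhs = mp(A, N) @ mp(B, M) - mp(B, M) @ mp(A, N)   # [A^N, B^M] directly
rhs = sum(mp(A, n) @ mp(B, m) @ (A @ B - B @ A) @ mp(B, M - m - 1) @ mp(A, N - n - 1)
          for n in range(N) for m in range(M))
assert np.allclose(lhs, rhs)
```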
Some of the above identities can be extended to the anticommutator using the above ± subscript notation.
For example:
{\displaystyle [AB,C]_{\pm }=A[B,C]_{-}+[A,C]_{\pm }B}
{\displaystyle [AB,CD]_{\pm }=A[B,C]_{-}D+AC[B,D]_{-}+[A,C]_{-}DB+C[A,D]_{\pm }B}
{\displaystyle [[A,B],[C,D]]=[[[B,C]_{+},A]_{+},D]-[[[B,D]_{+},A]_{+},C]+[[[A,D]_{+},B]_{+},C]-[[[A,C]_{+},B]_{+},D]}
{\displaystyle \left[A,[B,C]_{\pm }\right]+\left[B,[C,A]_{\pm }\right]+\left[C,[A,B]_{\pm }\right]=0}
{\displaystyle [A,BC]_{\pm }=[A,B]_{-}C+B[A,C]_{\pm }=[A,B]_{\pm }C\mp B[A,C]_{-}}
{\displaystyle [A,BC]=[A,B]_{\pm }C\mp B[A,C]_{\pm }}
==== Exponential identities ====
Consider a ring or algebra in which the exponential {\displaystyle e^{A}=\exp(A)=1+A+{\tfrac {1}{2!}}A^{2}+\cdots } can be meaningfully defined, such as a Banach algebra or a ring of formal power series.
In such a ring, Hadamard's lemma applied to nested commutators gives:
{\textstyle e^{A}Be^{-A}\ =\ B+[A,B]+{\frac {1}{2!}}[A,[A,B]]+{\frac {1}{3!}}[A,[A,[A,B]]]+\cdots \ =\ e^{\operatorname {ad} _{A}}(B).}
(For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)).
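Hadamard's lemma can be verified numerically: summing the nested-commutator series reproduces the conjugation e^A B e^{−A}. A self-contained sketch (the matrix exponential is computed by its power series to avoid external dependencies):

```python
import numpy as np

def expm(X, terms=40):
    """Matrix exponential by its power series (fine for small matrices)."""
    out, t = np.zeros_like(X), np.eye(len(X))
    for k in range(1, terms):
        out = out + t
        t = t @ X / k
    return out

rng = np.random.default_rng(5)
A, B = 0.3 * rng.standard_normal((2, 3, 3))

lhs = expm(A) @ B @ expm(-A)

# Sum the series e^{ad_A}(B) = B + [A,B] + [A,[A,B]]/2! + ... term by term.
rhs, term = np.zeros_like(B), B
for k in range(1, 40):
    rhs = rhs + term
    term = (A @ term - term @ A) / k    # next term: ad_A again, divided by k
assert np.allclose(lhs, rhs)
```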
A similar expansion expresses the group commutator of expressions {\displaystyle e^{A}} (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets),
{\displaystyle e^{A}e^{B}e^{-A}e^{-B}=\exp \!\left([A,B]+{\frac {1}{2!}}[A{+}B,[A,B]]+{\frac {1}{3!}}\left({\frac {1}{2}}[A,[B,[B,A]]]+[A{+}B,[A{+}B,[A,B]]]\right)+\cdots \right).}
== Graded rings and algebras ==
When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as
{\displaystyle [\omega ,\eta ]_{gr}:=\omega \eta -(-1)^{\deg \omega \deg \eta }\eta \omega .}
== Adjoint derivation ==
Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element {\displaystyle x\in R}, we define the adjoint mapping {\displaystyle \mathrm {ad} _{x}:R\to R} by:
{\displaystyle \operatorname {ad} _{x}(y)=[x,y]=xy-yx.}
This mapping is a derivation on the ring R:
{\displaystyle \mathrm {ad} _{x}\!(yz)\ =\ \mathrm {ad} _{x}\!(y)\,z\,+\,y\,\mathrm {ad} _{x}\!(z).}
By the Jacobi identity, it is also a derivation over the commutation operation:
{\displaystyle \mathrm {ad} _{x}[y,z]\ =\ [\mathrm {ad} _{x}\!(y),z]\,+\,[y,\mathrm {ad} _{x}\!(z)].}
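Both derivation properties hold identically in any matrix ring, so they can be spot-checked on random matrices (a sketch assuming NumPy is available):

```python
import numpy as np

def comm(X, Y):
    return X @ Y - Y @ X

def ad(X):
    """The adjoint mapping ad_X : Y -> [X, Y]."""
    return lambda Y: comm(X, Y)

rng = np.random.default_rng(2)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

# Leibniz rule: ad_x(yz) = ad_x(y) z + y ad_x(z)
assert np.allclose(ad(x)(y @ z), ad(x)(y) @ z + y @ ad(x)(z))

# Derivation over the bracket: ad_x[y,z] = [ad_x(y), z] + [y, ad_x(z)]
assert np.allclose(ad(x)(comm(y, z)),
                   comm(ad(x)(y), z) + comm(y, ad(x)(z)))
```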
Composing such mappings, we get for example
{\displaystyle \operatorname {ad} _{x}\operatorname {ad} _{y}(z)=[x,[y,z]\,]}
and
{\displaystyle \operatorname {ad} _{x}^{2}\!(z)\ =\ \operatorname {ad} _{x}\!(\operatorname {ad} _{x}\!(z))\ =\ [x,[x,z]\,].}
We may consider {\displaystyle \mathrm {ad} } itself as a mapping, {\displaystyle \mathrm {ad} :R\to \mathrm {End} (R)}, where {\displaystyle \mathrm {End} (R)} is the ring of mappings from R to itself with composition as the multiplication operation. Then {\displaystyle \mathrm {ad} } is a Lie algebra homomorphism, preserving the commutator:
{\displaystyle \operatorname {ad} _{[x,y]}=\left[\operatorname {ad} _{x},\operatorname {ad} _{y}\right].}
By contrast, it is not always a ring homomorphism: usually {\displaystyle \operatorname {ad} _{xy}\,\neq \,\operatorname {ad} _{x}\operatorname {ad} _{y}}.
=== General Leibniz rule ===
The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:
{\displaystyle x^{n}y=\sum _{k=0}^{n}{\binom {n}{k}}\operatorname {ad} _{x}^{k}\!(y)\,x^{n-k}.}
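This identity is exact in any ring, so one can verify it on random matrices for a particular n (a sketch assuming NumPy; n = 4 is an arbitrary choice):

```python
import numpy as np
from math import comb

def comm(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(3)
x = rng.standard_normal((3, 3))
y = rng.standard_normal((3, 3))
n = 4

# Iterated adjoints ad_x^k(y) for k = 0 .. n.
ad_pows = [y]
for _ in range(n):
    ad_pows.append(comm(x, ad_pows[-1]))

lhs = np.linalg.matrix_power(x, n) @ y
rhs = sum(comb(n, k) * ad_pows[k] @ np.linalg.matrix_power(x, n - k)
          for k in range(n + 1))
assert np.allclose(lhs, rhs)
```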
Replacing {\displaystyle x} by the differentiation operator {\displaystyle \partial }, and {\displaystyle y} by the multiplication operator {\displaystyle m_{f}:g\mapsto fg}, we get {\displaystyle \operatorname {ad} (\partial )(m_{f})=m_{\partial (f)}}, and applying both sides to a function g, the identity becomes the usual Leibniz rule for the nth derivative {\displaystyle \partial ^{n}\!(fg)}.
== See also ==
Anticommutativity
Associator
Baker–Campbell–Hausdorff formula
Canonical commutation relation
Centralizer a.k.a. commutant
Derivation (abstract algebra)
Moyal bracket
Pincherle derivative
Poisson bracket
Ternary commutator
Three subgroups lemma
== Notes ==
== References ==
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 0-13-805326-X
Herstein, I. N. (1975), Topics In Algebra (2nd ed.), Wiley, ISBN 0471010901
Lavrov, P.M. (2014), "Jacobi-type identities in algebras and superalgebras", Theoretical and Mathematical Physics, 179 (2): 550–558, arXiv:1304.5050, Bibcode:2014TMP...179..550L, doi:10.1007/s11232-014-0161-2, S2CID 119175276
Liboff, Richard L. (2003), Introductory Quantum Mechanics (4th ed.), Addison-Wesley, ISBN 0-8053-8714-5
McKay, Susan (2000), Finite p-groups, Queen Mary Maths Notes, vol. 18, University of London, ISBN 978-0-902480-17-9, MR 1802994
McMahon, D. (2008), Quantum Field Theory, McGraw Hill, ISBN 978-0-07-154382-8
== Further reading ==
McKenzie, R.; Snow, J. (2005), "Congruence modular varieties: commutator theory", in Kudryavtsev, V. B.; Rosenberg, I. G. (eds.), Structural Theory of Automata, Semigroups, and Universal Algebra, NATO Science Series II, vol. 207, Springer, pp. 273–329, doi:10.1007/1-4020-3817-8_11, ISBN 9781402038174
== External links ==
"Commutator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In algebraic geometry and commutative algebra, the Zariski topology is a topology defined on geometric objects called varieties. It is very different from topologies that are commonly used in real or complex analysis; in particular, it is not Hausdorff. This topology was introduced primarily by Oscar Zariski and later generalized for making the set of prime ideals of a commutative ring (called the spectrum of the ring) a topological space.
The Zariski topology allows tools from topology to be used to study algebraic varieties, even when the underlying field is not a topological field. This is one of the basic ideas of scheme theory, which allows one to build general algebraic varieties by gluing together affine varieties in a way similar to that in manifold theory, where manifolds are built by gluing together charts, which are open subsets of real affine spaces.
The Zariski topology of an algebraic variety is the topology whose closed sets are the algebraic subsets of the variety. In the case of an algebraic variety over the complex numbers, the Zariski topology is thus coarser than the usual topology, as every algebraic set is closed for the usual topology.
The generalization of the Zariski topology to the set of prime ideals of a commutative ring follows from Hilbert's Nullstellensatz, that establishes a bijective correspondence between the points of an affine variety defined over an algebraically closed field and the maximal ideals of the ring of its regular functions. This suggests defining the Zariski topology on the set of the maximal ideals of a commutative ring as the topology such that a set of maximal ideals is closed if and only if it is the set of all maximal ideals that contain a given ideal. Another basic idea of Grothendieck's scheme theory is to consider as points, not only the usual points corresponding to maximal ideals, but also all (irreducible) algebraic varieties, which correspond to prime ideals. Thus the Zariski topology on the set of prime ideals (spectrum) of a commutative ring is the topology such that a set of prime ideals is closed if and only if it is the set of all prime ideals that contain a fixed ideal.
== Zariski topology of varieties ==
In classical algebraic geometry (that is, the part of algebraic geometry in which one does not use schemes, which were introduced by Grothendieck around 1960), the Zariski topology is defined on algebraic varieties. The Zariski topology, defined on the points of the variety, is the topology such that the closed sets are the algebraic subsets of the variety. As the most elementary algebraic varieties are affine and projective varieties, it is useful to make this definition more explicit in both cases. We assume that we are working over a fixed, algebraically closed field k (in classical algebraic geometry, k is usually the field of complex numbers).
=== Affine varieties ===
First, we define the topology on the affine space {\displaystyle \mathbb {A} ^{n},} formed by the n-tuples of elements of k. The topology is defined by specifying its closed sets, rather than its open sets, and these are taken simply to be all the algebraic sets in {\displaystyle \mathbb {A} ^{n}.}
That is, the closed sets are those of the form
{\displaystyle V(S)=\{x\in \mathbb {A} ^{n}\mid f(x)=0{\text{ for all }}f\in S\}}
where S is any set of polynomials in n variables over k. It is a straightforward verification to show that:
V(S) = V((S)), where (S) is the ideal generated by the elements of S;
For any two ideals of polynomials I, J, we have
{\displaystyle V(I)\cup V(J)\,=\,V(IJ);}
{\displaystyle V(I)\cap V(J)\,=\,V(I+J).}
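These two set identities can be illustrated pointwise for principal ideals. The sketch below (illustrative only; the Zariski topology is defined over an algebraically closed field, but the set-level identities hold for any polynomials) takes I = (f), J = (g) and checks them on a grid of sample points in the plane:

```python
from itertools import product

# f and g generate principal ideals I = (f), J = (g); IJ is generated by
# f*g, and I + J contains both f and g.
f = lambda x, y: x * y        # vanishes on the two coordinate axes
g = lambda x, y: x - y        # vanishes on the diagonal

grid = list(product(range(-3, 4), repeat=2))
Vf = {p for p in grid if f(*p) == 0}
Vg = {p for p in grid if g(*p) == 0}

# V(I) ∪ V(J) = V(IJ): f*g vanishes exactly where f or g does.
V_product = {p for p in grid if f(*p) * g(*p) == 0}
assert V_product == Vf | Vg

# V(I) ∩ V(J) = V(I + J): the common zeros of f and g.
V_sum = {p for p in grid if f(*p) == 0 and g(*p) == 0}
assert V_sum == Vf & Vg
```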
It follows that finite unions and arbitrary intersections of the sets V(S) are also of this form, so that these sets form the closed sets of a topology (equivalently, their complements, denoted D(S) and called principal open sets, form the topology itself). This is the Zariski topology on {\displaystyle \mathbb {A} ^{n}.}
If X is an affine algebraic set (irreducible or not) then the Zariski topology on it is defined simply to be the subspace topology induced by its inclusion into some {\displaystyle \mathbb {A} ^{n}.}
Equivalently, it can be checked that:
The elements of the affine coordinate ring {\displaystyle A(X)\,=\,k[x_{1},\dots ,x_{n}]/I(X)} act as functions on X just as the elements of {\displaystyle k[x_{1},\dots ,x_{n}]} act as functions on {\displaystyle \mathbb {A} ^{n}}; here, I(X) is the ideal of all polynomials vanishing on X.
For any set of polynomials S, let T be the set of their images in A(X). Then the subset of X
{\displaystyle V'(T)=\{x\in X\mid f(x)=0,\forall f\in T\}}
(these notations are not standard) is equal to the intersection with X of V(S).
This establishes that the above equation, clearly a generalization of the definition of the closed sets in {\displaystyle \mathbb {A} ^{n}} above, defines the Zariski topology on any affine variety.
=== Projective varieties ===
Recall that {\displaystyle n}-dimensional projective space {\displaystyle \mathbb {P} ^{n}} is defined to be the set of equivalence classes of non-zero points in {\displaystyle \mathbb {A} ^{n+1}} by identifying two points that differ by a scalar multiple in {\displaystyle k}. The elements of the polynomial ring {\displaystyle k[x_{0},\dots ,x_{n}]} are not generally functions on {\displaystyle \mathbb {P} ^{n}} because any point has many representatives that yield different values in a polynomial; however, for homogeneous polynomials the condition of having zero or nonzero value on any given projective point is well-defined since the scalar multiple factors out of the polynomial. Therefore, if {\displaystyle S} is any set of homogeneous polynomials we may reasonably speak of
{\displaystyle V(S)=\{x\in \mathbb {P} ^{n}\mid f(x)=0,\forall f\in S\}.}
The same facts as above may be established for these sets, except that the word "ideal" must be replaced by the phrase "homogeneous ideal", so that the {\displaystyle V(S)}, for sets {\displaystyle S} of homogeneous polynomials, define a topology on {\displaystyle \mathbb {P} ^{n}.} As above the complements of these sets are denoted {\displaystyle D(S)}, or, if confusion is likely to result, {\displaystyle D'(S)}.
The projective Zariski topology is defined for projective algebraic sets just as the affine one is defined for affine algebraic sets, by taking the subspace topology. Similarly, it may be shown that this topology is defined intrinsically by sets of elements of the projective coordinate ring, by the same formula as above.
=== Properties ===
An important property of Zariski topologies is that they have a base consisting of simple elements, namely the D(f) for individual polynomials (or for projective varieties, homogeneous polynomials) f. That these form a basis follows from the formula for the intersection of two Zariski-closed sets given above (apply it repeatedly to the principal ideals generated by the generators of (S)). The open sets in this base are called distinguished or basic open sets. The importance of this property results in particular from its use in the definition of an affine scheme.
By Hilbert's basis theorem and the fact that Noetherian rings are closed under quotients, every affine or projective coordinate ring is Noetherian. As a consequence, affine or projective spaces with the Zariski topology are Noetherian topological spaces, which implies that any closed subset of these spaces is compact.
However, except for finite algebraic sets, no algebraic set is ever a Hausdorff space. In the old topological literature "compact" was taken to include the Hausdorff property, and this convention is still honored in algebraic geometry; therefore compactness in the modern sense is called "quasicompactness" in algebraic geometry. However, since every point (a1, ..., an) is the zero set of the polynomials x1 - a1, ..., xn - an, points are closed and so every variety satisfies the T1 axiom.
Every regular map of varieties is continuous in the Zariski topology. In fact, the Zariski topology is the weakest topology (with the fewest open sets) in which this is true and in which points are closed. This is easily verified by noting that the Zariski-closed sets are simply the intersections of the inverse images of 0 by the polynomial functions, considered as regular maps into
{\displaystyle \mathbb {A} ^{1}.}
== Spectrum of a ring ==
In modern algebraic geometry, an algebraic variety is often represented by its associated scheme, which is a topological space (equipped with additional structures) that is locally homeomorphic to the spectrum of a ring. The spectrum of a commutative ring A, denoted Spec A, is the set of the prime ideals of A, equipped with the Zariski topology, for which the closed sets are the sets
{\displaystyle V(I)=\{P\in \operatorname {Spec} A\mid P\supseteq I\}}
where I is an ideal.
To see the connection with the classical picture, note that for any set S of polynomials (over an algebraically closed field), it follows from Hilbert's Nullstellensatz that the points of V(S) (in the old sense) are exactly the tuples (a1, ..., an) such that the ideal generated by the polynomials x1 − a1, ..., xn − an contains S; moreover, these are maximal ideals and by the "weak" Nullstellensatz, an ideal of any affine coordinate ring is maximal if and only if it is of this form. Thus, V(S) is "the same as" the maximal ideals containing S. Grothendieck's innovation in defining Spec was to replace maximal ideals with all prime ideals; in this formulation it is natural to simply generalize this observation to the definition of a closed set in the spectrum of a ring.
Another way, perhaps more similar to the original, to interpret the modern definition is to realize that the elements of A can actually be thought of as functions on the prime ideals of A; namely, as functions on Spec A. Simply, any prime ideal P has a corresponding residue field, which is the field of fractions of the quotient A/P, and any element of A has a reflection in this residue field. Furthermore, the elements that are actually in P are precisely those whose reflection vanishes at P. So if we think of the map, associated to any element a of A:
{\displaystyle e_{a}\colon {\bigl (}P\in \operatorname {Spec} A{\bigr )}\mapsto \left({\frac {a\;{\bmod {P}}}{1}}\in \operatorname {Frac} (A/P)\right)}
("evaluation of a"), which assigns to each point its reflection in the residue field there, as a function on Spec A (whose values, admittedly, lie in different fields at different points), then we have
{\displaystyle e_{a}(P)=0\Leftrightarrow P\in V(a)}
More generally, V(I) for any ideal I is the common set on which all the "functions" in I vanish, which is formally similar to the classical definition. In fact, they agree in the sense that when A is the ring of polynomials over some algebraically closed field k, the maximal ideals of A are (as discussed in the previous paragraph) identified with n-tuples of elements of k, their residue fields are just k, and the "evaluation" maps are actually evaluation of polynomials at the corresponding n-tuples. Since as shown above, the classical definition is essentially the modern definition with only maximal ideals considered, this shows that the interpretation of the modern definition as "zero sets of functions" agrees with the classical definition where they both make sense.
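The "elements as functions" picture is especially concrete for A = ℤ. The sketch below (an illustration; the choice a = 12 is arbitrary) evaluates an integer at several closed points of Spec ℤ, where the residue field at (p) is F_p:

```python
# The element a = 12 of A = Z as a "function" on the closed points of
# Spec Z: at the prime ideal (p) its value is 12 mod p, living in the
# residue field F_p.  It vanishes exactly at the primes dividing 12.
a = 12
primes = [2, 3, 5, 7, 11]
values = {p: a % p for p in primes}
assert values == {2: 0, 3: 0, 5: 2, 7: 5, 11: 1}

vanishing_locus = [p for p in primes if values[p] == 0]
assert vanishing_locus == [2, 3]      # V(12) meets exactly these closed points
```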
Just as Spec replaces affine varieties, the Proj construction replaces projective varieties in modern algebraic geometry. Just as in the classical case, to move from the affine to the projective definition we need only replace "ideal" by "homogeneous ideal", though there is a complication involving the "irrelevant maximal ideal", which is discussed in the cited article.
=== Examples ===
Spec k, the spectrum of a field k is the topological space with one element.
Spec {\displaystyle \mathbb {Z} }, the spectrum of the integers has a closed point for every prime number p corresponding to the maximal ideal {\displaystyle (p)\subseteq \mathbb {Z} }, and one non-closed generic point (i.e., whose closure is the whole space) corresponding to the zero ideal (0). So the closed subsets of Spec {\displaystyle \mathbb {Z} } are precisely the whole space and the finite unions of closed points.
Spec k[t], the spectrum of the polynomial ring over a field k: such a polynomial ring is known to be a principal ideal domain and the irreducible polynomials are the prime elements of k[t]. If k is algebraically closed, for example the field of complex numbers, a non-constant polynomial is irreducible if and only if it is linear, of the form t − a, for some element a of k. So, the spectrum consists of one closed point for every element a of k and a generic point, corresponding to the zero ideal, and the set of the closed points is homeomorphic with the affine line k equipped with its Zariski topology. Because of this homeomorphism, some authors use the term affine line for the spectrum of k[t]. If k is not algebraically closed, for example the field of the real numbers, the picture becomes more complicated because of the existence of non-linear irreducible polynomials. In this case, the spectrum consists of one closed point for each monic irreducible polynomial, and a generic point corresponding to the zero ideal. For example, the spectrum of
{\displaystyle \mathbb {R} [t]} consists of the closed points (t − a), for a in {\displaystyle \mathbb {R} }, the closed points (t2 + pt + q) where p, q are in {\displaystyle \mathbb {R} } with negative discriminant p2 − 4q < 0, and finally a generic point (0). For any field, the closed subsets of Spec k[t] are finite unions of closed points, and the whole space. (This results from the fact that k[t] is a principal ideal domain, and, in a principal ideal domain, the prime ideals that contain an ideal are the prime factors of the prime factorization of a generator of the ideal.)
=== Further properties ===
The most dramatic change in the topology from the classical picture to the new is that points are no longer necessarily closed; by expanding the definition, Grothendieck introduced generic points, which are the points with maximal closure, that is the minimal prime ideals. The closed points correspond to maximal ideals of A. However, the spectrum and projective spectrum are still T0 spaces: given two points P, Q that are prime ideals of A, at least one of them, say P, does not contain the other. Then D(Q) contains P but, of course, not Q.
Just as in classical algebraic geometry, any spectrum or projective spectrum is (quasi)compact, and if the ring in question is Noetherian then the space is a Noetherian topological space. However, these facts are counterintuitive: we do not normally expect open sets, other than connected components, to be compact, and for affine varieties (for example, Euclidean space) we do not even expect the space itself to be compact. This is one instance of the geometric unsuitability of the Zariski topology. Grothendieck solved this problem by defining the notion of properness of a scheme (actually, of a morphism of schemes), which recovers the intuitive idea of compactness: Proj is proper, but Spec is not.
== See also ==
Spectral space
== Citations ==
== References ==
Invariant theory is a branch of abstract algebra dealing with actions of groups on algebraic varieties, such as vector spaces, from the point of view of their effect on functions. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. For example, if we consider the action of the special linear group SLn on the space of n by n matrices by left multiplication, then the determinant is an invariant of this action because the determinant of A X equals the determinant of X, when A is in SLn.
== Introduction ==
Let {\displaystyle G} be a group, and {\displaystyle V} a finite-dimensional vector space over a field {\displaystyle k} (which in classical invariant theory was usually assumed to be the complex numbers). A representation of {\displaystyle G} in {\displaystyle V} is a group homomorphism {\displaystyle \pi :G\to GL(V)}, which induces a group action of {\displaystyle G} on {\displaystyle V}. If {\displaystyle k[V]} is the space of polynomial functions on {\displaystyle V}, then the group action of {\displaystyle G} on {\displaystyle V} produces an action on {\displaystyle k[V]} by the following formula:
{\displaystyle (g\cdot f)(x):=f(g^{-1}(x))\qquad \forall x\in V,g\in G,f\in k[V].}
With this action it is natural to consider the subspace of all polynomial functions which are invariant under this group action, in other words the set of polynomials such that {\displaystyle g\cdot f=f} for all {\displaystyle g\in G}. This space of invariant polynomials is denoted {\displaystyle k[V]^{G}}.
First problem of invariant theory: Is {\displaystyle k[V]^{G}} a finitely generated algebra over {\displaystyle k}?
For example, if {\displaystyle G=SL_{n}} and {\displaystyle V=M_{n}} the space of square matrices, and the action of {\displaystyle G} on {\displaystyle V} is given by left multiplication, then {\displaystyle k[V]^{G}} is isomorphic to a polynomial algebra in one variable, generated by the determinant. In other words, in this case, every invariant polynomial is a linear combination of powers of the determinant polynomial. So in this case, {\displaystyle k[V]^{G}} is finitely generated over {\displaystyle k}.
If the answer is yes, then the next question is to find a minimal basis, and ask whether the module of polynomial relations between the basis elements (known as the syzygies) is finitely generated over {\displaystyle k[V]}.
Invariant theory of finite groups has intimate connections with Galois theory. One of the first major results was the main theorem on the symmetric functions that described the invariants of the symmetric group {\displaystyle S_{n}} acting on the polynomial ring {\displaystyle R[x_{1},\ldots ,x_{n}]} by permutations of the variables. More generally, the Chevalley–Shephard–Todd theorem characterizes finite groups whose algebra of invariants is a polynomial ring. Modern research in invariant theory of finite groups emphasizes "effective" results, such as explicit bounds on the degrees of the generators. The case of positive characteristic, ideologically close to modular representation theory, is an area of active study, with links to algebraic topology.
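A classical instance of the main theorem on symmetric functions: the power sum x² + y² + z² is S₃-invariant and equals e₁² − 2e₂ in the elementary symmetric polynomials. The sketch below (a spot check at random real values, not a symbolic proof) verifies this identity:

```python
import random

# The power sum p2 = x^2 + y^2 + z^2 is invariant under all permutations
# of x, y, z; the main theorem expresses it in the elementary symmetric
# polynomials: p2 = e1^2 - 2*e2.
random.seed(0)
x, y, z = (random.uniform(-1, 1) for _ in range(3))

e1 = x + y + z
e2 = x * y + y * z + z * x
p2 = x * x + y * y + z * z

assert abs(p2 - (e1 * e1 - 2 * e2)) < 1e-12
```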
Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially, the theories of quadratic forms and determinants. Another subject with strong mutual influence was projective geometry, where invariant theory was expected to play a major role in organizing the material. One of the highlights of this relationship is the symbolic method. Representation theory of semisimple Lie groups has its roots in invariant theory.
David Hilbert's work on the question of the finite generation of the algebra of invariants (1890) resulted in the creation of a new mathematical discipline, abstract algebra. A later paper of Hilbert (1893) dealt with the same questions in more constructive and geometric ways, but remained virtually unknown until David Mumford brought these ideas back to life in the 1960s, in a considerably more general and modern form, in his geometric invariant theory. In large measure due to the influence of Mumford, the subject of invariant theory is seen to encompass the theory of actions of linear algebraic groups on affine and projective varieties. A distinct strand of invariant theory, going back to the classical constructive and combinatorial methods of the nineteenth century, has been developed by Gian-Carlo Rota and his school. A prominent example of this circle of ideas is given by the theory of standard monomials.
== Examples ==
Simple examples of invariant theory come from computing the invariant monomials from a group action. For example, consider the {\displaystyle \mathbb {Z} /2\mathbb {Z} }-action on {\displaystyle \mathbb {C} [x,y]} sending
{\displaystyle {\begin{aligned}x\mapsto -x&&y\mapsto -y\end{aligned}}}
Then, since {\displaystyle x^{2},xy,y^{2}} are the lowest degree monomials which are invariant, we have that
{\displaystyle \mathbb {C} [x,y]^{\mathbb {Z} /2\mathbb {Z} }\cong \mathbb {C} [x^{2},xy,y^{2}]\cong {\frac {\mathbb {C} [a,b,c]}{(ac-b^{2})}}}
This example forms the basis for doing many computations.
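The two facts underlying the isomorphism, that each generator is invariant and that they satisfy the single relation ac − b² = 0, can be spot-checked numerically (an illustration over random real points, standing in for a symbolic verification):

```python
import random

random.seed(1)

def act(f):
    """Pull back f along the Z/2Z generator (x, y) -> (-x, -y)."""
    return lambda x, y: f(-x, -y)

gens = {"a": lambda x, y: x * x,
        "b": lambda x, y: x * y,
        "c": lambda x, y: y * y}

pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(10)]

# Each generator is invariant under the action...
for f in gens.values():
    assert all(abs(f(x, y) - act(f)(x, y)) < 1e-12 for x, y in pts)

# ...and they satisfy the single relation ac - b^2 = 0, matching the
# presentation C[a, b, c]/(ac - b^2) of the invariant ring.
a, b, c = gens["a"], gens["b"], gens["c"]
assert all(abs(a(x, y) * c(x, y) - b(x, y) ** 2) < 1e-12 for x, y in pts)
```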
== The nineteenth-century origins ==
Cayley first established invariant theory in his "On the Theory of Linear Transformations" (1845). In the opening of his paper, Cayley credits an 1841 paper of George Boole, "investigations were suggested to me by a very elegant paper on the same subject... by Mr Boole." (Boole's paper was Exposition of a General Theory of Linear Transformations, Cambridge Mathematical Journal.)
Classically, the term "invariant theory" refers to the study of invariant algebraic forms (equivalently, symmetric tensors) for the action of linear transformations. This was a major field of study in the latter part of the nineteenth century. Current theories relating to the symmetric group and symmetric functions, commutative algebra, moduli spaces and the representations of Lie groups are rooted in this area.
In greater detail, given a finite-dimensional vector space V of dimension n we can consider the symmetric algebra S(Sr(V)) of the polynomials of degree r over V, and the action on it of GL(V). It is actually more accurate to consider the relative invariants of GL(V), or representations of SL(V), if we are going to speak of invariants: that is because a scalar multiple of the identity will act on a tensor of rank r in S(V) through the r-th power 'weight' of the scalar. The point is then to define the subalgebra of invariants I(Sr(V)) for the action. We are, in classical language, looking at invariants of n-ary r-ics, where n is the dimension of V. (This is not the same as finding invariants of GL(V) on S(V); this is an uninteresting problem as the only such invariants are constants.) The case that was most studied was invariants of binary forms where n = 2.
Other work included that of Felix Klein in computing the invariant rings of finite group actions on {\displaystyle \mathbf {C} ^{2}} (the binary polyhedral groups, classified by the ADE classification); these are the coordinate rings of du Val singularities.
The work of David Hilbert, proving that I(V) was finitely presented in many cases, almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later. Explicit calculations for particular purposes have been known in modern times (for example Shioda, with the binary octavics).
== Hilbert's theorems ==
Hilbert (1890) proved that if V is a finite-dimensional representation of the complex algebraic group G = SLn(C) then the ring of invariants of G acting on the ring of polynomials R = S(V) is finitely generated. His proof used the Reynolds operator ρ from R to RG with the properties
ρ(1) = 1
ρ(a + b) = ρ(a) + ρ(b)
ρ(ab) = a ρ(b) whenever a is an invariant.
Hilbert constructed the Reynolds operator explicitly using Cayley's omega process Ω, though now it is more common to construct ρ indirectly as follows: for compact groups G, the Reynolds operator is given by taking the average over G, and non-compact reductive groups can be reduced to the case of compact groups using Weyl's unitarian trick.
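For a finite group the Reynolds operator is simply the group average, and the three listed properties can be checked directly. The sketch below (an illustration with G = ℤ/2ℤ acting on functions of two variables; the test polynomials are arbitrary choices) verifies ρ(1) = 1, that ρ projects onto invariants, and the module property ρ(h·f) = h·ρ(f) for invariant h:

```python
import random

random.seed(2)

# For a finite group, the Reynolds operator is the group average
# rho(f) = (1/|G|) sum_g g.f.  Here G = Z/2Z acts on functions of
# (x, y) through (x, y) -> (-x, -y).
group = [lambda x, y: (x, y), lambda x, y: (-x, -y)]

def rho(f):
    return lambda x, y: sum(f(*g(x, y)) for g in group) / len(group)

f = lambda x, y: x + x * y        # not invariant
h = lambda x, y: x * y            # invariant

pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]

# rho(1) = 1
one = lambda x, y: 1.0
assert all(abs(rho(one)(x, y) - 1.0) < 1e-12 for x, y in pts)

# rho projects onto invariants: the odd part of f averages away,
# so rho(x + xy) = xy.
assert all(abs(rho(f)(x, y) - x * y) < 1e-12 for x, y in pts)

# rho(h * f) = h * rho(f) whenever h is invariant.
hf = lambda x, y: h(x, y) * f(x, y)
assert all(abs(rho(hf)(x, y) - h(x, y) * rho(f)(x, y)) < 1e-12
           for x, y in pts)
```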
Given the Reynolds operator, Hilbert's theorem is proved as follows. The ring R is a polynomial ring so is graded by degrees, and the ideal I is defined to be the ideal generated by the homogeneous invariants of positive degrees. By Hilbert's basis theorem the ideal I is finitely generated (as an ideal). Hence, I is finitely generated by finitely many invariants of G (because if we are given any – possibly infinite – subset S that generates a finitely generated ideal I, then I is already generated by some finite subset of S). Let i1,...,in be a finite set of invariants of G generating I (as an ideal). The key idea is to show that these generate the ring RG of invariants. Suppose that x is some homogeneous invariant of degree d > 0. Then
x = a1i1 + ... + anin
for some aj in the ring R because x is in the ideal I. We can assume that aj is homogeneous of degree d − deg ij for every j (otherwise, we replace aj by its homogeneous component of degree d − deg ij; if we do this for every j, the equation x = a1i1 + ... + anin will remain valid). Now, applying the Reynolds operator to x = a1i1 + ... + anin gives
x = ρ(a1)i1 + ... + ρ(an)in
We are now going to show that x lies in the R-algebra generated by i1,...,in.
First, let us do this in the case when the elements ρ(ak) all have degree less than d. In this case, they are all in the R-algebra generated by i1,...,in (by our induction assumption). Therefore, x is also in this R-algebra (since x = ρ(a1)i1 + ... + ρ(an)in).
In the general case, we cannot be sure that the elements ρ(ak) all have degree less than d. But we can replace each ρ(ak) by its homogeneous component of degree d − deg ij. As a result, these modified ρ(ak) are still G-invariants (because every homogeneous component of a G-invariant is a G-invariant) and have degree less than d (since deg ik > 0). The equation x = ρ(a1)i1 + ... + ρ(an)in still holds for our modified ρ(ak), so we can again conclude that x lies in the R-algebra generated by i1,...,in.
Hence, by induction on the degree, all elements of RG are in the R-algebra generated by i1,...,in.
== Geometric invariant theory ==
The modern formulation of geometric invariant theory is due to David Mumford, and emphasizes the construction of a quotient by the group action that should capture invariant information through its coordinate ring. It is a subtle theory, in that success is obtained by excluding some 'bad' orbits and identifying others with 'good' orbits. In a separate development the symbolic method of invariant theory, an apparently heuristic combinatorial notation, has been rehabilitated.
One motivation was to construct moduli spaces in algebraic geometry as quotients of schemes parametrizing marked objects. In the 1970s and 1980s the theory developed interactions with symplectic geometry and equivariant topology, and was used to construct moduli spaces of objects in differential geometry, such as instantons and monopoles.
== See also ==
== References ==
Dieudonné, Jean A.; Carrell, James B. (1970), "Invariant theory, old and new", Advances in Mathematics, 4 (1): 1–80, doi:10.1016/0001-8708(70)90015-0, ISSN 0001-8708, MR 0255525 Reprinted as Dieudonné, Jean A.; Carrell, James B. (1971), Invariant theory, old and new, Boston, MA: Academic Press, ISBN 978-0-12-215540-6, MR 0279102
Dolgachev, Igor (2003), Lectures on invariant theory, London Mathematical Society Lecture Note Series, vol. 296, Cambridge University Press, doi:10.1017/CBO9780511615436, ISBN 978-0-521-52548-0, MR 2004511
Grace, J. H.; Young, Alfred (1903), The algebra of invariants, Cambridge: Cambridge University Press
Grosshans, Frank D. (1997), Algebraic homogeneous spaces and invariant theory, New York: Springer, ISBN 3-540-63628-5
Hilbert, David (1890), "Ueber die Theorie der algebraischen Formen", Mathematische Annalen, 36 (4): 473–534, doi:10.1007/BF01208503, ISSN 0025-5831
Hilbert, D. (1893), "Über die vollen Invariantensysteme (On Full Invariant Systems)", Math. Annalen, 42 (3): 313, doi:10.1007/BF01444162
Kung, Joseph P. S.; Rota, Gian-Carlo (1984), "The invariant theory of binary forms", Bulletin of the American Mathematical Society, New Series, 10 (1): 27–85, doi:10.1090/S0273-0979-1984-15188-7, ISSN 0002-9904, MR 0722856
Neusel, Mara D.; Smith, Larry (2002), Invariant Theory of Finite Groups, Providence, RI: American Mathematical Society, ISBN 0-8218-2916-5 A recent resource for learning about modular invariants of finite groups.
Olver, Peter J. (1999), Classical invariant theory, Cambridge: Cambridge University Press, ISBN 0-521-55821-2 An undergraduate level introduction to the classical theory of invariants of binary forms, including the Omega process starting at page 87.
Popov, V.L. (2001) [1994], "Invariants, theory of", Encyclopedia of Mathematics, EMS Press
Springer, T. A. (1977), Invariant Theory, New York: Springer, ISBN 0-387-08242-5 An older but still useful survey.
Sturmfels, Bernd (1993), Algorithms in Invariant Theory, New York: Springer, ISBN 0-387-82445-6 A beautiful introduction to the theory of invariants of finite groups and techniques for computing them using Gröbner bases.
Weyl, Hermann (1939), The Classical Groups. Their Invariants and Representations, Princeton University Press, ISBN 978-0-691-05756-9, MR 0000255
Weyl, Hermann (1939b), "Invariants", Duke Mathematical Journal, 5 (3): 489–502, doi:10.1215/S0012-7094-39-00540-5, ISSN 0012-7094, MR 0000030
== External links ==
H. Kraft, C. Procesi, Classical Invariant Theory, a Primer
V. L. Popov, E. B. Vinberg, "Invariant Theory", in Algebraic Geometry IV, Encyclopaedia of Mathematical Sciences, 55 (translated from the 1989 Russian edition), Springer-Verlag, Berlin, 1994; vi+284 pp.; ISBN 3-540-54682-0
In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma — also known as the Krull–Azumaya theorem — governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring.
The lemma is named after the Japanese mathematician Tadashi Nakayama and was introduced in its present form in Nakayama (1951), although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals.
== Statement ==
Let $R$ be a commutative ring with identity 1. The following is Nakayama's lemma, as stated in Matsumura (1989):
Statement 1: Let $I$ be an ideal in $R$, and $M$ a finitely generated module over $R$. If $IM = M$, then there exists $r \in R$ with $r \equiv 1 \pmod{I}$ such that $rM = 0$.
This is proven below. A useful mnemonic for Nakayama's lemma is "$IM = M \implies im = m$". This summarizes the following alternative formulation:
Statement 2: Let $I$ be an ideal in $R$, and $M$ a finitely generated module over $R$. If $IM = M$, then there exists an $i \in I$ such that $im = m$ for all $m \in M$.
Proof: Take $i = 1 - r$ in Statement 1.
The following corollary is also known as Nakayama's lemma, and it is in this form that it most often appears.
Statement 3: If $M$ is a finitely generated module over $R$, $J(R)$ is the Jacobson radical of $R$, and $J(R)M = M$, then $M = 0$.
Proof: $1 - r$ (with $r$ as in Statement 1) is in the Jacobson radical, so $r$ is invertible (1 minus any element of the Jacobson radical is a unit); $rM = 0$ then forces $M = 0$.
More generally, one has that $J(R)M$ is a superfluous submodule of $M$ when $M$ is finitely generated.
Statement 4: If $M$ is a finitely generated module over $R$, $N$ is a submodule of $M$, and $M = N + J(R)M$, then $M = N$.
Proof: Apply Statement 3 to $M/N$.
The following result manifests Nakayama's lemma in terms of generators.
Statement 5: If $M$ is a finitely generated module over $R$ and the images of elements $m_1, \ldots, m_n$ of $M$ in $M/J(R)M$ generate $M/J(R)M$ as an $R/J(R)$-module, then $m_1, \ldots, m_n$ also generate $M$ as an $R$-module.
Proof: Apply Statement 4 to $N = \sum_i Rm_i$.
If one assumes instead that $R$ is complete and $M$ is separated with respect to the $I$-adic topology for an ideal $I$ in $R$, this last statement holds with $I$ in place of $J(R)$ and without assuming in advance that $M$ is finitely generated. Here separatedness means that the $I$-adic topology satisfies the T1 separation axiom, and is equivalent to
$$\bigcap_{k=1}^{\infty} I^k M = 0.$$
== Consequences ==
=== Local rings ===
In the special case of a finitely generated module $M$ over a local ring $R$ with maximal ideal $\mathfrak{m}$, the quotient $M/\mathfrak{m}M$ is a vector space over the field $R/\mathfrak{m}$. Statement 5 then implies that a basis of $M/\mathfrak{m}M$ lifts to a minimal set of generators of $M$. Conversely, every minimal set of generators of $M$ is obtained in this way, and any two such sets of generators are related by an invertible matrix with entries in the ring.
==== Geometric interpretation ====
In this form, Nakayama's lemma takes on concrete geometrical significance. Local rings arise in geometry as the germs of functions at a point. Finitely generated modules over local rings arise quite often as germs of sections of vector bundles. Working at the level of germs rather than points, the notion of a finite-dimensional vector bundle gives way to that of a coherent sheaf. Informally, Nakayama's lemma says that one can still regard a coherent sheaf as coming from a vector bundle in some sense. More precisely, let $\mathcal{M}$ be a coherent sheaf of $\mathcal{O}_X$-modules over an arbitrary scheme $X$. The stalk of $\mathcal{M}$ at a point $p \in X$, denoted by $\mathcal{M}_p$, is a module over the local ring $(\mathcal{O}_{X,p}, \mathfrak{m}_p)$, and the fiber of $\mathcal{M}$ at $p$ is the vector space $\mathcal{M}(p) = \mathcal{M}_p/\mathfrak{m}_p\mathcal{M}_p$. Nakayama's lemma implies that a basis of the fiber $\mathcal{M}(p)$ lifts to a minimal set of generators of $\mathcal{M}_p$. That is:

Any basis of the fiber of a coherent sheaf $\mathcal{M}$ at a point comes from a minimal basis of local sections.

Reformulating this geometrically, if $\mathcal{M}$ is a locally free $\mathcal{O}_X$-module representing a vector bundle $E \to X$, and if we take a basis of the vector bundle at a point in the scheme $X$, this basis can be lifted to a basis of sections of the vector bundle in some neighborhood of the point. We can organize this data diagrammatically
$${\begin{matrix}E|_{p}&\to &E|_{U}&\to &E\\\downarrow &&\downarrow &&\downarrow \\p&\to &U&\to &X\end{matrix}}$$
where $E|_p$ is an $n$-dimensional vector space. The statement is that a basis of $E|_p$ (which is a basis of sections of the bundle $E_p \to p$) can be lifted to a basis of sections of $E|_U \to U$ for some neighborhood $U$ of $p$.
=== Going up and going down ===
The going up theorem is essentially a corollary of Nakayama's lemma. It asserts:
Let $R \hookrightarrow S$ be an integral extension of commutative rings, and $\mathfrak{p}$ a prime ideal of $R$. Then there is a prime ideal $\mathfrak{q}$ in $S$ such that $\mathfrak{q} \cap R = \mathfrak{p}$. Moreover, $\mathfrak{q}$ can be chosen to contain any prime $\mathfrak{q}_1$ of $S$ such that $\mathfrak{q}_1 \cap R \subset \mathfrak{p}$.
=== Module epimorphisms ===
Nakayama's lemma makes precise one sense in which finitely generated modules over a commutative ring are like vector spaces over a field. The following consequence of Nakayama's lemma gives another way in which this is true:
If $M$ is a finitely generated $R$-module and $f : M \to M$ is a surjective endomorphism, then $f$ is an isomorphism.

Over a local ring, one can say more about module epimorphisms:

Suppose that $R$ is a local ring with maximal ideal $\mathfrak{m}$, and $M, N$ are finitely generated $R$-modules. If $\phi : M \to N$ is an $R$-linear map such that the quotient map $\phi_{\mathfrak{m}} : M/\mathfrak{m}M \to N/\mathfrak{m}N$ is surjective, then $\phi$ is surjective.
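For a finite module the epimorphism statement can be checked exhaustively. The following sketch (the ring, module, and matrix are illustrative choices, not from the text) takes $R = \mathbb{Z}/12$, $M = R^2$, and an endomorphism whose determinant is a unit in $R$, and verifies that surjectivity on the finite set $M$ forces bijectivity:

```python
from itertools import product

n = 12                                      # R = Z/12Z
M = list(product(range(n), repeat=2))       # M = R^2, a finitely generated R-module

A = ((5, 3), (0, 7))                        # det A = 35 = 11 (mod 12), a unit in R

def f(v):                                   # R-linear endomorphism of M given by A
    x, y = v
    return ((A[0][0] * x + A[0][1] * y) % n,
            (A[1][0] * x + A[1][1] * y) % n)

image = {f(v) for v in M}
assert len(image) == len(M)                 # f is surjective on the finite set M,
                                            # hence injective: f is an isomorphism
```

On a finite set, surjective implies injective by counting; Nakayama's lemma is what replaces this counting argument for an arbitrary finitely generated module.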
=== Homological versions ===
Nakayama's lemma also has several versions in homological algebra. The above statement about epimorphisms can be used to show:
Let $M$ be a finitely generated module over a local ring. Then $M$ is projective if and only if it is free. This can be used to compute the Grothendieck group of any local ring $R$ as $K(R) = \mathbb{Z}$.
A geometrical and global counterpart to this is the Serre–Swan theorem, relating projective modules and coherent sheaves.
More generally, one has:

Let $R$ be a local ring and $M$ a finitely generated module over $R$. Then the projective dimension of $M$ over $R$ is equal to the length of every minimal free resolution of $M$. Moreover, the projective dimension is equal to the global dimension of $M$, which is by definition the smallest integer $i \geq 0$ such that
$$\operatorname{Tor}_{i+1}^{R}(k, M) = 0.$$
Here $k$ is the residue field of $R$ and $\operatorname{Tor}$ is the Tor functor.
=== Inverse function theorem ===
Nakayama's lemma is used to prove a version of the inverse function theorem in algebraic geometry:
Let $f : X \to Y$ be a projective morphism between quasi-projective varieties. Then $f$ is an isomorphism if and only if it is a bijection and the differential $df_p$ is injective for all $p \in X$.
== Proof ==
A standard proof of Nakayama's lemma uses the following technique due to Atiyah & Macdonald (1969).
Let $M$ be an $R$-module generated by $n$ elements, and $\varphi : M \to M$ an $R$-linear map. If there is an ideal $I$ of $R$ such that $\varphi(M) \subset IM$, then there is a monic polynomial
$$p(x) = x^n + p_1 x^{n-1} + \cdots + p_n$$
with $p_k \in I^k$, such that $p(\varphi) = 0$ as an endomorphism of $M$.
This assertion is precisely a generalized version of the Cayley–Hamilton theorem, and the proof proceeds along the same lines. On the generators xi of M, one has a relation of the form
$$\varphi(x_i) = \sum_{j=1}^n a_{ij} x_j$$
where $a_{ij} \in I$. Thus
$$\sum_{j=1}^n \left(\varphi \delta_{ij} - a_{ij}\right) x_j = 0.$$
The required result follows by multiplying by the adjugate of the matrix $(\varphi \delta_{ij} - a_{ij})$ and invoking Cramer's rule. One finds then $\det(\varphi \delta_{ij} - a_{ij}) = 0$, so the required polynomial is
$$p(t) = \det(t\delta_{ij} - a_{ij}).$$
To prove Nakayama's lemma from the Cayley–Hamilton theorem, assume that IM = M and take φ to be the identity on M. Then define a polynomial p(x) as above. Then
$$r = p(1) = 1 + p_1 + p_2 + \cdots + p_n$$
has the required property: $r \equiv 1 \pmod{I}$ and $rM = 0$.
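As a toy illustration (the specific ring and module are chosen here for checkability, not taken from the text): let $R = \mathbb{Z}$, $I = 2\mathbb{Z}$, and $M = \mathbb{Z}/5$, generated by one element. Then $IM = M$ since 2 is a unit mod 5, and the identity acts on the generator as multiplication by $a = 6 \in I$. With one generator, $p(t) = t - a$, so $r = p(1) = 1 - a = -5$, which indeed satisfies $r \equiv 1 \pmod{2}$ and $rM = 0$:

```python
# Toy check of Nakayama's lemma for R = Z, I = 2Z, M = Z/5 (one generator).
a = 6
assert a % 2 == 0                                     # a lies in I = 2Z
assert all((a * m) % 5 == m % 5 for m in range(5))    # a acts as the identity on M

# Cayley–Hamilton with a single generator: p(t) = t - a, so r = p(1) = 1 - a.
r = 1 - a                                             # r = -5
assert r % 2 == 1                                     # r ≡ 1 (mod I)
assert all((r * m) % 5 == 0 for m in range(5))        # rM = 0
```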
== Noncommutative case ==
A version of the lemma holds for right modules over non-commutative unital rings R. The resulting theorem is sometimes known as the Jacobson–Azumaya theorem.
Let J(R) be the Jacobson radical of R. If U is a right module over a ring, R, and I is a right ideal in R, then define U·I to be the set of all (finite) sums of elements of the form u·i, where · is simply the action of R on U. Necessarily, U·I is a submodule of U.
If V is a maximal submodule of U, then U/V is simple. So U·J(R) is necessarily a subset of V, by the definition of J(R) and the fact that U/V is simple. Thus, if U contains at least one (proper) maximal submodule, U·J(R) is a proper submodule of U. However, this need not hold for arbitrary modules U over R, since U need not contain any maximal submodules. Naturally, if U is a Noetherian module, this holds. If R is Noetherian and U is finitely generated, then U is a Noetherian module over R, and the conclusion is satisfied. It is somewhat remarkable that the weaker assumption, namely that U is finitely generated as an R-module (with no finiteness assumption on R), is sufficient to guarantee the conclusion. This is essentially the statement of Nakayama's lemma.
Precisely, one has:
Nakayama's lemma: Let U be a finitely generated right module over a (unital) ring R. If U is a non-zero module, then U·J(R) is a proper submodule of U.
=== Proof ===
Let $X$ be a finite subset of $U$, minimal with respect to the property that it generates $U$. Since $U$ is non-zero, this set $X$ is nonempty. Denote every element of $X$ by $x_i$ for $i \in \{1, \ldots, n\}$. Since $X$ generates $U$, $\sum_{i=1}^n x_i R = U$.
Suppose $U \cdot J(R) = U$, to obtain a contradiction. Then every element $u \in U$ can be expressed as a finite combination $u = \sum_{s=1}^m u_s j_s$ for some $m \in \mathbb{N}$, $u_s \in U$, $j_s \in J(R)$, $s = 1, \ldots, m$.
Each $u_s$ can be further decomposed as $u_s = \sum_{i=1}^n x_i r_{i,s}$ for some $r_{i,s} \in R$. Therefore, we have
$$u = \sum_{s=1}^m \left( \sum_{i=1}^n x_i r_{i,s} \right) j_s = \sum_{i=1}^n x_i \left( \sum_{s=1}^m r_{i,s} j_s \right).$$
Since $J(R)$ is a (two-sided) ideal in $R$, we have $\sum_{s=1}^m r_{i,s} j_s \in J(R)$ for every $i \in \{1, \ldots, n\}$, and thus this becomes $u = \sum_{i=1}^n x_i k_i$ for some $k_i \in J(R)$, $i = 1, \ldots, n$.
Putting $u = \sum_{i=1}^n x_i$ and applying distributivity, we obtain
$$\sum_{i=1}^n x_i (1 - k_i) = 0.$$
Choose some $j \in \{1, \ldots, n\}$. If the right ideal $(1 - k_j)R$ were proper, then it would be contained in a maximal right ideal $M \neq R$, and both $1 - k_j$ and $k_j$ would belong to $M$ (note that $J(R) \subseteq M$ by the definition of the Jacobson radical), so $1 = (1 - k_j) + k_j \in M$, a contradiction. Thus $(1 - k_j)R = R$, and $1 - k_j$ has a right inverse $(1 - k_j)^{-1}$ in $R$. We have
$$\sum_{i=1}^n x_i (1 - k_i)(1 - k_j)^{-1} = 0.$$
Therefore,
$$\sum_{i \neq j} x_i (1 - k_i)(1 - k_j)^{-1} = -x_j.$$
Thus $x_j$ is a linear combination of the elements of $X \setminus \{x_j\}$. This contradicts the minimality of $X$ and establishes the result.
== Graded version ==
There is also a graded version of Nakayama's lemma. Let $R$ be a ring that is graded by the ordered semigroup of non-negative integers, and let $R_+$ denote the ideal generated by positively graded elements. Then if $M$ is a graded module over $R$ for which $M_i = 0$ for $i$ sufficiently negative (in particular, if $M$ is finitely generated and $R$ does not contain elements of negative degree) such that $R_+ M = M$, then $M = 0$. Of particular importance is the case that $R$ is a polynomial ring with the standard grading, and $M$ is a finitely generated module.
The proof is much easier than in the ungraded case: taking $i$ to be the least integer such that $M_i \neq 0$, we see that $M_i$ does not appear in $R_+ M$, so either $M \neq R_+ M$, or such an $i$ does not exist, i.e., $M = 0$.
== See also ==
Module theory
Serre–Swan theorem
== Notes ==
== References ==
Atiyah, Michael F.; Macdonald, Ian G. (1969), Introduction to Commutative Algebra, Reading, MA: Addison-Wesley.
Azumaya, Gorô (1951), "On maximally central algebras", Nagoya Mathematical Journal, 2: 119–150, doi:10.1017/s0027763000010114, ISSN 0027-7630, MR 0040287.
Eisenbud, David (1995), Commutative algebra, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-5350-1, ISBN 978-0-387-94268-1, MR 1322960
Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, doi:10.1002/9781118032527, ISBN 978-0-471-05059-9, MR 1288523
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, Springer-Verlag.
Isaacs, I. Martin (1993), Algebra, a graduate course (1st ed.), Brooks/Cole Publishing Company, ISBN 0-534-19002-2
Jacobson, Nathan (1945), "The radical and semi-simplicity for arbitrary rings", American Journal of Mathematics, 67 (2): 300–320, doi:10.2307/2371731, ISSN 0002-9327, JSTOR 2371731, MR 0012271.
Matsumura, Hideyuki (1989), Commutative ring theory, Cambridge Studies in Advanced Mathematics, vol. 8 (2nd ed.), Cambridge University Press, ISBN 978-0-521-36764-6, MR 1011461.
Nagata, Masayoshi (1975), Local rings, Robert E. Krieger Publishing Co., Huntington, N.Y., ISBN 978-0-88275-228-0, MR 0460307.
Nakayama, Tadasi (1951), "A remark on finitely generated modules", Nagoya Mathematical Journal, 3: 139–140, doi:10.1017/s0027763000012265, ISSN 0027-7630, MR 0043770.
== External links ==
How to understand Nakayama's Lemma and its Corollaries
Introduction to Commutative Algebra is a well-known commutative algebra textbook written by Michael Atiyah and Ian G. Macdonald. It is on the list of 173 books essential for undergraduate math libraries.
As of May 2025, Google Scholar lists over 8000 citations to this book.
It deals with elementary concepts of commutative algebra including localization, primary decomposition, integral dependence, Noetherian and Artinian rings and modules, Dedekind rings, completions and a moderate amount of dimension theory. It is notable for being among the shorter English-language introductory textbooks in the subject, relegating a good deal of material to the exercises.
(Hardcover 1969, ISBN 0-201-00361-9) (Paperback 1994, ISBN 0-201-40751-5)
== Reviews ==
Michael Berg says "this classic book, is one of the premier texts for a serious graduate (or very gifted undergraduate) student". Mark Green calls it an "elegant minimalist introduction". W. Jonsson says "An amazing amount of information is included in the 128 pages of this book". D. J. Lewis says "The highlight of the text is the very excellent set of problems which constitute one-third of the text". B. R. McDonald says "The student consensus was that the text was very readable ... we were pleased with the success of the text".
On the other hand, Lewis says "The text is very tersely written, examples are a bit scarce and proofs are condensed. This reviewer doubts that many students can profitably read it unassisted."
The book has enthusiastic endorsements from several mathematics professors. Henry Pinkham, former Professor of Mathematics at Columbia University, said it is "probably the best introduction to Commutative Algebra, and has very good exercises." Jonathan Wise, Associate Professor at the University of Colorado Boulder, says it "may be the best math textbook ever written."
== References ==
In commutative algebra and algebraic geometry, localization is a formal way to introduce the "denominators" to a given ring or module. That is, it introduces a new ring/module out of an existing ring/module R, so that it consists of fractions
$\frac{m}{s}$, such that the denominator $s$ belongs to a given subset $S$ of $R$. If $S$ is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field $\mathbb{Q}$ of rational numbers from the ring $\mathbb{Z}$ of integers.
The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if $R$ is a ring of functions defined on some geometric object (algebraic variety) $V$, and one wants to study this variety "locally" near a point $p$, then one considers the set $S$ of all functions that are not zero at $p$ and localizes $R$ with respect to $S$. The resulting ring $S^{-1}R$ contains information about the behavior of $V$ near $p$, and excludes information that is not "local", such as the zeros of functions that are outside $V$ (cf. the example given at local ring).
== Localization of a ring ==
The localization of a commutative ring $R$ by a multiplicatively closed set $S$ is a new ring $S^{-1}R$ whose elements are fractions with numerators in $R$ and denominators in $S$.
If the ring is an integral domain, the construction generalizes and follows closely that of the field of fractions and, in particular, that of the rational numbers as the field of fractions of the integers. For rings that have zero divisors, the construction is similar but requires more care.
=== Multiplicative set ===
Localization is commonly done with respect to a multiplicatively closed set S (also called a multiplicative set or a multiplicative system) of elements of a ring R, that is a subset of R that is closed under multiplication, and contains 1.
The requirement that S must be a multiplicative set is natural, since it implies that all denominators introduced by the localization belong to S. The localization by a set U that is not multiplicatively closed can also be defined, by taking as possible denominators all products of elements of U. However, the same localization is obtained by using the multiplicatively closed set S of all products of elements of U. As this often makes reasoning and notation simpler, it is standard practice to consider only localizations by multiplicative sets.
For example, the localization by a single element s introduces fractions of the form
a
s
,
{\displaystyle {\tfrac {a}{s}},}
but also products of such fractions, such as
a
b
s
2
.
{\displaystyle {\tfrac {ab}{s^{2}}}.}
So, the denominators will belong to the multiplicative set
{
1
,
s
,
s
2
,
s
3
,
…
}
{\displaystyle \{1,s,s^{2},s^{3},\ldots \}}
of the powers of s. Therefore, one generally talks of "the localization by the powers of an element" rather than of "the localization by an element".
The localization of a ring $R$ by a multiplicative set $S$ is generally denoted $S^{-1}R$, but other notations are commonly used in some special cases: if $S = \{1, t, t^2, \ldots\}$ consists of the powers of a single element, $S^{-1}R$ is often denoted $R_t$; if $S = R \setminus \mathfrak{p}$ is the complement of a prime ideal $\mathfrak{p}$, then $S^{-1}R$ is denoted $R_{\mathfrak{p}}$.
=== Integral domains ===
When the ring $R$ is an integral domain and $S$ does not contain 0, the ring $S^{-1}R$ is a subring of the field of fractions of $R$. As such, the localization of a domain is a domain.

More precisely, it is the subring of the field of fractions of $R$ that consists of the fractions $\tfrac{a}{s}$ such that $s \in S$. This is a subring, since the sum $\tfrac{a}{s} + \tfrac{b}{t} = \tfrac{at + bs}{st}$ and the product $\tfrac{a}{s}\,\tfrac{b}{t} = \tfrac{ab}{st}$ of two elements of $S^{-1}R$ are in $S^{-1}R$. This results from the defining property of a multiplicative set, which also implies that $1 = \tfrac{1}{1} \in S^{-1}R$. In this case, $R$ is a subring of $S^{-1}R$. It is shown below that this is no longer true in general, typically when $S$ contains zero divisors.

For example, the decimal fractions are the localization of the ring of integers by the multiplicative set of the powers of ten. In this case, $S^{-1}R$ consists of the rational numbers that can be written as $\tfrac{n}{10^k}$, where $n$ is an integer and $k$ is a nonnegative integer.
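A quick way to see this membership test concretely (an illustration, not from the article): a rational number in lowest terms lies in the localization $\mathbb{Z}[1/10]$ exactly when its denominator has no prime factors other than 2 and 5. With Python's `fractions` module, which stores fractions in lowest terms:

```python
from fractions import Fraction

def in_localization_at_10(q: Fraction) -> bool:
    """True iff q = n / 10^k for some integer n and k >= 0."""
    d = q.denominator          # Fraction is stored in lowest terms
    for p in (2, 5):           # strip the prime factors of 10
        while d % p == 0:
            d //= p
    return d == 1

assert in_localization_at_10(Fraction(7, 100))       # 7/100 = 7/10^2
assert in_localization_at_10(Fraction(3, 8))         # 3/8 = 375/10^3
assert not in_localization_at_10(Fraction(1, 3))     # 1/3 has no such form
```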
=== General construction ===
In the general case, a problem arises with zero divisors. Let $S$ be a multiplicative set in a commutative ring $R$. Suppose that $s \in S$ and $0 \neq a \in R$ is a zero divisor with $as = 0$. Then $\tfrac{a}{1}$ is the image in $S^{-1}R$ of $a \in R$, and one has $\tfrac{a}{1} = \tfrac{as}{s} = \tfrac{0}{s} = \tfrac{0}{1}$. Thus some nonzero elements of $R$ must be zero in $S^{-1}R$. The construction that follows is designed for taking this into account.
Given $R$ and $S$ as above, one considers the equivalence relation on $R \times S$ that is defined by $(r_1, s_1) \sim (r_2, s_2)$ if there exists a $t \in S$ such that
$$t(s_1 r_2 - s_2 r_1) = 0.$$
The localization $S^{-1}R$ is defined as the set of the equivalence classes for this relation. The class of $(r, s)$ is denoted as $\frac{r}{s}$, $r/s$, or $s^{-1}r$. So, one has $\tfrac{r_1}{s_1} = \tfrac{r_2}{s_2}$ if and only if there is a $t \in S$ such that $t(s_1 r_2 - s_2 r_1) = 0$. The reason for the $t$ is to handle cases such as the above $\tfrac{a}{1} = \tfrac{0}{1}$, where $s_1 r_2 - s_2 r_1$ is nonzero even though the fractions should be regarded as equal.
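The equivalence relation is easy to experiment with on a finite ring. A small sketch (the choice of ring and multiplicative set is illustrative, not from the article) localizing $R = \mathbb{Z}/6$ at $S = \{1, 2, 4\}$, the powers of 2 modulo 6:

```python
n = 6                               # R = Z/6Z
S = {1, 2, 4}                       # powers of 2 mod 6: closed under *, contains 1

def equivalent(f1, f2):
    """(r1, s1) ~ (r2, s2) iff t*(s1*r2 - s2*r1) = 0 in R for some t in S."""
    (r1, s1), (r2, s2) = f1, f2
    return any(t * (s1 * r2 - s2 * r1) % n == 0 for t in S)

# 3 is a zero divisor killed by 2 in S (2*3 = 0 mod 6), so 3/1 = 0/1 in S^{-1}R,
# even though s1*r2 - s2*r1 = -3 is nonzero in R: the factor t = 2 is essential.
assert equivalent((3, 1), (0, 1))
assert (1 * 0 - 1 * 3) % n != 0     # without t, the fractions would look unequal
# 1/1 stays nonzero in the localization:
assert not equivalent((1, 1), (0, 1))
```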
The localization $S^{-1}R$ is a commutative ring with addition
$$\frac{r_1}{s_1} + \frac{r_2}{s_2} = \frac{r_1 s_2 + r_2 s_1}{s_1 s_2},$$
multiplication
$$\frac{r_1}{s_1}\,\frac{r_2}{s_2} = \frac{r_1 r_2}{s_1 s_2},$$
additive identity $\tfrac{0}{1}$, and multiplicative identity $\tfrac{1}{1}$.
The function $r \mapsto \tfrac{r}{1}$ defines a ring homomorphism from $R$ into $S^{-1}R$, which is injective if and only if $S$ does not contain any zero divisors.
If $0 \in S$, then $S^{-1}R$ is the zero ring, whose unique element is 0.
If $S$ is the set of all regular elements of $R$ (that is, the elements that are not zero divisors), then $S^{-1}R$ is called the total ring of fractions of $R$.
=== Universal property ===
The ring homomorphism $j \colon R \to S^{-1}R$ defined above satisfies a universal property that is described below. This characterizes $S^{-1}R$ up to an isomorphism. So all properties of localizations can be deduced from the universal property, independently of the way they have been constructed. Moreover, many important properties of localization are easily deduced from the general properties of universal properties, while their direct proof may be more technical.
The universal property satisfied by $j\colon R\to S^{-1}R$ is the following:
If $f\colon R\to T$ is a ring homomorphism that maps every element of $S$ to a unit (invertible element) in $T$, then there exists a unique ring homomorphism $g\colon S^{-1}R\to T$ such that $f=g\circ j.$
Using category theory, this can be expressed by saying that localization is a functor that is left adjoint to a forgetful functor. More precisely, let $\mathcal{C}$ and $\mathcal{D}$ be the categories whose objects are pairs of a commutative ring and a submonoid of, respectively, the multiplicative monoid or the group of units of the ring. The morphisms of these categories are the ring homomorphisms that map the submonoid of the first object into the submonoid of the second one. Finally, let $\mathcal{F}\colon \mathcal{D}\to \mathcal{C}$ be the forgetful functor that forgets that the elements of the second component of the pair are invertible.
Then the factorization $f=g\circ j$ of the universal property defines a bijection
$\hom_{\mathcal{C}}((R,S),\mathcal{F}(T,U))\to \hom_{\mathcal{D}}((S^{-1}R,j(S)),(T,U)).$
This may seem a rather tricky way of expressing the universal property, but it is useful for easily establishing many properties, by using the fact that the composition of two left adjoint functors is a left adjoint functor.
=== Examples ===
If $R=\mathbb{Z}$ is the ring of integers and $S=\mathbb{Z}\setminus\{0\}$, then $S^{-1}R$ is the field $\mathbb{Q}$ of the rational numbers.
If $R$ is an integral domain and $S=R\setminus\{0\}$, then $S^{-1}R$ is the field of fractions of $R$. The preceding example is a special case of this one.
If $R$ is a commutative ring and $S$ is the subset of its elements that are not zero divisors, then $S^{-1}R$ is the total ring of fractions of $R$. In this case, $S$ is the largest multiplicative set such that the homomorphism $R\to S^{-1}R$ is injective. The preceding example is a special case of this one.
If $x$ is an element of a commutative ring $R$ and $S=\{1,x,x^{2},\ldots\}$, then $S^{-1}R$ can be identified with (is canonically isomorphic to)
$R[x^{-1}]=R[s]/(xs-1).$
(The proof consists of showing that this ring satisfies the above universal property.) This sort of localization plays a fundamental role in the definition of an affine scheme.
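For a concrete instance, take $R=\mathbb{Z}$ and $x=2$: then $S^{-1}R=\mathbb{Z}[1/2]\cong \mathbb{Z}[s]/(2s-1)$, the ring of dyadic rationals. A minimal membership test (the helper name is ours), relying on Python's `Fraction` keeping denominators in lowest terms:

```python
from fractions import Fraction

def is_dyadic(q: Fraction) -> bool:
    """q lies in Z[1/2] iff its reduced denominator is a power of 2."""
    d = q.denominator
    while d % 2 == 0:
        d //= 2
    return d == 1

assert is_dyadic(Fraction(3, 8))        # 3/8 = 3 * (1/2)**3
assert not is_dyadic(Fraction(1, 6))    # the factor 3 in 6 is not in S
```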
If $\mathfrak{p}$ is a prime ideal of a commutative ring $R$, the set complement $S=R\setminus\mathfrak{p}$ of $\mathfrak{p}$ in $R$ is a multiplicative set (by the definition of a prime ideal). The ring $S^{-1}R$ is a local ring that is generally denoted $R_{\mathfrak{p}}$ and called the local ring of $R$ at $\mathfrak{p}$. This sort of localization is fundamental in commutative algebra, because many properties of a commutative ring can be read off from its local rings. Such a property is often called a local property. For example, a ring is regular if and only if all its local rings are regular.
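For example, localizing $\mathbb{Z}$ at the prime ideal $(p)$ gives the subring $\mathbb{Z}_{(p)}\subset \mathbb{Q}$ of fractions whose denominator, in lowest terms, is not divisible by $p$. A minimal membership test (the function name is our own):

```python
from fractions import Fraction

def in_localization_at(p: int, q: Fraction) -> bool:
    """q lies in Z_(p) = { a/b in Q : p does not divide b }, b in lowest terms."""
    return q.denominator % p != 0

assert in_localization_at(3, Fraction(1, 2))      # 1/2 is in Z_(3)
assert not in_localization_at(3, Fraction(1, 3))  # 1/3 is not
# Z_(3) is local: the non-units are the fractions with numerator divisible
# by 3, and together they form the unique maximal ideal 3*Z_(3).
```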
=== Ring properties ===
Localization is a rich construction that has many useful properties. In this section, only the properties relative to rings and to a single localization are considered. Properties concerning ideals, modules, or several multiplicative sets are considered in other sections.
$S^{-1}R=0$ if and only if $S$ contains $0$.
The ring homomorphism $R\to S^{-1}R$ is injective if and only if $S$ does not contain any zero divisors.
The ring homomorphism $R\to S^{-1}R$ is an epimorphism in the category of rings, although it is not surjective in general.
The ring $S^{-1}R$ is a flat $R$-module (see § Localization of a module for details).
If $S=R\setminus\mathfrak{p}$ is the complement of a prime ideal $\mathfrak{p}$, then $S^{-1}R$, denoted $R_{\mathfrak{p}}$, is a local ring; that is, it has only one maximal ideal.
Localization commutes with formation of finite sums, products, intersections and radicals; e.g., if $\sqrt{I}$ denotes the radical of an ideal $I$ in $R$, then
$\sqrt{I}\cdot S^{-1}R=\sqrt{I\cdot S^{-1}R}\,.$
In particular, $R$ is reduced if and only if its total ring of fractions is reduced.
Let $R$ be an integral domain with field of fractions $K$. Then its localization $R_{\mathfrak{p}}$ at a prime ideal $\mathfrak{p}$ can be viewed as a subring of $K$. Moreover,
$R=\bigcap_{\mathfrak{p}}R_{\mathfrak{p}}=\bigcap_{\mathfrak{m}}R_{\mathfrak{m}}$
where the first intersection is over all prime ideals and the second over the maximal ideals.
There is a bijection between the set of prime ideals of $S^{-1}R$ and the set of prime ideals of $R$ that are disjoint from $S$. This bijection is induced by the given homomorphism $R\to S^{-1}R$.
=== Saturation of a multiplicative set ===
Let $S\subseteq R$ be a multiplicative set. The saturation $\hat{S}$ of $S$ is the set
$\hat{S}=\{r\in R\colon \exists s\in R,\ rs\in S\}.$
The multiplicative set $S$ is saturated if it equals its saturation, that is, if $\hat{S}=S$, or equivalently, if $rs\in S$ implies that $r$ and $s$ are in $S$.
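For instance, in $\mathbb{Z}$ the multiplicative set $S=\{1,4,16,\ldots\}$ of powers of $4$ is not saturated: its saturation also contains $2$, since $2\cdot 2=4\in S$. A brute-force membership sketch (the bound and function name are our own choices):

```python
def in_saturation_of_powers_of_4(r: int, max_exp: int = 20) -> bool:
    """r lies in the saturation of S = {4**k} iff r*s is a power of 4 for
    some integer s, i.e. iff r divides some power of 4 (up to sign)."""
    return r != 0 and any(4**k % abs(r) == 0 for k in range(max_exp))

assert in_saturation_of_powers_of_4(2)       # 2 not in S, but 2*2 = 4 is
assert in_saturation_of_powers_of_4(-8)      # signs are absorbed by s
assert not in_saturation_of_powers_of_4(3)   # 3 divides no power of 4
```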
If $S$ is not saturated and $rs\in S$, then $\frac{s}{rs}$ is a multiplicative inverse of the image of $r$ in $S^{-1}R$.
So, the images of the elements of $\hat{S}$ are all invertible in $S^{-1}R$, and the universal property implies that $S^{-1}R$ and $\hat{S}{}^{-1}R$ are canonically isomorphic, that is, there is a unique isomorphism between them that fixes the images of the elements of $R$.
If $S$ and $T$ are two multiplicative sets, then $S^{-1}R$ and $T^{-1}R$ are isomorphic if and only if they have the same saturation, or, equivalently, if $s$ belongs to one of the multiplicative sets, then there exists $t\in R$ such that $st$ belongs to the other.
Saturated multiplicative sets are not widely used explicitly, since, for verifying that a set is saturated, one must know all units of the ring.
== Terminology explained by the context ==
The term localization originates in the general trend of modern mathematics to study geometrical and topological objects locally, that is, in terms of their behavior near each point. Examples of this trend are the fundamental concepts of manifolds, germs and sheaves. In algebraic geometry, an affine algebraic set can be identified with a quotient ring of a polynomial ring in such a way that the points of the algebraic set correspond to the maximal ideals of the ring (this is Hilbert's Nullstellensatz). This correspondence has been generalized for making the set of the prime ideals of a commutative ring a topological space equipped with the Zariski topology; this topological space is called the spectrum of the ring.
In this context, a localization by a multiplicative set may be viewed as the restriction of the spectrum of a ring to the subspace of the prime ideals (viewed as points) that do not intersect the multiplicative set.
Two classes of localizations are more commonly considered:
The multiplicative set is the complement of a prime ideal $\mathfrak{p}$ of a ring $R$. In this case, one speaks of the "localization at $\mathfrak{p}$", or "localization at a point". The resulting ring, denoted $R_{\mathfrak{p}}$, is a local ring, and is the algebraic analog of a ring of germs.
The multiplicative set consists of all powers of an element $t$ of a ring $R$. The resulting ring is commonly denoted $R_{t}$, and its spectrum is the Zariski open set of the prime ideals that do not contain $t$. Thus the localization is the analog of the restriction of a topological space to a neighborhood of a point (every prime ideal has a neighborhood basis consisting of Zariski open sets of this form).
In number theory and algebraic topology, when working over the ring $\mathbb{Z}$ of integers, one refers to a property relative to an integer $n$ as a property true at $n$ or away from $n$, depending on the localization that is considered. "Away from $n$" means that the property is considered after localization by the powers of $n$, and, if $p$ is a prime number, "at $p$" means that the property is considered after localization at the prime ideal $p\mathbb{Z}$. This terminology can be explained by the fact that, if $p$ is prime, the nonzero prime ideals of the localization of $\mathbb{Z}$ are either the singleton set $\{p\}$ or its complement in the set of prime numbers.
== Localization and saturation of ideals ==
Let $S$ be a multiplicative set in a commutative ring $R$, and let $j\colon R\to S^{-1}R$ be the canonical ring homomorphism. Given an ideal $I$ in $R$, let $S^{-1}I$ be the set of the fractions in $S^{-1}R$ whose numerator is in $I$. This is an ideal of $S^{-1}R$, which is generated by $j(I)$, and is called the localization of $I$ by $S$.
The saturation of $I$ by $S$ is $j^{-1}(S^{-1}I)$; it is an ideal of $R$, which can also be defined as the set of the elements $r\in R$ such that there exists $s\in S$ with $sr\in I$.
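A small worked example: in $R=\mathbb{Z}$ with $I=12\mathbb{Z}$ and $S$ the powers of $2$, the saturation consists of the $r$ with $2^{k}r\in 12\mathbb{Z}$ for some $k$, which is exactly $3\mathbb{Z}$. A brute-force sketch over a finite window (the bounds and names are our own):

```python
# sat_S(I) for R = Z, I = 12Z, S = {2**k : k >= 0}.
def in_sat(r: int, max_exp: int = 10) -> bool:
    return any((2**k * r) % 12 == 0 for k in range(max_exp))

# Residues mod 12 of the saturated elements in a window around 0:
residues = {r % 12 for r in range(-60, 60) if in_sat(r)}
assert residues == {0, 3, 6, 9}   # i.e. sat_S(12Z) = 3Z
```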
Many properties of ideals are either preserved by saturation and localization, or can be characterized by simpler properties of localization and saturation.
In what follows, $S$ is a multiplicative set in a ring $R$, and $I$ and $J$ are ideals of $R$; the saturation of an ideal $I$ by a multiplicative set $S$ is denoted $\operatorname{sat}_{S}(I)$, or, when the multiplicative set $S$ is clear from the context, $\operatorname{sat}(I)$.
$1\in S^{-1}I \iff 1\in \operatorname{sat}(I) \iff S\cap I\neq \emptyset$
$I\subseteq J \implies S^{-1}I\subseteq S^{-1}J \ \text{and}\ \operatorname{sat}(I)\subseteq \operatorname{sat}(J)$
(this is not always true for strict inclusions)
$S^{-1}(I\cap J)=S^{-1}I\cap S^{-1}J,\qquad \operatorname{sat}(I\cap J)=\operatorname{sat}(I)\cap \operatorname{sat}(J)$
$S^{-1}(I+J)=S^{-1}I+S^{-1}J,\qquad \operatorname{sat}(I+J)=\operatorname{sat}(I)+\operatorname{sat}(J)$
$S^{-1}(I\cdot J)=S^{-1}I\cdot S^{-1}J,\qquad \operatorname{sat}(I\cdot J)=\operatorname{sat}(I)\cdot \operatorname{sat}(J)$
If $\mathfrak{p}$ is a prime ideal such that $\mathfrak{p}\cap S=\emptyset$, then $S^{-1}\mathfrak{p}$ is a prime ideal and $\mathfrak{p}=\operatorname{sat}(\mathfrak{p})$; if the intersection is nonempty, then $S^{-1}\mathfrak{p}=S^{-1}R$ and $\operatorname{sat}(\mathfrak{p})=R$.
== Localization of a module ==
Let $R$ be a commutative ring, $S$ be a multiplicative set in $R$, and $M$ be an $R$-module. The localization of the module $M$ by $S$, denoted $S^{-1}M$, is an $S^{-1}R$-module that is constructed exactly as the localization of $R$, except that the numerators of the fractions belong to $M$. That is, as a set, it consists of equivalence classes, denoted $\frac{m}{s}$, of pairs $(m,s)$, where $m\in M$ and $s\in S$, and two pairs $(m,s)$ and $(n,t)$ are equivalent if there is an element $u$ in $S$ such that
$u(sn-tm)=0.$
Addition and scalar multiplication are defined as for usual fractions (in the following formulas, $r\in R$, $s,t\in S$, and $m,n\in M$):
$\frac{m}{s}+\frac{n}{t}=\frac{tm+sn}{st},$
$\frac{r}{s}\,\frac{m}{t}=\frac{rm}{st}.$
Moreover, $S^{-1}M$ is also an $R$-module with scalar multiplication
$r\,\frac{m}{s}=\frac{r}{1}\,\frac{m}{s}=\frac{rm}{s}.$
It is straightforward to check that these operations are well-defined, that is, they give the same result for different choices of representatives of fractions.
The localization of a module can be equivalently defined by using tensor products:
$S^{-1}M=S^{-1}R\otimes_{R}M.$
The proof of equivalence (up to a canonical isomorphism) can be done by showing that the two definitions satisfy the same universal property.
=== Module properties ===
If $M$ is a submodule of an $R$-module $N$, and $S$ is a multiplicative set in $R$, one has $S^{-1}M\subseteq S^{-1}N$.
This implies that, if $f\colon M\to N$ is an injective module homomorphism, then
$S^{-1}R\otimes_{R}f\colon\ S^{-1}R\otimes_{R}M\to S^{-1}R\otimes_{R}N$
is also an injective homomorphism.
Since the tensor product is a right exact functor, this implies that localization by $S$ maps exact sequences of $R$-modules to exact sequences of $S^{-1}R$-modules. In other words, localization is an exact functor, and $S^{-1}R$ is a flat $R$-module.
This flatness, and the fact that localization solves a universal property, imply that localization preserves many properties of modules and rings, and is compatible with solutions of other universal properties. For example, the natural map
$S^{-1}(M\otimes_{R}N)\to S^{-1}M\otimes_{S^{-1}R}S^{-1}N$
is an isomorphism. If $M$ is a finitely presented module, the natural map
$S^{-1}\operatorname{Hom}_{R}(M,N)\to \operatorname{Hom}_{S^{-1}R}(S^{-1}M,S^{-1}N)$
is also an isomorphism.
If a module $M$ is finitely generated over $R$, one has
$S^{-1}(\operatorname{Ann}_{R}(M))=\operatorname{Ann}_{S^{-1}R}(S^{-1}M),$
where $\operatorname{Ann}$ denotes the annihilator, that is, the ideal of the elements of the ring that map all elements of the module to zero. In particular,
$S^{-1}M=0 \iff S\cap \operatorname{Ann}_{R}(M)\neq \emptyset,$
that is, $S^{-1}M=0$ if and only if $tM=0$ for some $t\in S$.
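The last equivalence can be checked directly on a toy module: take $M=\mathbb{Z}/6\mathbb{Z}$ as a $\mathbb{Z}$-module and $S$ the powers of $6$. Since $6\in \operatorname{Ann}_{\mathbb{Z}}(M)$, every fraction $m/s$ must equal $0/1$ in $S^{-1}M$. A brute-force sketch with illustrative names:

```python
# M = Z/6Z as a Z-module, S = {6**k : 0 <= k < 5}.  Since 6 annihilates M,
# the localization S^{-1}M is zero: each m/s is equivalent to 0/1, with
# witness u = 6 in u*(s*0 - 1*m) = 0 (computed in M, i.e. mod 6).
S = [6**k for k in range(5)]
M = range(6)

def is_zero_fraction(m: int, s: int) -> bool:
    return any((u * (s * 0 - 1 * m)) % 6 == 0 for u in S)

assert all(is_zero_fraction(m, s) for m in M for s in S)
```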
== Localization at primes ==
The definition of a prime ideal implies immediately that the complement $S=R\setminus\mathfrak{p}$ of a prime ideal $\mathfrak{p}$ in a commutative ring $R$ is a multiplicative set. In this case, the localization $S^{-1}R$ is commonly denoted $R_{\mathfrak{p}}$. The ring $R_{\mathfrak{p}}$ is a local ring, which is called the local ring of $R$ at $\mathfrak{p}$. This means that
$\mathfrak{p}\,R_{\mathfrak{p}}=\mathfrak{p}\otimes_{R}R_{\mathfrak{p}}$
is the unique maximal ideal of the ring $R_{\mathfrak{p}}$.
Analogously, one can define the localization of a module $M$ at a prime ideal $\mathfrak{p}$ of $R$. Again, the localization $S^{-1}M$ is commonly denoted $M_{\mathfrak{p}}$.
Such localizations are fundamental for commutative algebra and algebraic geometry for several reasons. One is that local rings are often easier to study than general commutative rings, in particular because of Nakayama's lemma. However, the main reason is that many properties are true for a ring if and only if they are true for all its local rings. For example, a ring is regular if and only if all its local rings are regular local rings.
Properties of a ring that can be characterized on its local rings are called local properties, and are often the algebraic counterpart of geometric local properties of algebraic varieties, which are properties that can be studied by restriction to a small neighborhood of each point of the variety. (There is another concept of local property that refers to localization to Zariski open sets; see § Localization to Zariski open sets, below.)
Many local properties are a consequence of the fact that the module
$\bigoplus_{\mathfrak{p}}R_{\mathfrak{p}}$
is a faithfully flat module when the direct sum is taken over all prime ideals (or over all maximal ideals of $R$). See also Faithfully flat descent.
=== Examples of local properties ===
A property P of an R-module M is a local property if the following conditions are equivalent:
P holds for M.
P holds for all $M_{\mathfrak{p}}$, where $\mathfrak{p}$ is a prime ideal of $R$.
P holds for all $M_{\mathfrak{m}}$, where $\mathfrak{m}$ is a maximal ideal of $R$.
The following are local properties:
M is zero.
M is torsion-free (in the case where R is a commutative domain).
M is a flat module.
M is an invertible module (in the case where R is a commutative domain, and M is a submodule of the field of fractions of R).
$f\colon M\to N$ is injective (resp. surjective), where $N$ is another $R$-module.
On the other hand, some properties are not local properties. For example, an infinite direct product of fields is neither an integral domain nor a Noetherian ring, while all its local rings are fields, and therefore Noetherian integral domains.
== Non-commutative case ==
Localizing non-commutative rings is more difficult. While the localization exists for every set S of prospective units, it might take a different form from the one described above. One condition which ensures that the localization is well behaved is the Ore condition.
One case for non-commutative rings where localization has a clear interest is for rings of differential operators. It has the interpretation, for example, of adjoining a formal inverse D−1 for a differentiation operator D. This is done in many contexts in methods for differential equations. There is now a large mathematical theory about it, named microlocalization, connecting with numerous other branches. The micro- prefix relates, in particular, to connections with Fourier theory.
== See also ==
Local analysis
Localization of a category
Localization of a topological space
== References ==
== External links ==
Localization from MathWorld. | Wikipedia/Localization_(algebra) |
In algebraic geometry, the étale topology is a Grothendieck topology on the category of schemes which has properties similar to the Euclidean topology, but unlike the Euclidean topology, it is also defined in positive characteristic. The étale topology was originally introduced by Alexander Grothendieck to define étale cohomology, and this is still the étale topology's most well-known use.
== Definitions ==
For any scheme X, let Ét(X) be the category of all étale morphisms from a scheme to X. This is the analog of the category of open subsets of X (that is, the category whose objects are the open subsets of X and whose morphisms are the inclusions). Its objects can be informally thought of as étale open subsets of X. The intersection of two objects corresponds to their fiber product over X. Ét(X) is a large category, meaning that its objects do not form a set.
An étale presheaf on X is a contravariant functor from Ét(X) to the category of sets. A presheaf F is called an étale sheaf if it satisfies the analog of the usual gluing condition for sheaves on topological spaces. That is, F is an étale sheaf if and only if the following condition is true. Suppose that U → X is an object of Ét(X) and that Ui → U is a jointly surjective family of étale morphisms over X. For each i, choose a section xi of F over Ui. The projection map Ui ×U Uj → Ui, which is loosely speaking the inclusion of the intersection of Ui and Uj in Ui, induces a restriction map F(Ui) → F(Ui ×U Uj). If for all i and j the restrictions of xi and xj to Ui ×U Uj are equal, then there must exist a unique section x of F over U which restricts to xi for all i.
Suppose that X is a Noetherian scheme. An abelian étale sheaf F on X is called finite locally constant if it is a representable functor which can be represented by an étale cover of X. It is called constructible if X can be covered by a finite family of subschemes on each of which the restriction of F is finite locally constant. It is called torsion if F(U) is a torsion group for all étale covers U of X. Finite locally constant sheaves are constructible, and constructible sheaves are torsion. Every torsion sheaf is a filtered inductive limit of constructible sheaves.
Grothendieck originally introduced the machinery of Grothendieck topologies and topoi to define the étale topology. In this language, the definition of the étale topology is succinct but abstract: It is the topology generated by the pretopology whose covering families are jointly surjective families of étale morphisms. The small étale site of X is the category O(Xét) whose objects are schemes U with a fixed étale morphism U → X. The morphisms are morphisms of schemes compatible with the fixed maps to X. The big étale site of X is the category Ét/X, that is, the category of schemes with a fixed map to X, considered with the étale topology.
The étale topology can be defined using slightly less data. First, notice that the étale topology is finer than the Zariski topology. Consequently, to define an étale cover of a scheme X, it suffices to first cover X by open affine subschemes, that is, to take a Zariski cover, and then to define an étale cover of an affine scheme. An étale cover of an affine scheme X can be defined as a jointly surjective family {uα : Xα → X} such that the set of all α is finite, each Xα is affine, and each uα is étale. Then an étale cover of X is a family {uα : Xα → X} which becomes an étale cover after base changing to any open affine subscheme of X.
== Local rings ==
Let X be a scheme with its étale topology, and fix a point x of X. In the Zariski topology, the stalk of X at x is computed by taking a direct limit of the sections of the structure sheaf over all the Zariski open neighborhoods of x. In the étale topology, there are strictly more open neighborhoods of x, so the correct analog of the local ring at x is formed by taking the limit over a strictly larger family. The correct analog of the local ring at x for the étale topology turns out to be the strict henselization of the local ring $\mathcal{O}_{X,x}$. It is usually denoted $\mathcal{O}_{X,x}^{\text{sh}}$.
== Examples ==
For each étale morphism $U\to X$, let $\mathbb{G}_{m}(U)=\mathcal{O}_{U}(U)^{\times}$. Then $U\mapsto \mathbb{G}_{m}(U)$ is a presheaf on X; it is a sheaf since it can be represented by the scheme $\operatorname{Spec}_{X}(\mathcal{O}_{X}[t,t^{-1}])$.
== Étale topos ==
Let X be a scheme. An étale covering of X is a family $\{\varphi_{i}\colon U_{i}\to X\}_{i\in I}$, where each $\varphi_{i}$ is an étale morphism of schemes, such that the family is jointly surjective, that is, $X=\bigcup_{i\in I}\varphi_{i}(U_{i})$.
The category Ét(X) is the category of all étale schemes over X. The collection of all étale coverings of an étale scheme U over X (i.e., an object in Ét(X)) defines a Grothendieck pretopology on Ét(X), which in turn induces a Grothendieck topology, the étale topology on X. The category together with the étale topology on it is called the étale site on X.
The étale topos $X^{\text{ét}}$ of a scheme X is then the category of all sheaves of sets on the site Ét(X). Such sheaves are called étale sheaves on X. In other words, an étale sheaf $\mathcal{F}$ is a (contravariant) functor from the category Ét(X) to the category of sets satisfying the following sheaf axiom:
For each étale U over X and each étale covering $\{\varphi_{i}\colon U_{i}\to U\}$ of U, the sequence
$0\to \mathcal{F}(U)\to \prod_{i\in I}\mathcal{F}(U_{i})\rightrightarrows \prod_{i,j\in I}\mathcal{F}(U_{ij})$
is exact, where $U_{ij}=U_{i}\times_{U}U_{j}$.
== See also ==
Nisnevich topology
Smooth topology
ℓ-adic sheaf
Étale spectrum
== References ==
Grothendieck, Alexandre; Dieudonné, Jean (1964). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Première partie". Publications Mathématiques de l'IHÉS. 20. doi:10.1007/bf02684747. MR 0173675.
Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32. doi:10.1007/bf02732123. MR 0238860.
Artin, Michael (1972). Alexandre Grothendieck; Jean-Louis Verdier (eds.). Théorie des Topos et Cohomologie Etale des Schémas. Lecture notes in mathematics (in French). Vol. 270. Berlin; New York: Springer-Verlag. pp. iv+418. doi:10.1007/BFb0061319. ISBN 978-3-540-06012-3.
Artin, Michael (1972). Alexandre Grothendieck; Jean-Louis Verdier (eds.). Théorie des Topos et Cohomologie Etale des Schémas. Lecture notes in mathematics (in French). Vol. 305. Berlin; New York: Springer-Verlag. pp. vi+640. doi:10.1007/BFb0070714. ISBN 978-3-540-06118-2.
Deligne, Pierre (1977). Cohomologie Etale. Lecture notes in mathematics (in French). Vol. 569. Berlin; New York: Springer-Verlag. pp. iv+312. doi:10.1007/BFb0091516. ISBN 978-3-540-08066-4.
J. S. Milne (1980), Étale cohomology, Princeton, N.J: Princeton University Press, ISBN 0-691-08238-3
J. S. Milne (2008). Lectures on Étale Cohomology | Wikipedia/Étale_topology |
In mathematics, ideal theory is the theory of ideals in commutative rings. While the notion of an ideal exists also for non-commutative rings, a much more substantial theory exists only for commutative rings (and this article therefore only considers ideals in commutative rings.)
Throughout this article, rings refer to commutative rings. See also the article ideal (ring theory) for basic operations such as the sum or product of ideals.
== Ideals in a finitely generated algebra over a field ==
Ideals in a finitely generated algebra over a field (that is, a quotient of a polynomial ring over a field) behave somewhat more nicely than those in a general commutative ring. First, in contrast to the general case, if $A$ is a finitely generated algebra over a field, then the radical of an ideal in $A$ is the intersection of all maximal ideals containing the ideal (because $A$ is a Jacobson ring). This may be thought of as an extension of Hilbert's Nullstellensatz, which concerns the case when $A$ is a polynomial ring.
== Topology determined by an ideal ==
If I is an ideal in a ring A, then it determines a topology on A in which a subset U of A is open if, for each x in U,
$x+I^{n}\subset U$
for some integer $n>0$. This topology is called the I-adic topology. It is also called an a-adic topology if $I=aA$ is generated by an element $a$.
For example, take $A=\mathbb{Z}$, the ring of integers, and $I=pA$ an ideal generated by a prime number p. For each nonzero integer $x$, define $|x|_{p}=p^{-n}$ when $x=p^{n}y$ with $y$ prime to $p$. Then, clearly,
$x+p^{n}A=B(x,p^{-(n-1)})$
where $B(x,r)=\{z\in \mathbb{Z}\mid |z-x|_{p}<r\}$ denotes an open ball of radius $r$ with center $x$. Hence, the $p$-adic topology on $\mathbb{Z}$ is the same as the metric space topology given by $d(x,y)=|x-y|_{p}$. As a metric space, $\mathbb{Z}$ can be completed. The resulting complete metric space has a structure of a ring that extends the ring structure of $\mathbb{Z}$; this ring is denoted $\mathbb{Z}_{p}$ and is called the ring of p-adic integers.
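The absolute value used above is easy to compute: strip the factors of p and return $p^{-n}$. A short sketch (the function name is ours; the convention $|0|_p=0$ is the standard one):

```python
def p_abs(x: int, p: int) -> float:
    """|x|_p = p**(-n) where x = p**n * y with p not dividing y; |0|_p = 0."""
    if x == 0:
        return 0.0
    n = 0
    while x % p == 0:
        x //= p
        n += 1
    return p ** (-n)

assert p_abs(48, 2) == 2 ** -4   # 48 = 2**4 * 3
assert p_abs(7 - 3, 2) == 0.25   # d(7, 3) = |4|_2 = 1/4: 7 and 3 are 2-adically close
```

Note that large powers of p are 2-adically *small*, the opposite of the archimedean intuition; this is what makes the completion $\mathbb{Z}_p$ differ from $\mathbb{R}$.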
== Ideal class group ==
In a Dedekind domain A (e.g., a ring of integers in a number field or the coordinate ring of a smooth affine curve) with field of fractions $K$, every nonzero ideal $I$ is invertible in the following sense: there exists a fractional ideal $I^{-1}$ (that is, an A-submodule of $K$) such that $I\,I^{-1}=A$, where the product on the left is a product of submodules of K. In other words, fractional ideals form a group under this product. The quotient of the group of fractional ideals by the subgroup of principal ideals is then the ideal class group of A.
In a general ring, an ideal may not be invertible (in fact, already the definition of a fractional ideal is not clear). However, over a Noetherian integral domain, it is still possible to develop some theory generalizing the situation in Dedekind domains. For example, Ch. VII of Bourbaki's Algèbre commutative gives such a theory.
The ideal class group of A, when it can be defined, is closely related to the Picard group of the spectrum of A (often the two are the same; e.g., for Dedekind domains).
In algebraic number theory, especially in class field theory, it is more convenient to use a generalization of an ideal class group called an idele class group.
== Closure operations ==
There are several operations on ideals that play the role of closures. The most basic one is the radical of an ideal. Another is the integral closure of an ideal. Given an irredundant primary decomposition $I=\cap Q_{i}$, the intersection of the $Q_{i}$'s whose radicals are minimal (that is, do not contain any of the radicals of the other $Q_{j}$'s) is uniquely determined by $I$; this intersection is then called the unmixed part of $I$. It is also a closure operation.
Given ideals $I,J$ in a ring $A$, the ideal

$$(I:J^{\infty })=\{f\in A\mid fJ^{n}\subset I,\ n\gg 0\}=\bigcup _{n>0}\operatorname {Ann} _{A}((J^{n}+I)/I)$$

is called the saturation of $I$ with respect to $J$ and is a closure operation (this notion is closely related to the study of local cohomology).
See also tight closure.
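In the simplest case, principal ideals of $\mathbb {Z}$, the saturation formula specializes to stripping from a generator every prime factor it shares with the generator of $J$. The toy function below (an illustration added here, not from the article; the name `saturation` is hypothetical) computes it:

```python
from math import gcd

def saturation(i: int, j: int) -> int:
    """Generator of (I : J^inf) for I = (i), J = (j) in Z:
    strip from i every prime factor that i shares with j."""
    while (g := gcd(i, j)) > 1:
        i //= g
    return i

# I = (12), J = (2): f * 2^n lies in (12) for large n  <=>  3 divides f,
# so the saturation is (3)
assert saturation(12, 2) == 3
# direct membership check: 3 * 2^2 = 12 is in (12)
assert (3 * 2**2) % 12 == 0
# saturating by something coprime to the generator changes nothing
assert saturation(12, 5) == 12
```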
== Reduction theory ==
== Local cohomology in ideal theory ==
Local cohomology can sometimes be used to obtain information on an ideal. This section assumes some familiarity with sheaf theory and scheme theory.
Let $M$ be a module over a ring $R$ and $I$ an ideal. Then $M$ determines the sheaf ${\widetilde {M}}$ on $Y=\operatorname {Spec} (R)-V(I)$ (the restriction to Y of the sheaf associated to M). Unwinding the definition, one sees:

$$\Gamma _{I}(M):=\Gamma (Y,{\widetilde {M}})=\varinjlim \operatorname {Hom} (I^{n},M).$$

Here, $\Gamma _{I}(M)$ is called the ideal transform of $M$ with respect to $I$.
== See also ==
System of parameters
In mathematics, Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular arithmetic, stating that if a univariate polynomial has a simple root modulo a prime number p, then this root can be lifted to a unique root modulo any higher power of p. More generally, if a polynomial factors modulo p into two coprime polynomials, this factorization can be lifted to a factorization modulo any higher power of p (the case of roots corresponds to the case of degree 1 for one of the factors).
By passing to the "limit" (in fact this is an inverse limit) when the power of p tends to infinity, it follows that a root or a factorization modulo p can be lifted to a root or a factorization over the p-adic integers.
These results have been widely generalized, under the same name, to the case of polynomials over an arbitrary commutative ring, where p is replaced by an ideal, and "coprime polynomials" means "polynomials that generate an ideal containing 1".
Hensel's lemma is fundamental in p-adic analysis, a branch of analytic number theory.
The proof of Hensel's lemma is constructive, and leads to an efficient algorithm for Hensel lifting, which is fundamental for factoring polynomials, and gives the most efficient known algorithm for exact linear algebra over the rational numbers.
== Modular reduction and lifting ==
Hensel's original lemma concerns the relation between polynomial factorization over the integers and over the integers modulo a prime number p and its powers. It can be straightforwardly extended to the case where the integers are replaced by any commutative ring, and p is replaced by any maximal ideal (indeed, the maximal ideals of
$\mathbb {Z}$ have the form $p\mathbb {Z}$, where p is a prime number).
Making this precise requires a generalization of the usual modular arithmetic, and so it is useful to define accurately the terminology that is commonly used in this context.
Let R be a commutative ring, and I an ideal of R. Reduction modulo I refers to the replacement of every element of R by its image under the canonical map
$R\to R/I.$ For example, if $f\in R[X]$ is a polynomial with coefficients in R, its reduction modulo I, denoted $f{\bmod {I}}$, is the polynomial in $(R/I)[X]=R[X]/IR[X]$ obtained by replacing the coefficients of f by their images in $R/I$. Two polynomials f and g in $R[X]$ are congruent modulo I, denoted $f\equiv g{\pmod {I}}$, if they have the same coefficients modulo I, that is, if $f-g\in IR[X]$. If $h\in R[X]$, a factorization of h modulo I consists of two (or more) polynomials f, g in $R[X]$ such that $h\equiv fg{\pmod {I}}$.
The lifting process is the inverse of reduction. That is, given objects depending on elements of
$R/I$, the lifting process replaces these elements by elements of $R$ (or of $R/I^{k}$ for some k > 1) that map to them, in a way that preserves the properties of the objects.
For example, given a polynomial $h\in R[X]$ and a factorization modulo I expressed as $h\equiv fg{\pmod {I}}$, lifting this factorization modulo $I^{k}$ consists of finding polynomials $f',g'\in R[X]$ such that $f'\equiv f{\pmod {I}}$, $g'\equiv g{\pmod {I}}$, and $h\equiv f'g'{\pmod {I^{k}}}$.
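A small numeric check of these definitions (a sketch added here, not part of the article): over $\mathbb {Z}$, $h=X^{2}-2$ factors as $(X-3)(X+3)$ modulo 7, and $(X-10)(X+10)$ is a lift of that factorization modulo 49, since each lifted factor reduces to the original one mod 7 and the product is congruent to h mod 49.

```python
def polymul(f, g):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

h = [-2, 0, 1]              # X^2 - 2
f, g = [-3, 1], [3, 1]      # X - 3, X + 3: (X-3)(X+3) = X^2 - 9 ≡ X^2 - 2 (mod 7)
f2, g2 = [-10, 1], [10, 1]  # lifted factors X - 10, X + 10

# factorization modulo 7
assert all((a - b) % 7 == 0 for a, b in zip(polymul(f, g), h))
# lifted factorization modulo 49: (X-10)(X+10) = X^2 - 100 ≡ X^2 - 2 (mod 49)
assert all((a - b) % 49 == 0 for a, b in zip(polymul(f2, g2), h))
# the lifted factors reduce back to the original ones modulo 7
assert all((a - b) % 7 == 0 for a, b in zip(f2, f))
assert all((a - b) % 7 == 0 for a, b in zip(g2, g))
```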
Hensel's lemma asserts that such a lifting is always possible under mild conditions; see next section.
== Statement ==
Originally, Hensel's lemma was stated (and proved) for lifting a factorization modulo a prime number p of a polynomial over the integers to a factorization modulo any power of p and to a factorization over the p-adic integers. This can be generalized easily, with the same proof to the case where the integers are replaced by any commutative ring, the prime number is replaced by a maximal ideal, and the p-adic integers are replaced by the completion with respect to the maximal ideal. It is this generalization, which is also widely used, that is presented here.
Let $\mathfrak {m}$ be a maximal ideal of a commutative ring R, and

$$h=\alpha _{0}X^{n}+\cdots +\alpha _{n-1}X+\alpha _{n}$$

be a polynomial in $R[X]$ with leading coefficient $\alpha _{0}$ not in $\mathfrak {m}$.
Since $\mathfrak {m}$ is a maximal ideal, the quotient ring $R/{\mathfrak {m}}$ is a field, and $(R/{\mathfrak {m}})[X]$ is a principal ideal domain and, in particular, a unique factorization domain; this means that every nonzero polynomial in $(R/{\mathfrak {m}})[X]$ can be factorized in a unique way as the product of a nonzero element of $R/{\mathfrak {m}}$ and irreducible polynomials that are monic (that is, their leading coefficients are 1).
Hensel's lemma asserts that every factorization of h modulo $\mathfrak {m}$ into coprime polynomials can be lifted in a unique way into a factorization modulo ${\mathfrak {m}}^{k}$ for every k.
More precisely, with the above hypotheses, if $h\equiv \alpha _{0}fg{\pmod {\mathfrak {m}}}$, where f and g are monic and coprime modulo $\mathfrak {m}$, then, for every positive integer k, there are monic polynomials $f_{k}$ and $g_{k}$ such that

$${\begin{aligned}h&\equiv \alpha _{0}f_{k}g_{k}{\pmod {{\mathfrak {m}}^{k}}},\\f_{k}&\equiv f{\pmod {\mathfrak {m}}},\\g_{k}&\equiv g{\pmod {\mathfrak {m}}},\end{aligned}}$$

and $f_{k}$ and $g_{k}$ are unique (with these properties) modulo ${\mathfrak {m}}^{k}$.
=== Lifting simple roots ===
An important special case is $f=X-r$. In this case the coprimality hypothesis means that r is a simple root of $h{\bmod {\mathfrak {m}}}$. This gives the following special case of Hensel's lemma, which is often also called Hensel's lemma.
With the above hypotheses and notations, if r is a simple root of $h{\bmod {\mathfrak {m}}}$, then r can be lifted in a unique way to a simple root of $h{\bmod {{\mathfrak {m}}^{n}}}$ for every positive integer n. Explicitly, for every positive integer n, there is a unique $r_{n}\in R/{\mathfrak {m}}^{n}$ such that $r_{n}\equiv r{\pmod {\mathfrak {m}}}$ and $r_{n}$ is a simple root of $h{\bmod {{\mathfrak {m}}^{n}}}$.
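For $R=\mathbb {Z}$ and ${\mathfrak {m}}=p\mathbb {Z}$, this special case can be illustrated concretely (a sketch added here, not part of the article; the helper name `lift_simple_root` is hypothetical). Each step solves a linear congruence for the next p-adic digit:

```python
def lift_simple_root(f, df, r, p, kmax):
    """Lift a simple root r of f mod p to a root mod p^kmax, one power at a time.
    f and df are callables for the polynomial and its derivative."""
    for k in range(1, kmax):
        inv = pow(df(r), -1, p)          # f'(r) is a unit mod p: r is simple
        # solve f(r + t*p^k) ≡ f(r) + t*p^k*f'(r) ≡ 0 (mod p^(k+1)) for t
        t = (-(f(r) // p**k) * inv) % p
        r = r + t * p**k
    return r

f = lambda x: x**2 + 1
df = lambda x: 2 * x
# r = 2 is a simple root of x^2 + 1 mod 5 (4 + 1 = 5), and f'(2) = 4 is a unit mod 5
r4 = lift_simple_root(f, df, 2, 5, 4)
assert f(r4) % 5**4 == 0     # a root modulo 5^4
assert r4 % 5 == 2           # lifting 2
assert r4 % 25 == 7          # the unique lift mod 25 is 7, since 7^2 + 1 = 50
```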
=== Lifting to adic completion ===
The fact that one can lift to $R/{\mathfrak {m}}^{n}$ for every positive integer n suggests "passing to the limit" as n tends to infinity. This was one of the main motivations for introducing p-adic integers.
Given a maximal ideal $\mathfrak {m}$ of a commutative ring R, the powers of $\mathfrak {m}$ form a basis of open neighborhoods for a topology on R, which is called the $\mathfrak {m}$-adic topology. The completion of R in this topology can be identified with the completion of the local ring $R_{\mathfrak {m}}$, and with the inverse limit $\varprojlim R/{\mathfrak {m}}^{n}$. This completion is a complete local ring, generally denoted ${\widehat {R}}_{\mathfrak {m}}$. When R is the ring of integers and ${\mathfrak {m}}=p\mathbb {Z}$, where p is a prime number, this completion is the ring of p-adic integers $\mathbb {Z} _{p}$.
The definition of the completion as an inverse limit, together with the above statement of Hensel's lemma, implies that every factorization into pairwise coprime polynomials modulo $\mathfrak {m}$ of a polynomial $h\in R[X]$ can be uniquely lifted to a factorization of the image of h in ${\widehat {R}}_{\mathfrak {m}}[X]$. Similarly, every simple root of h modulo $\mathfrak {m}$ can be lifted to a simple root of the image of h in ${\widehat {R}}_{\mathfrak {m}}[X]$.
== Proof ==
Hensel's lemma is generally proved incrementally, by lifting a factorization over $R/{\mathfrak {m}}^{n}$ either to a factorization over $R/{\mathfrak {m}}^{n+1}$ (linear lifting) or to a factorization over $R/{\mathfrak {m}}^{2n}$ (quadratic lifting).
The main ingredient of the proof is that coprime polynomials over a field satisfy Bézout's identity. That is, if f and g are coprime univariate polynomials over a field (here $R/{\mathfrak {m}}$), there are polynomials a and b such that $\deg a<\deg g$, $\deg b<\deg f$, and

$$af+bg=1.$$

Bézout's identity allows defining coprime polynomials and proving Hensel's lemma even if the ideal $\mathfrak {m}$ is not maximal. Therefore, in the following proofs, one starts from a commutative ring R, an ideal I, a polynomial $h\in R[X]$ whose leading coefficient is invertible modulo I (that is, its image in $R/I$ is a unit in $R/I$), and a factorization of h modulo I, or modulo a power of I, whose factors satisfy a Bézout's identity modulo I. In these proofs, $A\equiv B{\pmod {I}}$ means $A-B\in IR[X]$.
=== Linear lifting ===
Let I be an ideal of a commutative ring R, and $h\in R[X]$ a univariate polynomial with coefficients in R whose leading coefficient $\alpha$ is invertible modulo I (that is, the image of $\alpha$ in $R/I$ is a unit in $R/I$).
Suppose that for some positive integer k there is a factorization

$$h\equiv \alpha fg{\pmod {I^{k}}},$$

such that f and g are monic polynomials that are coprime modulo I, in the sense that there exist $a,b\in R[X]$ such that $af+bg\equiv 1{\pmod {I}}$.
Then there are polynomials $\delta _{f},\delta _{g}\in I^{k}R[X]$ such that $\deg \delta _{f}<\deg f$, $\deg \delta _{g}<\deg g$, and

$$h\equiv \alpha (f+\delta _{f})(g+\delta _{g}){\pmod {I^{k+1}}}.$$
Under these conditions, $\delta _{f}$ and $\delta _{g}$ are unique modulo $I^{k+1}R[X]$.
Moreover, $f+\delta _{f}$ and $g+\delta _{g}$ satisfy the same Bézout's identity as f and g, that is,

$$a(f+\delta _{f})+b(g+\delta _{g})\equiv 1{\pmod {I}}.$$
This follows immediately from the preceding assertions, but is needed to apply iteratively the result with increasing values of k.
The proof that follows is written for computing $\delta _{f}$ and $\delta _{g}$ using only polynomials with coefficients in $R/I$ or $I^{k}/I^{k+1}$. When $R=\mathbb {Z}$ and $I=p\mathbb {Z}$, this allows manipulating only integers modulo p.
Proof: By hypothesis, $\alpha$ is invertible modulo I. This means that there exist $\beta \in R$ and $\gamma \in I$ such that $\alpha \beta =1-\gamma$. Let $\delta _{h}\in I^{k}R[X]$, of degree less than $\deg h$, be such that

$$\delta _{h}\equiv h-\alpha fg{\pmod {I^{k+1}}}.$$
(One may choose $\delta _{h}=h-\alpha fg$, but other choices may lead to simpler computations. For example, if $R=\mathbb {Z}$ and $I=p\mathbb {Z}$, it is possible and better to choose $\delta _{h}=p^{k}\delta '_{h}$, where the coefficients of $\delta '_{h}$ are integers in the interval $[0,p-1]$.)
As g is monic, the Euclidean division of $a\delta _{h}$ by g is defined, and provides q and c such that $a\delta _{h}=qg+c$ and $\deg c<\deg g$. Moreover, both q and c are in $I^{k}R[X]$.
Similarly, let $b\delta _{h}=q'f+d$, with $\deg d<\deg f$ and $q',d\in I^{k}R[X]$.
One has $q+q'\in I^{k+1}R[X]$. Indeed, one has

$$fc+gd=af\delta _{h}+bg\delta _{h}-fg(q+q')\equiv \delta _{h}-fg(q+q'){\pmod {I^{k+1}}}.$$
As $fg$ is monic, the degree modulo $I^{k+1}$ of $fg(q+q')$ can be less than $\deg fg$ only if $q+q'\in I^{k+1}R[X]$.
Thus, considering congruences modulo $I^{k+1}$, one has

$${\begin{aligned}\alpha (f+\beta d)(g+\beta c)-h&\equiv \alpha fg-h+\alpha \beta (f(a\delta _{h}-qg)+g(b\delta _{h}-q'f))\\&\equiv \delta _{h}(-1+\alpha \beta (af+bg))-\alpha \beta fg(q+q')\\&\equiv 0{\pmod {I^{k+1}}}.\end{aligned}}$$
So, the existence assertion is verified with

$$\delta _{f}=\beta d,\qquad \delta _{g}=\beta c.$$
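The construction above is directly computable. The sketch below (added here, not part of the article) specializes it to $R=\mathbb {Z}$, $I=p\mathbb {Z}$ and $\alpha =1$ (so $\beta =1$, $\delta _{f}=d$, $\delta _{g}=c$), representing polynomials as coefficient lists with the constant term first; the helper names are hypothetical:

```python
def polymul(f, g, M):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % M
    return out

def polyadd(f, g, M):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f)); g = g + [0] * (n - len(g))
    return [(a + b) % M for a, b in zip(f, g)]

def polysub(f, g, M):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f)); g = g + [0] * (n - len(g))
    return [(a - b) % M for a, b in zip(f, g)]

def polyrem(f, g, M):
    """Remainder of f divided by the *monic* polynomial g, coefficients mod M."""
    f = f[:]
    while len(f) >= len(g) and any(f):
        c, shift = f[-1], len(f) - len(g)
        for i, b in enumerate(g):
            f[shift + i] = (f[shift + i] - c * b) % M
        while f and f[-1] == 0:
            f.pop()
    return f or [0]

def linear_lift(h, f, g, a, b, p, k):
    """One linear lifting step (alpha = 1): from h ≡ f g (mod p^k), with
    a f + b g ≡ 1 (mod p), produce f', g' with h ≡ f' g' (mod p^(k+1))."""
    M = p**(k + 1)
    e = polysub(h, polymul(f, g, M), M)   # delta_h = h - f g   (mod p^(k+1))
    d = polyrem(polymul(b, e, M), f, M)   # b delta_h = q' f + d
    c = polyrem(polymul(a, e, M), g, M)   # a delta_h = q  g + c
    return polyadd(f, d, M), polyadd(g, c, M)

# h = X^2 - 2 ≡ (X - 3)(X + 3) (mod 7); Bezout: 1*(X-3) + 6*(X+3) = 7X + 15 ≡ 1 (mod 7)
h, f, g, a, b = [-2, 0, 1], [-3, 1], [3, 1], [1], [6]
f2, g2 = linear_lift(h, f, g, a, b, 7, 1)
assert f2 == [39, 1] and g2 == [10, 1]   # X - 10 and X + 10 modulo 49
assert all(x == 0 for x in polysub(h, polymul(f2, g2, 49), 49))
```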
=== Uniqueness ===
Let R, I, h and $\alpha$ be as in the preceding section. Let

$$h\equiv \alpha fg{\pmod {I}}$$

be a factorization into coprime polynomials (in the above sense) such that $\deg f+\deg g=\deg h$.
The application of linear lifting for $k=1,2,\ldots ,n-1$ shows the existence of $\delta _{f}$ and $\delta _{g}$ such that $\deg \delta _{f}<\deg f$, $\deg \delta _{g}<\deg g$, and

$$h\equiv \alpha (f+\delta _{f})(g+\delta _{g}){\pmod {I^{n}}}.$$
The polynomials $\delta _{f}$ and $\delta _{g}$ are uniquely defined modulo $I^{n}$. This means that, if another pair $(\delta '_{f},\delta '_{g})$ satisfies the same conditions, then one has

$$\delta '_{f}\equiv \delta _{f}{\pmod {I^{n}}}\qquad {\text{and}}\qquad \delta '_{g}\equiv \delta _{g}{\pmod {I^{n}}}.$$
Proof: Since a congruence modulo $I^{n}$ implies the same congruence modulo $I^{n-1}$, one can proceed by induction and suppose that the uniqueness has been proved for n − 1, the case n = 0 being trivial. That is, one can suppose that

$$\delta _{f}-\delta '_{f}\in I^{n-1}R[X]\qquad {\text{and}}\qquad \delta _{g}-\delta '_{g}\in I^{n-1}R[X].$$
By hypothesis, one has

$$h\equiv \alpha (f+\delta _{f})(g+\delta _{g})\equiv \alpha (f+\delta '_{f})(g+\delta '_{g}){\pmod {I^{n}}},$$
and thus

$${\begin{aligned}\alpha (f+\delta _{f})(g+\delta _{g})&-\alpha (f+\delta '_{f})(g+\delta '_{g})\\&=\alpha (f(\delta _{g}-\delta '_{g})+g(\delta _{f}-\delta '_{f}))+\alpha (\delta _{f}(\delta _{g}-\delta '_{g})-\delta _{g}(\delta _{f}-\delta '_{f}))\in I^{n}R[X].\end{aligned}}$$
By the induction hypothesis, the second term of the latter sum belongs to $I^{n}$, and the same is thus true for the first term. As $\alpha$ is invertible modulo I, there exist $\beta \in R$ and $\gamma \in I$ such that $\alpha \beta =1+\gamma$.
Thus

$${\begin{aligned}f(\delta _{g}-\delta '_{g})&+g(\delta _{f}-\delta '_{f})\\&=\alpha \beta (f(\delta _{g}-\delta '_{g})+g(\delta _{f}-\delta '_{f}))-\gamma (f(\delta _{g}-\delta '_{g})+g(\delta _{f}-\delta '_{f}))\in I^{n}R[X],\end{aligned}}$$
using the induction hypothesis again.
The coprimality modulo I implies the existence of $a,b\in R[X]$ such that $1\equiv af+bg{\pmod {I}}$.
Using the induction hypothesis once more, one gets

$${\begin{aligned}\delta _{g}-\delta '_{g}&\equiv (af+bg)(\delta _{g}-\delta '_{g})\\&\equiv g(b(\delta _{g}-\delta '_{g})-a(\delta _{f}-\delta '_{f})){\pmod {I^{n}}}.\end{aligned}}$$
Thus one has a polynomial of degree less than $\deg g$ that is congruent modulo $I^{n}$ to the product of the monic polynomial g and another polynomial w. This is possible only if $w\in I^{n}R[X]$, and implies $\delta _{g}-\delta '_{g}\in I^{n}R[X]$.
Similarly, $\delta _{f}-\delta '_{f}$ is also in $I^{n}R[X]$, and this proves the uniqueness.
=== Quadratic lifting ===
Linear lifting allows lifting a factorization modulo $I^{n}$ to a factorization modulo $I^{n+1}$. Quadratic lifting allows lifting directly to a factorization modulo $I^{2n}$, at the cost of also lifting the Bézout's identity and of computing modulo $I^{n}$ instead of modulo I (if one uses the above description of linear lifting).
For lifting up to modulo $I^{N}$ for large N, one can use either method. If, say, $N=2^{k}$, a factorization modulo $I^{N}$ requires N − 1 steps of linear lifting or only k − 1 steps of quadratic lifting. However, in the latter case the size of the coefficients that have to be manipulated increases during the computation. This implies that the best lifting method depends on the context (value of N, nature of R, multiplication algorithm that is used, hardware specificities, etc.).
Quadratic lifting is based on the following property.
Suppose that for some positive integer k there is a factorization

$$h\equiv \alpha fg{\pmod {I^{k}}},$$

such that f and g are monic polynomials that are coprime modulo I, in the sense that there exist $a,b\in R[X]$ such that $af+bg\equiv 1{\pmod {I^{k}}}$.
Then there are polynomials $\delta _{f},\delta _{g}\in I^{k}R[X]$ such that $\deg \delta _{f}<\deg f$, $\deg \delta _{g}<\deg g$, and

$$h\equiv \alpha (f+\delta _{f})(g+\delta _{g}){\pmod {I^{2k}}}.$$
Moreover, $f+\delta _{f}$ and $g+\delta _{g}$ satisfy a Bézout's identity of the form

$$(a+\delta _{a})(f+\delta _{f})+(b+\delta _{b})(g+\delta _{g})\equiv 1{\pmod {I^{2k}}}.$$
(This is required for allowing iterations of quadratic lifting.)
Proof: The first assertion is exactly that of linear lifting applied with k = 1 to the ideal $I^{k}$ instead of $I$. Let $\varepsilon =af+bg-1\in I^{k}R[X]$ (denoted $\varepsilon$ here to avoid a clash with the leading coefficient $\alpha$). One has

$$a(f+\delta _{f})+b(g+\delta _{g})=1+\Delta ,$$

where

$$\Delta =\varepsilon +a\delta _{f}+b\delta _{g}\in I^{k}R[X].$$
Setting $\delta _{a}=-a\Delta$ and $\delta _{b}=-b\Delta$, one gets

$$(a+\delta _{a})(f+\delta _{f})+(b+\delta _{b})(g+\delta _{g})=1-\Delta ^{2}\in I^{2k}R[X],$$
which proves the second assertion.
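For simple roots, the quadratic step is just a Newton iteration, and each step doubles the p-adic precision. The sketch below (added here, not part of the article; the name `newton_lift` is hypothetical) lifts the root 3 of $x^{2}-2$ modulo 7 to a root modulo $7^{8}$ in three steps, where linear lifting would need seven:

```python
def newton_lift(f, df, r, p, steps):
    """Quadratic lifting of a simple root: each Newton step
    s = r - f(r) * f'(r)^(-1) doubles the precision p^k -> p^(2k)."""
    k = 1
    for _ in range(steps):
        M = p**(2 * k)
        r = (r - f(r) * pow(df(r), -1, M)) % M
        k *= 2
    return r, k

f = lambda x: x**2 - 2
df = lambda x: 2 * x
# r = 3 is a simple root of x^2 - 2 mod 7 (9 ≡ 2); three quadratic steps reach 7^8
r, k = newton_lift(f, df, 3, 7, 3)
assert k == 8
assert f(r) % 7**8 == 0      # a root modulo 7^8
assert r % 7 == 3            # still lifting the root 3
assert r % 49 == 10          # intermediate precision: 10^2 = 100 ≡ 2 (mod 49)
```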
== Explicit example ==
Let $f(X)=X^{6}-2\in \mathbb {Q} [X]$. Modulo 2, Hensel's lemma cannot be applied, since the reduction of $f(X)$ modulo 2 is simply

$${\bar {f}}(X)=X^{6}-{\overline {2}}=X^{6},$$

with the 6 factors $X$ not being relatively prime to each other. By Eisenstein's criterion, however, one can conclude that the polynomial $f(X)$ is irreducible in $\mathbb {Q} _{2}[X]$.
Over $k=\mathbb {F} _{7}$, on the other hand, one has

$${\bar {f}}(X)=X^{6}-{\overline {2}}=X^{6}-{\overline {16}}=(X^{3}-{\overline {4}})\;(X^{3}+{\overline {4}}),$$

where $4$ is a square root of 2 in $\mathbb {F} _{7}$. As 4 is not a cube in $\mathbb {F} _{7}$, these two factors are irreducible over $\mathbb {F} _{7}$. Hence the complete factorization of $X^{6}-2$ in $\mathbb {Z} _{7}[X]$ and $\mathbb {Q} _{7}[X]$ is
$$f(X)=X^{6}-2=(X^{3}-\alpha )\;(X^{3}+\alpha ),$$

where $\alpha =\ldots 450\,454_{7}$ is a square root of 2 in $\mathbb {Z} _{7}$ that can be obtained by lifting the above factorization.
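The base-7 digits of $\alpha$ can be recovered by exactly this digit-by-digit lifting (a sketch added here, not part of the article; the helper name `sqrt2_digits` is hypothetical):

```python
def sqrt2_digits(p, r, ndigits):
    """Lift the root r of x^2 - 2 mod p one digit at a time and return the
    base-p digits of the resulting p-adic square root, least significant first."""
    x = r
    for k in range(1, ndigits):
        inv = pow(2 * x, -1, p)                   # derivative 2x is a unit mod p
        t = (-((x * x - 2) // p**k) * inv) % p    # next digit
        x += t * p**k
    return [(x // p**i) % p for i in range(ndigits)]

digits = sqrt2_digits(7, 4, 6)
assert digits == [4, 5, 4, 0, 5, 4]     # alpha = ...450454 in base 7, as above
# sanity check: the reconstructed integer squares to 2 modulo 7^6
alpha = sum(d * 7**i for i, d in enumerate(digits))
assert pow(alpha, 2, 7**6) == 2
```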
Finally, in $\mathbb {F} _{727}[X]$ the polynomial splits into

$${\bar {f}}(X)=X^{6}-{\overline {2}}=(X-{\overline {3}})\;(X-{\overline {116}})\;(X-{\overline {119}})\;(X-{\overline {608}})\;(X-{\overline {611}})\;(X-{\overline {724}})$$
with all factors relatively prime to each other, so that in $\mathbb {Z} _{727}[X]$ and $\mathbb {Q} _{727}[X]$ there are 6 factors $X-\beta$ with the (non-rational) 727-adic integers

$$\beta =\left\{{\begin{array}{r}3+545\cdot 727+537\cdot 727^{2}+161\cdot 727^{3}+\ldots \\116+48\cdot 727+130\cdot 727^{2}+498\cdot 727^{3}+\ldots \\119+593\cdot 727+667\cdot 727^{2}+659\cdot 727^{3}+\ldots \\608+133\cdot 727+59\cdot 727^{2}+67\cdot 727^{3}+\ldots \\611+678\cdot 727+596\cdot 727^{2}+228\cdot 727^{3}+\ldots \\724+181\cdot 727+189\cdot 727^{2}+565\cdot 727^{3}+\ldots \end{array}}\right.$$
== Using derivatives for lifting roots ==
Let $f(x)$ be a polynomial with integer (or p-adic integer) coefficients, and let m, k be positive integers such that m ≤ k. If r is an integer such that

$$f(r)\equiv 0{\pmod {p^{k}}}\quad {\text{and}}\quad f'(r)\not \equiv 0{\pmod {p}},$$
then there exists an integer s such that

$$f(s)\equiv 0{\pmod {p^{k+m}}}\quad {\text{and}}\quad r\equiv s{\pmod {p^{k}}}.$$
Furthermore, this s is unique modulo $p^{k+m}$, and can be computed explicitly as the integer such that

$$s=r-f(r)\cdot a,$$

where $a$ is an integer satisfying

$$a\equiv [f'(r)]^{-1}{\pmod {p^{m}}}.$$
Note that $f(r)\equiv 0{\pmod {p^{k}}}$, so that the condition $s\equiv r{\pmod {p^{k}}}$ is met. As an aside, if $f'(r)\equiv 0{\pmod {p}}$, then 0, 1, or several s may exist (see Hensel Lifting below).
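The explicit formula $s=r-f(r)\cdot a$ can be checked numerically (a sketch added here, not part of the article; the name `jump_lift` is hypothetical). Note that a single application jumps m powers at once, so a root mod $p^{2}$ can go straight to a root mod $p^{4}$:

```python
def jump_lift(f, df, r, p, k, m):
    """One application of s = r - f(r)*a with a ≡ f'(r)^(-1) (mod p^m):
    jumps from a root mod p^k to the unique root mod p^(k+m) above it,
    assuming m <= k and p does not divide f'(r)."""
    assert m <= k and f(r) % p**k == 0 and df(r) % p != 0
    a = pow(df(r), -1, p**m)
    return (r - f(r) * a) % p**(k + m)

f = lambda x: x**2 - 2
df = lambda x: 2 * x
# r = 3 is a root of x^2 - 2 mod 7 (k = 1); jump with m = 1 to a root mod 49
s = jump_lift(f, df, 3, 7, 1, 1)
assert s == 10 and f(s) % 49 == 0
# s = 10 is a root mod 49 (k = 2); jump with m = 2 straight to a root mod 7^4
s = jump_lift(f, df, s, 7, 2, 2)
assert s == 2166 and f(s) % 7**4 == 0
```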
=== Derivation ===
We use the Taylor expansion of f around r to write:
$$f(s)=\sum _{n=0}^{N}c_{n}(s-r)^{n},\qquad c_{n}=f^{(n)}(r)/n!.$$
From $r\equiv s{\pmod {p^{k}}}$, we see that s − r = tpk for some integer t. Writing $f(r)=zp^{k}$, which is possible since $f(r)\equiv 0{\pmod {p^{k}}}$, we have

$${\begin{aligned}f(s)&=\sum _{n=0}^{N}c_{n}\left(tp^{k}\right)^{n}\\&=f(r)+tp^{k}f'(r)+\sum _{n=2}^{N}c_{n}t^{n}p^{kn}\\&=f(r)+tp^{k}f'(r)+p^{2k}t^{2}g(t)&&g(t)\in \mathbb {Z} [t]\\&=zp^{k}+tp^{k}f'(r)+p^{2k}t^{2}g(t)\\&=(z+tf'(r))p^{k}+p^{2k}t^{2}g(t).\end{aligned}}$$
For $m\leqslant k$, we have:

$${\begin{aligned}f(s)\equiv 0{\pmod {p^{k+m}}}&\Longleftrightarrow (z+tf'(r))p^{k}\equiv 0{\pmod {p^{k+m}}}\\&\Longleftrightarrow z+tf'(r)\equiv 0{\pmod {p^{m}}}\\&\Longleftrightarrow tf'(r)\equiv -z{\pmod {p^{m}}}\\&\Longleftrightarrow t\equiv -z[f'(r)]^{-1}{\pmod {p^{m}}}&&(p\nmid f'(r)).\end{aligned}}$$
The assumption that $f'(r)$ is not divisible by p ensures that $f'(r)$ has an inverse mod $p^{m}$, which is necessarily unique. Hence a solution for t exists uniquely modulo $p^{m}$, and s exists uniquely modulo $p^{k+m}$.
== Observations ==
=== Criterion for irreducible polynomials ===
Using the above hypotheses, if we consider an irreducible polynomial

$$f(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n}\in K[X]$$

such that $a_{0},a_{n}\neq 0$, then

$$|f|=\max\{|a_{0}|,|a_{n}|\}.$$
In particular, for $f(X)=X^{6}+10X-1$, we find in $\mathbb {Q} _{2}[X]$

$${\begin{aligned}|f(X)|&=\max\{|a_{0}|,\ldots ,|a_{n}|\}\\&=\max\{0,1,0\}=1\end{aligned}}$$

but $\max\{|a_{0}|,|a_{n}|\}=0$, hence the polynomial cannot be irreducible. Whereas in $\mathbb {Q} _{7}[X]$ both values agree, meaning the polynomial could be irreducible. In order to determine irreducibility, the Newton polygon must be employed.
=== Frobenius ===
Note that, given $a\in \mathbb {F} _{p}$, the Frobenius endomorphism $y\mapsto y^{p}$ gives the nonzero polynomial $x^{p}-a$, which has zero derivative:

$${\begin{aligned}{\frac {d}{dx}}(x^{p}-a)&=p\cdot x^{p-1}\\&\equiv 0\cdot x^{p-1}{\pmod {p}}\\&\equiv 0{\pmod {p}}.\end{aligned}}$$
hence the pth roots of $a$ do not exist in $\mathbb {Z} _{p}$. For $a=1$, this implies that $\mathbb {Z} _{p}$ cannot contain the root of unity $\mu _{p}$.
=== Roots of unity ===
Although the pth roots of unity are not contained in $\mathbb {F} _{p}$, there are solutions of $x^{p}-x=x(x^{p-1}-1)$. Note that
$$\begin{aligned}{\frac {d}{dx}}(x^{p}-x)&=px^{p-1}-1\\&\equiv -1\bmod p\end{aligned}$$
is never zero, so if there exists a solution, it necessarily lifts to $\mathbb {Z} _{p}$. Because the Frobenius gives $a^{p}=a$, all of the non-zero elements $\mathbb {F} _{p}^{\times }$ are solutions. In fact, these are the only roots of unity contained in $\mathbb {Q} _{p}$.
== Hensel lifting ==
Using the lemma, one can "lift" a root r of the polynomial f modulo $p^{k}$ to a new root s modulo $p^{k+1}$ such that $r\equiv s\bmod p^{k}$ (by taking m = 1; taking larger m follows by induction). In fact, a root modulo $p^{k+1}$ is also a root modulo $p^{k}$, so the roots modulo $p^{k+1}$ are precisely the liftings of roots modulo $p^{k}$. The new root s is congruent to r modulo p, so the new root also satisfies
$$f'(s)\equiv f'(r)\not \equiv 0\bmod p.$$
So the lifting can be repeated, and starting from a solution $r_{k}$ of $f(x)\equiv 0\bmod p^{k}$ we can derive a sequence of solutions $r_{k+1},r_{k+2},\ldots$ of the same congruence for successively higher powers of p, provided that $f'(r_{k})\not \equiv 0\bmod p$ for the initial root $r_{k}$. This also shows that f has the same number of roots mod $p^{k}$ as mod $p^{k+1}$, mod $p^{k+2}$, or any other higher power of p, provided that the roots of f mod $p^{k}$ are all simple.
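This root-counting observation can be checked by brute force for a small example. The sketch below (an illustrative Python snippet, not part of the original text) uses the polynomial $x^{2}-2$ and the prime 7, whose two roots mod 7 are simple:

```python
# Count roots of f(x) = x^2 - 2 modulo successive powers of p = 7.
# Both roots mod 7 are simple (f'(x) = 2x is nonzero mod 7 at x = 3, 4),
# so Hensel lifting predicts the same number of roots mod 7, 7^2, 7^3, ...

def roots_mod(f, m):
    """All residues r in [0, m) with f(r) ≡ 0 (mod m), by brute force."""
    return [r for r in range(m) if f(r) % m == 0]

f = lambda x: x * x - 2
counts = [len(roots_mod(f, 7 ** k)) for k in (1, 2, 3)]
print(roots_mod(f, 7), counts)  # [3, 4] [2, 2, 2]
```

Each power of 7 carries exactly two roots, matching the statement above.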
What happens to this process if r is not a simple root mod p? Suppose that
$$f(r)\equiv 0\bmod p^{k}\quad {\text{and}}\quad f'(r)\equiv 0\bmod p.$$
Then $s\equiv r\bmod p^{k}$ implies $f(s)\equiv f(r)\bmod p^{k+1}$. That is, $f(r+tp^{k})\equiv f(r)\bmod p^{k+1}$ for all integers t. Therefore, we have two cases:
If $f(r)\not \equiv 0\bmod p^{k+1}$, then there is no lifting of r to a root of f(x) modulo $p^{k+1}$.
If $f(r)\equiv 0\bmod p^{k+1}$, then every lifting of r to modulus $p^{k+1}$ is a root of f(x) modulo $p^{k+1}$.
Example. To see both cases we examine two different polynomials with p = 2:
$f(x)=x^{2}+1$ and r = 1. Then $f(1)\equiv 0\bmod 2$ and $f'(1)\equiv 0\bmod 2$. We have $f(1)\not \equiv 0\bmod 4$, which means that no lifting of 1 to modulus 4 is a root of f(x) modulo 4.
$g(x)=x^{2}-17$ and r = 1. Then $g(1)\equiv 0\bmod 2$ and $g'(1)\equiv 0\bmod 2$. However, since $g(1)\equiv 0\bmod 4$,
we can lift our solution to modulus 4 and both lifts (i.e. 1, 3) are solutions. The derivative is still 0 modulo 2, so a priori we don't know whether we can lift them to modulo 8, but in fact we can, since g(1) is 0 mod 8 and g(3) is 0 mod 8, giving solutions at 1, 3, 5, and 7 mod 8. Since of these only g(1) and g(7) are 0 mod 16 we can lift only 1 and 7 to modulo 16, giving 1, 7, 9, and 15 mod 16. Of these, only 7 and 9 give g(x) = 0 mod 32, so these can be raised giving 7, 9, 23, and 25 mod 32. It turns out that for every integer k ≥ 3, there are four liftings of 1 mod 2 to a root of g(x) mod $2^{k}$.
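The lifting pattern just described can be reproduced by brute force. The following Python sketch (illustrative, not from the original article) lists the roots of $g(x)=x^{2}-17$ modulo $2^{k}$ for small k:

```python
# Reproduce the lifting pattern for g(x) = x^2 - 17 at p = 2 by brute force:
# list the roots modulo 2^k and watch how many survive each lift.

def roots_mod_2k(k):
    m = 2 ** k
    return [r for r in range(m) if (r * r - 17) % m == 0]

for k in range(1, 6):
    print(k, roots_mod_2k(k))
# from k = 3 on there are exactly four roots, e.g. [7, 9, 23, 25] mod 32
```

The output matches the text: all of 1, 3, 5, 7 are roots mod 8, only 1, 7, 9, 15 survive mod 16, and 7, 9, 23, 25 survive mod 32.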
== Hensel's lemma for p-adic numbers ==
In the p-adic numbers, where we can make sense of rational numbers modulo powers of p as long as the denominator is not a multiple of p, the recursion from $r_{k}$ (roots mod $p^{k}$) to $r_{k+1}$ (roots mod $p^{k+1}$) can be expressed in a much more intuitive way. Instead of choosing t to be an(y) integer which solves the congruence
$$tf'(r_{k})\equiv -(f(r_{k})/p^{k})\bmod p^{m},$$
let t be the rational number (the $p^{k}$ here is not really a denominator since $f(r_{k})$ is divisible by $p^{k}$):
$$-(f(r_{k})/p^{k})/f'(r_{k}).$$
Then set
$$r_{k+1}=r_{k}+tp^{k}=r_{k}-{\frac {f(r_{k})}{f'(r_{k})}}.$$
This fraction may not be an integer, but it is a p-adic integer, and the sequence of numbers $r_{k}$ converges in the p-adic integers to a root of f(x) = 0. Moreover, the displayed recursive formula for the (new) number $r_{k+1}$ in terms of $r_{k}$ is precisely Newton's method for finding roots of equations in the real numbers.
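The p-adic Newton recursion can be imitated with plain modular arithmetic: division by $f'(r_{k})$ becomes multiplication by a modular inverse. A minimal Python sketch (illustrative; it assumes the starting root is simple, so that $f'(r)$ is invertible mod p):

```python
# Newton/Hensel iteration r -> r - f(r)/f'(r), carried out modulo p^N.
# Assumes the starting root r0 is simple, i.e. f'(r0) is invertible mod p.

def hensel_newton(f, df, r, p, N):
    """Lift a simple root r of f mod p to a root mod p^N."""
    modulus = p
    while modulus < p ** N:
        modulus = min(modulus * modulus, p ** N)  # precision doubles each step
        r = (r - f(r) * pow(df(r), -1, modulus)) % modulus
    return r

# square root of 2 in the 7-adics, to precision 7^8
r = hensel_newton(lambda x: x * x - 2, lambda x: 2 * x, 3, 7, 8)
print(r % 7, (r * r - 2) % 7 ** 8)  # 3 0
```

As with Newton's method over the reals, the precision roughly doubles with each iteration (quadratic convergence in the p-adic absolute value).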
By working directly in the p-adics and using the p-adic absolute value, there is a version of Hensel's lemma which can be applied even if we start with a solution of $f(a)\equiv 0\bmod p$ such that $f'(a)\equiv 0\bmod p$.
We just need to make sure the number $f'(a)$ is not exactly 0. This more general version is as follows: if there is an integer a which satisfies
$$|f(a)|_{p}<|f'(a)|_{p}^{2},$$
then there is a unique p-adic integer b such that $f(b)=0$ and $|b-a|_{p}<|f'(a)|_{p}$.
The construction of b amounts to showing that the recursion from Newton's method with initial value a converges in the p-adics, and we let b be the limit. The uniqueness of b as a root fitting the condition $|b-a|_{p}<|f'(a)|_{p}$ needs additional work.
The statement of Hensel's lemma given above (taking $m=1$) is a special case of this more general version, since the conditions that $f(a)\equiv 0\bmod p$ and $f'(a)\not \equiv 0\bmod p$ say that $|f(a)|_{p}<1$ and $|f'(a)|_{p}=1$.
== Examples ==
Suppose that p is an odd prime and a is a non-zero quadratic residue modulo p. Then Hensel's lemma implies that a has a square root in the ring of p-adic integers $\mathbb {Z} _{p}$. Indeed, let $f(x)=x^{2}-a$.
If r is a square root of a modulo p then:
$$f(r)=r^{2}-a\equiv 0\bmod p\quad {\text{and}}\quad f'(r)=2r\not \equiv 0\bmod p,$$
where the second condition is dependent on the fact that p is odd. The basic version of Hensel's lemma tells us that starting from $r_{1}=r$ we can recursively construct a sequence of integers $\{r_{k}\}$ such that:
$$r_{k+1}\equiv r_{k}\bmod p^{k},\quad r_{k}^{2}\equiv a\bmod p^{k}.$$
This sequence converges to some p-adic integer b which satisfies $b^{2}=a$. In fact, b is the unique square root of a in $\mathbb {Z} _{p}$ congruent to $r_{1}$ modulo p. Conversely, if a is a perfect square in $\mathbb {Z} _{p}$ and it is not divisible by p, then it is a nonzero quadratic residue mod p. Note that the quadratic reciprocity law allows one to easily test whether a is a nonzero quadratic residue mod p; thus we get a practical way to determine which p-adic numbers (for p odd) have a p-adic square root, and it can be extended to cover the case p = 2 using the more general version of Hensel's lemma (an example with 2-adic square roots of 17 is given later).
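The recursive construction of the sequence $r_{k}$ can be written out directly. The following Python sketch (illustrative; the prime 13 and residue 10 are example choices, not from the original text) performs one linear-congruence lift per step:

```python
# Step-by-step Hensel lifting of a square root of a mod p (p odd, a a
# nonzero quadratic residue): from r_k with r_k^2 ≡ a (mod p^k), solve
# t * f'(r_k) ≡ -f(r_k)/p^k (mod p) and set r_{k+1} = r_k + t*p^k.

def lift_sqrt(a, r, p, kmax):
    """Return r with r**2 ≡ a (mod p**kmax), given r**2 ≡ a (mod p)."""
    for k in range(1, kmax):
        z = (r * r - a) // p ** k          # f(r_k)/p^k, an exact integer
        t = (-z * pow(2 * r, -1, p)) % p   # solve t*f'(r_k) ≡ -z (mod p)
        r = r + t * p ** k
    return r

r4 = lift_sqrt(10, 6, 13, 4)   # 6^2 = 36 ≡ 10 (mod 13)
print(r4 % 13, (r4 * r4 - 10) % 13 ** 4)  # 6 0
```

Each pass through the loop adds one more base-p digit to the approximate square root.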
To make the discussion above more explicit, let us find a "square root of 2" (the solution to $x^{2}-2=0$) in the 7-adic integers. Modulo 7 one solution is 3 (we could also take 4), so we set $r_{1}=3$. Hensel's lemma then allows us to find $r_{2}$ as follows:
$$\begin{aligned}f(r_{1})&=3^{2}-2=7\\f(r_{1})/p^{1}&=7/7=1\\f'(r_{1})&=2r_{1}=6\end{aligned}$$
Based on this, the expression
$$tf'(r_{1})\equiv -(f(r_{1})/p^{k})\bmod p$$
turns into:
$$t\cdot 6\equiv -1\bmod 7,$$
which implies $t=1$.
Now:
$$r_{2}=r_{1}+tp^{1}=3+1\cdot 7=10=13_{7}.$$
And sure enough, $10^{2}\equiv 2\bmod 7^{2}$.
(If we had used the Newton method recursion directly in the 7-adics, then $r_{2}=r_{1}-f(r_{1})/f'(r_{1})=3-7/6=11/6$, and $11/6\equiv 10\bmod 7^{2}$.)
We can continue and find $r_{3}=108=3+7+2\cdot 7^{2}=213_{7}$. Each time we carry out the calculation (that is, for each successive value of k), one more base-7 digit is added for the next higher power of 7. In the 7-adic integers this sequence converges, and the limit is a square root of 2 in $\mathbb {Z} _{7}$ which has initial 7-adic expansion
$$3+7+2\cdot 7^{2}+6\cdot 7^{3}+7^{4}+2\cdot 7^{5}+7^{6}+2\cdot 7^{7}+4\cdot 7^{8}+\cdots .$$
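The digits of this expansion can be generated mechanically by the same lifting loop. A short illustrative Python sketch (not part of the original article):

```python
# Compute the first nine 7-adic digits of the square root of 2 that is
# congruent to 3 mod 7, reading off one base-7 digit per lifting step.

def sqrt_digits(a, r, p, ndigits):
    digits = [r % p]
    for k in range(1, ndigits):
        z = (r * r - a) // p ** k          # f(r)/p^k, an exact integer
        t = (-z * pow(2 * r, -1, p)) % p   # next base-p digit
        digits.append(t)
        r += t * p ** k
    return digits, r

digits, r = sqrt_digits(2, 3, 7, 9)
print(digits)  # [3, 1, 2, 6, 1, 2, 1, 2, 4]
```

The digit list matches the coefficients of the expansion displayed above.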
If we started with the initial choice $r_{1}=4$ then Hensel's lemma would produce a square root of 2 in $\mathbb {Z} _{7}$ which is congruent to 4 (mod 7) instead of 3 (mod 7), and in fact this second square root would be the negative of the first square root (which is consistent with 4 ≡ −3 mod 7).
As an example where the original version of Hensel's lemma is not valid but the more general one is, let $f(x)=x^{2}-17$ and $a=1$.
Then $f(a)=-16$ and $f'(a)=2$, so $|f(a)|_{2}<|f'(a)|_{2}^{2}$,
which implies there is a unique 2-adic integer b satisfying
$$b^{2}=17\quad {\text{and}}\quad |b-a|_{2}<|f'(a)|_{2}={\frac {1}{2}},$$
i.e., b ≡ 1 mod 4. There are two square roots of 17 in the 2-adic integers, differing by a sign, and although they are congruent mod 2 they are not congruent mod 4. This is consistent with the general version of Hensel's lemma only giving us a unique 2-adic square root of 17 that is congruent to 1 mod 4 rather than mod 2. If we had started with the initial approximate root a = 3 then we could apply the more general Hensel's lemma again to find a unique 2-adic square root of 17 which is congruent to 3 mod 4. This is the other 2-adic square root of 17.
In terms of lifting the roots of $x^{2}-17$ from modulus $2^{k}$ to $2^{k+1}$, the lifts starting with the root 1 mod 2 are as follows:
1 mod 2 → 1, 3 mod 4
1 mod 4 → 1, 5 mod 8 and 3 mod 4 → 3, 7 mod 8
1 mod 8 → 1, 9 mod 16 and 7 mod 8 → 7, 15 mod 16, while 3 mod 8 and 5 mod 8 don't lift to roots mod 16
9 mod 16 → 9, 25 mod 32 and 7 mod 16 → 7, 23 mod 32, while 1 mod 16 and 15 mod 16 don't lift to roots mod 32.
For every k at least 3, there are four roots of $x^{2}-17$ mod $2^{k}$, but if we look at their 2-adic expansions we can see that in pairs they are converging to just two 2-adic limits. For instance, the four roots mod 32 break up into two pairs of roots which each look the same mod 16:
$9=1+2^{3}$ and $25=1+2^{3}+2^{4}$.
$7=1+2+2^{2}$ and $23=1+2+2^{2}+2^{4}$.
The 2-adic square roots of 17 have expansions
$$1+2^{3}+2^{5}+2^{6}+2^{7}+2^{9}+2^{10}+\cdots $$
$$1+2+2^{2}+2^{4}+2^{8}+2^{11}+\cdots $$
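The first of these expansions can be recovered by brute force. In the Python sketch below (illustrative, not from the original article), we use the fact that among the four roots mod $2^{k}$, exactly two are ≡ 1 mod 4, and those two agree mod $2^{k-1}$; that common value is the truncation of the 2-adic limit b with b ≡ 1 mod 4:

```python
# Brute-force the 2-adic square root of 17 that is ≡ 1 (mod 4) and read
# off the exponents in its binary (2-adic) expansion.

def sqrt17_mod(k):
    """Roots of x^2 - 17 ≡ 0 (mod 2^k) that are ≡ 1 (mod 4)."""
    m = 2 ** k
    return [r for r in range(m) if (r * r - 17) % m == 0 and r % 4 == 1]

roots = sqrt17_mod(12)          # two roots mod 2^12, agreeing mod 2^11
b = roots[0] % 2 ** 11          # truncation of the 2-adic limit
exponents = [i for i in range(11) if (b >> i) & 1]
print(b, exponents)  # 1769 [0, 3, 5, 6, 7, 9, 10]
```

The exponent list matches the terms $1+2^{3}+2^{5}+2^{6}+2^{7}+2^{9}+2^{10}$ of the first expansion.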
Another example where we can use the more general version of Hensel's lemma but not the basic version is a proof that any 3-adic integer c ≡ 1 mod 9 is a cube in $\mathbb {Z} _{3}$. Let $f(x)=x^{3}-c$
and take initial approximation a = 1. The basic Hensel's lemma cannot be used to find roots of f(x) since $f'(r)\equiv 0\bmod 3$ for every r.
for every r. To apply the general version of Hensel's lemma we want
|
f
(
1
)
|
3
<
|
f
′
(
1
)
|
3
2
,
{\displaystyle |f(1)|_{3}<|f'(1)|_{3}^{2},}
which means
c
≡
1
mod
2
7.
{\displaystyle c\equiv 1{\bmod {2}}7.}
That is, if c ≡ 1 mod 27 then the general Hensel's lemma tells us f(x) has a 3-adic root, so c is a 3-adic cube. However, we wanted to have this result under the weaker condition that c ≡ 1 mod 9. If c ≡ 1 mod 9 then c ≡ 1, 10, or 19 mod 27. We can apply the general Hensel's lemma in three separate cases depending on the value of c mod 27: if c ≡ 1 mod 27 then use a = 1, if c ≡ 10 mod 27 then use a = 4 (since 4 is a root of f(x) mod 27), and if c ≡ 19 mod 27 then use a = 7. (It is not true that every c ≡ 1 mod 3 is a 3-adic cube, e.g., 4 is not a 3-adic cube since it is not a cube mod 9.)
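The three cases, and the final parenthetical remark, are easy to check numerically. An illustrative Python sketch:

```python
# Check the three cases used above: each class c ≡ 1 (mod 9) falls into
# c ≡ 1, 10, or 19 (mod 27), with approximate cube roots a = 1, 4, 7.

cases = {1: 1, 10: 4, 19: 7}
for c, a in cases.items():
    assert pow(a, 3, 27) == c          # a^3 ≡ c (mod 27)

# 4 ≡ 1 (mod 3), yet 4 is not a cube mod 9, so c ≡ 1 (mod 3) is not enough:
cubes_mod_9 = {pow(x, 3, 9) for x in range(9)}
print(sorted(cubes_mod_9), 4 in cubes_mod_9)  # [0, 1, 8] False
```
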
In a similar way, after some preliminary work, Hensel's lemma can be used to show that for any odd prime number p, any p-adic integer c congruent to 1 modulo $p^{2}$ is a p-th power in $\mathbb {Z} _{p}$.
(This is false for p = 2.)
== Generalizations ==
Suppose A is a commutative ring, complete with respect to an ideal $\mathfrak {m}$, and let $f(x)\in A[x]$.
An element a ∈ A is called an "approximate root" of f if
$$f(a)\equiv 0\bmod f'(a)^{2}{\mathfrak {m}}.$$
If f has an approximate root then it has an exact root b ∈ A "close to" a; that is,
$$f(b)=0\quad {\text{and}}\quad b\equiv a\bmod {\mathfrak {m}}.$$
Furthermore, if $f'(a)$ is not a zero-divisor then b is unique.
This result can be generalized to several variables as follows:
Theorem. Let A be a commutative ring that is complete with respect to an ideal $\mathfrak {m}\subset A$. Let $f_{1},\ldots ,f_{n}\in A[x_{1},\ldots ,x_{n}]$
be a system of n polynomials in n variables over A. View $\mathbf {f} =(f_{1},\ldots ,f_{n})$ as a mapping from $A^{n}$ to itself, and let $J_{\mathbf {f} }(\mathbf {x} )$
denote its Jacobian matrix. Suppose $\mathbf {a} =(a_{1},\ldots ,a_{n})\in A^{n}$ is an approximate solution to f = 0 in the sense that
$$f_{i}(\mathbf {a} )\equiv 0\bmod (\det J_{\mathbf {f} }(\mathbf {a} ))^{2}{\mathfrak {m}},\qquad 1\leqslant i\leqslant n.$$
Then there is some $\mathbf {b} =(b_{1},\ldots ,b_{n})\in A^{n}$ satisfying f(b) = 0, i.e.,
$$f_{i}(\mathbf {b} )=0,\qquad 1\leqslant i\leqslant n.$$
Furthermore this solution is "close" to a in the sense that
$$b_{i}\equiv a_{i}\bmod \det J_{\mathbf {f} }(\mathbf {a} )\,{\mathfrak {m}},\qquad 1\leqslant i\leqslant n.$$
As a special case, if $f_{i}(\mathbf {a} )\equiv 0\bmod {\mathfrak {m}}$ for all i and $\det J_{\mathbf {f} }(\mathbf {a} )$ is a unit in A, then there is a solution to f(b) = 0 with $b_{i}\equiv a_{i}\bmod {\mathfrak {m}}$ for all i.
When n = 1, $\mathbf {a} =a$ is an element of A and $J_{\mathbf {f} }(\mathbf {a} )=J_{f}(a)=f'(a)$. The hypotheses of this multivariable Hensel's lemma reduce to the ones which were stated in the one-variable Hensel's lemma.
== Related concepts ==
Completeness of a ring is not a necessary condition for the ring to have the Henselian property: Goro Azumaya in 1950 defined a commutative local ring satisfying the Henselian property for the maximal ideal m to be a Henselian ring.
Masayoshi Nagata proved in the 1950s that for any commutative local ring A with maximal ideal m there always exists a smallest ring $A^{h}$ containing A such that $A^{h}$ is Henselian with respect to $\mathfrak {m}A^{h}$. This $A^{h}$ is called the Henselization of A. If A is noetherian, $A^{h}$ will also be noetherian, and $A^{h}$ is manifestly algebraic as it is constructed as a limit of étale neighbourhoods. This means that $A^{h}$ is usually much smaller than the completion $\widehat {A}$ while still retaining the Henselian property and remaining in the same category.
== See also ==
Hasse–Minkowski theorem
Newton polygon
Locally compact field
Lifting-the-exponent lemma
== References ==
Eisenbud, David (1995), Commutative algebra, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-5350-1, ISBN 978-0-387-94269-8, MR 1322960
Milne, J. G. (1980), Étale cohomology, Princeton University Press, ISBN 978-0-691-08238-7 | Wikipedia/Hensel's_lemma |
In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when R has a metric given by a non-Archimedean absolute value.
== General construction ==
Suppose that E is an abelian group with a descending filtration
$$E=F^{0}E\supset F^{1}E\supset F^{2}E\supset \cdots $$
of subgroups. One then defines the completion (with respect to the filtration) as the inverse limit:
$${\widehat {E}}=\varprojlim (E/F^{n}E)=\left\{\left.({\overline {a_{n}}})_{n\geq 0}\in \prod _{n\geq 0}(E/F^{n}E)\;\right|\;a_{i}\equiv a_{j}{\pmod {F^{i}E}}{\text{ for all }}i\leq j\right\}.$$
This is again an abelian group. Usually E is an additive abelian group. If E has additional algebraic structure compatible with the filtration, for instance E is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection of the $F^{i}E$ equals zero, this produces a complete topological ring.
== Krull topology ==
In commutative algebra, the filtration on a commutative ring R by the powers of a proper ideal I determines the Krull (after Wolfgang Krull) or I-adic topology on R. The case of a maximal ideal $I={\mathfrak {m}}$ is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in R is given by the powers $I^{n}$, which are nested and form a descending filtration on R:
$$F^{0}R=R\supset I\supset I^{2}\supset \cdots ,\quad F^{n}R=I^{n}.$$
(Open neighborhoods of any r ∈ R are given by the cosets $r+I^{n}$.) The (I-adic) completion is the inverse limit of the factor rings,
$${\widehat {R}}_{I}=\varprojlim (R/I^{n}),$$
pronounced "R I hat". The kernel of the canonical map π from the ring to its completion is the intersection of the powers of I. Thus π is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring.
There is a related topology on R-modules, also called the Krull or I-adic topology. A basis of open neighborhoods of a module M is given by the sets of the form
$$x+I^{n}M\quad {\text{for }}x\in M.$$
The I-adic completion of an R-module M is the inverse limit of the quotients
$${\widehat {M}}_{I}=\varprojlim (M/I^{n}M).$$
This procedure converts any module over R into a complete topological module over ${\widehat {R}}_{I}$ if I is finitely generated.
== Examples ==
The ring of p-adic integers $\mathbb {Z} _{p}$ is obtained by completing the ring $\mathbb {Z} $ of integers at the ideal (p).
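In concrete terms, an element of this completion is a compatible sequence of residues, one modulo each power of p, exactly as in the inverse-limit definition above. An illustrative Python sketch (the choice of −1 in the 5-adics is an example, not from the original text):

```python
# A p-adic integer is a compatible sequence of residues a_n mod p^n.
# Example: -1 in Z_5 is the sequence 4, 24, 124, ... = 5^n - 1.

p = 5
seq = [(-1) % p ** n for n in range(1, 6)]
print(seq)  # [4, 24, 124, 624, 3124]

# compatibility condition from the inverse limit: a_i ≡ a_j (mod p^i), i <= j
compatible = all(seq[j] % p ** (i + 1) == seq[i]
                 for i in range(len(seq)) for j in range(i, len(seq)))
print(compatible)  # True
```
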
Let R = K[x1,...,xn] be the polynomial ring in n variables over a field K and let $\mathfrak {m}=(x_{1},\ldots ,x_{n})$ be the maximal ideal generated by the variables. Then the completion ${\widehat {R}}_{\mathfrak {m}}$ is the ring K[[x1,...,xn]] of formal power series in n variables over K.
Given a noetherian ring $R$ and an ideal $I=(f_{1},\ldots ,f_{n})$, the $I$-adic completion of $R$ is an image of a formal power series ring, specifically, the image of the surjection
$$\begin{cases}R[[x_{1},\ldots ,x_{n}]]\to {\widehat {R}}_{I}\\x_{i}\mapsto f_{i}\end{cases}$$
The kernel is the ideal $(x_{1}-f_{1},\ldots ,x_{n}-f_{n})$.
Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to $\mathbb {C} [x,y]/(xy)$ and the nodal cubic plane curve $\mathbb {C} [x,y]/(y^{2}-x^{2}(1+x))$ have similar-looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components. Taking the localizations of these rings along the ideal $(x,y)$ and completing gives $\mathbb {C} [[x,y]]/(xy)$ and $\mathbb {C} [[x,y]]/((y+u)(y-u))$ respectively, where $u$ is the formal square root of $x^{2}(1+x)$ in $\mathbb {C} [[x,y]]$.
More explicitly, the power series is:
$$u=x{\sqrt {1+x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)(n!)^{2}(4^{n})}}x^{n+1}.$$
Since both rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
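The coefficient formula for $u=x{\sqrt {1+x}}$ can be verified symbolically with exact rational arithmetic: squaring the truncated series must give $x^{2}(1+x)=x^{2}+x^{3}$ up to the truncation order. An illustrative Python sketch:

```python
# Verify the closed form for the coefficients of u = x*sqrt(1+x) by
# squaring the truncated series and comparing with x^2 * (1 + x).

from fractions import Fraction
from math import factorial

N = 10  # truncation order
# c[n] is the coefficient of x^(n+1) in u, from the displayed formula
c = [Fraction((-1) ** n * factorial(2 * n),
              (1 - 2 * n) * factorial(n) ** 2 * 4 ** n) for n in range(N)]

# coefficients of u as a polynomial in x (degree N), then square it
u = [Fraction(0)] * (N + 1)
for n, cn in enumerate(c):
    u[n + 1] = cn
square = [sum(u[i] * u[k - i] for i in range(k + 1)) for k in range(N + 1)]

print(square[:6])  # equals [0, 0, 1, 1, 0, 0] as exact fractions
```

Only the $x^{2}$ and $x^{3}$ coefficients of the square are nonzero, confirming that $u^{2}=x^{2}(1+x)$ to the computed order.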
== Properties ==
The completion of a Noetherian ring with respect to some ideal is a Noetherian ring.
The completion of a Noetherian local ring with respect to the unique maximal ideal is a Noetherian local ring.
The completion is a functorial operation: a continuous map f: R → S of topological rings gives rise to a map of their completions, ${\widehat {f}}:{\widehat {R}}\to {\widehat {S}}$.
Moreover, if M and N are two modules over the same topological ring R and f: M → N is a continuous module map, then f uniquely extends to a map of the completions ${\widehat {f}}:{\widehat {M}}\to {\widehat {N}}$, where ${\widehat {M}},{\widehat {N}}$ are modules over ${\widehat {R}}$.
The completion of a Noetherian ring R is a flat module over R.
The completion of a finitely generated module M over a Noetherian ring R can be obtained by extension of scalars:
$${\widehat {M}}=M\otimes _{R}{\widehat {R}}.$$
Together with the previous property, this implies that the functor of completion on finitely generated R-modules is exact: it preserves short exact sequences. In particular, taking quotients of rings commutes with completion, meaning that for any quotient R-algebra $R/I$, there is an isomorphism
$${\widehat {R/I}}\cong {\widehat {R}}/{\widehat {I}}.$$
Cohen structure theorem (equicharacteristic case). Let R be a complete local Noetherian commutative ring with maximal ideal $\mathfrak {m}$ and residue field K. If R contains a field, then
$$R\simeq K[[x_{1},\ldots ,x_{n}]]/I$$
for some n and some ideal I (Eisenbud, Theorem 7.7).
== See also ==
Formal scheme
Profinite integer
Locally compact field
Zariski ring
Linear topology
Quasi-unmixed ring
== Citations ==
== References == | Wikipedia/Completion_(ring_theory) |
Combinatorial commutative algebra is a relatively new, rapidly developing mathematical discipline. As the name implies, it lies at the intersection of two more established fields, commutative algebra and combinatorics, and frequently uses methods of one to address problems arising in the other. Less obviously, polyhedral geometry plays a significant role.
One of the milestones in the development of the subject was Richard Stanley's 1975 proof of the Upper Bound Conjecture for simplicial spheres, which was based on earlier work of Melvin Hochster and Gerald Reisner. While the problem can be formulated purely in geometric terms, the methods of the proof drew on commutative algebra techniques.
A signature theorem in combinatorial commutative algebra is the characterization of h-vectors of simplicial polytopes conjectured in 1970 by Peter McMullen. Known as the g-theorem, it was proved in 1979 by Stanley (necessity of the conditions, algebraic argument) and by Louis Billera and Carl W. Lee (sufficiency, combinatorial and geometric construction). A major open question was the extension of this characterization from simplicial polytopes to simplicial spheres, the g-conjecture, which was resolved in 2018 by Karim Adiprasito.
== Important notions of combinatorial commutative algebra ==
Square-free monomial ideal in a polynomial ring and Stanley–Reisner ring of a simplicial complex.
Cohen–Macaulay rings.
Monomial ring, closely related to an affine semigroup ring and to the coordinate ring of an affine toric variety.
Algebra with a straightening law. There are several versions of those, including Hodge algebras of Corrado de Concini, David Eisenbud, and Claudio Procesi.
== See also ==
Algebraic combinatorics
Polyhedral combinatorics
Zero-divisor graph
== References ==
A foundational paper on Stanley–Reisner complexes by one of the pioneers of the theory:
Hochster, Melvin (1977). "Cohen–Macaulay rings, combinatorics, and simplicial complexes". Ring Theory II: Proceedings of the Second Oklahoma Conference. Lecture Notes in Pure and Applied Mathematics. Vol. 26. Dekker. pp. 171–223. ISBN 0-8247-6575-3. OCLC 610144046. Zbl 0351.13009.
The first book is a classic (first edition published in 1983):
Stanley, Richard (1996). Combinatorics and commutative algebra. Progress in Mathematics. Vol. 41 (2nd ed.). Birkhäuser. ISBN 0-8176-3836-9. Zbl 0838.13008.
Very influential, and well written, textbook-monograph:
Bruns, Winfried; Herzog, Jürgen (1993). Cohen–Macaulay rings. Cambridge Studies in Advanced Mathematics. Vol. 39. Cambridge University Press. ISBN 0-521-41068-1. OCLC 802912314. Zbl 0788.13005.
Additional reading:
Villarreal, Rafael H. (2001). Monomial algebras. Monographs and Textbooks in Pure and Applied Mathematics. Vol. 238. Marcel Dekker. ISBN 0-8247-0524-6. Zbl 1002.13010.
Hibi, Takayuki (1992). Algebraic combinatorics on convex polytopes. Glebe, Australia: Carslaw Publications. ISBN 1875399046. OCLC 29023080.
Sturmfels, Bernd (1996). Gröbner bases and convex polytopes. University Lecture Series. Vol. 8. American Mathematical Society. ISBN 0-8218-0487-1. OCLC 907364245. Zbl 0856.13020.
Bruns, Winfried; Gubeladze, Joseph (2009). Polytopes, Rings, and K-Theory. Springer Monographs in Mathematics. Springer. doi:10.1007/b105283. ISBN 978-0-387-76355-2. Zbl 1168.13001.
A recent addition to the growing literature in the field, contains exposition of current research topics:
Miller, Ezra; Sturmfels, Bernd (2005). Combinatorial commutative algebra. Graduate Texts in Mathematics. Vol. 227. Springer. ISBN 0-387-22356-8. Zbl 1066.13001.
Herzog, Jürgen; Hibi, Takayuki (2011). Monomial Ideals. Graduate Texts in Mathematics. Vol. 260. Springer. ISBN 978-0-85729-106-6. Zbl 1206.13001.
Herzog, Jürgen; Hibi, Takayuki; Oshugi, Hidefumi (2018). Binomial Ideals. Graduate Texts in Mathematics. Vol. 279. Springer. ISBN 978-3-319-95349-6. Zbl 1403.13004. | Wikipedia/Combinatorial_commutative_algebra |
In category theory, a branch of mathematics, duality is a correspondence between the properties of a category C and the dual properties of the opposite category Cop. Given a statement regarding the category C, by interchanging the source and target of each morphism as well as interchanging the order of composing two morphisms, a corresponding dual statement is obtained regarding the opposite category Cop. (Cop is formed by reversing every morphism of C.) Duality, as such, is the assertion that truth is invariant under this operation on statements. In other words, if a statement S is true about C, then its dual statement is true about Cop. Also, if a statement is false about C, then its dual has to be false about Cop. (Put compactly: S is true for C if and only if its dual is true for Cop.)
Given a concrete category C, it is often the case that the opposite category Cop per se is abstract. Cop need not be a category that arises from mathematical practice. In this case, another category D is also termed to be in duality with C if D and Cop are equivalent as categories.
In the case when C and its opposite Cop are equivalent, such a category is self-dual.
== Formal definition ==
We define the elementary language of category theory as the two-sorted first order language with objects and morphisms as distinct sorts, together with the relations of an object being the source or target of a morphism and a symbol for composing two morphisms.
Let σ be any statement in this language. We form the dual σop as follows:
Interchange each occurrence of "source" in σ with "target".
Interchange the order of composing morphisms. That is, replace each occurrence of $g\circ f$ with $f\circ g$.
Informally, these conditions state that the dual of a statement is formed by reversing arrows and compositions.
Duality is the observation that σ is true for some category C if and only if σop is true for Cop.
== Examples ==
A morphism f: A → B is a monomorphism if f ∘ g = f ∘ h implies g = h. Performing the dual operation, we get the statement that g ∘ f = h ∘ f implies g = h. This reversed morphism f: B → A is by definition precisely an epimorphism. In short, the property of being a monomorphism is dual to the property of being an epimorphism.
Applying duality, this means that a morphism in some category C is a monomorphism if and only if the reverse morphism in the opposite category Cop (composed by reversing all morphisms in C) is an epimorphism.
An example comes from reversing the direction of inequalities in a partial order. So, if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤new by
x ≤new y if and only if y ≤ x.
This example on orders is a special case, since partial orders correspond to a certain kind of category in which Hom(A,B) (a set of all morphisms from A to B of a category) can have at most one element. In applications to logic, this then looks like a very general description of negation (that is, proofs run in the opposite direction). For example, if we take the opposite of a lattice, we will find that meets and joins have their roles interchanged. This is an abstract form of De Morgan's laws, or of duality applied to lattices.
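As a concrete illustration of meets and joins swapping roles under order reversal, here is a small Python sketch (not from the article; the lattice used is just the chain 1 ≤ 2 ≤ 3 ≤ 4, and the helper names are hypothetical):

```python
# Meet (greatest lower bound) and join (least upper bound) computed from an
# explicit order relation `le` on a finite lattice of elements `elems`.
def meet(le, elems, x, y):
    lower = [z for z in elems if le(z, x) and le(z, y)]
    return next(z for z in lower if all(le(w, z) for w in lower))

def join(le, elems, x, y):
    upper = [z for z in elems if le(x, z) and le(y, z)]
    return next(z for z in upper if all(le(z, w) for w in upper))

elems = [1, 2, 3, 4]
le = lambda a, b: a <= b      # the usual order
op = lambda a, b: le(b, a)    # the opposite (dual) order

# Meet in the opposite order agrees with join in the original order,
# and vice versa -- an instance of De Morgan-style lattice duality.
assert meet(op, elems, 2, 3) == join(le, elems, 2, 3) == 3
assert join(op, elems, 2, 3) == meet(le, elems, 2, 3) == 2
```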
Limits and colimits are dual notions.
Fibrations and cofibrations are examples of dual notions in algebraic topology and homotopy theory. In this context, the duality is often called Eckmann–Hilton duality.
== See also ==
Adjoint functor
Dual object
Duality (mathematics)
Opposite category
Pulation square
== References == | Wikipedia/Duality_(category_theory) |
In algebra, the kernel of a homomorphism is the relation describing how elements in the domain of the homomorphism become related in the image. A homomorphism is a function that preserves the algebraic structure of the domain in its image.
When the algebraic structures involved have an underlying group structure, the kernel is taken to be the preimage of the identity element in the image; that is, it consists of the elements of the domain mapping to the image's identity. For example, the map that sends every integer to its parity (0 if the number is even, 1 if it is odd) is a homomorphism to the integers modulo 2, and its kernel is the even integers, which all have parity 0. The kernel of a homomorphism of group-like structures contains only the identity if and only if the homomorphism is injective, that is, if the inverse image of every element consists of a single element. The kernel can thus be viewed as a measure of the degree to which the homomorphism fails to be injective.
For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and some kernels have received a special name, such as normal subgroups for groups and two-sided ideals for rings. The concept of a kernel has been extended to structures such that the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation.
Kernels allow defining quotient objects (also called quotient algebras in universal algebra). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that the image of a homomorphism is isomorphic to the quotient by the kernel.
== Definition ==
=== Group homomorphisms ===
Let G and H be groups and let f be a group homomorphism from G to H. If eH is the identity element of H, then the kernel of f is the preimage of the singleton set {eH}; that is, the subset of G consisting of all those elements of G that are mapped by f to the element eH.
The kernel is usually denoted ker f (or a variation). In symbols:
ker f = {g ∈ G : f(g) = eH}.
Since a group homomorphism preserves identity elements, the identity element eG of G must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set {eG}.
ker f is a subgroup of G, and moreover it is a normal subgroup. Thus, there is a corresponding quotient group G / (ker f). By the first isomorphism theorem for groups, this quotient is isomorphic to f(G), the image of G under f (which is also a subgroup of H).
=== Ring homomorphisms ===
Let R and S be rings (assumed unital) and let f be a ring homomorphism from R to S.
If 0S is the zero element of S, then the kernel of f is its kernel as a homomorphism of additive groups. It is the preimage of the zero ideal {0S}; that is, the subset of R consisting of all those elements of R that are mapped by f to the element 0S.
The kernel is usually denoted ker f (or a variation).
In symbols:
ker f = {r ∈ R : f(r) = 0S}.
Since a ring homomorphism preserves zero elements, the zero element 0R of R must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {0R}.
This is always the case if R is a field, and S is not the zero ring.
Since ker f contains the multiplicative identity only when S is the zero ring, it turns out that the kernel is generally not a subring of R. The kernel is a subrng, and, more precisely, a two-sided ideal of R.
Thus, it makes sense to speak of the quotient ring R / (ker f).
The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S).
=== Linear maps ===
Let V and W be vector spaces over a field (or more generally, modules over a ring) and let T be a linear map from V to W. If 0W is the zero vector of W, then the kernel of T (or null space) is the preimage of the zero subspace {0W}; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0W. The kernel is usually denoted as ker T, or some variation thereof:
ker T = {v ∈ V : T(v) = 0W}.
Since a linear map preserves zero vectors, the zero vector 0V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace.
The kernel ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space V / (ker T). The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.
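The counting analogue of this dimension formula can be checked by brute force over a finite field. The Python sketch below is an illustration, not from the article; the linear functional T(x, y, z) = x + 2y + 3z over GF(5) is a hypothetical example:

```python
# For a linear map between finite vector spaces, |domain| = |kernel| * |image|,
# the counting counterpart of dim V = dim(ker T) + dim(im T).
from itertools import product

p = 5
domain = list(product(range(p), repeat=3))      # GF(5)^3, 125 vectors
T = lambda v: (v[0] + 2 * v[1] + 3 * v[2]) % p  # a linear functional

kernel = [v for v in domain if T(v) == 0]       # 25 vectors (a plane)
image = {T(v) for v in domain}                  # all of GF(5)

assert len(kernel) * len(image) == len(domain)  # 25 * 5 == 125
```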
=== Module homomorphisms ===
Let R be a ring, and let M and N be R-modules. If φ: M → N is a module homomorphism, then the kernel is defined to be:
ker φ = {m ∈ M : φ(m) = 0}
Every kernel is a submodule of the domain module, which means they always contain 0, the additive identity of the module. Kernels of abelian groups can be considered a particular kind of module kernel when the underlying ring is the integers.
== Survey of examples ==
=== Group homomorphisms ===
Let G be the cyclic group on 6 elements {0, 1, 2, 3, 4, 5} with modular addition, H the cyclic group on 2 elements {0, 1} with modular addition, and f the homomorphism that maps each element g in G to the element g modulo 2 in H. Then ker f = {0, 2, 4}, since all these elements are mapped to 0H. The quotient group G / (ker f) has two elements: {0, 2, 4} and {1, 3, 5}, and is isomorphic to H.
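The kernel and the cosets in this example can be computed directly; a small Python sketch (illustrative only):

```python
# f reduces Z/6 modulo 2; its kernel is the set of even residues.
G = range(6)
f = lambda g: g % 2

kernel = {g for g in G if f(g) == 0}
# The cosets of the kernel partition G and are the elements of G / ker f.
cosets = {frozenset((g + k) % 6 for k in kernel) for g in G}

assert kernel == {0, 2, 4}
assert cosets == {frozenset({0, 2, 4}), frozenset({1, 3, 5})}
```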
Given an isomorphism φ: G → H, one has ker φ = 1. On the other hand, if φ is merely a homomorphism and H is the trivial group, then φ(g) = 1 for all g ∈ G, so ker φ = G.
Let φ: ℝ² → ℝ be the map defined by φ((x, y)) = x. Then φ is a homomorphism whose kernel consists precisely of the points of the form (0, y); this mapping is the projection onto the x-axis. A similar phenomenon occurs with the mapping f: (ℝ×)² → ℝ× defined by f(a, b) = b, whose kernel consists of the points of the form (a, 1).
For a non-abelian example, let Q8 denote the quaternion group and V4 the Klein four-group. Define a mapping φ: Q8 → V4 by:
φ(±1) = 1
φ(±i) = a
φ(±j) = b
φ(±k) = c
Then this mapping is a homomorphism with ker φ = {±1}.
=== Ring homomorphisms ===
Consider the mapping φ: ℤ → ℤ/2ℤ, where the latter ring is the integers modulo 2 and the map sends each number to its parity: 0 for even numbers and 1 for odd numbers. This mapping is a homomorphism, and since the additive identity of ℤ/2ℤ is 0, the kernel is precisely the even numbers.
Let φ: ℚ[x] → ℚ be defined by φ(p(x)) = p(0). This mapping, which happens to be a homomorphism, sends each polynomial to its constant term. It maps a polynomial to zero if and only if the polynomial's constant term is 0. Polynomials with real coefficients admit a similar homomorphism, whose kernel is the polynomials with constant term 0.
=== Linear maps ===
Let φ: ℂ³ → ℂ be defined by φ(x, y, z) = x + 2y + 3z. Then the kernel of φ (that is, its null space) is the set of points (x, y, z) ∈ ℂ³ such that x + 2y + 3z = 0, and this set is a subspace of ℂ³ (as is every kernel of a linear map).
If D represents the derivative operator on real polynomials, then the kernel of D consists of the polynomials with derivative equal to 0, that is, the constant functions.
Consider the mapping (Tp)(x) = x²p(x), where p is a polynomial with real coefficients. Then T is a linear map whose kernel consists precisely of the zero polynomial, since that is the only polynomial satisfying x²p(x) = 0 for all x ∈ ℝ.
== Quotient algebras ==
The kernel of a homomorphism can be used to define a quotient algebra. For instance, if φ: G → H is a group homomorphism and K = ker φ, consider G/K, the set of fibers of the homomorphism φ, where a fiber is the set of points of the domain mapping to a single chosen point in the range. If Xa ∈ G/K denotes the fiber of the element a ∈ H, then a group operation on the set of fibers is given by XaXb = Xab, and G/K is called the quotient group (or factor group), to be read as "G modulo K" or "G mod K". The terminology arises from the fact that the kernel is the fiber of the identity element of H, and the remaining fibers are simply "translates" of the kernel, so the quotient group is obtained by "dividing out" by the kernel.
The fibers can also be described by looking at the domain relative to the kernel: given X ∈ G/K and any element u ∈ X, then X = uK = Ku, where:
uK = {uk : k ∈ K}
Ku = {ku : k ∈ K}
These sets are called the left and right cosets respectively, and can be defined for any subgroup, not only the kernel. The group operation can then be defined as uK ∘ vK = (uv)K, which is well-defined regardless of the choice of representatives of the fibers.
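That well-definedness can be verified exhaustively for a small case; the Python sketch below (illustrative) uses K = {0, 2, 4} inside ℤ/6 with addition as the group operation:

```python
# The coset product uK * vK = (u+v)K must not depend on which
# representatives u2 of uK and v2 of vK are chosen.
K = frozenset({0, 2, 4})                        # kernel of Z/6 -> Z/2
coset = lambda u: frozenset((u + k) % 6 for k in K)

well_defined = all(
    coset(u2 + v2) == coset(u + v)
    for u in range(6) for v in range(6)
    for u2 in coset(u) for v2 in coset(v)       # all representative choices
)
assert well_defined
```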
According to the first isomorphism theorem, there is an isomorphism μ: G/K → φ(G), where the latter group is the image of the homomorphism φ; the isomorphism is defined by μ(uK) = φ(u), and this map is likewise well-defined.
For rings, modules, and vector spaces, the respective quotient algebras can be defined via the underlying additive group structure, with cosets written as x + K. Ring multiplication can be defined on the quotient algebra in the same way as in the group case (and is well-defined). For a ring R (possibly a field, in the case of vector spaces) and a module homomorphism φ: M → N with kernel K = ker φ, scalar multiplication on M/K is defined by r(x + K) = rx + K for r ∈ R and x ∈ M, which is also well-defined.
== Kernel structures ==
The structure of kernels allows the building of quotient algebras from structures satisfying the properties of kernels. For any subgroup N of a group G, one can form the quotient set G/N of all cosets of N in G. The natural way to turn this into a group, mirroring the treatment of the quotient by a kernel, is to define an operation on (left) cosets by uN · vN = (uv)N; however, this operation is well defined if and only if the subgroup N is closed under conjugation in G, that is, if g ∈ G and n ∈ N, then gng⁻¹ ∈ N. Furthermore, the operation being well defined is sufficient for the quotient to be a group. Subgroups satisfying this property are called normal subgroups. Every kernel of a group homomorphism is a normal subgroup; conversely, for a given normal subgroup N of a group G, the natural projection π(g) = gN is a homomorphism with ker π = N, so the normal subgroups are precisely the subgroups which arise as kernels. Closure under conjugation thus gives a criterion for when a subgroup is the kernel of some homomorphism.
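The conjugation criterion can be tested directly in a small non-abelian group. The Python sketch below (illustrative; permutations of {0, 1, 2} are represented as tuples) checks that the alternating subgroup of S3 is normal while a subgroup generated by a single transposition is not:

```python
# Normality test: N is normal in G iff g n g^{-1} stays in N
# for every g in G and n in N.
from itertools import permutations

compose = lambda s, t: tuple(s[t[i]] for i in range(3))        # s after t
inverse = lambda s: tuple(sorted(range(3), key=lambda i: s[i]))

S3 = list(permutations(range(3)))
e = (0, 1, 2)

def is_normal(N):
    return all(compose(compose(g, n), inverse(g)) in N for g in S3 for n in N)

A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}  # the even permutations
H = {e, (1, 0, 2)}                      # generated by one transposition

assert is_normal(A3)       # A3 is a kernel (of the sign homomorphism)
assert not is_normal(H)    # H is not the kernel of any homomorphism
```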
For a ring R, treating its additive structure as a group, one can take the quotient group by an arbitrary subgroup I of the ring, which is automatically normal since the ring's additive group is abelian. To define multiplication on R/I, the multiplication of cosets, defined as (r + I)(s + I) = rs + I, needs to be well-defined. Taking representatives r + α and s + β of r + I and s + I respectively, for r, s ∈ R and α, β ∈ I, yields:
(r + α)(s + β) + I = rs + I
Setting r = s = 0 implies that I is closed under multiplication, while setting α = s = 0 shows that rβ ∈ I, that is, I is closed under multiplication by arbitrary elements on the left. Similarly, taking r = β = 0 implies that I is also closed under multiplication by arbitrary elements on the right. Any subgroup of R that is closed under multiplication by any element of the ring is called an ideal. Analogously to normal subgroups, the ideals of a ring are precisely the kernels of ring homomorphisms.
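A small Python sketch (illustrative) checks the ideal conditions for the subset {0, 2, 4} of the ring ℤ/6:

```python
# An ideal must be an additive subgroup that absorbs multiplication
# by arbitrary ring elements.
I = {0, 2, 4}
R = range(6)

is_additive_subgroup = all((a + b) % 6 in I for a in I for b in I)
absorbs_multiplication = all((r * a) % 6 in I for r in R for a in I)

# {0, 2, 4} is an ideal -- indeed the kernel of reduction Z/6 -> Z/2.
assert is_additive_subgroup and absorbs_multiplication
```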
== Exact sequence ==
Kernels are used to define exact sequences of homomorphisms for groups and modules. If A, B, and C are modules, then a pair of homomorphisms ψ: A → B, φ: B → C is said to be exact if image ψ = ker φ. An exact sequence is then a sequence of modules and homomorphisms
⋯ → Xn−1 → Xn → Xn+1 → ⋯
where each adjacent pair of homomorphisms is exact.
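Exactness at the middle term can be checked concretely for a short example. The Python sketch below (illustrative, not from the article) uses the sequence ℤ/4 → ℤ/4 → ℤ/2, where the first map is multiplication by 2 and the second is reduction mod 2:

```python
# Exactness at the middle module: image(psi) == kernel(phi).
psi = lambda x: (2 * x) % 4   # Z/4 -> Z/4, multiplication by 2
phi = lambda x: x % 2         # Z/4 -> Z/2, reduction mod 2

image_psi = {psi(x) for x in range(4)}
kernel_phi = {x for x in range(4) if phi(x) == 0}

assert image_psi == kernel_phi == {0, 2}
```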
== Universal algebra ==
All the above cases may be unified and generalized in universal algebra. Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B. Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B. The kernel is usually denoted ker f (or a variation). In symbols:
ker f = {(a, b) ∈ A × A : f(a) = f(b)}.
The homomorphism f is injective if and only if its kernel is exactly the diagonal set {(a, a) : a ∈ A}, which is always contained in the kernel.
It is easy to see that ker f is an equivalence relation on A, and in fact a congruence relation.
Thus, it makes sense to speak of the quotient algebra A / (ker f).
The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
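A Python sketch of this relational kernel for the parity map on {0, …, 5} (illustrative only):

```python
# The universal-algebra kernel of f is the set of pairs with equal images;
# it is an equivalence (indeed congruence) relation on the domain.
A = range(6)
f = lambda a: a % 2

ker = {(a, b) for a in A for b in A if f(a) == f(b)}
diagonal = {(a, a) for a in A}

assert diagonal <= ker   # the diagonal is always contained in ker f
assert ker != diagonal   # so this f is not injective
```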
== See also ==
Kernel (linear algebra)
Kernel (category theory)
Kernel of a function
Equalizer (mathematics)
Zero set
== Notes ==
== References ==
Axler, Sheldon. Linear Algebra Done Right (4th ed.). Springer.
Burris, Stanley; Sankappanavar, H.P. (2012). A Course in Universal Algebra (Millennium ed.). S. Burris and H.P. Sankappanavar. ISBN 978-0-9880552-0-9.
Dummit, David Steven; Foote, Richard M. (2004). Abstract algebra (3rd ed.). Hoboken, NJ: Wiley. ISBN 978-0-471-43334-7.
Fraleigh, John B.; Katz, Victor (2003). A first course in abstract algebra. World student series (7th ed.). Boston: Addison-Wesley. ISBN 978-0-201-76390-4.
Hungerford, Thomas W. (2014). Abstract Algebra: an introduction (3rd ed.). Boston, MA: Brooks/Cole, Cengage Learning. ISBN 978-1-111-56962-4.
McKenzie, Ralph; McNulty, George F.; Taylor, W. (1987). Algebras, lattices, varieties. The Wadsworth & Brooks/Cole mathematics series. Monterey, Calif: Wadsworth & Brooks/Cole Advanced Books & Software. ISBN 978-0-534-07651-1. | Wikipedia/Kernel_(algebra) |
In mathematics, specifically algebraic geometry, a scheme is a structure that enlarges the notion of algebraic variety in several ways, such as taking account of multiplicities (the equations x = 0 and x2 = 0 define the same algebraic variety but different schemes) and allowing "varieties" defined over any commutative ring (for example, Fermat curves are defined over the integers).
Scheme theory was introduced by Alexander Grothendieck in 1960 in his treatise Éléments de géométrie algébrique (EGA); one of its aims was developing the formalism needed to solve deep problems of algebraic geometry, such as the Weil conjectures (the last of which was proved by Pierre Deligne). Strongly based on commutative algebra, scheme theory allows a systematic use of methods of topology and homological algebra. Scheme theory also unifies algebraic geometry with much of number theory, which eventually led to Wiles's proof of Fermat's Last Theorem.
Schemes elaborate the fundamental idea that an algebraic variety is best analyzed through the coordinate ring of regular algebraic functions defined on it (or on its subsets), and each subvariety corresponds to the ideal of functions which vanish on the subvariety. Intuitively, a scheme is a topological space consisting of closed points which correspond to geometric points, together with non-closed points which are generic points of irreducible subvarieties. The space is covered by an atlas of open sets, each endowed with a coordinate ring of regular functions, with specified coordinate changes between the functions over intersecting open sets. Such a structure is called a ringed space or a sheaf of rings. The cases of main interest are the Noetherian schemes, in which the coordinate rings are Noetherian rings.
Formally, a scheme is a ringed space covered by affine schemes. An affine scheme is the spectrum of a commutative ring; its points are the prime ideals of the ring, and its closed points are maximal ideals. The coordinate ring of an affine scheme is the ring itself, and the coordinate rings of open subsets are rings of fractions.
The relative point of view is that much of algebraic geometry should be developed for a morphism X → Y of schemes (called a scheme X over the base Y ), rather than for an individual scheme. For example, in studying algebraic surfaces, it can be useful to consider families of algebraic surfaces over any scheme Y. In many cases, the family of all varieties of a given type can itself be viewed as a variety or scheme, known as a moduli space.
For some of the detailed definitions in the theory of schemes, see the glossary of scheme theory.
== Development ==
The origins of algebraic geometry mostly lie in the study of polynomial equations over the real numbers. By the 19th century, it became clear (notably in the work of Jean-Victor Poncelet and Bernhard Riemann) that algebraic geometry over the real numbers is simplified by working over the field of complex numbers, which has the advantage of being algebraically closed. The early 20th century saw analogies between algebraic geometry and number theory, suggesting the question: can algebraic geometry be developed over other fields, such as those with positive characteristic, and more generally over number rings like the integers, where the tools of topology and complex analysis used to study complex varieties do not seem to apply?
Hilbert's Nullstellensatz suggests an approach to algebraic geometry over any algebraically closed field k : the maximal ideals in the polynomial ring k[x1, ... , xn] are in one-to-one correspondence with the set kn of n-tuples of elements of k, and the prime ideals correspond to the irreducible algebraic sets in kn, known as affine varieties. Motivated by these ideas, Emmy Noether and Wolfgang Krull developed commutative algebra in the 1920s and 1930s. Their work generalizes algebraic geometry in a purely algebraic direction, generalizing the study of points (maximal ideals in a polynomial ring) to the study of prime ideals in any commutative ring. For example, Krull defined the dimension of a commutative ring in terms of prime ideals and, at least when the ring is Noetherian, he proved that this definition satisfies many of the intuitive properties of geometric dimension.
Noether and Krull's commutative algebra can be viewed as an algebraic approach to affine algebraic varieties. However, many arguments in algebraic geometry work better for projective varieties, essentially because they are compact. From the 1920s to the 1940s, B. L. van der Waerden, André Weil and Oscar Zariski applied commutative algebra as a new foundation for algebraic geometry in the richer setting of projective (or quasi-projective) varieties. In particular, the Zariski topology is a useful topology on a variety over any algebraically closed field, replacing to some extent the classical topology on a complex variety (based on the metric topology of the complex numbers).
For applications to number theory, van der Waerden and Weil formulated algebraic geometry over any field, not necessarily algebraically closed. Weil was the first to define an abstract variety (not embedded in projective space), by gluing affine varieties along open subsets, on the model of abstract manifolds in topology. He needed this generality for his construction of the Jacobian variety of a curve over any field. (Later, Jacobians were shown to be projective varieties by Weil, Chow and Matsusaka.)
The algebraic geometers of the Italian school had often used the somewhat foggy concept of the generic point of an algebraic variety. What is true for the generic point is true for "most" points of the variety. In Weil's Foundations of Algebraic Geometry (1946), generic points are constructed by taking points in a very large algebraically closed field, called a universal domain. This worked awkwardly: there were many different generic points for the same variety. (In the later theory of schemes, each algebraic variety has a single generic point.)
In the 1950s, Claude Chevalley, Masayoshi Nagata and Jean-Pierre Serre, motivated in part by the Weil conjectures relating number theory and algebraic geometry, further extended the objects of algebraic geometry, for example by generalizing the base rings allowed. The word scheme was first used in the 1956 Chevalley Seminar, in which Chevalley pursued Zariski's ideas. According to Pierre Cartier, it was André Martineau who suggested to Serre the possibility of using the spectrum of an arbitrary commutative ring as a foundation for algebraic geometry.
== Origin of schemes ==
The theory took its definitive form in Grothendieck's Éléments de géométrie algébrique (EGA) and the later Séminaire de géométrie algébrique (SGA), bringing to a conclusion a generation of experimental suggestions and partial developments. Grothendieck defined the spectrum X of a commutative ring R as the space of prime ideals of R with a natural topology (known as the Zariski topology), but augmented it with a sheaf of rings: to every open subset U he assigned a commutative ring OX(U), which may be thought of as the coordinate ring of regular functions on U. These objects Spec(R) are the affine schemes; a general scheme is then obtained by "gluing together" affine schemes.
Much of algebraic geometry focuses on projective or quasi-projective varieties over a field
k
{\displaystyle k}
, most often over the complex numbers. Grothendieck developed a large body of theory for arbitrary schemes extending much of the geometric intuition for varieties. For example, it is common to construct a moduli space first as a scheme, and only later study whether it is a more concrete object such as a projective variety. Applying Grothendieck's theory to schemes over the integers and other number fields led to powerful new perspectives in number theory.
== Definition ==
An affine scheme is a locally ringed space isomorphic to the spectrum Spec(R) of a commutative ring R. A scheme is a locally ringed space X admitting a covering by open sets Ui, such that each Ui (as a locally ringed space) is an affine scheme. In particular, X comes with a sheaf OX, which assigns to every open subset U a commutative ring OX(U), called the ring of regular functions on U. One can think of a scheme as being covered by "coordinate charts" that are affine schemes. The definition means exactly that schemes are obtained by gluing together affine schemes using the Zariski topology.
In the early days, this was called a prescheme, and a scheme was defined to be a separated prescheme. The term prescheme has fallen out of use, but can still be found in older books, such as Grothendieck's Éléments de géométrie algébrique and Mumford's Red Book. The sheaf properties of OX mean that its elements, which are not necessarily functions, can nevertheless be patched together from their restrictions in the same way as functions.
A basic example of an affine scheme is affine n-space over a field k, for a natural number n. By definition, A^n_k is the spectrum of the polynomial ring k[x1, …, xn]. In the spirit of scheme theory, affine n-space can in fact be defined over any commutative ring R, meaning Spec(R[x1, …, xn]).
== The category of schemes ==
Schemes form a category, with morphisms defined as morphisms of locally ringed spaces. (See also: morphism of schemes.) For a scheme Y, a scheme X over Y (or a Y-scheme) means a morphism X → Y of schemes. A scheme X over a commutative ring R means a morphism X → Spec(R).
An algebraic variety over a field k can be defined as a scheme over k with certain properties. There are different conventions about exactly which schemes should be called varieties. One standard choice is that a variety over k means an integral separated scheme of finite type over k.
A morphism f: X → Y of schemes determines a pullback homomorphism on the rings of regular functions, f*: O(Y) → O(X). In the case of affine schemes, this construction gives a one-to-one correspondence between morphisms Spec(A) → Spec(B) of schemes and ring homomorphisms B → A. In this sense, scheme theory completely subsumes the theory of commutative rings.
Since Z is an initial object in the category of commutative rings, the category of schemes has Spec(Z) as a terminal object.
For a scheme X over a commutative ring R, an R-point of X means a section of the morphism X → Spec(R). One writes X(R) for the set of R-points of X. In examples, this definition reconstructs the old notion of the set of solutions of the defining equations of X with values in R. When R is a field k, X(k) is also called the set of k-rational points of X.
More generally, for a scheme X over a commutative ring R and any commutative R-algebra S, an S-point of X means a morphism Spec(S) → X over R. One writes X(S) for the set of S-points of X. (This generalizes the old observation that given some equations over a field k, one can consider the set of solutions of the equations in any field extension E of k.) For a scheme X over R, the assignment S ↦ X(S) is a functor from commutative R-algebras to sets. It is an important observation that a scheme X over R is determined by this functor of points.
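As a toy illustration of the functor of points (a drastic simplification, not from the article): for the affine scheme cut out by the single equation y² = x³ + 1 over the integers, the set of R-points over a finite ring R = ℤ/p can be enumerated directly, and the same equation yields different point sets over different rings:

```python
# The functor of points sends a ring R to the set of solutions of the
# defining equations with coordinates in R. Here R = Z/p for a prime p.
def points(p):
    return {(x, y) for x in range(p) for y in range(p)
            if (y * y - x * x * x - 1) % p == 0}

# (0, 1) solves y^2 = x^3 + 1 over every Z/p, but the full point sets differ.
assert (0, 1) in points(5) and (0, 1) in points(7)
```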
The fiber product of schemes always exists. That is, for any schemes X and Z with morphisms to a scheme Y, the categorical fiber product X ×Y Z exists in the category of schemes. If X and Z are schemes over a field k, their fiber product over Spec(k) may be called the product X × Z in the category of k-schemes. For example, the product of affine spaces A^m and A^n over k is affine space A^(m+n) over k.
Since the category of schemes has fiber products and also a terminal object Spec(Z), it has all finite limits.
== Examples ==
Here and below, all the rings considered are commutative.
=== Affine space ===
Let k be an algebraically closed field. The affine space {\displaystyle {\bar {X}}=\mathbb {A} _{k}^{n}} is the algebraic variety of all points {\displaystyle a=(a_{1},\ldots ,a_{n})} with coordinates in k; its coordinate ring is the polynomial ring {\displaystyle R=k[x_{1},\ldots ,x_{n}]}. The corresponding scheme {\displaystyle X=\mathrm {Spec} (R)} is a topological space with the Zariski topology, whose closed points are the maximal ideals {\displaystyle {\mathfrak {m}}_{a}=(x_{1}-a_{1},\ldots ,x_{n}-a_{n})}, the set of polynomials vanishing at {\displaystyle a}. The scheme also contains a non-closed point for each non-maximal prime ideal {\displaystyle {\mathfrak {p}}\subset R}, whose vanishing defines an irreducible subvariety {\displaystyle {\bar {V}}={\bar {V}}({\mathfrak {p}})\subset {\bar {X}}}; the topological closure of the scheme point {\displaystyle {\mathfrak {p}}} is the subscheme {\displaystyle V({\mathfrak {p}})=\{{\mathfrak {q}}\in X\ \ {\text{with}}\ \ {\mathfrak {p}}\subset {\mathfrak {q}}\}}, in particular including all the closed points of the subvariety, i.e. {\displaystyle {\mathfrak {m}}_{a}} with {\displaystyle a\in {\bar {V}}}, or equivalently {\displaystyle {\mathfrak {p}}\subset {\mathfrak {m}}_{a}}.
The scheme {\displaystyle X} has a basis of open subsets given by the complements of hypersurfaces, {\displaystyle U_{f}=X\setminus V(f)=\{{\mathfrak {p}}\in X\ \ {\text{with}}\ \ f\notin {\mathfrak {p}}\}} for irreducible polynomials {\displaystyle f\in R}. This set is endowed with its coordinate ring of regular functions {\displaystyle {\mathcal {O}}_{X}(U_{f})=R[f^{-1}]=\left\{{\tfrac {r}{f^{m}}}\ \ {\text{for}}\ \ r\in R,\ m\in \mathbb {Z} _{\geq 0}\right\}.} This induces a unique sheaf {\displaystyle {\mathcal {O}}_{X}} which gives the usual ring of rational functions regular on a given open set {\displaystyle U}.
Each ring element {\displaystyle r=r(x_{1},\ldots ,x_{n})\in R}, a polynomial function on {\displaystyle {\bar {X}}}, also defines a function on the points of the scheme {\displaystyle X} whose value at {\displaystyle {\mathfrak {p}}} lies in the quotient ring {\displaystyle R/{\mathfrak {p}}}, the residue ring. We define {\displaystyle r({\mathfrak {p}})} as the image of {\displaystyle r} under the natural map {\displaystyle R\to R/{\mathfrak {p}}}. A maximal ideal {\displaystyle {\mathfrak {m}}_{a}} gives the residue field {\displaystyle k({\mathfrak {m}}_{a})=R/{\mathfrak {m}}_{a}\cong k}, with the natural isomorphism {\displaystyle x_{i}\mapsto a_{i}}, so that {\displaystyle r({\mathfrak {m}}_{a})} corresponds to the original value {\displaystyle r(a)}.
The vanishing locus of a polynomial {\displaystyle f=f(x_{1},\ldots ,x_{n})} is a hypersurface subvariety {\displaystyle {\bar {V}}(f)\subset \mathbb {A} _{k}^{n}}, corresponding to the principal ideal {\displaystyle (f)\subset R}. The corresponding scheme is {\textstyle V(f)=\operatorname {Spec} (R/(f))}, a closed subscheme of affine space. For example, taking k to be the complex or real numbers, the equation {\displaystyle x^{2}=y^{2}(y+1)} defines a nodal cubic curve in the affine plane {\displaystyle \mathbb {A} _{k}^{2}}, corresponding to the scheme {\displaystyle V=\operatorname {Spec} k[x,y]/(x^{2}-y^{2}(y+1))}.
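The node in this example can be checked by hand or by machine. The following SymPy sketch (an illustration added here; the article itself does not rely on software) verifies that the defining polynomial of the nodal cubic and both of its partial derivatives vanish at the origin, which is exactly the condition for a singular point of a plane curve.

```python
# Check that the nodal cubic x^2 = y^2(y+1) is singular at the origin:
# f and both partial derivatives vanish there.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 - y**2 * (y + 1)

values = [f, sp.diff(f, x), sp.diff(f, y)]
at_origin = [v.subs({x: 0, y: 0}) for v in values]
print(at_origin)  # [0, 0, 0] -- the origin is a singular point (the node)
```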
=== Spec of the integers ===
The ring of integers {\displaystyle \mathbb {Z} } can be considered as the coordinate ring of the scheme {\displaystyle Z=\operatorname {Spec} (\mathbb {Z} )}. The Zariski topology has closed points {\displaystyle {\mathfrak {m}}_{p}=(p)}, the principal ideals of the prime numbers {\displaystyle p\in \mathbb {Z} }; as well as the generic point {\displaystyle {\mathfrak {p}}_{0}=(0)}, the zero ideal, whose closure is the whole scheme. Closed sets are finite sets, and open sets are their complements, the cofinite sets; any infinite set of points is dense.
The basis open set corresponding to the irreducible element {\displaystyle p\in \mathbb {Z} } is {\displaystyle U_{p}=Z\smallsetminus \{{\mathfrak {m}}_{p}\}}, with coordinate ring {\displaystyle {\mathcal {O}}_{Z}(U_{p})=\mathbb {Z} [p^{-1}]=\{{\tfrac {n}{p^{m}}}\ {\text{for}}\ n\in \mathbb {Z} ,\ m\geq 0\}}. For the open set {\displaystyle U=Z\smallsetminus \{{\mathfrak {m}}_{p_{1}},\ldots ,{\mathfrak {m}}_{p_{\ell }}\}}, this induces {\displaystyle {\mathcal {O}}_{Z}(U)=\mathbb {Z} [p_{1}^{-1},\ldots ,p_{\ell }^{-1}]}.
A number {\displaystyle n\in \mathbb {Z} } corresponds to a function on the scheme {\displaystyle Z}, a function whose value at {\displaystyle {\mathfrak {m}}_{p}} lies in the residue field {\displaystyle k({\mathfrak {m}}_{p})=\mathbb {Z} /(p)=\mathbb {F} _{p}}, the finite field of integers modulo {\displaystyle p}: the function is defined by {\displaystyle n({\mathfrak {m}}_{p})=n\ {\text{mod}}\ p}, and also {\displaystyle n({\mathfrak {p}}_{0})=n} in the generic residue ring {\displaystyle \mathbb {Z} /(0)=\mathbb {Z} }. The function {\displaystyle n} is determined by its values at the points {\displaystyle {\mathfrak {m}}_{p}} only, so we can think of {\displaystyle n} as a kind of "regular function" on the closed points, a very special type among the arbitrary functions {\displaystyle f} with {\displaystyle f({\mathfrak {m}}_{p})\in \mathbb {F} _{p}}.
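This point of view is easy to make concrete. The short Python sketch below (an illustration added here, using only the standard library) evaluates the "function" n = 60 at the closed points (p) of Spec(Z) by reducing modulo p, and records where it vanishes, namely at the primes dividing 60.

```python
# View an integer n as a "function" on Spec(Z): its value at the closed
# point (p) is n mod p, and it vanishes exactly at the primes dividing n.

def primes_up_to(bound):
    # trial division is fine for this tiny range
    return [p for p in range(2, bound) if all(p % d for d in range(2, p))]

n = 60
values = {p: n % p for p in primes_up_to(20)}
vanishing = sorted(p for p, v in values.items() if v == 0)
print(vanishing)  # [2, 3, 5] -- the prime factors of 60
```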
Note that the point {\displaystyle {\mathfrak {m}}_{p}} is the vanishing locus of the function {\displaystyle n=p}, the point where the value of {\displaystyle p} is equal to zero in the residue field. The field of "rational functions" on {\displaystyle Z} is the fraction field of the generic residue ring, {\displaystyle k({\mathfrak {p}}_{0})=\operatorname {Frac} (\mathbb {Z} )=\mathbb {Q} }. A fraction {\displaystyle a/b} has "poles" at the points {\displaystyle {\mathfrak {m}}_{p}} corresponding to prime divisors of the denominator.
This also gives a geometric interpretation of Bézout's lemma, which states that if the integers {\displaystyle n_{1},\ldots ,n_{r}} have no common prime factor, then there are integers {\displaystyle a_{1},\ldots ,a_{r}} with {\displaystyle a_{1}n_{1}+\cdots +a_{r}n_{r}=1}. Geometrically, this is a version of the weak Hilbert Nullstellensatz for the scheme {\displaystyle Z}: if the functions {\displaystyle n_{1},\ldots ,n_{r}} have no common vanishing points {\displaystyle {\mathfrak {m}}_{p}} in {\displaystyle Z}, then they generate the unit ideal {\displaystyle (n_{1},\ldots ,n_{r})=(1)} in the coordinate ring {\displaystyle \mathbb {Z} }. Indeed, we may consider the terms {\displaystyle \rho _{i}=a_{i}n_{i}} as forming a kind of partition of unity subordinate to the covering of {\displaystyle Z} by the open sets {\displaystyle U_{i}=Z\smallsetminus V(n_{i})}.
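The Bézout coefficients in this lemma can be computed explicitly. The sketch below (an illustration added here; the choice of the triple 6, 10, 15 is ours) iterates the two-argument extended Euclidean algorithm to produce integers a_i with a_1 n_1 + ... + a_r n_r = 1. Note that 6, 10, 15 have no common prime factor even though every pairwise gcd exceeds 1.

```python
# Bézout's lemma on Spec(Z): integers with no common prime factor
# generate the unit ideal; we find coefficients with sum(a_i * n_i) == 1.

def ext_gcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def bezout_coeffs(ns):
    # fold the two-argument algorithm over the list, updating coefficients
    g, coeffs = ns[0], [1]
    for n in ns[1:]:
        g, s, t = ext_gcd(g, n)
        coeffs = [s * c for c in coeffs] + [t]
    return g, coeffs

ns = [6, 10, 15]  # no common prime factor, though each pairwise gcd is > 1
g, a = bezout_coeffs(ns)
print(g, sum(ai * ni for ai, ni in zip(a, ns)))  # 1 1
```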
=== Affine line over the integers ===
The affine space {\displaystyle \mathbb {A} _{\mathbb {Z} }^{1}=\{a\ {\text{for}}\ a\in \mathbb {Z} \}} is a variety with coordinate ring {\displaystyle \mathbb {Z} [x]}, the polynomials with integer coefficients. The corresponding scheme is {\displaystyle Y=\operatorname {Spec} (\mathbb {Z} [x])}, whose points are all of the prime ideals {\displaystyle {\mathfrak {p}}\subset \mathbb {Z} [x]}. The closed points are maximal ideals of the form {\displaystyle {\mathfrak {m}}=(p,f(x))}, where {\displaystyle p} is a prime number, and {\displaystyle f(x)} is a non-constant polynomial with no integer factor which is irreducible modulo {\displaystyle p}. Thus, we may picture {\displaystyle Y} as two-dimensional, with a "characteristic direction" measured by the coordinate {\displaystyle p}, and a "spatial direction" with coordinate {\displaystyle x}.
A given prime number {\displaystyle p} defines a "vertical line", the subscheme {\displaystyle V(p)} of the prime ideal {\displaystyle {\mathfrak {p}}=(p)}: this contains {\displaystyle {\mathfrak {m}}=(p,f(x))} for all {\displaystyle f(x)}, the "characteristic {\displaystyle p} points" of the scheme. Fixing the {\displaystyle x}-coordinate, we have the "horizontal line" {\displaystyle x=a}, the subscheme {\displaystyle V(x-a)} of the prime ideal {\displaystyle {\mathfrak {p}}=(x-a)}. We also have the line {\displaystyle V(bx-a)} corresponding to the rational coordinate {\displaystyle x=a/b}, which does not intersect {\displaystyle V(p)} for those {\displaystyle p} which divide {\displaystyle b}.
A higher degree "horizontal" subscheme like {\displaystyle V(x^{2}+1)} corresponds to {\displaystyle x}-values which are roots of {\displaystyle x^{2}+1}, namely {\displaystyle x=\pm {\sqrt {-1}}}. This behaves differently under different {\displaystyle p}-coordinates. At {\displaystyle p=5}, we get two points {\displaystyle x=\pm 2\ {\text{mod}}\ 5}, since {\displaystyle (5,x^{2}+1)=(5,x-2)\cap (5,x+2)}. At {\displaystyle p=2}, we get one ramified double point {\displaystyle x=1\ {\text{mod}}\ 2}, since {\displaystyle (2,x^{2}+1)=(2,(x-1)^{2})}. And at {\displaystyle p=3}, we get that {\displaystyle {\mathfrak {m}}=(3,x^{2}+1)} is a prime ideal corresponding to {\displaystyle x=\pm {\sqrt {-1}}} in an extension field of {\displaystyle \mathbb {F} _{3}}; since we cannot distinguish between these values (they are symmetric under the Galois group), we should picture {\displaystyle V(3,x^{2}+1)} as two fused points. Overall, {\displaystyle V(x^{2}+1)} is a kind of fusion of two Galois-symmetric horizontal lines, a curve of degree 2.
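The three behaviors can be observed by simply searching for roots of x² + 1 in each finite field. The brute-force sketch below (an illustration added here, using only the standard library) shows the split fiber at p = 5, the ramified fiber at p = 2, and the inert fiber at p = 3, where the roots only exist in the quadratic extension F₉.

```python
# How V(x^2 + 1) meets the fibers over different primes p:
# look for roots of x^2 + 1 in F_p by brute force.

def roots_mod_p(p):
    return [x for x in range(p) if (x * x + 1) % p == 0]

for p in [5, 2, 3]:
    print(p, roots_mod_p(p))
# p = 5: two points, x = 2 and x = 3 (i.e. +/- 2 mod 5)
# p = 2: the single root x = 1, of multiplicity 2 (ramification)
# p = 3: no roots in F_3 -- the two "fused" points live in F_9
```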
The residue field at {\displaystyle {\mathfrak {m}}=(p,f(x))} is {\displaystyle k({\mathfrak {m}})=\mathbb {Z} [x]/{\mathfrak {m}}=\mathbb {F} _{p}[x]/(f(x))\cong \mathbb {F} _{p}(\alpha )}, a field extension of {\displaystyle \mathbb {F} _{p}} adjoining a root {\displaystyle x=\alpha } of {\displaystyle f(x)}; this is a finite field with {\displaystyle p^{d}} elements, where {\displaystyle d=\operatorname {deg} (f)}. A polynomial {\displaystyle r(x)\in \mathbb {Z} [x]} corresponds to a function on the scheme {\displaystyle Y} with values {\displaystyle r({\mathfrak {m}})=r\ \mathrm {mod} \ {\mathfrak {m}}}, that is {\displaystyle r({\mathfrak {m}})=r(\alpha )\in \mathbb {F} _{p}(\alpha )}. Again each {\displaystyle r(x)\in \mathbb {Z} [x]} is determined by its values {\displaystyle r({\mathfrak {m}})} at closed points; {\displaystyle V(p)} is the vanishing locus of the constant polynomial {\displaystyle r(x)=p}; and {\displaystyle V(f(x))} contains the points in each characteristic {\displaystyle p} corresponding to Galois orbits of roots of {\displaystyle f(x)} in the algebraic closure {\displaystyle {\overline {\mathbb {F} }}_{p}}.
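For the ideal m = (3, x² + 1) considered earlier, the residue field is F₃[x]/(x² + 1), which should be the field with 3² = 9 elements. The brute-force sketch below (an illustration added here) represents elements as pairs a + bα with α² = −1 and verifies both the element count and that every nonzero element has a multiplicative inverse.

```python
# The residue field at m = (3, x^2 + 1): the quotient F_3[x]/(x^2 + 1)
# has 3^2 = 9 elements a + b*alpha with alpha^2 = -1, and every nonzero
# element is invertible, so it is the field with 9 elements.

p = 3
elements = [(a, b) for a in range(p) for b in range(p)]  # a + b*alpha

def mul(u, v):
    a, b = u
    c, d = v
    # (a + b alpha)(c + d alpha) = (ac - bd) + (ad + bc) alpha, alpha^2 = -1
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def invertible(u):
    return any(mul(u, v) == (1, 0) for v in elements)

nonzero = [u for u in elements if u != (0, 0)]
print(len(elements), all(invertible(u) for u in nonzero))  # 9 True
```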
The scheme {\displaystyle Y} is not proper, so that pairs of curves may fail to intersect with the expected multiplicity. This is a major obstacle to analyzing Diophantine equations with geometric tools. Arakelov theory overcomes this obstacle by compactifying affine arithmetic schemes, adding points at infinity corresponding to valuations.
=== Arithmetic surfaces ===
If we consider a polynomial {\displaystyle f\in \mathbb {Z} [x,y]}, then the affine scheme {\displaystyle X=\operatorname {Spec} (\mathbb {Z} [x,y]/(f))} has a canonical morphism to {\displaystyle \operatorname {Spec} \mathbb {Z} } and is called an arithmetic surface. The fibers {\displaystyle X_{p}=X\times _{\operatorname {Spec} (\mathbb {Z} )}\operatorname {Spec} (\mathbb {F} _{p})} are then algebraic curves over the finite fields {\displaystyle \mathbb {F} _{p}}. If {\displaystyle f(x,y)=y^{2}-x^{3}+ax^{2}+bx+c} is an elliptic curve, then the fibers over its discriminant locus, where {\displaystyle \Delta _{f}=-4a^{3}c+a^{2}b^{2}+18abc-4b^{3}-27c^{2}=0\ {\text{mod}}\ p,} are all singular schemes. For example, if {\displaystyle p} is a prime number and {\displaystyle X=\operatorname {Spec} {\frac {\mathbb {Z} [x,y]}{(y^{2}-x^{3}-p)}}}, then its discriminant is {\displaystyle -27p^{2}}. This curve is singular over the prime numbers {\displaystyle 3,p}.
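The discriminant value quoted for this example is a one-line calculation. In the sketch below (an illustration added here), the curve y² = x³ + p corresponds to a = 0, b = 0, c = −p in the normalization f = y² − x³ + ax² + bx + c used above, and the formula indeed returns −27p².

```python
# Check the discriminant formula from the text for y^2 = x^3 + p,
# i.e. a = 0, b = 0, c = -p in f = y^2 - x^3 + a*x^2 + b*x + c.

def discriminant(a, b, c):
    return -4 * a**3 * c + a**2 * b**2 + 18 * a * b * c - 4 * b**3 - 27 * c**2

p = 5
print(discriminant(0, 0, -p))  # -675, which equals -27 * p**2
```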
=== Non-affine schemes ===
For any commutative ring R and natural number n, projective space {\displaystyle \mathbb {P} _{R}^{n}} can be constructed as a scheme by gluing n + 1 copies of affine n-space over R along open subsets. This is the fundamental example that motivates going beyond affine schemes. The key advantage of projective space over affine space is that {\displaystyle \mathbb {P} _{R}^{n}} is proper over R; this is an algebro-geometric version of compactness. Indeed, complex projective space {\displaystyle \mathbb {C} \mathbb {P} ^{n}} is a compact space in the classical topology, whereas {\displaystyle \mathbb {C} ^{n}} is not.
A homogeneous polynomial f of positive degree in the polynomial ring R[x_0, ..., x_n] determines a closed subscheme f = 0 in projective space {\displaystyle \mathbb {P} _{R}^{n}}, called a projective hypersurface. In terms of the Proj construction, this subscheme can be written as {\displaystyle \operatorname {Proj} R[x_{0},\ldots ,x_{n}]/(f).} For example, the closed subscheme x^3 + y^3 = z^3 of {\displaystyle \mathbb {P} _{\mathbb {Q} }^{2}} is an elliptic curve over the rational numbers.
The line with two origins (over a field k) is the scheme defined by starting with two copies of the affine line over k, and gluing together the two open subsets A^1 − 0 by the identity map. This is a simple example of a non-separated scheme. In particular, it is not affine.
A simple reason to go beyond affine schemes is that an open subset of an affine scheme need not be affine. For example, let {\displaystyle X=\mathbb {A} ^{n}\smallsetminus \{0\}}, say over the complex numbers {\displaystyle \mathbb {C} }; then X is not affine for n ≥ 2. (However, the affine line minus the origin is isomorphic to the affine scheme {\displaystyle \mathrm {Spec} \,\mathbb {C} [x,x^{-1}]}.) To show X is not affine, one computes that every regular function on X extends to a regular function on {\displaystyle \mathbb {A} ^{n}} when n ≥ 2: this is analogous to Hartogs's lemma in complex analysis, though easier to prove. That is, the inclusion {\displaystyle f:X\to \mathbb {A} ^{n}} induces an isomorphism from {\displaystyle O(\mathbb {A} ^{n})=\mathbb {C} [x_{1},\ldots ,x_{n}]} to {\displaystyle O(X)}. If X were affine, it would follow that f is an isomorphism; but f is not surjective, hence not an isomorphism. Therefore, the scheme X is not affine.
Let k be a field. Then the scheme {\textstyle \operatorname {Spec} \left(\prod _{n=1}^{\infty }k\right)} is an affine scheme whose underlying topological space is the Stone–Čech compactification of the positive integers (with the discrete topology). In fact, the prime ideals of this ring are in one-to-one correspondence with the ultrafilters on the positive integers, with the ideal {\textstyle \prod _{m\neq n}k} corresponding to the principal ultrafilter associated to the positive integer n. This topological space is zero-dimensional, and in particular, each point is an irreducible component. Since affine schemes are quasi-compact, this is an example of a non-Noetherian quasi-compact scheme with infinitely many irreducible components. (By contrast, a Noetherian scheme has only finitely many irreducible components.)
=== Examples of morphisms ===
It is also fruitful to consider examples of morphisms as examples of schemes, since morphisms demonstrate the technical effectiveness of schemes for encapsulating many objects of study in algebraic and arithmetic geometry.
== Motivation for schemes ==
Here are some of the ways in which schemes go beyond older notions of algebraic varieties, and their significance.
Field extensions. Given some polynomial equations in n variables over a field k, one can study the set X(k) of solutions of the equations in the product set k^n. If the field k is algebraically closed (for example the complex numbers), then one can base algebraic geometry on sets such as X(k): define the Zariski topology on X(k), consider polynomial mappings between different sets of this type, and so on. But if k is not algebraically closed, then the set X(k) is not rich enough. Indeed, one can study the solutions X(E) of the given equations in any field extension E of k, but these sets are not determined by X(k) in any reasonable sense. For example, the plane curve X over the real numbers defined by x^2 + y^2 = −1 has X(R) empty, but X(C) not empty. (In fact, X(C) can be identified with C − 0.) By contrast, a scheme X over a field k has enough information to determine the set X(E) of E-rational points for every extension field E of k. (In particular, the closed subscheme of A^2_R defined by x^2 + y^2 = −1 is a nonempty topological space.)
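The dependence of the solution set on the base ring is easy to observe computationally. The sketch below (an illustration added here; the choice of F₅ is ours) counts solutions of x² + y² = −1 with coordinates in the finite field F₅, where, unlike over the real numbers, solutions do exist.

```python
# The conic x^2 + y^2 = -1 has no real points, but it does have points
# with coordinates in F_5 (found here by brute force).

def points_mod_p(p):
    return [(x, y) for x in range(p) for y in range(p)
            if (x * x + y * y + 1) % p == 0]

print(points_mod_p(5))  # [(0, 2), (0, 3), (2, 0), (3, 0)]
```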
Generic point. The points of the affine line A^1_C, as a scheme, are its complex points (one for each complex number) together with one generic point (whose closure is the whole scheme). The generic point is the image of a natural morphism Spec(C(x)) → A^1_C, where C(x) is the field of rational functions in one variable. To see why it is useful to have an actual "generic point" in the scheme, consider the following example.
Let X be the plane curve y^2 = x(x−1)(x−5) over the complex numbers. This is a closed subscheme of A^2_C. It can be viewed as a ramified double cover of the affine line A^1_C by projecting to the x-coordinate. The fiber of the morphism X → A^1 over the generic point of A^1 is exactly the generic point of X, yielding the morphism
{\displaystyle \operatorname {Spec} \mathbf {C} (x)\left({\sqrt {x(x-1)(x-5)}}\right)\to \operatorname {Spec} \mathbf {C} (x).}
This in turn is equivalent to the degree-2 extension of fields
{\displaystyle \mathbf {C} (x)\subset \mathbf {C} (x)\left({\sqrt {x(x-1)(x-5)}}\right).}
Thus, having an actual generic point of a variety yields a geometric relation between a degree-2 morphism of algebraic varieties and the corresponding degree-2 extension of function fields. This generalizes to a relation between the fundamental group (which classifies covering spaces in topology) and the Galois group (which classifies certain field extensions). Indeed, Grothendieck's theory of the étale fundamental group treats the fundamental group and the Galois group on the same footing.
Nilpotent elements. Let X be the closed subscheme of the affine line A^1_C defined by x^2 = 0, sometimes called a fat point. The ring of regular functions on X is C[x]/(x^2); in particular, the regular function x on X is nilpotent but not zero. To indicate the meaning of this scheme: two regular functions on the affine line have the same restriction to X if and only if they have the same value and first derivative at the origin. Allowing such non-reduced schemes brings the ideas of calculus and infinitesimals into algebraic geometry.
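The claim that restriction to the fat point records exactly a value and a first derivative can be sketched with dual numbers a + bε, ε² = 0 (a standard device; the class below is an illustration added here, not part of the article). Evaluating a polynomial at ε returns its value and derivative at the origin.

```python
# The fat point Spec C[x]/(x^2) via dual numbers a + b*eps with eps^2 = 0:
# restricting a polynomial to the fat point remembers its value and
# first derivative at the origin.

class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b  # represents a + b*eps

    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def evaluate(coeffs, t):
    # polynomial sum(coeffs[i] * t**i), evaluated by Horner's rule
    acc = Dual(0, 0)
    for c in reversed(coeffs):
        acc = acc * t + Dual(c, 0)
    return acc

eps = Dual(0, 1)
f = [7, 3, 5]    # f(x) = 7 + 3x + 5x^2, so f(0) = 7 and f'(0) = 3
v = evaluate(f, eps)
print(v.a, v.b)  # 7 3 -- value and first derivative at the origin
```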
Nilpotent elements arise naturally in intersection theory. For example, in the plane {\displaystyle \mathbb {A} _{k}^{2}} over a field {\displaystyle k}, with coordinate ring {\displaystyle k[x,y]}, consider the x-axis, which is the variety {\displaystyle V(y)}, and the parabola {\displaystyle y=x^{2}}, which is {\displaystyle V(x^{2}-y)}. Their scheme-theoretic intersection is defined by the ideal {\displaystyle (y)+(x^{2}-y)=(x^{2},\,y)}. Since the intersection is not transverse, this is not merely the point {\displaystyle (x,y)=(0,0)} defined by the ideal {\displaystyle (x,y)\subset k[x,y]}, but rather a fat point containing the x-axis tangent direction (the common tangent of the two curves) and having coordinate ring {\displaystyle {\frac {k[x,y]}{(x^{2},\,y)}}\cong {\frac {k[x]}{(x^{2})}}.} The intersection multiplicity of 2 is defined as the length of this {\displaystyle k[x,y]}-module, i.e. its dimension as a {\displaystyle k}-vector space.
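This dimension count can be reproduced with SymPy's Gröbner-basis tools (a sketch added here; the polynomial being reduced is an arbitrary choice of ours). Reducing modulo the ideal (y, x² − y) leaves remainders spanned by the two monomials 1 and x, so the quotient ring is a 2-dimensional vector space, matching the intersection multiplicity of 2.

```python
# Intersection of the x-axis V(y) and the parabola V(x^2 - y): reduction
# modulo the ideal (y, x^2 - y) leaves remainders of the form a + b*x,
# so the quotient ring k[x,y]/(x^2, y) is 2-dimensional over k.
import sympy as sp

x, y = sp.symbols("x y")
G = sp.groebner([y, x**2 - y], x, y, order="lex")
print(list(G.exprs))  # the reduced Groebner basis, e.g. [x**2, y]

# An arbitrary polynomial reduces to a remainder spanned by {1, x}:
_, r = sp.reduced(x**3 + 4*x*y + 2*x + 5, list(G.exprs), x, y, order="lex")
print(r)  # 2*x + 5
```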
For a more elaborate example, one can describe all the zero-dimensional closed subschemes of degree 2 in a smooth complex variety Y. Such a subscheme consists of either two distinct complex points of Y, or else a subscheme isomorphic to X = Spec C[x]/(x^2) as in the previous paragraph. Subschemes of the latter type are determined by a complex point y of Y together with a line in the tangent space T_yY. This again indicates that non-reduced subschemes have geometric meaning, related to derivatives and tangent vectors.
== Coherent sheaves ==
A central part of scheme theory is the notion of coherent sheaves, generalizing the notion of (algebraic) vector bundles. For a scheme X, one starts by considering the abelian category of OX-modules, which are sheaves of abelian groups on X that form a module over the sheaf of regular functions OX. In particular, a module M over a commutative ring R determines an associated OX-module ~M on X = Spec(R). A quasi-coherent sheaf on a scheme X means an OX-module that is the sheaf associated to a module on each affine open subset of X. Finally, a coherent sheaf (on a Noetherian scheme X, say) is an OX-module that is the sheaf associated to a finitely generated module on each affine open subset of X.
Coherent sheaves include the important class of vector bundles, which are the sheaves that locally come from finitely generated free modules. An example is the tangent bundle of a smooth variety over a field. However, coherent sheaves are richer; for example, a vector bundle on a closed subscheme Y of X can be viewed as a coherent sheaf on X that is zero outside Y (by the direct image construction). In this way, coherent sheaves on a scheme X include information about all closed subschemes of X. Moreover, sheaf cohomology has good properties for coherent (and quasi-coherent) sheaves. The resulting theory of coherent sheaf cohomology is perhaps the main technical tool in algebraic geometry.
== Generalizations ==
Considered as its functor of points, a scheme is a functor that is a sheaf of sets for the Zariski topology on the category of commutative rings, and that, locally in the Zariski topology, is an affine scheme. This can be generalized in several ways. One is to use the étale topology. Michael Artin defined an algebraic space as a functor that is a sheaf in the étale topology and that, locally in the étale topology, is an affine scheme. Equivalently, an algebraic space is the quotient of a scheme by an étale equivalence relation. A powerful result, the Artin representability theorem, gives simple conditions for a functor to be represented by an algebraic space.
A further generalization is the idea of a stack. Crudely speaking, algebraic stacks generalize algebraic spaces by having an algebraic group attached to each point, which is viewed as the automorphism group of that point. For example, any action of an algebraic group G on an algebraic variety X determines a quotient stack [X/G], which remembers the stabilizer subgroups for the action of G. More generally, moduli spaces in algebraic geometry are often best viewed as stacks, thereby keeping track of the automorphism groups of the objects being classified.
Grothendieck originally introduced stacks as a tool for the theory of descent. In that formulation, stacks are (informally speaking) sheaves of categories. From this general notion, Artin defined the narrower class of algebraic stacks (or "Artin stacks"), which can be considered geometric objects. These include Deligne–Mumford stacks (similar to orbifolds in topology), for which the stabilizer groups are finite, and algebraic spaces, for which the stabilizer groups are trivial. The Keel–Mori theorem says that an algebraic stack with finite stabilizer groups has a coarse moduli space that is an algebraic space.
Another type of generalization is to enrich the structure sheaf, bringing algebraic geometry closer to homotopy theory. In this setting, known as derived algebraic geometry or "spectral algebraic geometry", the structure sheaf is replaced by a homotopical analog of a sheaf of commutative rings (for example, a sheaf of E-infinity ring spectra). These sheaves admit algebraic operations that are associative and commutative only up to an equivalence relation. Taking the quotient by this equivalence relation yields the structure sheaf of an ordinary scheme. Not taking the quotient, however, leads to a theory that can remember higher information, in the same way that derived functors in homological algebra yield higher information about operations such as tensor product and the Hom functor on modules.
== See also ==
Flat morphism, Smooth morphism, Proper morphism, Finite morphism, Étale morphism
Stable curve
Birational geometry
Étale cohomology, Chow group, Hodge theory
Group scheme, Abelian variety, Linear algebraic group, Reductive group
Moduli of algebraic curves
Gluing schemes
== Citations ==
== References ==
== External links ==
David Mumford, Can one explain schemes to biologists?
The Stacks Project Authors, The Stacks Project
https://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/ – the comment section contains some interesting discussion on scheme theory (including posts from Terence Tao).
In algebra, an algebraic fraction is a fraction whose numerator and denominator are algebraic expressions. Two examples of algebraic fractions are {\displaystyle {\frac {3x}{x^{2}+2x-3}}} and {\displaystyle {\frac {\sqrt {x+2}}{x^{2}-3}}}. Algebraic fractions are subject to the same laws as arithmetic fractions.
A rational fraction is an algebraic fraction whose numerator and denominator are both polynomials. Thus {\displaystyle {\frac {3x}{x^{2}+2x-3}}} is a rational fraction, but {\displaystyle {\frac {\sqrt {x+2}}{x^{2}-3}}} is not, because the numerator contains a square root function.
== Terminology ==
In the algebraic fraction {\displaystyle {\tfrac {a}{b}}}, the dividend a is called the numerator and the divisor b is called the denominator. The numerator and denominator are called the terms of the algebraic fraction.
A complex fraction is a fraction whose numerator or denominator, or both, contains a fraction. A simple fraction contains no fraction either in its numerator or its denominator. A fraction is in lowest terms if the only factor common to the numerator and the denominator is 1.
An expression which is not in fractional form is an integral expression. An integral expression can always be written in fractional form by giving it the denominator 1. A mixed expression is the algebraic sum of one or more integral expressions and one or more fractional terms.
== Rational fractions ==
If the expressions a and b are polynomials, the algebraic fraction is called a rational algebraic fraction or simply rational fraction. Rational fractions are also known as rational expressions. A rational fraction {\displaystyle {\tfrac {f(x)}{g(x)}}} is called proper if {\displaystyle \deg f(x)<\deg g(x)}, and improper otherwise. For example, the rational fraction {\displaystyle {\tfrac {2x}{x^{2}-1}}} is proper, and the rational fractions {\displaystyle {\tfrac {x^{3}+x^{2}+1}{x^{2}-5x+6}}} and {\displaystyle {\tfrac {x^{2}-x+1}{5x^{2}+3}}} are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has {\displaystyle {\frac {x^{3}+x^{2}+1}{x^{2}-5x+6}}=(x+6)+{\frac {24x-35}{x^{2}-5x+6}},} where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example, {\displaystyle {\frac {2x}{x^{2}-1}}={\frac {1}{x-1}}+{\frac {1}{x+1}}.} Here, the two terms on the right are called partial fractions.
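Both manipulations in this section can be reproduced with SymPy (a sketch added here; the article itself does not reference any software): polynomial division rewrites the improper fraction as a polynomial plus a proper part, and `apart` resolves the proper fraction into partial fractions.

```python
# Polynomial division and partial-fraction decomposition, matching the
# two worked examples in the text.
import sympy as sp

x = sp.symbols("x")

# Improper fraction: quotient x + 6, remainder 24x - 35
q, r = sp.div(x**3 + x**2 + 1, x**2 - 5*x + 6, x)
print(q, r)  # x + 6, 24*x - 35

# Proper fraction resolved into partial fractions
print(sp.apart(2*x / (x**2 - 1), x))  # 1/(x - 1) + 1/(x + 1)
```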
== Irrational fractions ==
An irrational fraction is one that contains the variable under a fractional exponent. An example of an irrational fraction is {\displaystyle {\frac {x^{1/2}-{\tfrac {1}{3}}a}{x^{1/3}-x^{1/2}}}.} The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple is 6, hence we can substitute {\displaystyle x=z^{6}} to obtain {\displaystyle {\frac {z^{3}-{\tfrac {1}{3}}a}{z^{2}-z^{3}}}.}
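The substitution step can be checked symbolically. In the SymPy sketch below (an illustration added here), taking z positive lets the fractional powers of z⁶ simplify automatically, turning the irrational fraction into the rational fraction in z given above.

```python
# Rationalization by substituting x = z**6, so that x**(1/2) -> z**3
# and x**(1/3) -> z**2 (with z assumed positive).
import sympy as sp

a = sp.symbols("a")
x, z = sp.symbols("x z", positive=True)

expr = (x**sp.Rational(1, 2) - a / 3) / (x**sp.Rational(1, 3) - x**sp.Rational(1, 2))
rationalized = expr.subs(x, z**6)
print(rationalized)  # a rational fraction in z, equal to (z**3 - a/3)/(z**2 - z**3)
```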
== See also ==
Partial fraction decomposition
== References ==
Brink, Raymond W. (1951). "IV. Fractions". College Algebra.
In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of the ring homomorphism of an element of K). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication.
A commutative algebra is an associative algebra for which the multiplication is commutative, or, equivalently, an associative algebra that is also a commutative ring.
In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital.
Every ring is an associative algebra over its center and over the integers.
== Definition ==
Let R be a commutative ring (so R could be a field). An associative R-algebra A (or more simply, an R-algebra A) is a ring A
that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies
{\displaystyle r\cdot (xy)=(r\cdot x)y=x(r\cdot y)}
for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.)
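The compatibility axiom is easy to verify in the standard first example, 2×2 integer matrices as a Z-algebra. A minimal sketch (helper names are illustrative):

```python
def mat_mul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scal(r, X):
    # scalar multiplication r . X, entrywise
    return [[r * e for e in row] for row in X]

# M_2(Z) as a Z-algebra: r.(xy) = (r.x)y = x(r.y)
x = [[1, 2], [3, 4]]
y = [[0, 1], [5, -2]]
r = 7
assert scal(r, mat_mul(x, y)) == mat_mul(scal(r, x), y) == mat_mul(x, scal(r, y))
```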
Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is (r, x) ↦ f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by r ↦ r ⋅ 1A. (See also § From ring homomorphisms below).
Every ring is an associative Z-algebra, where Z denotes the ring of the integers.
A commutative algebra is an associative algebra that is also a commutative ring.
=== As a monoid object in the category of modules ===
The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules.
Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map) corresponds to a unique R-linear map
{\displaystyle m:A\otimes _{R}A\to A}.
The associativity then refers to the identity:
{\displaystyle m\circ ({\operatorname {id} }\otimes m)=m\circ (m\otimes \operatorname {id} ).}
=== From ring homomorphisms ===
An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism η : R → A whose image lies in the center of A, we can make A an R-algebra by defining
{\displaystyle r\cdot x=\eta (r)x}
for all r ∈ R and x ∈ A. If A is an R-algebra, taking x = 1, the same formula in turn defines a ring homomorphism η : R → A whose image lies in the center.
If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism η : R → A.
The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms R → A for a fixed R, i.e., commutative R-algebras, and whose morphisms are ring homomorphisms A → A′ that are under R; i.e., R → A → A′ is R → A′ (i.e., the coslice category of the category of commutative rings under R.) The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R.
How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: Generic matrix ring.
== Algebra homomorphisms ==
A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, φ : A1 → A2 is an associative algebra homomorphism if
{\displaystyle {\begin{aligned}\varphi (r\cdot x)&=r\cdot \varphi (x)\\\varphi (x+y)&=\varphi (x)+\varphi (y)\\\varphi (xy)&=\varphi (x)\varphi (y)\\\varphi (1)&=1\end{aligned}}}
The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg.
The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings.
== Examples ==
The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics.
=== Algebra ===
Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent.
Any ring of characteristic n is a (Z/nZ)-algebra in the same way.
Given an R-module M, the endomorphism ring of M, denoted EndR(M) is an R-algebra by defining (r·φ)(x) = r·φ(x).
Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module.
In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K.
The complex numbers form a 2-dimensional commutative algebra over the real numbers.
The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions).
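The quaternions make the center condition concrete: i commutes with neither j nor k. A minimal sketch of the Hamilton product, with a quaternion a + bi + cj + dk encoded as the tuple (a, b, c, d):

```python
def qmul(p, q):
    # Hamilton product of two quaternions given as (a, b, c, d) tuples
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == (-1, 0, 0, 0)            # i^2 = -1
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)            # ij = -ji: i is not central,
# so the copy of the complex numbers inside H does not lie in the center.
```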
Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}.
The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E.
The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure).
Given a module M over a commutative ring R, the direct sum of modules R ⊕ M has a structure of an R-algebra by thinking of M as consisting of infinitesimal elements; i.e., the multiplication is given as (a + x)(b + y) = ab + ay + bx. This construction is sometimes called the algebra of dual numbers.
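For R = M = the real numbers, this is the classical dual-number algebra R[ε]/(ε²). A minimal sketch, encoding a + xε as the pair (a, x):

```python
def dmul(p, q):
    # (a + x eps)(b + y eps) = ab + (ay + bx) eps, since eps^2 = 0
    a, x = p
    b, y = q
    return (a * b, a * y + b * x)

eps = (0, 1)
assert dmul(eps, eps) == (0, 0)      # the "infinitesimal" relation eps^2 = 0

# Dual numbers carry first derivatives: for f(t) = t^2,
# f(a + eps) = f(a) + f'(a) eps, i.e. (a^2, 2a).
a = 5
assert dmul((a, 1), (a, 1)) == (25, 10)
```

This derivative-tracking behaviour is the basis of forward-mode automatic differentiation.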
A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field.
=== Representation theory ===
The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra.
If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups.
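For a finite group the convolution product is a finite sum. A minimal sketch for the group algebra of Z/3 over the integers, with a function G → R stored as a tuple of its three values:

```python
def convolve(f, g, n=3):
    # group algebra R[Z/n]: (f * g)(t) = sum over s of f(s) g(t - s)
    return tuple(sum(f[s] * g[(t - s) % n] for s in range(n)) for t in range(n))

delta0 = (1, 0, 0)   # indicator of the group identity: the unit of the algebra
f = (2, 1, 0)
g = (0, 3, 5)
assert convolve(delta0, f) == f == convolve(f, delta0)
# Z/3 is abelian, so its group algebra is commutative:
assert convolve(f, g) == convolve(g, f)
```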
If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A.
A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph.
=== Analysis ===
Given any Banach space X, the continuous linear operators A : X → X form an associative algebra (using composition of operators as multiplication); this is a Banach algebra.
Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise.
The set of semimartingales defined on the filtered probability space (Ω, F, (Ft)t≥0, P) forms a ring under stochastic integration.
The Weyl algebra
An Azumaya algebra
=== Geometry and combinatorics ===
The Clifford algebras, which are useful in geometry and physics.
Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics.
The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra.
A differential graded algebra is an associative algebra together with a grading and a differential. For example, the de Rham algebra
{\textstyle \Omega (M)=\bigoplus _{p=0}^{n}\Omega ^{p}(M)}
, where
{\textstyle \Omega ^{p}(M)}
consists of differential p-forms on a manifold M, is a differential graded algebra.
=== Mathematical physics ===
A Poisson algebra is a commutative associative algebra over a field together with a structure of a Lie algebra so that the Lie bracket {,} satisfies the Leibniz rule; i.e., {fg, h} = f{g, h} + g{f, h}.
Given a Poisson algebra
{\displaystyle {\mathfrak {a}}}
, consider the vector space
{\displaystyle {\mathfrak {a}}[\![u]\!]}
of formal power series over
{\displaystyle {\mathfrak {a}}}
. If
{\displaystyle {\mathfrak {a}}[\![u]\!]}
has a structure of an associative algebra with multiplication
{\displaystyle *}
such that, for
{\displaystyle f,g\in {\mathfrak {a}}}
,
{\displaystyle f*g=fg-{\frac {1}{2}}\{f,g\}u+\cdots ,}
then
{\displaystyle {\mathfrak {a}}[\![u]\!]}
is called a deformation quantization of
{\displaystyle {\mathfrak {a}}}
.
A quantized enveloping algebra. The dual of such an algebra turns out to be an associative algebra (see § Dual of an associative algebra) and is, philosophically speaking, the (quantized) coordinate ring of a quantum group.
Gerstenhaber algebra
== Constructions ==
Subalgebras
A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A.
Quotient algebras
Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since r · x = (r1A)x. This gives the quotient ring A / I the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra.
Direct products
The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication.
Free products
One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras.
Tensor products
The tensor product of two R-algebras is also an R-algebra in a natural way. See tensor product of algebras for more details. Given a commutative ring R and any ring A the tensor product R ⊗Z A can be given the structure of an R-algebra by defining r · (s ⊗ a) = (rs ⊗ a). The functor which sends A to R ⊗Z A is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings.
Free algebra
A free algebra is an algebra generated by symbols. If one imposes commutativity; i.e., take the quotient by commutators, then one gets a polynomial algebra.
== Dual of an associative algebra ==
Let A be an associative algebra over a commutative ring R. Since A is in particular a module, we can take the dual module A* of A. A priori, the dual A* need not have a structure of an associative algebra. However, A may come with an extra structure (namely, that of a Hopf algebra) so that the dual is also an associative algebra.
For example, take A to be the ring of continuous functions on a compact group G. Then, not only is A an associative algebra, but it also comes with the co-multiplication Δ(f)(g, h) = f(gh) and co-unit ε(f) = f(1). The "co-" refers to the fact that they satisfy the dual of the usual multiplication and unit in the algebra axiom. Hence, the dual A* is an associative algebra. The co-multiplication and co-unit are also important in order to form a tensor product of representations of associative algebras (see § Representations below).
== Enveloping algebra ==
Given an associative algebra A over a commutative ring R, the enveloping algebra Ae of A is the algebra A ⊗R Aop or Aop ⊗R A, depending on authors.
Note that a bimodule over A is exactly a left module over Ae.
== Separable algebra ==
Let A be an algebra over a commutative ring R. Then the algebra A is a right module over Ae := Aop ⊗R A with the action x ⋅ (a ⊗ b) = axb. Then, by definition, A is said to be separable if the multiplication map A ⊗R A → A : x ⊗ y ↦ xy splits as an Ae-linear map, where A ⊗ A is an Ae-module by (x ⊗ y) ⋅ (a ⊗ b) = ax ⊗ yb. Equivalently,
A is separable if it is a projective module over Ae; thus, the Ae-projective dimension of A, sometimes called the bidimension of A, measures the failure of separability.
== Finite-dimensional algebra ==
Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring.
=== Commutative case ===
As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field and thus the following are equivalent
{\displaystyle A}
is separable.
{\displaystyle A\otimes {\overline {k}}}
is reduced, where
{\displaystyle {\overline {k}}}
is some algebraic closure of k.
{\displaystyle A\otimes {\overline {k}}={\overline {k}}^{n}}
for some n.
{\displaystyle \dim _{k}A}
is the number of
{\displaystyle k}
-algebra homomorphisms
{\displaystyle A\to {\overline {k}}}
.
Let
{\displaystyle \Gamma =\operatorname {Gal} (k_{s}/k)=\varprojlim \operatorname {Gal} (k'/k)}
, the profinite group of finite Galois extensions of k. Then
{\displaystyle A\mapsto X_{A}=\{k{\text{-algebra homomorphisms }}A\to k_{s}\}}
is an anti-equivalence of the category of finite-dimensional separable k-algebras to the category of finite sets with continuous
{\displaystyle \Gamma }
-actions.
=== Noncommutative case ===
Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., A = Mn(D). More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), the fact known as the Artin–Wedderburn theorem.
The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals.)
The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of A / I as a module over the enveloping algebra (A / I)e is at most one, then the natural surjection p : A → A / I splits; i.e., A contains a subalgebra B such that p|B : B ~→ A / I is an isomorphism. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras.
== Lattices and orders ==
Let R be a Noetherian integral domain with field of fractions K (for example, they can be Z, Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, L ⊗R K = V.
Let AK be a finite-dimensional K-algebra. An order in AK is an R-subalgebra that is a lattice. In general, there are a lot fewer orders than lattices; e.g., 1/2Z is a lattice in Q but not an order (since it is not an algebra).
A maximal order is an order that is maximal among all the orders.
== Related concepts ==
=== Coalgebras ===
An associative algebra over K is given by a K-vector space A endowed with a bilinear map A × A → A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K → A identifying the scalar multiples of the multiplicative identity. If the bilinear map A × A → A is reinterpreted as a linear map (i.e., morphism in the category of K-vector spaces) A ⊗ A → A (by the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form A ⊗ A → A and one of the form K → A) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra.
There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above.
== Representations ==
A representation of an algebra A is an algebra homomorphism ρ : A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V).
If A and B are two algebras, and ρ : A → End(V) and τ : B → End(W) are two representations, then there is a (canonical) representation A ⊗ B → End(V ⊗ W) of the tensor product algebra A ⊗ B on the vector space V ⊗ W. However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below.
=== Motivation for a Hopf algebra ===
Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that
{\displaystyle \rho (x)(v\otimes w)=(\sigma (x)(v))\otimes (\tau (x)(w)).}
However, such a map would not be linear, since one would have
{\displaystyle \rho (kx)=\sigma (kx)\otimes \tau (kx)=k\sigma (x)\otimes k\tau (x)=k^{2}(\sigma (x)\otimes \tau (x))=k^{2}\rho (x)}
for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ : A → A ⊗ A, and defining the tensor product representation as
{\displaystyle \rho =(\sigma \otimes \tau )\circ \Delta .}
Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups).
=== Motivation for a Lie algebra ===
One can try to be more clever in defining a tensor product. Consider, for example,
{\displaystyle x\mapsto \rho (x)=\sigma (x)\otimes {\mbox{Id}}_{W}+{\mbox{Id}}_{V}\otimes \tau (x)}
so that the action on the tensor product space is given by
{\displaystyle \rho (x)(v\otimes w)=(\sigma (x)v)\otimes w+v\otimes (\tau (x)w)}.
This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication:
{\displaystyle \rho (xy)=\sigma (x)\sigma (y)\otimes {\mbox{Id}}_{W}+{\mbox{Id}}_{V}\otimes \tau (x)\tau (y)}.
But, in general, this does not equal
{\displaystyle \rho (x)\rho (y)=\sigma (x)\sigma (y)\otimes {\mbox{Id}}_{W}+\sigma (x)\otimes \tau (y)+\sigma (y)\otimes \tau (x)+{\mbox{Id}}_{V}\otimes \tau (x)\tau (y)}.
This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra.
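The failure of multiplicativity is visible already in dimension one. A minimal sketch, taking A = R acting on V = W = R by multiplication, so that σ(x) = τ(x) = x and the Kronecker product of 1×1 matrices is ordinary multiplication (this one-dimensional setup is an illustrative assumption, not the general case):

```python
sigma = tau = lambda x: x

def rho(x):
    # sigma(x) (x) Id + Id (x) tau(x) collapses to sigma(x) + tau(x) in dimension one
    return sigma(x) + tau(x)

x, y = 2.0, 3.0
assert rho(5 * x) == 5 * rho(x)          # linear in x ...
assert rho(x * y) != rho(x) * rho(y)     # ... but not multiplicative: 12 vs 24
```

What the map does preserve, after antisymmetrization, is the bracket [x, y] = xy − yx, which is the Lie-algebra structure.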
== Non-unital algebras ==
Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital.
One example of a non-unital associative algebra is given by the set of all functions f : R → R whose limit as x nears infinity is zero.
Another example is the vector space of continuous periodic functions, together with the convolution product.
== See also ==
Abstract algebra
Algebraic structure
Algebra over a field
Sheaf of algebras, a sort of an algebra over a ringed space
Deligne's conjecture on Hochschild cohomology
== Notes ==
== Citations ==
== References == | Wikipedia/Commutative_algebra_(structure) |
In algebraic number theory, an algebraic integer is a complex number that is integral over the integers. That is, an algebraic integer is a complex root of some monic polynomial (a polynomial whose leading coefficient is 1) whose coefficients are integers. The set of all algebraic integers A is closed under addition, subtraction and multiplication and therefore is a commutative subring of the complex numbers.
The ring of integers of a number field K, denoted by OK, is the intersection of K and A: it can also be characterized as the maximal order of the field K. Each algebraic integer belongs to the ring of integers of some number field. A number α is an algebraic integer if and only if the ring
{\displaystyle \mathbb {Z} [\alpha ]}
is finitely generated as an abelian group, which is to say, as a
{\displaystyle \mathbb {Z} }
-module.
== Definitions ==
The following are equivalent definitions of an algebraic integer. Let K be a number field (i.e., a finite extension of
{\displaystyle \mathbb {Q} }
, the field of rational numbers), in other words,
{\displaystyle K=\mathbb {Q} (\theta )}
for some algebraic number
{\displaystyle \theta \in \mathbb {C} }
by the primitive element theorem.
α ∈ K is an algebraic integer if there exists a monic polynomial
{\displaystyle f(x)\in \mathbb {Z} [x]}
such that f(α) = 0.
α ∈ K is an algebraic integer if the minimal monic polynomial of α over
{\displaystyle \mathbb {Q} }
is in
{\displaystyle \mathbb {Z} [x]}
.
α ∈ K is an algebraic integer if
{\displaystyle \mathbb {Z} [\alpha ]}
is a finitely generated
{\displaystyle \mathbb {Z} }
-module.
α ∈ K is an algebraic integer if there exists a non-zero finitely generated
{\displaystyle \mathbb {Z} }
-submodule
{\displaystyle M\subset \mathbb {C} }
such that αM ⊆ M.
Algebraic integers are a special case of integral elements of a ring extension. In particular, an algebraic integer is an integral element of a finite extension
{\displaystyle K/\mathbb {Q} }
.
Note that if P(x) is a primitive polynomial that has integer coefficients but is not monic, and P is irreducible over
{\displaystyle \mathbb {Q} }
, then none of the roots of P are algebraic integers (but are algebraic numbers). Here primitive is used in the sense that the highest common factor of the coefficients of P is 1, which is weaker than requiring the coefficients to be pairwise relatively prime.
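The monic condition can be tested directly. A minimal sketch: x² − 2 is monic with integer coefficients, so its roots ±√2 are algebraic integers; by the rational root theorem, any rational root of a monic integer polynomial must be an integer dividing the constant term, which is why a search over small rationals finds none for x² − 2.

```python
from fractions import Fraction

def is_root(coeffs, x):
    # coeffs[i] is the coefficient of x^i; Horner evaluation, exact arithmetic
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc == 0

monic = [-2, 0, 1]                       # x^2 - 2
candidates = [Fraction(p, q) for p in range(-4, 5) for q in (1, 2, 3)]
assert not any(is_root(monic, x) for x in candidates)   # sqrt(2) is irrational

# The golden-ratio polynomial x^2 - x - 1 likewise has no rational root:
assert not is_root([-1, -1, 1], Fraction(1)) and not is_root([-1, -1, 1], Fraction(-1))
```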
== Examples ==
The only algebraic integers that are found in the set of rational numbers are the integers. In other words, the intersection of
{\displaystyle \mathbb {Q} }
and A is exactly
{\displaystyle \mathbb {Z} }
. The rational number a/b is not an algebraic integer unless b divides a. The leading coefficient of the polynomial bx − a is the integer b.
The square root
{\displaystyle {\sqrt {n}}}
of a nonnegative integer n is an algebraic integer, but is irrational unless n is a perfect square.
If d is a square-free integer then the extension
{\displaystyle K=\mathbb {Q} ({\sqrt {d}}\,)}
is a quadratic field of rational numbers. The ring of algebraic integers OK contains
{\displaystyle {\sqrt {d}}}
since this is a root of the monic polynomial x2 − d. Moreover, if d ≡ 1 mod 4, then the element
{\textstyle {\frac {1}{2}}(1+{\sqrt {d}}\,)}
is also an algebraic integer. It satisfies the polynomial x2 − x + 1/4(1 − d) where the constant term 1/4(1 − d) is an integer. The full ring of integers is generated by
{\displaystyle {\sqrt {d}}}
or
{\textstyle {\frac {1}{2}}(1+{\sqrt {d}}\,)}
respectively. See Quadratic integer for more.
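The case d = 5 gives the golden ratio (1 + √5)/2, which is an algebraic integer even though it is not of the form a + b√5 with integers a, b. A quick floating-point sketch:

```python
import math

d = 5                                     # square-free, d = 1 (mod 4)
phi = (1 + math.sqrt(d)) / 2              # (1 + sqrt(5))/2, the golden ratio

# It satisfies x^2 - x + (1 - d)/4 = x^2 - x - 1, monic with integer coefficients.
c0 = (1 - d) // 4                         # constant term: an integer exactly because d = 1 mod 4
assert c0 == -1
assert math.isclose(phi ** 2 - phi + c0, 0.0, abs_tol=1e-12)

# For d = 2 or d = 3 the analogous constant term (1 - d)/4 is not an integer:
assert (1 - 2) % 4 != 0 and (1 - 3) % 4 != 0
```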
The ring of integers of the field
{\displaystyle F=\mathbb {Q} [\alpha ]}
, α = 3√m, has the following integral basis, writing m = hk2 for two square-free coprime integers h and k:
{\displaystyle {\begin{cases}1,\alpha ,{\dfrac {\alpha ^{2}\pm k^{2}\alpha +k^{2}}{3k}}&m\equiv \pm 1{\bmod {9}}\\1,\alpha ,{\dfrac {\alpha ^{2}}{k}}&{\text{otherwise}}\end{cases}}}
If ζn is a primitive nth root of unity, then the ring of integers of the cyclotomic field
{\displaystyle \mathbb {Q} (\zeta _{n})}
is precisely
{\displaystyle \mathbb {Z} [\zeta _{n}]}
.
If α is an algebraic integer then β = n√α is another algebraic integer. A monic polynomial for β is obtained by substituting xn for x in a monic polynomial for α.
== Finite generation of ring extension ==
For any α, the ring extension (in the sense that is equivalent to field extension) of the integers by α, denoted by
{\displaystyle \mathbb {Z} [\alpha ]\equiv \left\{\sum _{i=0}^{n}\alpha ^{i}z_{i}|z_{i}\in \mathbb {Z} ,n\in \mathbb {Z} \right\}}
, is finitely generated if and only if α is an algebraic integer.
The proof is analogous to that of the corresponding fact regarding algebraic numbers, with
{\displaystyle \mathbb {Q} }
there replaced by
{\displaystyle \mathbb {Z} }
here, and the notion of field extension degree replaced by finite generation (using the fact that
{\displaystyle \mathbb {Z} }
is finitely generated itself); the only required change is that only non-negative powers of α are involved in the proof.
The analogy is possible because both algebraic integers and algebraic numbers are defined as roots of monic polynomials over either
{\displaystyle \mathbb {Z} }
or
{\displaystyle \mathbb {Q} }
, respectively.
== Ring ==
The sum, difference and product of two algebraic integers is an algebraic integer. In general their quotient is not. Thus the algebraic integers form a ring.
This can be shown analogously to the corresponding proof for algebraic numbers, using the integers
{\displaystyle \mathbb {Z} }
instead of the rationals
{\displaystyle \mathbb {Q} }
.
One may also construct explicitly the monic polynomial involved, which is generally of higher degree than those of the original algebraic integers, by taking resultants and factoring. For example, if x2 − x − 1 = 0, y3 − y − 1 = 0 and z = xy, then eliminating x and y from z − xy = 0 and the polynomials satisfied by x and y using the resultant gives z6 − 3z4 − 4z3 + z2 + z − 1 = 0, which is irreducible, and is the monic equation satisfied by the product. (To see that the xy is a root of the x-resultant of z − xy and x2 − x − 1, one might use the fact that the resultant is contained in the ideal generated by its two input polynomials.)
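The claimed resultant computation can be sanity-checked numerically: x is the golden ratio, y is the real root of y³ − y − 1 (the plastic number, ≈ 1.3247, found here by bisection), and z = xy should annihilate z⁶ − 3z⁴ − 4z³ + z² + z − 1. A minimal sketch:

```python
import math

def poly(coeffs, t):
    # coeffs[i] is the coefficient of t^i
    return sum(c * t ** i for i, c in enumerate(coeffs))

x = (1 + math.sqrt(5)) / 2                     # root of x^2 - x - 1

# real root of y^3 - y - 1 by bisection on [1, 2]
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if poly([-1, -1, 0, 1], mid) < 0:
        lo = mid
    else:
        hi = mid
y = (lo + hi) / 2

z = x * y
# z^6 - 3 z^4 - 4 z^3 + z^2 + z - 1, coefficients from constant term upward
assert abs(poly([-1, 1, 1, -4, -3, 0, 1], z)) < 1e-9
```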
=== Integral closure ===
Every root of a monic polynomial whose coefficients are algebraic integers is itself an algebraic integer. In other words, the algebraic integers form a ring that is integrally closed in any of its extensions.
Again, the proof is analogous to the corresponding proof for algebraic numbers being algebraically closed.
== Additional facts ==
Any number constructible out of the integers with roots, addition, and multiplication is an algebraic integer; but not all algebraic integers are so constructible: in a naïve sense, most roots of irreducible quintics are not. This is the Abel–Ruffini theorem.
The ring of algebraic integers is a Bézout domain, as a consequence of the principal ideal theorem.
If the monic polynomial associated with an algebraic integer has constant term 1 or −1, then the reciprocal of that algebraic integer is also an algebraic integer, and each is a unit, an element of the group of units of the ring of algebraic integers.
If x is an algebraic number then anx is an algebraic integer, where x satisfies a polynomial p(x) with integer coefficients and where anxn is the highest-degree term of p(x). The value y = anx is an algebraic integer because it is a root of q(y) = an^(n−1)·p(y/an), where q(y) is a monic polynomial with integer coefficients.
If x is an algebraic number then it can be written as the ratio of an algebraic integer to a non-zero algebraic integer. In fact, the denominator can always be chosen to be a positive integer. The ratio is |an|x / |an|, where x satisfies a polynomial p(x) with integer coefficients and where anxn is the highest-degree term of p(x).
The only rational algebraic integers are the integers. That is, if x is an algebraic integer and
{\displaystyle x\in \mathbb {Q} }
then
{\displaystyle x\in \mathbb {Z} }
. This is a direct result of the rational root theorem for the case of a monic polynomial.
== See also ==
Gaussian integer
Eisenstein integer
Root of unity
Dirichlet's unit theorem
Fundamental units
== References == | Wikipedia/Algebraic_integer |
In mathematics, an algebraic number field (or simply number field) is an extension field
{\displaystyle K}
of the field of rational numbers
{\displaystyle \mathbb {Q} }
such that the field extension
{\displaystyle K/\mathbb {Q} }
has finite degree (and hence is an algebraic field extension).
Thus
{\displaystyle K}
is a field that contains
{\displaystyle \mathbb {Q} }
and has finite dimension when considered as a vector space over
{\displaystyle \mathbb {Q} }
.
The study of algebraic number fields, that is, of algebraic extensions of the field of rational numbers, is the central topic of algebraic number theory. This study reveals hidden structures behind the rational numbers, by using algebraic methods.
== Definition ==
=== Prerequisites ===
The notion of algebraic number field relies on the concept of a field. A field consists of a set of elements together with two operations, addition and multiplication, satisfying certain axioms including distributivity. These operations make the field into an abelian group under addition, and they make the nonzero elements of the field into an abelian group under multiplication. A prominent example of a field is the field of rational numbers, commonly denoted Q, together with its usual operations of addition and multiplication.
Another notion needed to define algebraic number fields is that of a vector space. To the extent needed here, vector spaces can be thought of as consisting of sequences (or tuples) (x_1, x_2, …) whose entries are elements of a fixed field, such as the field Q. Any two such sequences can be added by adding the corresponding entries. Furthermore, all members of any sequence can be multiplied by a single element c of the fixed field. These two operations, known as vector addition and scalar multiplication, satisfy a number of properties that serve to define vector spaces abstractly. Vector spaces are allowed to be "infinite-dimensional", that is to say, the sequences constituting the vector spaces may be of infinite length. If, however, the vector space consists of finite sequences (x_1, …, x_n), the vector space is said to be of finite dimension n.
=== Definition ===
An algebraic number field (or simply number field) is a finite-degree field extension of the field of rational numbers. Here degree means the dimension of the field as a vector space over Q.
== Examples ==
The smallest and most basic number field is the field Q of rational numbers. Many properties of general number fields are modeled after the properties of Q. At the same time, many other properties of algebraic number fields differ substantially from those of the rational numbers; one notable example is that the ring of algebraic integers of a number field is not a principal ideal domain, and not even a unique factorization domain, in general.
The Gaussian rationals, denoted Q(i) (read as "Q adjoined i"), form the first (historically) non-trivial example of a number field. Its elements are expressions of the form
a + bi,
where both a and b are rational numbers and i is the imaginary unit. Such expressions may be added, subtracted, and multiplied according to the usual rules of arithmetic and then simplified using the identity i² = −1. Explicitly, for real numbers a, b, c, d:
(a + bi) + (c + di) = (a + c) + (b + d)i
(a + bi) · (c + di) = (ac − bd) + (ad + bc)i
Non-zero Gaussian rational numbers are invertible, which can be seen from the identity
(a + bi) (a/(a² + b²) − (b/(a² + b²))i) = (a + bi)(a − bi)/(a² + b²) = 1.
It follows that the Gaussian rationals form a number field that is two-dimensional as a vector space over Q.
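The inverse formula above can be checked with exact rational arithmetic. The following is a small illustrative sketch (the helper names are not from the article), representing a Gaussian rational a + bi as a pair of `Fraction`s:

```python
from fractions import Fraction

# A Gaussian rational a + b*i represented as a pair of Fractions (a, b).
def g_mul(x, y):
    a, b = x
    c, d = y
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return (a * c - b * d, a * d + b * c)

def g_inv(x):
    a, b = x
    n = a * a + b * b          # the norm a^2 + b^2, nonzero for x != 0
    return (a / n, -b / n)

x = (Fraction(1, 2), Fraction(3, 4))   # 1/2 + (3/4)i
one = g_mul(x, g_inv(x))
assert one == (1, 0)                   # x * x^(-1) = 1
```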
More generally, for any square-free integer d, the quadratic field Q(√d) is a number field obtained by adjoining the square root of d to the field of rational numbers. Arithmetic operations in this field are defined in analogy with the case of the Gaussian rational numbers, which is the case d = −1.
The cyclotomic field Q(ζ_n), where ζ_n = exp(2πi/n), is a number field obtained from Q by adjoining a primitive n-th root of unity ζ_n. This field contains all complex n-th roots of unity, and its dimension over Q is equal to φ(n), where φ is the Euler totient function.
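The degree statement can be spot-checked with sympy, since the minimal polynomial of ζ_n over Q is the n-th cyclotomic polynomial (a standard fact, assumed here rather than stated in the article):

```python
from sympy import cyclotomic_poly, totient, degree, Symbol

x = Symbol('x')
# The minimal polynomial of a primitive n-th root of unity over Q is the
# n-th cyclotomic polynomial; its degree is Euler's totient phi(n).
for n in (1, 4, 8, 12, 15):
    assert degree(cyclotomic_poly(n, x)) == totient(n)
```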
=== Non-examples ===
The real numbers, R, and the complex numbers, C, are fields that have infinite dimension as Q-vector spaces; hence, they are not number fields. This follows from the uncountability of R and C as sets, whereas every number field is necessarily countable.
The set Q² of ordered pairs of rational numbers, with entry-wise addition and multiplication, is a two-dimensional commutative algebra over Q. However, it is not a field, since it has zero divisors:
(1, 0) · (0, 1) = (0, 0).
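A minimal sketch of this entry-wise product (helper name `mul` is for illustration only):

```python
# Entry-wise multiplication on Q^2: a commutative algebra over Q, but not
# a field, since nonzero elements can multiply to zero.
def mul(u, v):
    return (u[0] * v[0], u[1] * v[1])

# (1, 0) and (0, 1) are zero divisors: neither is zero, but their product is.
assert mul((1, 0), (0, 1)) == (0, 0)
```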
== Algebraicity, and ring of integers ==
Generally, in abstract algebra, a field extension K/L is algebraic if every element f of the bigger field K is the zero of a (nonzero) polynomial with coefficients e_0, …, e_m in L:
p(f) = e_m f^m + e_(m−1) f^(m−1) + ⋯ + e_1 f + e_0 = 0.
Every field extension of finite degree is algebraic. (Proof: for x in K, simply consider the powers 1, x, x², x³, …; because K is finite-dimensional over the base field, sufficiently many of these powers must be linearly dependent, and such a dependence yields a polynomial that x is a root of.) In particular this applies to algebraic number fields, so any element f of an algebraic number field K can be written as a zero of a polynomial with rational coefficients. Therefore, elements of K are also referred to as algebraic numbers. Given a polynomial p such that p(f) = 0, it can be arranged that the leading coefficient e_m is one, by dividing all coefficients by it if necessary. A polynomial with this property is known as a monic polynomial. In general it will have rational coefficients.
If, however, the monic polynomial's coefficients are actually all integers, f is called an algebraic integer.
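As a quick illustration of the distinction, sympy can compute monic minimal polynomials. The golden ratio is a standard example (not taken from the article): despite being written with a denominator, it is an algebraic integer because its minimal polynomial is monic with integer coefficients.

```python
from sympy import sqrt, minimal_polynomial, Symbol, expand

t = Symbol('t')
# (1 + sqrt(5))/2 is an algebraic integer: its minimal polynomial over Q
# is t**2 - t - 1, which is monic with integer coefficients.
phi = (1 + sqrt(5)) / 2
p = minimal_polynomial(phi, t)
assert expand(p - (t**2 - t - 1)) == 0
```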
Any (usual) integer z ∈ Z is an algebraic integer, as it is the zero of the linear monic polynomial
p(t) = t − z.
It can be shown that any algebraic integer that is also a rational number must actually be an integer, hence the name "algebraic integer". Again using abstract algebra, specifically the notion of a finitely generated module, it can be shown that the sum and the product of any two algebraic integers is still an algebraic integer. It follows that the algebraic integers in K form a ring, denoted O_K, called the ring of integers of K. It is a subring of (that is, a ring contained in) K. A field contains no zero divisors, and this property is inherited by any subring, so the ring of integers of K is an integral domain. The field K is the field of fractions of the integral domain O_K. This way one can get back and forth between the algebraic number field K and its ring of integers O_K. Rings of algebraic integers have three distinctive properties: firstly, O_K is an integral domain that is integrally closed in its field of fractions K. Secondly, O_K is a Noetherian ring. Finally, every nonzero prime ideal of O_K is maximal or, equivalently, the Krull dimension of this ring is one. An abstract commutative ring with these three properties is called a Dedekind ring (or Dedekind domain), in honor of Richard Dedekind, who undertook a deep study of rings of algebraic integers.
=== Unique factorization ===
For general Dedekind rings, in particular rings of integers, there is a unique factorization of ideals into a product of prime ideals. For example, the ideal (6) in the ring Z[√−5] of quadratic integers factors into prime ideals as
(6) = (2, 1 + √−5)(2, 1 − √−5)(3, 1 + √−5)(3, 1 − √−5).
However, unlike Z as the ring of integers of Q, the ring of integers of a proper extension of Q need not admit unique factorization of numbers into a product of prime numbers or, more precisely, prime elements. This happens already for quadratic integers; for example, in O_Q(√−5) = Z[√−5], the uniqueness of the factorization fails:
6 = 2 · 3 = (1 + √−5) · (1 − √−5).
Using the norm it can be shown that these two factorizations are actually inequivalent, in the sense that the factors do not just differ by a unit in O_Q(√−5). Euclidean domains are unique factorization domains: for example Z[i], the ring of Gaussian integers, and Z[ω], the ring of Eisenstein integers, where ω is a cube root of unity (unequal to 1), have this property.
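The norm argument for Z[√−5] can be sketched directly. The block below assumes the standard norm N(a + b√−5) = a² + 5b² (a well-known fact, not spelled out in the article); since no element has norm 2 or 3, the factors 2, 3, 1 ± √−5 are all irreducible:

```python
# The norm on Q(sqrt(-5)) restricted to Z[sqrt(-5)].
def norm(a, b):
    return a * a + 5 * b * b

# 6 = 2 * 3 = (1 + sqrt(-5)) * (1 - sqrt(-5)); the factor norms are:
assert norm(2, 0) == 4
assert norm(3, 0) == 9
assert norm(1, 1) == 6
assert norm(1, -1) == 6
# a^2 + 5b^2 never equals 2 or 3, so no element of norm 2 or 3 exists,
# hence all four factors are irreducible and the factorizations inequivalent.
assert all(norm(a, b) not in (2, 3)
           for a in range(-3, 4) for b in range(-3, 4))
```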
=== Analytic objects: ζ-functions, L-functions, and class number formula ===
The failure of unique factorization is measured by the class number, commonly denoted h, the cardinality of the so-called ideal class group. This group is always finite. The ring of integers O_K possesses unique factorization if and only if it is a principal ring or, equivalently, if K has class number 1. Given a number field, the class number is often difficult to compute. The class number problem, going back to Gauss, is concerned with the existence of imaginary quadratic number fields (i.e., Q(√−d), d ≥ 1) with prescribed class number. The class number formula relates h to other fundamental invariants of K. It involves the Dedekind zeta function ζ_K(s), a function in a complex variable s, defined by
ζ_K(s) := ∏_p 1/(1 − N(p)^(−s)).
(The product is over all prime ideals p of O_K, and N(p) denotes the norm of the prime ideal or, equivalently, the (finite) number of elements in the residue field O_K/p. The infinite product converges only for Re(s) > 1; in general, analytic continuation and the functional equation for the zeta function are needed to define the function for all s.)
The Dedekind zeta function generalizes the Riemann zeta function in that ζ_Q(s) = ζ(s).
The class number formula states that ζ_K(s) has a simple pole at s = 1, and at this point the residue is given by
(2^r₁ · (2π)^r₂ · h · Reg) / (w · √|D|).
Here r₁ and r₂ classically denote the number of real embeddings and pairs of complex embeddings of K, respectively. Moreover, Reg is the regulator of K, w is the number of roots of unity in K, and D is the discriminant of K.
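A numerical sketch for K = Q(i), using standard facts not stated in the article: r₁ = 0, r₂ = 1, h = 1, Reg = 1, w = 4, D = −4, and the factorization ζ_K(s) = ζ(s)·L(s, χ₄), so the residue at s = 1 equals L(1, χ₄), the Leibniz series for π/4:

```python
import math

# Class number formula for K = Q(i): residue = (2*pi * h * Reg)/(w * sqrt(|D|))
predicted = (2 * math.pi * 1 * 1) / (4 * math.sqrt(4))

# Independently: residue = L(1, chi_4) = 1 - 1/3 + 1/5 - 1/7 + ...
L1 = sum((-1) ** k / (2 * k + 1) for k in range(200000))

assert abs(predicted - math.pi / 4) < 1e-12
assert abs(L1 - predicted) < 1e-4     # Leibniz series converges slowly
```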
Dirichlet L-functions L(χ, s) are a more refined variant of ζ(s). Both types of functions encode the arithmetic behavior of Q and K, respectively. For example, Dirichlet's theorem asserts that in any arithmetic progression
a, a + m, a + 2m, …
with coprime a and m, there are infinitely many prime numbers. This theorem is implied by the fact that the Dirichlet L-function L(χ, s) is nonzero at s = 1 for non-trivial characters χ. Using much more advanced techniques, including algebraic K-theory and Tamagawa measures, modern number theory deals with a description, if largely conjectural (see Tamagawa number conjecture), of values of more general L-functions.
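A finite spot check of Dirichlet's theorem (the progression a = 3, m = 10 is an arbitrary illustrative choice):

```python
from math import gcd

def is_prime(n):
    # Simple trial division, sufficient for a small demonstration.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

a, m = 3, 10
assert gcd(a, m) == 1   # Dirichlet's theorem requires coprime a and m
# Primes among the first 100 terms of the progression 3, 13, 23, 33, ...
primes = [a + k * m for k in range(100) if is_prime(a + k * m)]
assert primes[0] == 3
assert len(primes) > 10   # the progression contains many primes
```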
== Bases for number fields ==
=== Integral basis ===
An integral basis for a number field K of degree n is a set
B = {b_1, …, b_n}
of n algebraic integers in K such that every element of the ring of integers O_K of K can be written uniquely as a Z-linear combination of elements of B; that is, for any x in O_K we have
x = m_1 b_1 + ⋯ + m_n b_n,
where the m_i are (ordinary) integers. It is then also the case that any element of K can be written uniquely as
m_1 b_1 + ⋯ + m_n b_n,
where now the m_i are rational numbers. The algebraic integers of K are then precisely those elements of K where the m_i are all integers.
Working locally and using tools such as the Frobenius map, it is always possible to explicitly compute such a basis, and it is now standard for computer algebra systems to have built-in programs to do this.
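As a concrete illustration (a standard example, assumed here rather than taken from the article): for K = Q(√−3), the ring of integers is larger than Z[√−3], and an integral basis is {1, (1 + √−3)/2} (the Eisenstein integers). The second basis element is itself an algebraic integer:

```python
from sympy import sqrt, minimal_polynomial, Symbol, expand

t = Symbol('t')
# (1 + sqrt(-3))/2 is an algebraic integer: its minimal polynomial over Q
# is t**2 - t + 1, monic with integer coefficients.
w = (1 + sqrt(-3)) / 2
p = minimal_polynomial(w, t)
assert expand(p - (t**2 - t + 1)) == 0
```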
=== Power basis ===
Let K be a number field of degree n. Among all possible bases of K (seen as a Q-vector space), there are particular ones known as power bases, that is, bases of the form
B_x = {1, x, x², …, x^(n−1)}
for some element x ∈ K. By the primitive element theorem, there exists such an x, called a primitive element. If x can be chosen in O_K such that B_x is a basis of O_K as a free Z-module, then B_x is called a power integral basis, and the field K is called a monogenic field. An example of a number field that is not monogenic was first given by Dedekind. His example is the field obtained by adjoining a root of the polynomial
x³ − x² − 2x − 8.
== Regular representation, trace and discriminant ==
Recall that any field extension K/Q has a unique Q-vector space structure. Using the multiplication in K, an element x of the field K over the base field Q may be represented by n × n matrices
A = A(x) = (a_ij), 1 ≤ i, j ≤ n,
by requiring
x e_i = ∑_(j=1)^n a_ij e_j, with a_ij ∈ Q.
Here e_1, …, e_n is a fixed basis for K, viewed as a Q-vector space. The rational numbers a_ij are uniquely determined by x and the choice of a basis, since any element of K can be uniquely represented as a linear combination of the basis elements. This way of associating a matrix to any element of the field K is called the regular representation. The square matrix A represents the effect of multiplication by x in the given basis. It follows that if the element y of K is represented by a matrix B, then the product xy is represented by the matrix product BA. Invariants of matrices, such as the trace, determinant, and characteristic polynomial, depend solely on the field element x and not on the basis. In particular, the trace of the matrix A(x) is called the trace of the field element x and denoted Tr(x), and the determinant is called the norm of x and denoted N(x).
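A minimal sketch of the regular representation for Q(√2), with the basis choice (1, √2) assumed for illustration. Multiplication by x = a + b√2 sends the coordinate vector (y₀, y₁) of y = y₀ + y₁√2 to A(x)·(y₀, y₁), and the trace and norm of x can be read off the matrix:

```python
from sympy import Matrix, symbols, expand

a, b = symbols('a b')
# x*(y0 + y1*sqrt(2)) = (a*y0 + 2*b*y1) + (b*y0 + a*y1)*sqrt(2),
# so multiplication by x = a + b*sqrt(2) acts on coordinates by:
A = Matrix([[a, 2 * b],
            [b, a]])
assert A.trace() == 2 * a                          # Tr(a + b*sqrt(2)) = 2a
assert expand(A.det() - (a**2 - 2 * b**2)) == 0    # N(a + b*sqrt(2)) = a^2 - 2b^2
```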
Now this can be generalized slightly by instead considering a field extension K/L and giving an L-basis for K. Then there is an associated matrix A_(K/L)(x), whose trace Tr_(K/L)(x) and norm N_(K/L)(x) are defined as the trace and determinant of the matrix A_(K/L)(x).
=== Example ===
Consider the field extension Q(θ) with θ = ζ₃·∛2, where ζ₃ denotes the cube root of unity exp(2πi/3). Then we have a Q-basis given by
{1, ζ₃·∛2, (ζ₃·∛2)²},
since any x ∈ Q(θ) can be expressed as a Q-linear combination
x = a + b·ζ₃·∛2 + c·(ζ₃·∛2)² = a + bθ + cθ².
We proceed to calculate the trace T(x) and norm N(x) of this number. To this end, we take an arbitrary y ∈ Q(θ), where y = y₀ + y₁θ + y₂θ², and compute the product xy. Since θ³ = 2, writing this out gives
xy = a(y₀ + y₁θ + y₂θ²) + b(2y₂ + y₀θ + y₁θ²) + c(2y₁ + 2y₂θ + y₀θ²).
We can find the matrix A(x) such that xy = A(x)y, where y is identified with its coordinate vector (y₀, y₁, y₂). Writing out the associated matrix equation gives
A(x)(y₀, y₁, y₂)ᵀ = (a y₀ + 2c y₁ + 2b y₂, b y₀ + a y₁ + 2c y₂, c y₀ + b y₁ + a y₂)ᵀ,
showing that
A(x) =
[ a  2c  2b ]
[ b   a  2c ]
[ c   b   a ]
is the matrix that governs multiplication by the number x.
We can now easily compute the trace and determinant: T(x) = 3a, and N(x) = a³ + 2b³ + 4c³ − 6abc.
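These two formulas can be verified symbolically from the multiplication matrix in the example above:

```python
from sympy import Matrix, symbols, expand

a, b, c = symbols('a b c')
# The multiplication matrix A(x) for x = a + b*theta + c*theta**2,
# with theta**3 = 2, as computed in the example.
A = Matrix([[a, 2 * c, 2 * b],
            [b, a, 2 * c],
            [c, b, a]])
assert A.trace() == 3 * a
assert expand(A.det() - (a**3 + 2 * b**3 + 4 * c**3 - 6 * a * b * c)) == 0
```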
=== Properties ===
By definition, standard properties of traces and determinants of matrices carry over to Tr and N: Tr(x) is a linear function of x, as expressed by Tr(x + y) = Tr(x) + Tr(y) and Tr(λx) = λ Tr(x), and the norm is a multiplicative homogeneous function of degree n: N(xy) = N(x) N(y), N(λx) = λⁿ N(x). Here λ is a rational number, and x, y are any two elements of K.
The trace form is a bilinear form defined by means of the trace, as
Tr_(K/L) : K ⊗_L K → L, Tr_(K/L)(x ⊗ y) = Tr_(K/L)(x · y),
and extending linearly. The integral trace form, an integer-valued symmetric matrix, is defined as
t_ij = Tr_(K/Q)(b_i b_j),
where b_1, …, b_n is an integral basis for K. The discriminant of K is defined as det(t). It is an integer, and is an invariant property of the field K, not depending on the choice of integral basis.
The matrix associated to an element x of K can also be used to give other, equivalent descriptions of algebraic integers. An element x of K is an algebraic integer if and only if the characteristic polynomial p_A of the matrix A associated to x is a monic polynomial with integer coefficients. Suppose that the matrix A that represents an element x has integer entries in some basis e. By the Cayley–Hamilton theorem, p_A(A) = 0, and it follows that p_A(x) = 0, so that x is an algebraic integer. Conversely, if x is an element of K that is a root of a monic polynomial with integer coefficients, then the same property holds for the corresponding matrix A. In this case it can be proven that A is an integer matrix in a suitable basis of K. The property of being an algebraic integer is defined in a way that is independent of the choice of a basis in K.
=== Example with integral basis ===
Consider K = Q(x), where x satisfies x³ − 11x² + x + 1 = 0. Then an integral basis is [1, x, (x² + 1)/2], and the corresponding integral trace form is
[ 3    11    61  ]
[ 11   119   653 ]
[ 61   653   3589 ].
The "3" in the upper left-hand corner of this matrix is the trace of the matrix of the map defined by the first basis element (1) in the regular representation of K on K. This basis element induces the identity map on the 3-dimensional vector space K, and the trace of the matrix of the identity map on a 3-dimensional vector space is 3.
The determinant of this matrix is 1304 = 2³·163, the field discriminant; in comparison, the root discriminant, or discriminant of the polynomial, is 5216 = 2⁵·163.
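The discriminant computation from the example above can be checked directly:

```python
# The integral trace form from the example, as nested lists.
T = [[3, 11, 61],
     [11, 119, 653],
     [61, 653, 3589]]

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

disc = det3(T)
assert disc == 1304          # = 2**3 * 163, the field discriminant
```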
== Places ==
Mathematicians of the nineteenth century assumed that algebraic numbers were a type of complex number. This situation changed with the discovery of p-adic numbers by Hensel in 1897; and now it is standard to consider all of the various possible embeddings of a number field K into its various topological completions K_p at once.
A place of a number field K is an equivalence class of absolute values on K. Essentially, an absolute value is a notion to measure the size of elements x of K. Two such absolute values are considered equivalent if they give rise to the same notion of smallness (or proximity). The equivalence relation |·|₀ ∼ |·|₁ between absolute values holds if there is some λ ∈ R, λ > 0, such that
|·|₀ = |·|₁^λ,
meaning we take the value of the norm |·|₁ to the λ-th power.
In general, the types of places fall into three regimes. Firstly (and mostly irrelevant), the trivial absolute value |·|₀, which takes the value 1 on all non-zero x ∈ K. The second and third classes are Archimedean places and non-Archimedean (or ultrametric) places. The completion of K with respect to a place |·|_p is given in both cases by taking Cauchy sequences in K and dividing out null sequences, that is, sequences {x_n}, n ∈ N, such that |x_n|_p tends to zero when n tends to infinity. This can be shown to be a field again, the so-called completion of K at the given place |·|_p, denoted K_p.
For K = Q, the following non-trivial norms occur (Ostrowski's theorem): the (usual) absolute value, sometimes denoted |·|_∞, which gives rise to the complete topological field of the real numbers R, and, for any prime number p, the p-adic absolute value, defined by
|q|_p = p^(−n), where q = p^n · a/b and a and b are integers not divisible by p.
It is used to construct the p-adic numbers Q_p. In contrast to the usual absolute value, the p-adic absolute value gets smaller when q is multiplied by p, leading to quite different behavior of Q_p as compared to R.
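The definition of |q|_p can be sketched directly in exact arithmetic (the helper `padic_abs` is illustrative, not a library function):

```python
from fractions import Fraction

def padic_abs(q, p):
    """|q|_p = p**(-n) where q = p**n * a/b with p dividing neither a nor b."""
    if q == 0:
        return 0
    q = Fraction(q)
    n = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:     # extract powers of p from the numerator
        num //= p
        n += 1
    while den % p == 0:     # powers of p in the denominator count negatively
        den //= p
        n -= 1
    return Fraction(1, p) ** n

assert padic_abs(63, 3) == Fraction(1, 9)      # 63 = 3**2 * 7
assert padic_abs(Fraction(5, 9), 3) == 9       # 5/9 = 3**(-2) * 5
# Multiplying by p makes a number p-adically smaller:
assert padic_abs(63 * 3, 3) < padic_abs(63, 3)
# Ultrametric inequality |x + y|_p <= max(|x|_p, |y|_p):
assert padic_abs(9 + 18, 3) <= max(padic_abs(9, 3), padic_abs(18, 3))
```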
The general situation typically considered is taking a number field K and a prime ideal p ∈ Spec(O_K) of its ring of integers O_K. Then there will be a unique place |·|_p : K → R_(≥0), called a non-Archimedean place. In addition, for every embedding σ : K → C there will be a place called an Archimedean place, denoted |·|_σ : K → R_(≥0). This statement is a theorem, also called Ostrowski's theorem.
=== Examples ===
The field K = Q[x]/(x⁶ − 2) = Q(θ), for θ = ζ·⁶√2, where ζ is a fixed 6th root of unity, provides a rich example for constructing explicit real and complex Archimedean embeddings, as well as non-Archimedean embeddings.
=== Archimedean places ===
Here we use the standard notation r₁ and r₂ for the number of real embeddings and of pairs of complex embeddings, respectively (see below).
Calculating the Archimedean places of a number field K is done as follows: let x be a primitive element of K, with minimal polynomial f (over Q). Over R, f will generally no longer be irreducible, but its irreducible (real) factors are either of degree one or two. Since there are no repeated roots, there are no repeated factors. The roots r of factors of degree one are necessarily real, and replacing x by r gives an embedding of K into R; the number of such embeddings is equal to the number of real roots of f. Restricting the standard absolute value on R to K gives an Archimedean absolute value on K; such an absolute value is also referred to as a real place of K. On the other hand, the roots of factors of degree two are pairs of conjugate complex numbers, which allows for two conjugate embeddings into C. Either one of this pair of embeddings can be used to define an absolute value on K, which is the same for both embeddings since they are conjugate. This absolute value is called a complex place of K.
If all roots of f above are real (respectively, complex) or, equivalently, any possible embedding K ⊆ C is actually forced to be inside R (resp. C), then K is called totally real (resp. totally complex).
=== Non-Archimedean or ultrametric places ===
To find the non-Archimedean places, let again f and x be as above. In Q_p, f splits into factors of various degrees, none of which are repeated, and the degrees of which add up to n, the degree of f. For each of these p-adically irreducible factors f_i, we may suppose that x satisfies f_i and obtain an embedding of K into an algebraic extension of finite degree over Q_p. Such a local field behaves in many ways like a number field, and the p-adic numbers may similarly play the role of the rationals; in particular, we can define the norm and trace in exactly the same way, now giving functions mapping to Q_p. By using this p-adic norm map N_(f_i) for the place f_i, we may define an absolute value corresponding to a given p-adically irreducible factor f_i of degree m by
|y|_(f_i) = |N_(f_i)(y)|_p^(1/m).
Such an absolute value is called an ultrametric, non-Archimedean, or p-adic place of K.
For any ultrametric place v we have that |x|_v ≤ 1 for any x in O_K, since the minimal polynomial for x has integer factors, and hence its p-adic factorization has factors in Z_p. Consequently, the norm term (constant term) of each factor is a p-adic integer, and one of these is the integer used for defining the absolute value for v.
=== Prime ideals in O_K ===
For an ultrametric place v, the subset of O_K defined by |x|_v < 1 is an ideal p of O_K. This relies on the ultrametricity of v: given x and y in p, then
|x + y|_v ≤ max(|x|_v, |y|_v) < 1.
Actually, p is even a prime ideal.
Conversely, given a prime ideal p of O_K, a discrete valuation can be defined by setting v_p(x) = n, where n is the biggest integer such that x ∈ pⁿ, the n-fold power of the ideal. This valuation can be turned into an ultrametric place. Under this correspondence, (equivalence classes of) ultrametric places of K correspond to prime ideals of O_K. For K = Q, this gives back Ostrowski's theorem: any prime ideal in Z (which is necessarily generated by a single prime number) corresponds to a non-Archimedean place and vice versa. However, for more general number fields, the situation becomes more involved, as will be explained below.
Yet another, equivalent way of describing ultrametric places is by means of localizations of O_K. Given an ultrametric place v on a number field K, the corresponding localization is the subring T of K of all elements x such that |x|_v ≤ 1. By the ultrametric property, T is a ring. Moreover, it contains O_K. For every element x of K, at least one of x or x⁻¹ is contained in T. In fact, since K^×/T^× can be shown to be isomorphic to the integers, T is a discrete valuation ring, in particular a local ring. Indeed, T is just the localization of O_K at the prime ideal p, so T = O_(K,p). Conversely, p is the maximal ideal of T.
==== Lying over theorem and places ====
Some of the basic theorems in algebraic number theory are the going up and going down theorems, which describe the behavior of some prime ideal 𝔭 ∈ Spec(OK) when it is extended as an ideal in OL for some field extension L/K. We say that an ideal 𝔬 ⊂ OL lies over 𝔭 if 𝔬 ∩ OK = 𝔭. Then, one incarnation of the theorem states that a prime ideal in Spec(OL) lies over 𝔭; hence there is always a surjective map Spec(OL) → Spec(OK) induced from the inclusion OK ↪ OL. Since there exists a correspondence between places and prime ideals, this means we can find places dividing a place that is induced from a field extension. That is, if p is a place of K, then there are places v of L that divide p, in the sense that their induced prime ideals divide the induced prime ideal of p in Spec(OL).
In fact, this observation is useful (pg. 13) while looking at the base change of an algebraic field extension of Q to one of its completions Qp. If we write K = Q[X]/(Q(X)) and write θ for the element of K induced by X, we get a decomposition of K ⊗Q Qp. Explicitly, this decomposition is
K ⊗Q Qp = (Q[X]/(Q(X))) ⊗Q Qp = Qp[X]/(Q(X));
furthermore, the induced polynomial Q(X) ∈ Qp[X] decomposes as
Q(X) = ∏v|p Qv
because of Hensel's lemma (pg. 129–131); hence
K ⊗Q Qp ≅ Qp[X]/(∏v|p Qv(X)) ≅ ⊕v|p Kv.
Moreover, there are embeddings
iv : K → Kv, θ ↦ θv,
where θv is a root of Qv, giving Kv = Qp(θv); hence we could write Kv = iv(K)Qp as subsets of Cp (which is the completion of the algebraic closure of Qp).
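The role of Hensel's lemma above can be made concrete: a simple root of a polynomial modulo p lifts to a root modulo any power of p by Newton iteration. The sketch below is ours (the choice Q(X) = X2 − 2 and p = 7 is illustrative, not taken from the text):

```python
def hensel_lift(f_coeffs, p, r, k):
    """Lift a simple root r of f mod p to a root mod p^k (Hensel / Newton lifting).
    f_coeffs lists the coefficients [a0, a1, ..., an] of f(X) = a0 + a1 X + ...;
    assumes f'(r) is invertible mod p, i.e. the root is simple."""
    def ev(cs, x, m):
        return sum(c * pow(x, i, m) for i, c in enumerate(cs)) % m
    fprime = [i * c for i, c in enumerate(f_coeffs)][1:]
    pk = p
    for _ in range(k - 1):
        pk *= p
        inv = pow(ev(fprime, r, pk), -1, pk)   # f'(r)^(-1) mod p^j (Python 3.8+)
        r = (r - ev(f_coeffs, r, pk) * inv) % pk
    return r, pk

# X^2 - 2 has the simple root 3 mod 7; lift it to a square root of 2 mod 7^4
r, m = hensel_lift([-2, 0, 1], 7, 3, 4)
assert m == 7 ** 4
assert (r * r - 2) % m == 0
```

Each Newton step at least doubles the p-adic precision of the root, so the loop comfortably reaches a root modulo p^k.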
== Ramification ==
Ramification, generally speaking, describes a geometric phenomenon that can occur with finite-to-one maps (that is, maps f : X → Y such that the preimages of all points y in Y consist of only finitely many points): the fibers f−1(y) generally have the same number of points, but at special points y this number can drop. For example, the map
C → C, z ↦ zn
has n points in each fiber over t, namely the n (complex) roots of t, except in t = 0, where the fiber consists of only one element, z = 0. One says that the map is "ramified" in zero. This is an example of a branched covering of Riemann surfaces. This intuition also serves to define ramification in algebraic number theory. Given a (necessarily finite) extension of number fields K/L, a prime ideal p of OL generates the ideal pOK of OK. This ideal may or may not be a prime ideal, but, according to the Lasker–Noether theorem (see above), it is always given by
pOK = q1e1 q2e2 ⋯ qmem
with uniquely determined prime ideals qi of OK and numbers (called ramification indices) ei. Whenever one ramification index is bigger than one, the prime p is said to ramify in K.
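As a worked instance of the factorization pOK = q1e1 ⋯ qmem, consider K = Q(i) with OK = Z[i] (this example is ours, not from the text): the shape of the factorization of a rational prime p mirrors that of X2 + 1 modulo p, so it can be detected by a one-line root search.

```python
def splitting_in_gaussian_integers(p):
    """Factorization type of a rational prime p in Z[i], read off from the
    roots of X^2 + 1 mod p (the minimal polynomial of i)."""
    if p == 2:
        return "ramified"                    # (2) = (1 + i)^2, ramification index e = 2
    roots = [x for x in range(p) if (x * x + 1) % p == 0]
    return "split" if roots else "inert"     # split iff -1 is a square mod p, i.e. p = 1 mod 4

assert splitting_in_gaussian_integers(2) == "ramified"
assert splitting_in_gaussian_integers(5) == "split"      # (5) = (2 + i)(2 - i)
assert splitting_in_gaussian_integers(7) == "inert"
# the odd split primes below 50 are exactly those congruent to 1 mod 4
odd = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
assert [p for p in odd if splitting_in_gaussian_integers(p) == "split"] == [5, 13, 17, 29, 37, 41]
```

Only p = 2 ramifies here, matching the fact that the discriminant of Q(i) is −4.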
The connection between this definition and the geometric situation is delivered by the map of spectra of rings Spec OK → Spec OL. In fact, unramified morphisms of schemes in algebraic geometry are a direct generalization of unramified extensions of number fields.
Ramification is a purely local property, i.e., depends only on the completions around the primes p and qi. The inertia group measures the difference between the local Galois groups at some place and the Galois groups of the involved finite residue fields.
=== An example ===
The following example illustrates the notions introduced above. In order to compute the ramification index of Q(x), where
f(x) = x3 − x − 1 = 0,
at 23, it suffices to consider the field extension Q23(x)/Q23. Up to 529 = 232 (i.e., modulo 529) f can be factored as
f(x) = (x + 181)(x2 − 181x − 38) = gh.
Substituting x = y + 10 in the first factor g modulo 529 yields y + 191, so the valuation | y |g for y given by g is | −191 |23 = 1. On the other hand, the same substitution in h yields y2 − 161y − 161 modulo 529. Since 161 = 7 × 23,
|y|h = √(|161|23) = 1/√23.
Since possible values for the absolute value of the place defined by the factor h are not confined to integer powers of 23, but instead are integer powers of the square root of 23, the ramification index of the field extension at 23 is two.
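The congruences used in this example are easy to verify mechanically. A minimal Python check (polynomial arithmetic on coefficient lists, lowest degree first) confirms both the factorization of f modulo 529 and the two substitutions x = y + 10:

```python
M = 529  # 23^2

def polymul(a, b, m):
    """Multiply two coefficient lists (lowest degree first) modulo m."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % m
    return out

def shift(coeffs, s, m):
    """Coefficients of p(y + s) mod m, computed by Horner's scheme."""
    out = [0]
    for c in reversed(coeffs):
        out = polymul(out, [s, 1], m)
        out[0] = (out[0] + c) % m
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

f = [-1, -1, 0, 1]    # x^3 - x - 1
g = [181, 1]          # x + 181
h = [-38, -181, 1]    # x^2 - 181x - 38

assert polymul(g, h, M) == [c % M for c in f]            # f = g h  (mod 529)
assert shift(g, 10, M) == [191, 1]                       # g(y + 10) = y + 191
assert shift(h, 10, M) == [(-161) % M, (-161) % M, 1]    # h(y + 10) = y^2 - 161y - 161
```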
The valuations of any element of K can be computed in this way using resultants. If, for example, y = x2 − x − 1, using the resultant to eliminate x between this relationship and f = x3 − x − 1 = 0 gives y3 − 5y2 + 4y − 1 = 0. If instead we eliminate with respect to the factors g and h of f, we obtain the corresponding factors for the polynomial for y, and then the 23-adic valuation applied to the constant (norm) term allows us to compute the valuations of y for g and h (which are both 1 in this instance).
=== Dedekind discriminant theorem ===
Much of the significance of the discriminant lies in the fact that ramified ultrametric places are all places obtained from factorizations in Qp where p divides the discriminant. This is even true of the polynomial discriminant; however, the converse is also true: if a prime p divides the discriminant, then there is a p-place that ramifies. For this converse the field discriminant is needed. This is the Dedekind discriminant theorem. In the example above, the discriminant of the number field Q(x) with x3 − x − 1 = 0 is −23, and as we have seen the 23-adic place ramifies. The Dedekind discriminant theorem tells us it is the only ultrametric place that does. The other ramified place comes from the absolute value on the complex embedding of K.
== Galois groups and Galois cohomology ==
Generally in abstract algebra, field extensions K / L can be studied by examining the Galois group Gal(K / L), consisting of field automorphisms of K leaving L elementwise fixed. As an example, the Galois group Gal(Q(ζn)/Q) of the cyclotomic field extension of degree n (see above) is given by (Z/nZ)×, the group of invertible elements in Z/nZ. This is the first stepping stone into Iwasawa theory.
In order to include all possible extensions having certain properties, the Galois group concept is commonly applied to the (infinite) field extension K / K of the algebraic closure, leading to the absolute Galois group G := Gal(K / K) or just Gal(K), and to the extension K/Q. The fundamental theorem of Galois theory links the fields in between K and its algebraic closure with the closed subgroups of Gal(K). For example, the abelianization (the biggest abelian quotient) Gab of G corresponds to a field referred to as the maximal abelian extension Kab (called so since any further extension is not abelian, i.e., does not have an abelian Galois group). By the Kronecker–Weber theorem, the maximal abelian extension of Q is the extension generated by all roots of unity. For more general number fields, class field theory, specifically the Artin reciprocity law, gives an answer by describing Gab in terms of the idele class group. Also notable is the Hilbert class field, the maximal abelian unramified field extension of K. It can be shown to be finite over K; its Galois group over K is isomorphic to the class group of K; in particular its degree equals the class number h of K (see above).
In certain situations, the Galois group acts on other mathematical objects, for example a group. Such a group is then also referred to as a Galois module. This enables the use of group cohomology for the Galois group Gal(K), also known as Galois cohomology, which in the first place measures the failure of exactness of taking Gal(K)-invariants, but offers deeper insights (and questions) as well. For example, the Galois group G of a field extension L / K acts on L×, the nonzero elements of L. This Galois module plays a significant role in many arithmetic dualities, such as Poitou–Tate duality. The Brauer group of K, originally conceived to classify division algebras over K, can be recast as a cohomology group, namely H2(Gal(K), K×).
== Local-global principle ==
Generally speaking, the term "local to global" refers to the idea that a global problem is first done at a local level, which tends to simplify the questions. Then, of course, the information gained in the local analysis has to be put together to get back to some global statement. For example, the notion of sheaves reifies that idea in topology and geometry.
=== Local and global fields ===
Number fields share a great deal of similarity with another class of fields much used in algebraic geometry, known as function fields of algebraic curves over finite fields. An example is Fp(T). They are similar in many respects, for example in that number rings are one-dimensional regular rings, as are the coordinate rings (whose quotient fields are the function fields in question) of curves. Therefore, both types of field are called global fields. In accordance with the philosophy laid out above, they can be studied at a local level first, that is to say, by looking at the corresponding local fields. For number fields K, the local fields are the completions of K at all places, including the archimedean ones (see local analysis). For function fields, the local fields are completions of the local rings at all points of the curve.
Many results valid for function fields also hold, at least if reformulated properly, for number fields. However, the study of number fields often poses difficulties and phenomena not encountered in function fields. For example, in function fields, there is no dichotomy into non-archimedean and archimedean places. Nonetheless, function fields often serve as a source of intuition for what should be expected in the number field case.
=== Hasse principle ===
A prototypical question, posed at a global level, is whether some polynomial equation has a solution in K. If this is the case, this solution is also a solution in all completions. The local-global principle or Hasse principle asserts that, for quadratic equations, the converse holds as well. Thereby, checking whether such an equation has a solution can be done on all the completions of K, which is often easier, since analytic methods (classical analytic tools such as the intermediate value theorem at the archimedean places and p-adic analysis at the nonarchimedean places) can be used. This implication does not hold, however, for more general types of equations. However, the idea of passing from local data to global ones proves fruitful in class field theory, for example, where local class field theory is used to obtain global insights mentioned above. This is also related to the fact that the Galois groups of the completions Kv can be explicitly determined, whereas the Galois groups of global fields, even of Q, are far less understood.
=== Adeles and ideles ===
In order to assemble local data pertaining to all local fields attached to K, the adele ring is set up. A multiplicative variant is referred to as the ideles.
== See also ==
=== Generalizations ===
Algebraic function field
=== Algebraic number theory ===
Dirichlet's unit theorem, S-unit
Kummer extension
Minkowski's theorem, Geometry of numbers
Chebotarev's density theorem
=== Class field theory ===
Ray class group
Decomposition group
Genus field
== Notes ==
== References ==
Keune, Frans (2023). Number Fields. Radboud University Press. ISBN 9789493296039.
Keith Conrad, http://www.math.uconn.edu/~kconrad/blurbs/gradnumthy/unittheorem.pdf
Janusz, Gerald J. (1996), Algebraic Number Fields (2nd ed.), Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0429-2
Helmut Hasse, Number Theory, Springer Classics in Mathematics Series (2002)
Serge Lang, Algebraic Number Theory, second edition, Springer, 2000
Richard A. Mollin, Algebraic Number Theory, CRC, 1999
Ram Murty, Problems in Algebraic Number Theory, Second Edition, Springer, 2005
Narkiewicz, Władysław (2004), Elementary and analytic theory of algebraic numbers, Springer Monographs in Mathematics (3 ed.), Berlin: Springer-Verlag, ISBN 978-3-540-21902-6, MR 2078267
Neukirch, Jürgen (1999), Algebraic number theory, Grundlehren der Mathematischen Wissenschaften, vol. 322, Berlin, New York: Springer-Verlag, ISBN 978-3-540-65399-8, MR 1697859, Zbl 0956.11021
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2000), Cohomology of Number Fields, Grundlehren der Mathematischen Wissenschaften, vol. 323, Berlin, New York: Springer-Verlag, ISBN 978-3-540-66671-4, MR 1737196, Zbl 1136.11001
André Weil, Basic Number Theory, third edition, Springer, 1995 | Wikipedia/Algebraic_number_field |
In algebra, the free product (coproduct) of a family of associative algebras Ai, i ∈ I, over a commutative ring R is the associative algebra over R that is, roughly, defined by the generators and the relations of the Ai's. The free product of two algebras A, B is denoted by A ∗ B. The notion is a ring-theoretic analog of a free product of groups.
In the category of commutative R-algebras, the free product of two algebras (in that category) is their tensor product.
== Construction ==
We first define a free product of two algebras. Let A and B be algebras over a commutative ring R. Consider their tensor algebra, the direct sum of all possible finite tensor products of A, B; explicitly,
T = ⊕n=0∞ Tn
where
T0 = R, T1 = A ⊕ B, T2 = (A ⊗ A) ⊕ (A ⊗ B) ⊕ (B ⊗ A) ⊕ (B ⊗ B), T3 = ⋯
We then set
A ∗ B = T/I
where I is the two-sided ideal generated by elements of the form
a ⊗ a′ − aa′, b ⊗ b′ − bb′, 1A − 1B.
One then verifies that the universal property of the coproduct holds for this construction (this is straightforward).
A finite free product is defined similarly.
== References ==
K. I. Beidar, W. S. Martindale and A. V. Mikhalev, Rings with generalized identities, Section 1.4. This reference was mentioned in "Coproduct in the category of (noncommutative) associative algebras". Stack Exchange. May 9, 2012.
== External links ==
"How to construct the coproduct of two (non-commutative) rings". Stack Exchange. January 3, 2014. | Wikipedia/Free_product_of_associative_algebras |
In mathematics, the tensor product of two algebras over a commutative ring R is also an R-algebra. This gives the tensor product of algebras. When the ring is a field, the most common application of such products is to describe the product of algebra representations.
== Definition ==
Let R be a commutative ring and let A and B be R-algebras. Since A and B may both be regarded as R-modules, their tensor product
A ⊗R B
is also an R-module. The tensor product can be given the structure of a ring by defining the product on elements of the form a ⊗ b by
(a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2
and then extending by linearity to all of A ⊗R B. This ring is an R-algebra, associative and unital, with identity element 1A ⊗ 1B, where 1A and 1B are the identity elements of A and B. If A and B are commutative, then the tensor product is commutative as well.
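As a concrete sketch of this multiplication rule (our choice of example, not from the text), take R = Q, A = Q(√2) ≅ Q[s]/(s2 − 2) and B = Q(√3): an element of A ⊗Q B is a Q-linear combination of the four basis tensors si ⊗ tj, and the product rule above becomes a computation on coefficient matrices.

```python
from fractions import Fraction

def tp_mul(u, v):
    """Product in Q(sqrt2) (x)_Q Q(sqrt3) on the basis {1, s2} (x) {1, s3}:
    u[i][j] is the coefficient of s2^i (x) s3^j, with s2^2 = 2 and s3^2 = 3."""
    out = [[Fraction(0)] * 2 for _ in range(2)]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    c = u[i][j] * v[k][l]
                    if i + k == 2:      # s2 * s2 reduces to the scalar 2
                        c *= 2
                    if j + l == 2:      # s3 * s3 reduces to the scalar 3
                        c *= 3
                    out[(i + k) % 2][(j + l) % 2] += c
    return out

zero, one = Fraction(0), Fraction(1)
s2_s3 = [[zero, zero], [zero, one]]   # the pure tensor s2 (x) s3
# (s2 (x) s3)(s2 (x) s3) = s2^2 (x) s3^2 = 6 (1 (x) 1)
assert tp_mul(s2_s3, s2_s3) == [[Fraction(6), zero], [zero, zero]]
```

This is exactly the rule (a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2 written out on a basis.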
The tensor product turns the category of R-algebras into a symmetric monoidal category.
== Further properties ==
There are natural homomorphisms from A and B to A ⊗R B given by
a ↦ a ⊗ 1B,
b ↦ 1A ⊗ b.
These maps make the tensor product the coproduct in the category of commutative R-algebras. The tensor product is not the coproduct in the category of all R-algebras. There the coproduct is given by a more general free product of algebras. Nevertheless, the tensor product of non-commutative algebras can be described by a universal property similar to that of the coproduct:
Hom(A ⊗ B, X) ≅ {(f, g) ∈ Hom(A, X) × Hom(B, X) ∣ ∀a ∈ A, b ∈ B : [f(a), g(b)] = 0},
where [-, -] denotes the commutator.
The natural isomorphism is given by identifying a morphism φ : A ⊗ B → X on the left hand side with the pair of morphisms (f, g) on the right hand side, where f(a) := φ(a ⊗ 1) and similarly g(b) := φ(1 ⊗ b).
== Applications ==
The tensor product of commutative algebras is of frequent use in algebraic geometry. For affine schemes X, Y, Z with morphisms from X and Z to Y, so X = Spec(A), Y = Spec(R), and Z = Spec(B) for some commutative rings A, R, B, the fiber product scheme is the affine scheme corresponding to the tensor product of algebras:
X ×Y Z = Spec(A ⊗R B).
More generally, the fiber product of schemes is defined by gluing together affine fiber products of this form.
== Examples ==
The tensor product can be used as a means of taking intersections of two subschemes in a scheme: consider the C[x, y]-algebras C[x, y]/(f) and C[x, y]/(g); then their tensor product is
C[x, y]/(f) ⊗C[x, y] C[x, y]/(g) ≅ C[x, y]/(f, g),
which describes the intersection of the algebraic curves f = 0 and g = 0 in the affine plane over C.
More generally, if A is a commutative ring and I, J ⊆ A are ideals, then
(A/I) ⊗A (A/J) ≅ A/(I + J),
with a unique isomorphism sending (a + I) ⊗ (b + J) to ab + I + J.
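For A = Z with I = (m) and J = (n), this isomorphism specializes (our worked instance) to Z/m ⊗Z Z/n ≅ Z/gcd(m, n), since (m) + (n) = (gcd(m, n)). A short check that the map on pure tensors is well defined and bilinear:

```python
from math import gcd

def tensor_image(a, b, m, n):
    """Image of (a + (m)) (x) (b + (n)) under Z/m (x)_Z Z/n ≅ Z/gcd(m, n)."""
    return (a * b) % gcd(m, n)

m, n = 12, 18
assert gcd(m, n) == 6                      # (12) + (18) = (6)
# well-definedness: the image depends only on a mod 12 and b mod 18
assert tensor_image(5, 7, m, n) == tensor_image(5 + m, 7 + n, m, n)
# bilinearity on pure tensors: (a1 + a2) (x) b = a1 (x) b + a2 (x) b
assert tensor_image(5 + 4, 7, m, n) == (tensor_image(5, 7, m, n) + tensor_image(4, 7, m, n)) % 6
```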
Tensor products can be used as a means of changing coefficients. For example,
Z[x, y]/(x3 + 5x2 + x − 1) ⊗Z Z/5 ≅ (Z/5)[x, y]/(x3 + x − 1)
and
Z[x, y]/(f) ⊗Z C ≅ C[x, y]/(f).
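The first isomorphism is just reduction of coefficients modulo 5, under which the 5x2 term dies. A tiny sketch, encoding a polynomial as a map from exponent to coefficient:

```python
def reduce_mod(poly, p):
    """Reduce integer coefficients mod p, dropping terms that vanish.
    A polynomial is encoded as a dict mapping exponent -> coefficient."""
    return {e: c % p for e, c in poly.items() if c % p != 0}

f = {3: 1, 2: 5, 1: 1, 0: -1}                    # x^3 + 5x^2 + x - 1 over Z
assert reduce_mod(f, 5) == {3: 1, 1: 1, 0: 4}    # x^3 + x + 4 = x^3 + x - 1 in Z/5
```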
Tensor products also can be used for taking products of affine schemes over a field. For example,
C[x1, x2]/(f(x)) ⊗C C[y1, y2]/(g(y))
is isomorphic to the algebra
C[x1, x2, y1, y2]/(f(x), g(y)),
which corresponds to an affine surface in AC4 if f and g are not zero.
Given R-algebras A and B whose underlying rings are graded-commutative rings, the tensor product A ⊗R B becomes a graded-commutative ring by defining
(a ⊗ b)(a′ ⊗ b′) = (−1)|b||a′| aa′ ⊗ bb′
for homogeneous a, a′, b, and b′.
== See also ==
Extension of scalars
Tensor product of modules
Tensor product of fields
Linearly disjoint
Multilinear subspace learning
== Notes ==
== References ==
Kassel, Christian (1995), Quantum groups, Graduate texts in mathematics, vol. 155, Springer, ISBN 978-0-387-94370-1.
Lang, Serge (2002) [first published in 1993]. Algebra. Graduate Texts in Mathematics. Vol. 21. Springer. ISBN 0-387-95385-X. | Wikipedia/Tensor_product_of_algebras |
In algebraic geometry, an affine variety or affine algebraic variety is a certain kind of algebraic variety that can be described as a subset of an affine space.
More formally, an affine algebraic set is the set of the common zeros over an algebraically closed field k of some family of polynomials in the polynomial ring k[x1, …, xn].
An affine variety is an affine algebraic set which is not the union of two smaller algebraic sets; algebraically, this means that (the radical of) the ideal generated by the defining polynomials is prime. One-dimensional affine varieties are called affine algebraic curves, while two-dimensional ones are affine algebraic surfaces.
Some texts use the term variety for any algebraic set, and irreducible variety for an algebraic set whose defining ideal is prime (an affine variety in the above sense).
In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field k in which the coefficients are considered from the algebraically closed field K (containing k) over which the common zeros are considered (that is, the points of the affine algebraic set are in Kn). In this case, the variety is said to be defined over k, and the points of the variety that belong to kn are said to be k-rational or rational over k. In the common case where k is the field of real numbers, a k-rational point is called a real point. When the field k is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by xn + yn − 1 = 0 has no rational points with x and y both nonzero, for any integer n greater than two.
== Introduction ==
An affine algebraic set is the set of solutions in an algebraically closed field k of a system of polynomial equations with coefficients in k. More precisely, if
f1, …, fm are polynomials with coefficients in k, they define an affine algebraic set
V(f1, …, fm) = {(a1, …, an) ∈ kn | f1(a1, …, an) = ⋯ = fm(a1, …, an) = 0}.
An affine (algebraic) variety is an affine algebraic set that is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible.
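The definition above asks for an algebraically closed field, but the bookkeeping behind V(f1, …, fm) can be explored by brute force over a small finite field as a toy model (a sketch only; F5 is of course not algebraically closed):

```python
from itertools import product

def algebraic_set(polys, q, n=2):
    """Brute-force V(f_1, ..., f_m) inside F_q^n for a prime q: keep the
    points where every polynomial vanishes mod q. Each poly is a Python
    function of an n-tuple of coordinates."""
    return [pt for pt in product(range(q), repeat=n)
            if all(f(pt) % q == 0 for f in polys)]

# V(x^2 + y^2 - 1) in F_5^2: only the four "axis" points survive,
# since 2 and 3 are not squares mod 5
circle = algebraic_set([lambda p: p[0] ** 2 + p[1] ** 2 - 1], 5)
assert set(circle) == {(0, 1), (0, 4), (1, 0), (4, 0)}
```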
If X is an affine algebraic set, and I is the ideal of all polynomials that are zero on X, then the quotient ring
R = k[x1, …, xn]/I
is called the coordinate ring of X. If X is an affine variety, then I is prime, so the coordinate ring is an integral domain. The elements of the coordinate ring R are also called the regular functions or the polynomial functions on the variety. They form the ring of regular functions on the variety, or, simply, the ring of the variety; in more technical terms (see § Structure sheaf), it is the space of global sections of the structure sheaf of X.
The dimension of a variety is an integer associated to every variety, and even to every algebraic set, whose importance relies on the large number of its equivalent definitions (see Dimension of an algebraic variety).
== Examples ==
The complement of a hypersurface in an affine variety X (that is X \ { f = 0 } for some polynomial f) is affine. Its defining equations are obtained by saturating by f the defining ideal of X. The coordinate ring is thus the localization
k[X][f−1]
. For instance, for X = kn and f ∈ k[x1,..., xn], kn \ { f = 0 } is isomorphic to the hypersurface V(1 − xn+1f) in kn+1.
In particular, k − 0 (the affine line with the origin removed) is affine, isomorphic to the curve V(1 − xy) in k2 (see Algebraic group § Examples).
On the other hand, k2 − 0 (the affine plane with the origin removed) is not an affine variety (compare this to Hartogs' extension theorem in complex analysis). See Spectrum of a ring § Non-affine examples.
The subvarieties of codimension one in the affine space kn are exactly the hypersurfaces, that is, the varieties defined by a single polynomial.
The normalization of an irreducible affine variety is affine; the coordinate ring of the normalization is the integral closure of the coordinate ring of the variety. (Similarly, the normalization of a projective variety is a projective variety.)
== Rational points ==
For an affine variety V ⊆ Kn over an algebraically closed field K, and a subfield k of K, a k-rational point of V is a point p ∈ V ∩ kn, that is, a point of V whose coordinates are elements of k. The collection of k-rational points of an affine variety V is often denoted V(k).
Often, if the base field is the complex numbers C, points that are R-rational (where R is the real numbers) are called real points of the variety, and Q-rational points (Q the rational numbers) are often simply called rational points.
For instance, (1, 0) is a Q-rational and an R-rational point of the variety V = V(x2 + y2 − 1) ⊆ C2, as it is in V and all its coordinates are integers. The point (√2/2, √2/2) is a real point of V that is not Q-rational, and (i, √2) is a point of V that is not R-rational. This variety is called a circle, because the set of its R-rational points is the unit circle. It has infinitely many Q-rational points, namely the points
((1 − t2)/(1 + t2), 2t/(1 + t2)),
where t is a rational number.
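Exact rational arithmetic confirms that this parametrization really lands on V(x2 + y2 − 1); a sketch using Python's Fraction (the familiar 3-4-5 Pythagorean point appears at t = 1/2):

```python
from fractions import Fraction

def circle_point(t):
    """The Q-rational point of the unit circle attached to the rational parameter t."""
    t = Fraction(t)
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

for t in (0, 1, Fraction(1, 2), Fraction(-3, 7)):
    x, y = circle_point(t)
    assert x * x + y * y == 1      # exact arithmetic: the point lies on V(x^2 + y^2 - 1)

assert circle_point(Fraction(1, 2)) == (Fraction(3, 5), Fraction(4, 5))
```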
The circle V(x2 + y2 − 3) ⊆ C2 is an example of an algebraic curve of degree two that has no Q-rational point. This can be deduced from the fact that, modulo 4, the sum of two squares cannot be 3.
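The modulo-4 obstruction is a two-line computation (after clearing denominators, a rational point would yield coprime integers with p2 + q2 = 3r2, which the congruence below rules out):

```python
# squares mod 4 are only 0 and 1, so a sum of two squares is 0, 1 or 2 mod 4
squares_mod4 = {x * x % 4 for x in range(4)}
assert squares_mod4 == {0, 1}
assert 3 not in {(a + b) % 4 for a in squares_mod4 for b in squares_mod4}
```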
It can be proved that an algebraic curve of degree two with a Q-rational point has infinitely many other Q-rational points; each such point is the second intersection point of the curve and a line with a rational slope passing through the rational point.
The complex variety V(x2 + y2 + 1) ⊆ C2 has no R-rational points, but has many complex points.
If V is an affine variety in C2 defined over the complex numbers C, the R-rational points of V can be drawn on a piece of paper or by graphing software. The figure on the right shows the R-rational points of V(y2 − x3 + x2 + 16x) ⊆ C2.
== Singular points and tangent space ==
Let V be an affine variety defined by the polynomials f1, …, fr ∈ k[x1, …, xn], and let a = (a1, …, an) be a point of V.
The Jacobian matrix JV(a) of V at a is the matrix of the partial derivatives
∂fj/∂xi (a1, …, an).
The point a is regular if the rank of JV(a) equals the codimension of V, and singular otherwise.
If a is regular, the tangent space to V at a is the affine subspace of kn defined by the linear equations
∑i=1n ∂fj/∂xi (a1, …, an)(xi − ai) = 0,  j = 1, …, r.
If the point is singular, the affine subspace defined by these equations is also called a tangent space by some authors, while other authors say that there is no tangent space at a singular point.
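A quick illustration of the Jacobian criterion (our own example, the nodal cubic y2 = x3 + x2, a hypersurface of codimension 1 in k2): the Jacobian is the 1 × 2 row (−3x2 − 2x, 2y), so a point of the curve is singular exactly when both partials vanish.

```python
def jacobian_rank(point):
    """Rank of the Jacobian (df/dx, df/dy) of f = y^2 - x^3 - x^2 at a point:
    0 if both partial derivatives vanish there, 1 otherwise."""
    x, y = point
    fx = -3 * x * x - 2 * x
    fy = 2 * y
    return 0 if fx == 0 and fy == 0 else 1

# some points on the nodal cubic y^2 = x^3 + x^2
on_curve = [(0, 0), (-1, 0), (3, 6), (8, 24)]
assert all(y * y == x ** 3 + x ** 2 for x, y in on_curve)

# the origin is singular (rank 0 < codim = 1); the other points are regular
assert jacobian_rank((0, 0)) == 0
assert all(jacobian_rank(p) == 1 for p in on_curve[1:])
```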
A more intrinsic definition which does not use coordinates is given by Zariski tangent space.
== The Zariski topology ==
The affine algebraic sets of kn form the closed sets of a topology on kn, called the Zariski topology. This follows from the facts that V(0) = kn, V(1) = ∅, V(S) ∪ V(T) = V(ST), and V(S) ∩ V(T) = V(S, T) (in fact, an arbitrary intersection of affine algebraic sets is an affine algebraic set).
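These closed-set identities can be checked by brute force when k is replaced by a small finite field; the following sketch verifies the union and intersection rules on F₅² for two hypothetical polynomials (an illustration only, since the article's k is algebraically closed):

```python
from itertools import product

P = 5                                   # work over the finite field F_5
POINTS = list(product(range(P), repeat=2))

def V(*polys):
    """Common zero locus in F_5^2 of the given polynomial functions."""
    return {pt for pt in POINTS if all(f(*pt) % P == 0 for f in polys)}

f = lambda x, y: x + y                  # a line
g = lambda x, y: x * y - 1              # a "hyperbola"
fg = lambda x, y: f(x, y) * g(x, y)     # the product polynomial

assert V(f) | V(g) == V(fg)             # V(S) ∪ V(T) = V(ST)
assert V(f) & V(g) == V(f, g)           # V(S) ∩ V(T) = V(S, T)
```

The union rule uses that F₅ has no zero divisors: a product vanishes at a point exactly when one of the factors does.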
The Zariski topology can also be described by way of basic open sets: the Zariski-open sets are the unions of sets of the form Uf = {p ∈ kn : f(p) ≠ 0} for f ∈ k[x1, ..., xn].
These basic open sets are the complements in kn of the closed sets V(f) = {p ∈ kn : f(p) = 0}, the zero loci of a single polynomial. If k is Noetherian (for instance, if k is a field or a principal ideal domain), then k[x1, ..., xn] is Noetherian by the Hilbert basis theorem, so every ideal of k[x1, ..., xn] is finitely generated and every open set is a finite union of basic open sets.
If V is an affine subvariety of kn the Zariski topology on V is simply the subspace topology inherited from the Zariski topology on kn.
== Geometry–algebra correspondence ==
The geometric structure of an affine variety is linked in a deep way to the algebraic structure of its coordinate ring. Let I and J be ideals of k[V], the coordinate ring of an affine variety V. Let I(V) be the set of all polynomials in k[x1, ..., xn] that vanish on V, and let √I denote the radical of the ideal I, the set of polynomials f for which some power of f is in I. The reason that the base field is required to be algebraically closed is that affine varieties over such fields automatically satisfy Hilbert's Nullstellensatz: for an ideal J in k[x1, ..., xn], where k is an algebraically closed field,

I(V(J)) = √J.
Radical ideals (ideals that are their own radical) of k[V] correspond to algebraic subsets of V. Indeed, for radical ideals I and J, I ⊆ J if and only if V(J) ⊆ V(I). Hence V(I) = V(J) if and only if I = J. Furthermore, the function taking an affine algebraic set W to I(W), the set of all functions that vanish on all points of W, is the inverse of the function assigning an algebraic set to a radical ideal, by the Nullstellensatz. Hence the correspondence between affine algebraic sets and radical ideals is a bijection. The coordinate ring of an affine algebraic set is reduced (nilpotent-free), as an ideal I in a ring R is radical if and only if the quotient ring R/I is reduced.
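The equivalence "I is radical if and only if R/I is reduced" can be seen concretely in the ring Z, where the ideal (n) is radical exactly when n is squarefree; a small sketch:

```python
def nilpotents_mod(n):
    """Nilpotent elements of Z/n: those a with a^k = 0 (mod n) for some k.
    The exponent n is always large enough to detect nilpotency."""
    return [a for a in range(n) if pow(a, n, n) == 0]

# (12) = (2^2 * 3) is not radical in Z: Z/12 has the nonzero nilpotent 6 ...
assert nilpotents_mod(12) == [0, 6]
# ... while (30) = (2 * 3 * 5) is radical: Z/30 is reduced.
assert nilpotents_mod(30) == [0]
```

Here √(12) = (6), matching the nilpotents found: 6² = 36 ≡ 0 (mod 12).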
Prime ideals of the coordinate ring correspond to affine subvarieties. An affine algebraic set V(I) can be written as the union of two other algebraic sets if and only if I = JK for proper ideals J and K not equal to I (in which case V(I) = V(J) ∪ V(K)). This is the case if and only if I is not prime. Affine subvarieties are precisely those whose coordinate ring is an integral domain. This is because an ideal is prime if and only if the quotient of the ring by the ideal is an integral domain.
Maximal ideals of k[V] correspond to points of V. If I and J are radical ideals, then V(J) ⊆ V(I) if and only if I ⊆ J. As maximal ideals are radical, maximal ideals correspond to minimal algebraic sets (those that contain no proper algebraic subsets), which are points in V. If V is an affine variety with coordinate ring

R = k[x1, ..., xn] / ⟨f1, ..., fm⟩,

this correspondence becomes explicit through the map

(a1, ..., an) ↦ ⟨x1 − a1, ..., xn − an⟩,

where each xi − ai denotes its image in the quotient algebra R.
An algebraic subset is a point if and only if the coordinate ring of the subset is a field, as the quotient of a ring by a maximal ideal is a field.
The following table summarizes this correspondence, for algebraic subsets of an affine variety and ideals of the corresponding coordinate ring:
== Products of affine varieties ==
A product of affine varieties can be defined using the isomorphism An × Am = An+m, then embedding the product in this new affine space. Let An and Am have coordinate rings k[x1,..., xn] and k[y1,..., ym] respectively, so that their product An+m has coordinate ring k[x1,..., xn, y1,..., ym]. Let V = V( f1,..., fN) be an algebraic subset of An, and W = V( g1,..., gM) an algebraic subset of Am. Then each fi is a polynomial in k[x1,..., xn], and each gj is in k[y1,..., ym]. The product of V and W is defined as the algebraic set V × W = V( f1,..., fN, g1,..., gM) in An+m. The product is irreducible if both V and W are irreducible.
The Zariski topology on An × Am is not the topological product of the Zariski topologies on the two spaces. Indeed, the product topology is generated by products of the basic open sets Uf = An − V( f ) and Tg = Am − V( g ). Hence, polynomials that are in k[x1,..., xn, y1,..., ym] but cannot be obtained as a product of a polynomial in k[x1,..., xn] with a polynomial in k[y1,..., ym] will define algebraic sets that are closed in the Zariski topology on An × Am , but not in the product topology.
== Morphisms of affine varieties ==
A morphism, or regular map, of affine varieties is a function between affine varieties that is polynomial in each coordinate: more precisely, for affine varieties V ⊆ kn and W ⊆ km, a morphism from V to W is a map φ : V → W of the form φ(a1, ..., an) = (f1(a1, ..., an), ..., fm(a1, ..., an)), where fi ∈ k[X1, ..., Xn] for each i = 1, ..., m. These are the morphisms in the category of affine varieties.
There is a one-to-one correspondence between morphisms of affine varieties over an algebraically closed field k, and homomorphisms of coordinate rings of affine varieties over k going in the opposite direction. Because of this, along with the fact that there is a one-to-one correspondence between affine varieties over k and their coordinate rings, the category of affine varieties over k is dual to the category of coordinate rings of affine varieties over k. The category of coordinate rings of affine varieties over k is precisely the category of finitely-generated, nilpotent-free algebras over k.
More precisely, for each morphism φ : V → W of affine varieties, there is a homomorphism φ# : k[W] → k[V] between the coordinate rings (going in the opposite direction), and for each such homomorphism, there is a morphism of the varieties associated to the coordinate rings. This can be shown explicitly: let V ⊆ kn and W ⊆ km be affine varieties with coordinate rings k[V] = k[X1, ..., Xn] / I and k[W] = k[Y1, ..., Ym] / J respectively. Let φ : V → W be a morphism. Indeed, a homomorphism between polynomial rings θ : k[Y1, ..., Ym] / J → k[X1, ..., Xn] / I factors uniquely through the ring k[X1, ..., Xn], and a homomorphism ψ : k[Y1, ..., Ym] / J → k[X1, ..., Xn] is determined uniquely by the images of Y1, ..., Ym. Hence, each homomorphism φ# : k[W] → k[V] corresponds uniquely to a choice of image for each Yi. Then given any morphism φ = (f1, ..., fm) from V to W, a homomorphism can be constructed φ# : k[W] → k[V] that sends Yi to
the equivalence class of fi in k[V].
Similarly, for each homomorphism of the coordinate rings, a morphism of the affine varieties can be constructed in the opposite direction. Mirroring the paragraph above, a homomorphism φ# : k[W] → k[V] sends Yi to a polynomial
fi(X1, ..., Xn)
in k[V]. This corresponds to the morphism of varieties φ : V → W defined by φ(a1, ... , an) = (f1(a1, ..., an), ..., fm(a1, ..., an)).
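Concretely, the homomorphism φ# is "precompose with φ": it sends a coordinate function g on W to g ∘ φ, a function on V. A small sketch with a hypothetical polynomial map:

```python
# A hypothetical morphism phi : V ⊆ k^2 → W ⊆ k^2 with polynomial
# components: phi(a, b) = (a*b, a + b).
phi = lambda a, b: (a * b, a + b)

def pullback(g):
    """phi#: send a function g on W to the function g ∘ phi on V."""
    return lambda a, b: g(*phi(a, b))

# g = Y1^2 - Y2 on W pulls back to (ab)^2 - (a + b) on V.
g = lambda y1, y2: y1**2 - y2
assert pullback(g)(2, 3) == (2 * 3) ** 2 - (2 + 3)   # 36 - 5 = 31
```

Note the direction reversal: φ maps V to W, while the pullback φ# carries functions on W back to functions on V.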
== Structure sheaf ==
Equipped with the structure sheaf described below, an affine variety is a locally ringed space.
Given an affine variety X with coordinate ring A, the sheaf of k-algebras O_X is defined by letting O_X(U) = Γ(U, O_X) be the ring of regular functions on U.
Let D(f) = { x | f(x) ≠ 0 } for each f in A. These sets form a base for the topology of X, and so O_X is determined by its values on the open sets D(f). (See also: sheaf of modules#Sheaf associated to a module.)
The key fact, which relies on the Hilbert Nullstellensatz in an essential way, is the following claim: for f in A, the ring of regular functions on D(f) is the localization k[D(f)] = A[f⁻¹].
Proof: The inclusion ⊇ is clear. For the opposite inclusion, let g be in the left-hand side and let J = { h ∈ A | hg ∈ A }, which is an ideal. If x is in D(f), then, since g is regular near x, there is some open affine neighborhood D(h) of x such that g ∈ k[D(h)] = A[h⁻¹]; that is, hᵐg is in A for some m, and thus x is not in V(J). In other words, V(J) ⊆ { x | f(x) = 0 }, and thus the Hilbert Nullstellensatz implies that f is in the radical of J; i.e., fⁿg ∈ A for some n. ∎
The claim, first of all, implies that X is a "locally ringed" space, since

O_{X,x} = lim→_{f(x) ≠ 0} A[f⁻¹] = A_{𝔪ₓ},

where 𝔪ₓ = { f ∈ A | f(x) = 0 } is the maximal ideal corresponding to x. Secondly, the claim implies that O_X is a sheaf; indeed, it says that if a function is regular (pointwise) on D(f), then it must be in the coordinate ring of D(f); that is, "regular-ness" can be patched together. Hence, (X, O_X) is a locally ringed space.
== Serre's theorem on affineness ==
A theorem of Serre gives a cohomological characterization of affineness: an algebraic variety is affine if and only if H^i(X, F) = 0 for every i > 0 and every quasi-coherent sheaf F on X (cf. Cartan's theorem B). This makes the cohomological study of an affine variety trivial, in sharp contrast to the projective case, in which the cohomology groups of line bundles are of central interest.
== Affine algebraic groups ==
An affine variety G over an algebraically closed field k is called an affine algebraic group if it has:
A multiplication μ: G × G → G, which is a regular morphism satisfying the associativity axiom—that is, such that μ(μ(f, g), h) = μ(f, μ(g, h)) for all points f, g and h in G;
An identity element e such that μ(e, g) = μ(g, e) = g for every g in G;
An inverse morphism, a regular bijection ι: G → G such that μ(ι(g), g) = μ(g, ι(g)) = e for every g in G.
Together, these define a group structure on the variety. The above morphisms are often written using ordinary group notation: μ(f, g) can be written as f + g, f⋅g, or fg; the inverse ι(g) can be written as −g or g−1. Using the multiplicative notation, the associativity, identity and inverse laws can be rewritten as: f(gh) = (fg)h, ge = eg = g and gg−1 = g−1g = e.
The most prominent example of an affine algebraic group is GLn(k), the general linear group of degree n. This is the group of linear transformations of the vector space kn; if a basis of kn is fixed, this is equivalent to the group of n×n invertible matrices with entries in k. It can be shown that any affine algebraic group is isomorphic to a subgroup of GLn(k). For this reason, affine algebraic groups are often called linear algebraic groups.
Affine algebraic groups play an important role in the classification of finite simple groups, as the groups of Lie type are all sets of Fq-rational points of an affine algebraic group, where Fq is a finite field.
== Generalizations ==
If an author requires the base field of an affine variety to be algebraically closed (as this article does), then irreducible affine algebraic sets over non-algebraically closed fields are a generalization of affine varieties. This generalization notably includes affine varieties over the real numbers.
An open subset of an affine variety is called a quasi-affine variety, so every affine variety is quasi-affine. Any quasi-affine variety is in turn a quasi-projective variety.
Affine varieties play the role of local charts for algebraic varieties; that is to say, general algebraic varieties such as projective varieties are obtained by gluing affine varieties. Linear structures that are attached to varieties are also (trivially) affine varieties; e.g., tangent spaces, fibers of algebraic vector bundles.
The construction given in § Structure sheaf allows for a generalization that is used in scheme theory, the modern approach to algebraic geometry. An affine variety is (up to an equivalence of categories) a special case of an affine scheme, a locally-ringed space that is isomorphic to the spectrum of a commutative ring. Each affine variety has an affine scheme associated to it: if V(I) is an affine variety in kn with coordinate ring R = k[x1, ..., xn] / I, then the scheme corresponding to V(I) is Spec(R), the set of prime ideals of R. The affine scheme has "classical points", which correspond with points of the variety (and hence maximal ideals of the coordinate ring of the variety), and also a point for each closed subvariety of the variety (these points correspond to prime, non-maximal ideals of the coordinate ring). This creates a more well-defined notion of the "generic point" of an affine variety, by assigning to each closed subvariety an open point that is dense in the subvariety. More generally, an affine scheme is an affine variety if it is reduced, irreducible, and of finite type over an algebraically closed field k.
== Notes ==
== See also ==
Representations on coordinate rings
== References ==
The original article was written as a partial human translation of the corresponding French article.
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
Fulton, William (1969). Algebraic Curves (PDF). Addison-Wesley. ISBN 0-201-510103.
Milne, James S. (2017). "Algebraic Geometry" (PDF). www.jmilne.org. Retrieved 16 July 2021.
Milne, James S. Lectures on Étale cohomology
Mumford, David (1999). The Red Book of Varieties and Schemes: Includes the Michigan Lectures (1974) on Curves and Their Jacobians. Lecture Notes in Mathematics. Vol. 1358 (2nd ed.). Springer-Verlag. doi:10.1007/b62130. ISBN 354063293X.
Reid, Miles (1988). Undergraduate Algebraic Geometry. Cambridge University Press. ISBN 0-521-35662-8.
In mathematics, the annihilator of a subset S of a module over a ring is the ideal formed by the elements of the ring that always give zero when multiplied by each element of S.
Over an integral domain, a module that has a nonzero annihilator is a torsion module, and a finitely generated torsion module has a nonzero annihilator.
The above definition applies also in the case of noncommutative rings, where the left annihilator of a left module is a left ideal, and the right annihilator of a right module is a right ideal.
== Definitions ==
Let R be a ring, and let M be a left R-module. Choose a non-empty subset S of M. The annihilator of S, denoted AnnR(S), is the set of all elements r in R such that, for all s in S, rs = 0. In set notation,
AnnR(S) = { r ∈ R ∣ rs = 0 for all s ∈ S }.
It is the set of all elements of R that "annihilate" S (the elements for which S is a torsion set). Subsets of right modules may be used as well, with "sr = 0" in place of "rs = 0" in the definition.
The annihilator of a single element x is usually written AnnR(x) instead of AnnR({x}). If the ring R can be understood from the context, the subscript R can be omitted.
Since R is a module over itself, S may be taken to be a subset of R itself, and since R is both a right and a left R-module, the notation must be modified slightly to indicate the left or right side. Usually ℓ.AnnR(S) and r.AnnR(S), or some similar subscript scheme, are used to distinguish the left and right annihilators, if necessary.
If M is an R-module and AnnR(M) = 0, then M is called a faithful module.
== Properties ==
If S is a subset of a left R-module M, then Ann(S) is a left ideal of R.
If S is a submodule of M, then AnnR(S) is even a two-sided ideal: (ac)s = a(cs) = 0, since cs is another element of S.
If S is a subset of M and N is the submodule of M generated by S, then in general AnnR(N) is a subset of AnnR(S), but they are not necessarily equal. If R is commutative, then the equality holds.
M may also be viewed as an R/AnnR(M)-module using the action r̄m := rm, where r̄ denotes the class of r. Incidentally, it is not always possible to make an R-module into an R/I-module this way, but if the ideal I is a subset of the annihilator of M, then this action is well-defined. Considered as an R/AnnR(M)-module, M is automatically a faithful module.
=== For commutative rings ===
Throughout this section, let R be a commutative ring and M a finitely generated R-module.
==== Relation to support ====
The support of a module is defined as

Supp M = { 𝔭 ∈ Spec R ∣ M_𝔭 ≠ 0 }.
Then, when the module is finitely generated, there is the relation

V(AnnR(M)) = Supp M,

where V(·) is the set of prime ideals containing the subset.
==== Short exact sequences ====
Given a short exact sequence of modules

0 → M′ → M → M″ → 0,

the support property

Supp M = Supp M′ ∪ Supp M″,

together with the relation with the annihilator, implies

V(AnnR(M)) = V(AnnR(M′)) ∪ V(AnnR(M″)).
More specifically, there are the relations

AnnR(M′) ∩ AnnR(M″) ⊇ AnnR(M) ⊇ AnnR(M′) AnnR(M″).
If the sequence splits, the inclusion on the left is an equality. This holds for arbitrary direct sums of modules, as

AnnR(⨁_{i ∈ I} M_i) = ⋂_{i ∈ I} AnnR(M_i).
==== Quotient modules and annihilators ====
Given an ideal I ⊆ R and a finitely generated module M, there is the relation

Supp(M/IM) = Supp M ∩ V(I)

on the support. Using the relation to support, this gives the relation with the annihilator

V(AnnR(M/IM)) = V(AnnR(M)) ∩ V(I).
== Examples ==
=== Over the integers ===
Over Z, any finitely generated module is completely classified as the direct sum of its free part and its torsion part, by the fundamental theorem of finitely generated abelian groups. The annihilator of a finitely generated module is then non-trivial only if the module is entirely torsion. This is because

Ann_Z(Z^⊕k) = {0} = (0),

since the only element killing each copy of Z is 0. For example, the annihilator of Z/2 ⊕ Z/3 is

Ann_Z(Z/2 ⊕ Z/3) = (6) = (lcm(2, 3)),

the ideal generated by 6. In fact, the annihilator of a torsion module

M ≅ ⨁_{i=1}^{n} (Z/a_i)^{⊕k_i}

is the ideal generated by the least common multiple, (lcm(a_1, ..., a_n)). This shows that annihilators can easily be classified over the integers.
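The lcm description can be checked by brute force: the annihilator of Z/a₁ ⊕ ⋯ ⊕ Z/aₙ is generated by the smallest positive integer divisible by every aᵢ. A small sketch:

```python
from math import lcm

def annihilator_generator(moduli):
    """Smallest positive r with r*m = 0 for every m in ⊕ Z/a_i, found by
    brute force; r kills Z/a exactly when a divides r."""
    r = 1
    while not all(r % a == 0 for a in moduli):
        r += 1
    return r

assert annihilator_generator([2, 3]) == 6 == lcm(2, 3)
assert annihilator_generator([4, 6]) == 12 == lcm(4, 6)
```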
=== Over a commutative ring R ===
There is a similar computation that can be done for any finitely presented module over a commutative ring R. The definition of finite presentedness of M implies that there exists an exact sequence, called a presentation,

R^⊕l → R^⊕k → M → 0,

where the first map is given by a matrix φ ∈ Mat_{k,l}(R) with entries φ_{i,j}. Writing φ explicitly in matrix form, M has the direct sum decomposition

M = ⨁_{i=1}^{k} R / (φ_{i,1}(1), ..., φ_{i,l}(1)).

If each of these ideals is written as

I_i = (φ_{i,1}(1), ..., φ_{i,l}(1)),

then the ideal I given by

V(I) = ⋃_{i=1}^{k} V(I_i)

presents the annihilator.
=== Over k[x,y] ===
Over the commutative ring k[x, y] for a field k, the annihilator of the module

M = k[x, y]/(x² − y) ⊕ k[x, y]/(y − 3)

is given by the ideal

Ann_{k[x,y]}(M) = ((x² − y)(y − 3)).
== Chain conditions on annihilator ideals ==
The lattice of ideals of the form ℓ.AnnR(S), where S is a subset of R, is a complete lattice when partially ordered by inclusion. There is interest in studying rings for which this lattice (or its right counterpart) satisfies the ascending chain condition or descending chain condition.
Denote the lattice of left annihilator ideals of R as LA and the lattice of right annihilator ideals of R as RA. It is known that LA satisfies the ascending chain condition if and only if RA satisfies the descending chain condition, and symmetrically, RA satisfies the ascending chain condition if and only if LA satisfies the descending chain condition. If either lattice has either of these chain conditions, then R has no infinite pairwise orthogonal sets of idempotents.
If R is a ring for which LA satisfies the ascending chain condition and R has finite uniform dimension as a left module over itself, then R is called a left Goldie ring.
== Category-theoretic description for commutative rings ==
When R is commutative and M is an R-module, we may describe AnnR(M) as the kernel of the action map R → EndR(M) determined by the adjunct map of the identity M → M along the Hom-tensor adjunction.
More generally, given a bilinear map of modules F : M × N → P, the annihilator of a subset S ⊆ M is the set of all elements in N that annihilate S:

Ann(S) := { n ∈ N ∣ F(s, n) = 0 for all s ∈ S }.
Conversely, given T ⊆ N, one can define an annihilator as a subset of M.
The annihilator gives a Galois connection between subsets of M and N, and the associated closure operator is stronger than the span.
In particular:
annihilators are submodules
Span S ≤ Ann(Ann(S))
Ann(Ann(Ann(S))) = Ann(S)
An important special case is in the presence of a nondegenerate form on a vector space, particularly an inner product: then the annihilator associated to the map V × V → K is called the orthogonal complement.
== Relations to other properties of rings ==
Given a module M over a Noetherian commutative ring R, a prime ideal of R that is an annihilator of a nonzero element of M is called an associated prime of M.
Annihilators are used to define left Rickart rings and Baer rings.
The set of (left) zero divisors DS of S can be written as

DS = ⋃_{x ∈ S ∖ {0}} AnnR(x).
(Here we allow zero to be a zero divisor.)
In particular, DR is the set of (left) zero divisors of R, taking S = R with R acting on itself as a left R-module.
When R is commutative and Noetherian, the set DR is precisely equal to the union of the associated primes of the R-module R.
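For R = Z/n this union can be computed directly; for instance, in Z/12 the zero divisors (with 0 included, as above) are exactly 0 and the elements sharing a factor with 12. A sketch:

```python
from math import gcd

N = 12                                   # the ring R = Z/12

def ann(x):
    """Ann_R(x) = { r in Z/N : r*x = 0 in Z/N }."""
    return {r for r in range(N) if (r * x) % N == 0}

# D_R is the union of the annihilators of the nonzero elements.
D = set().union(*(ann(x) for x in range(1, N)))
assert D == {0} | {r for r in range(1, N) if gcd(r, N) > 1}
```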
== See also ==
Faltings' annihilator theorem
Socle
Support of a module
== Notes ==
== References ==
Anderson, Frank W.; Fuller, Kent R. (1992), Rings and categories of modules, Graduate Texts in Mathematics, vol. 13 (2 ed.), New York: Springer-Verlag, pp. x+376, doi:10.1007/978-1-4612-4418-9, ISBN 0-387-97845-3, MR 1245487
Israel Nathan Herstein (1968) Noncommutative Rings, Carus Mathematical Monographs #15, Mathematical Association of America, page 3.
Lam, Tsit Yuen (1999), Lectures on modules and rings, Graduate Texts in Mathematics No. 189, vol. 189, Berlin, New York: Springer-Verlag, pp. 228–232, doi:10.1007/978-1-4612-0525-8, ISBN 978-0-387-98428-5, MR 1653294
Richard S. Pierce. Associative algebras. Graduate Texts in Mathematics, Vol. 88, Springer-Verlag, 1982, ISBN 978-0-387-90693-5
In commutative algebra and algebraic geometry, localization is a formal way to introduce the "denominators" to a given ring or module. That is, it introduces a new ring/module out of an existing ring/module R, so that it consists of fractions m/s such that the denominator s belongs to a given subset S of R. If S is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field Q of rational numbers from the ring Z of integers.
The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if R is a ring of functions defined on some geometric object (algebraic variety) V, and one wants to study this variety "locally" near a point p, then one considers the set S of all functions that are not zero at p and localizes R with respect to S. The resulting ring S⁻¹R contains information about the behavior of V near p, and excludes information that is not "local", such as the zeros of functions that are outside V (cf. the example given at local ring).
== Localization of a ring ==
The localization of a commutative ring R by a multiplicatively closed set S is a new ring S⁻¹R whose elements are fractions with numerators in R and denominators in S.
If the ring is an integral domain the construction generalizes and follows closely that of the field of fractions, and, in particular, that of the rational numbers as the field of fractions of the integers. For rings that have zero divisors, the construction is similar but requires more care.
=== Multiplicative set ===
Localization is commonly done with respect to a multiplicatively closed set S (also called a multiplicative set or a multiplicative system) of elements of a ring R, that is a subset of R that is closed under multiplication, and contains 1.
The requirement that S must be a multiplicative set is natural, since it implies that all denominators introduced by the localization belong to S. The localization by a set U that is not multiplicatively closed can also be defined, by taking as possible denominators all products of elements of U. However, the same localization is obtained by using the multiplicatively closed set S of all products of elements of U. As this often makes reasoning and notation simpler, it is standard practice to consider only localizations by multiplicative sets.
For example, the localization by a single element s introduces fractions of the form a/s, but also products of such fractions, such as ab/s². So the denominators will belong to the multiplicative set {1, s, s², s³, ...} of the powers of s. Therefore, one generally talks of "the localization by the powers of an element" rather than of "the localization by an element".
The localization of a ring R by a multiplicative set S is generally denoted S⁻¹R, but other notations are commonly used in some special cases: if S = {1, t, t², ...} consists of the powers of a single element, S⁻¹R is often denoted R_t; if S = R ∖ 𝔭 is the complement of a prime ideal 𝔭, then S⁻¹R is denoted R_𝔭.
In the remainder of this article, only localizations by a multiplicative set are considered.
=== Integral domains ===
When the ring R is an integral domain and S does not contain 0, the ring S⁻¹R is a subring of the field of fractions of R. As such, the localization of a domain is a domain. More precisely, it is the subring of the field of fractions of R that consists of the fractions a/s such that s ∈ S.
This is a subring, since the sum a/s + b/t = (at + bs)/(st) and the product (a/s)(b/t) = (ab)/(st) of two elements of S⁻¹R are again in S⁻¹R. This results from the defining property of a multiplicative set, which also implies that 1 = 1/1 ∈ S⁻¹R.
In this case, R is a subring of S⁻¹R. It is shown below that this is no longer true in general, typically when S contains zero divisors.
For example, the decimal fractions are the localization of the ring of integers by the multiplicative set of the powers of ten. In this case,
S
−
1
R
{\displaystyle S^{-1}R}
consists of the rational numbers that can be written as
n
10
k
,
{\displaystyle {\tfrac {n}{10^{k}}},}
where n is an integer, and k is a nonnegative integer.
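As a concrete check, membership in this localization can be tested with Python's `fractions` module: a reduced fraction equals $n/10^{k}$ exactly when its denominator has no prime factor other than 2 and 5, the prime factors of 10. (The function name below is ours, for illustration only.)

```python
from fractions import Fraction

def is_decimal_fraction(q: Fraction) -> bool:
    """True iff q lies in the localization of Z by the powers of 10,
    i.e. q = n / 10**k for an integer n and a nonnegative integer k.
    Equivalently: the reduced denominator has no prime factor other
    than 2 and 5."""
    d = q.denominator  # Fraction is always stored in lowest terms
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

assert is_decimal_fraction(Fraction(3, 8))      # 3/8 = 375/10**3
assert is_decimal_fraction(Fraction(-7, 50))    # -7/50 = -14/10**2
assert not is_decimal_fraction(Fraction(1, 3))  # 1/3 is not of this form
```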
=== General construction ===
In the general case, a problem arises with zero divisors. Let S be a multiplicative set in a commutative ring R. Suppose that $s\in S$, and $0\neq a\in R$ is a zero divisor with $as=0$. Then $\tfrac{a}{1}$ is the image in $S^{-1}R$ of $a\in R$, and one has $\tfrac{a}{1}=\tfrac{as}{s}=\tfrac{0}{s}=\tfrac{0}{1}$. Thus some nonzero elements of R must be zero in $S^{-1}R$. The construction that follows is designed to take this into account.
Given R and S as above, one considers the equivalence relation on $R\times S$ that is defined by $(r_{1},s_{1})\sim (r_{2},s_{2})$ if there exists a $t\in S$ such that $t(s_{1}r_{2}-s_{2}r_{1})=0$.
The localization $S^{-1}R$ is defined as the set of the equivalence classes for this relation. The class of (r, s) is denoted as $\frac{r}{s}$, $r/s$, or $s^{-1}r$. So, one has $\tfrac{r_{1}}{s_{1}}=\tfrac{r_{2}}{s_{2}}$ if and only if there is a $t\in S$ such that $t(s_{1}r_{2}-s_{2}r_{1})=0$. The reason for the $t$ is to handle cases such as the above $\tfrac{a}{1}=\tfrac{0}{1}$, where $s_{1}r_{2}-s_{2}r_{1}$ is nonzero even though the fractions should be regarded as equal.
The localization $S^{-1}R$ is a commutative ring with addition
$$\frac{r_{1}}{s_{1}}+\frac{r_{2}}{s_{2}}=\frac{r_{1}s_{2}+r_{2}s_{1}}{s_{1}s_{2}},$$
multiplication
$$\frac{r_{1}}{s_{1}}\,\frac{r_{2}}{s_{2}}=\frac{r_{1}r_{2}}{s_{1}s_{2}},$$
additive identity $\tfrac{0}{1}$, and multiplicative identity $\tfrac{1}{1}$.
The function $r\mapsto \tfrac{r}{1}$ defines a ring homomorphism from $R$ into $S^{-1}R$, which is injective if and only if S does not contain any zero divisors.
If $0\in S$, then $S^{-1}R$ is the zero ring, whose only element is 0.
If S is the set of all regular elements of R (that is, the elements that are not zero divisors), $S^{-1}R$ is called the total ring of fractions of R.
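For a small finite ring, the equivalence-class construction above can be carried out by brute force. The sketch below (our own illustration, not a standard library API) localizes $\mathbb{Z}/6\mathbb{Z}$ at the powers of 2, whose residues mod 6 are {1, 2, 4}: as the discussion of zero divisors predicts, the element 3 becomes 0, and the localization collapses to a 3-element ring isomorphic to $\mathbb{Z}/3\mathbb{Z}$.

```python
from itertools import product

def localization_classes(n, S):
    """Equivalence classes of pairs (r, s) in (Z/n) x S under
    (r1, s1) ~ (r2, s2)  iff  t*(s1*r2 - s2*r1) == 0 (mod n) for some t in S.
    S is given as a finite set of residues mod n closed under multiplication."""
    pairs = list(product(range(n), S))
    parent = {p: p for p in pairs}

    def find(p):  # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    for (r1, s1), (r2, s2) in product(pairs, pairs):
        if any(t * (s1 * r2 - s2 * r1) % n == 0 for t in S):
            parent[find((r1, s1))] = find((r2, s2))
    return {find(p) for p in pairs}

def same_class(n, S, a, b):
    """Direct test of the defining relation for pairs a = (r1, s1), b = (r2, s2)."""
    (r1, s1), (r2, s2) = a, b
    return any(t * (s1 * r2 - s2 * r1) % n == 0 for t in S)

# Z/6 localized at the powers of 2 (residues {1, 2, 4} mod 6):
S = {1, 2, 4}
assert len(localization_classes(6, S)) == 3   # 3 elements, a copy of Z/3
# The zero divisor 3 becomes 0: 2*(1*0 - 1*3) = -6 == 0 (mod 6).
assert same_class(6, S, (3, 1), (0, 1))
```

A union-find pass is used so the code needs no transitivity argument, even though the relation is in fact an equivalence relation.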
=== Universal property ===
The (above defined) ring homomorphism $j\colon R\to S^{-1}R$ satisfies a universal property that is described below. This characterizes $S^{-1}R$ up to an isomorphism. So all properties of localizations can be deduced from the universal property, independently from the way they have been constructed. Moreover, many important properties of localization are easily deduced from the general properties of universal properties, while their direct proof may be more technical.
The universal property satisfied by $j\colon R\to S^{-1}R$ is the following:
If $f\colon R\to T$ is a ring homomorphism that maps every element of S to a unit (invertible element) in T, there exists a unique ring homomorphism $g\colon S^{-1}R\to T$ such that $f=g\circ j$.
Using category theory, this can be expressed by saying that localization is a functor that is left adjoint to a forgetful functor. More precisely, let $\mathcal{C}$ and $\mathcal{D}$ be the categories whose objects are pairs of a commutative ring and a submonoid of, respectively, the multiplicative monoid or the group of units of the ring. The morphisms of these categories are the ring homomorphisms that map the submonoid of the first object into the submonoid of the second one. Finally, let $\mathcal{F}\colon \mathcal{D}\to \mathcal{C}$ be the forgetful functor that forgets that the elements of the second component of the pair are invertible.
Then the factorization $f=g\circ j$ of the universal property defines a bijection
$$\hom_{\mathcal{C}}((R,S),\mathcal{F}(T,U))\to \hom_{\mathcal{D}}((S^{-1}R,j(S)),(T,U)).$$
This may seem a rather tricky way of expressing the universal property, but it is useful for showing many properties easily, by using the fact that the composition of two left adjoint functors is a left adjoint functor.
=== Examples ===
If $R=\mathbb{Z}$ is the ring of integers, and $S=\mathbb{Z}\setminus\{0\}$, then $S^{-1}R$ is the field $\mathbb{Q}$ of the rational numbers.
If R is an integral domain, and $S=R\setminus\{0\}$, then $S^{-1}R$ is the field of fractions of R. The preceding example is a special case of this one.
If R is a commutative ring, and if S is the subset of its elements that are not zero divisors, then $S^{-1}R$ is the total ring of fractions of R. In this case, S is the largest multiplicative set such that the homomorphism $R\to S^{-1}R$ is injective. The preceding example is a special case of this one.
If $x$ is an element of a commutative ring R and $S=\{1,x,x^{2},\ldots\}$, then $S^{-1}R$ can be identified with (is canonically isomorphic to) $R[x^{-1}]=R[s]/(xs-1)$. (The proof consists of showing that this ring satisfies the above universal property.) This sort of localization plays a fundamental role in the definition of an affine scheme.
If $\mathfrak{p}$ is a prime ideal of a commutative ring R, the set complement $S=R\setminus\mathfrak{p}$ of $\mathfrak{p}$ in R is a multiplicative set (by the definition of a prime ideal). The ring $S^{-1}R$ is a local ring that is generally denoted $R_{\mathfrak{p}}$, and called the local ring of R at $\mathfrak{p}$. This sort of localization is fundamental in commutative algebra, because many properties of a commutative ring can be read on its local rings. Such a property is often called a local property. For example, a ring is regular if and only if all its local rings are regular.
=== Ring properties ===
Localization is a rich construction that has many useful properties. In this section, only the properties relative to rings and to a single localization are considered. Properties concerning ideals, modules, or several multiplicative sets are considered in other sections.
$S^{-1}R=0$ if and only if S contains 0.
The ring homomorphism $R\to S^{-1}R$ is injective if and only if S does not contain any zero divisors.
The ring homomorphism $R\to S^{-1}R$ is an epimorphism in the category of rings that is not surjective in general.
The ring $S^{-1}R$ is a flat R-module (see § Localization of a module for details).
If $S=R\setminus\mathfrak{p}$ is the complement of a prime ideal $\mathfrak{p}$, then $S^{-1}R$, denoted $R_{\mathfrak{p}}$, is a local ring; that is, it has only one maximal ideal.
Localization commutes with formation of finite sums, products, intersections and radicals; e.g., if $\sqrt{I}$ denotes the radical of an ideal I in R, then
$$\sqrt{I}\cdot S^{-1}R=\sqrt{I\cdot S^{-1}R}\,.$$
In particular, R is reduced if and only if its total ring of fractions is reduced.
Let R be an integral domain with field of fractions K. Then its localization $R_{\mathfrak{p}}$ at a prime ideal $\mathfrak{p}$ can be viewed as a subring of K. Moreover,
$$R=\bigcap_{\mathfrak{p}}R_{\mathfrak{p}}=\bigcap_{\mathfrak{m}}R_{\mathfrak{m}},$$
where the first intersection is over all prime ideals and the second over the maximal ideals.
There is a bijection between the set of prime ideals of $S^{-1}R$ and the set of prime ideals of R that are disjoint from S. This bijection is induced by the given homomorphism $R\to S^{-1}R$.
=== Saturation of a multiplicative set ===
Let $S\subseteq R$ be a multiplicative set. The saturation $\hat{S}$ of $S$ is the set
$$\hat{S}=\{r\in R\colon \exists s\in R,\ rs\in S\}.$$
The multiplicative set S is saturated if it equals its saturation, that is, if $\hat{S}=S$, or equivalently, if $rs\in S$ implies that r and s are in S.
If S is not saturated, and $rs\in S$, then $\tfrac{s}{rs}$ is a multiplicative inverse of the image of r in $S^{-1}R$. So, the images of the elements of $\hat{S}$ are all invertible in $S^{-1}R$, and the universal property implies that $S^{-1}R$ and $\hat{S}^{-1}R$ are canonically isomorphic, that is, there is a unique isomorphism between them that fixes the images of the elements of R.
If S and T are two multiplicative sets, then $S^{-1}R$ and $T^{-1}R$ are isomorphic if and only if they have the same saturation, or, equivalently, if s belongs to one of the multiplicative sets, then there exists $t\in R$ such that st belongs to the other.
Saturated multiplicative sets are not widely used explicitly, since, for verifying that a set is saturated, one must know all units of the ring.
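In a finite ring, however, the saturation can simply be computed by brute force from the definition. The sketch below (our own illustrative function) computes the saturation of the powers of 4 in $\mathbb{Z}/20\mathbb{Z}$: it contains every unit and every power of 2, i.e. every residue not divisible by 5, and it is itself saturated.

```python
def saturation(n, S):
    """Saturation of a multiplicative set S in R = Z/n:
    all r in R such that r*s lies in S for some s in R."""
    return {r for r in range(n) if any(r * s % n in S for s in range(n))}

# In Z/20, let S be the set of powers of 4: {1, 4, 16}.
S = {1, 4, 16}
sat = saturation(20, S)
# The saturation is exactly the set of residues not divisible by 5:
# every unit u satisfies u*(u^-1) = 1 in S, and 2*2 = 4 is in S, etc.
assert sat == {r for r in range(20) if r % 5 != 0}
# A saturated set equals its own saturation:
assert saturation(20, sat) == sat
```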
== Terminology explained by the context ==
The term localization originates in the general trend of modern mathematics to study geometrical and topological objects locally, that is, in terms of their behavior near each point. Examples of this trend are the fundamental concepts of manifolds, germs and sheaves. In algebraic geometry, an affine algebraic set can be identified with a quotient ring of a polynomial ring in such a way that the points of the algebraic set correspond to the maximal ideals of the ring (this is Hilbert's Nullstellensatz). This correspondence has been generalized for making the set of the prime ideals of a commutative ring a topological space equipped with the Zariski topology; this topological space is called the spectrum of the ring.
In this context, a localization by a multiplicative set may be viewed as the restriction of the spectrum of a ring to the subspace of the prime ideals (viewed as points) that do not intersect the multiplicative set.
Two classes of localizations are more commonly considered:
The multiplicative set is the complement of a prime ideal $\mathfrak{p}$ of a ring R. In this case, one speaks of the "localization at $\mathfrak{p}$", or "localization at a point". The resulting ring, denoted $R_{\mathfrak{p}}$, is a local ring, and is the algebraic analog of a ring of germs.
The multiplicative set consists of all powers of an element t of a ring R. The resulting ring is commonly denoted $R_{t}$, and its spectrum is the Zariski open set of the prime ideals that do not contain t. Thus the localization is the analog of the restriction of a topological space to a neighborhood of a point (every prime ideal has a neighborhood basis consisting of Zariski open sets of this form).
In number theory and algebraic topology, when working over the ring $\mathbb{Z}$ of integers, one refers to a property relative to an integer n as a property true at n or away from n, depending on the localization that is considered. "Away from n" means that the property is considered after localization by the powers of n, and, if p is a prime number, "at p" means that the property is considered after localization at the prime ideal $p\mathbb{Z}$. This terminology can be explained by the fact that, if p is prime, the nonzero prime ideals of the localization of $\mathbb{Z}$ are either the singleton set {p} or its complement in the set of prime numbers.
== Localization and saturation of ideals ==
Let S be a multiplicative set in a commutative ring R, and $j\colon R\to S^{-1}R$ be the canonical ring homomorphism. Given an ideal I in R, let $S^{-1}I$ be the set of the fractions in $S^{-1}R$ whose numerator is in I. This is an ideal of $S^{-1}R$, which is generated by j(I), and called the localization of I by S.
The saturation of I by S is $j^{-1}(S^{-1}I)$; it is an ideal of R, which can also be defined as the set of the elements $r\in R$ such that there exists $s\in S$ with $sr\in I$.
Many properties of ideals are either preserved by saturation and localization, or can be characterized by simpler properties of localization and saturation.
In what follows, S is a multiplicative set in a ring R, and I and J are ideals of R; the saturation of an ideal I by a multiplicative set S is denoted $\operatorname{sat}_{S}(I)$, or, when the multiplicative set S is clear from the context, $\operatorname{sat}(I)$.
$$1\in S^{-1}I\quad\iff\quad 1\in\operatorname{sat}(I)\quad\iff\quad S\cap I\neq\emptyset$$
$$I\subseteq J\quad\implies\quad S^{-1}I\subseteq S^{-1}J\quad\text{and}\quad\operatorname{sat}(I)\subseteq\operatorname{sat}(J)$$
(this is not always true for strict inclusions)
$$S^{-1}(I\cap J)=S^{-1}I\cap S^{-1}J,\qquad\operatorname{sat}(I\cap J)=\operatorname{sat}(I)\cap\operatorname{sat}(J)$$
$$S^{-1}(I+J)=S^{-1}I+S^{-1}J,\qquad\operatorname{sat}(I+J)=\operatorname{sat}(I)+\operatorname{sat}(J)$$
$$S^{-1}(I\cdot J)=S^{-1}I\cdot S^{-1}J,\qquad\operatorname{sat}(I\cdot J)=\operatorname{sat}(I)\cdot\operatorname{sat}(J)$$
If $\mathfrak{p}$ is a prime ideal such that $\mathfrak{p}\cap S=\emptyset$, then $S^{-1}\mathfrak{p}$ is a prime ideal and $\mathfrak{p}=\operatorname{sat}(\mathfrak{p})$; if the intersection is nonempty, then $S^{-1}\mathfrak{p}=S^{-1}R$ and $\operatorname{sat}(\mathfrak{p})=R$.
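These definitions can be checked directly in a small finite ring. In the sketch below (our own illustrative code), R = Z/24, S is the set of powers of 2, and I = (12); saturating strips the 2-part of 12 and leaves the ideal (3), and since S and I are disjoint, 1 does not lie in sat(I), in agreement with the first equivalence above.

```python
def sat_ideal(n, S, I):
    """Saturation sat_S(I) of an ideal I in R = Z/n by a multiplicative set S:
    the set of r in R such that s*r lies in I for some s in S."""
    return {r for r in range(n) if any(s * r % n in I for s in S)}

# R = Z/24, S = powers of 2 = {1, 2, 4, 8, 16}, I = (12) = {0, 12}.
n, S, I = 24, {1, 2, 4, 8, 16}, {0, 12}
sat_I = sat_ideal(n, S, I)
# Saturating strips the 2-part of 12, leaving the ideal (3):
assert sat_I == set(range(0, 24, 3))
# 1 lies in sat(I) iff S meets I; here S and I are disjoint:
assert 1 not in sat_I and not (S & I)
```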
== Localization of a module ==
Let $R$ be a commutative ring, $S$ be a multiplicative set in $R$, and $M$ be an $R$-module. The localization of the module $M$ by $S$, denoted $S^{-1}M$, is an $S^{-1}R$-module that is constructed exactly as the localization of $R$, except that the numerators of the fractions belong to $M$. That is, as a set, it consists of equivalence classes, denoted $\frac{m}{s}$, of pairs $(m,s)$, where $m\in M$ and $s\in S$, and two pairs $(m,s)$ and $(n,t)$ are equivalent if there is an element $u$ in $S$ such that
$$u(sn-tm)=0.$$
Addition and scalar multiplication are defined as for usual fractions (in the following formulas, $r\in R$, $s,t\in S$, and $m,n\in M$):
$$\frac{m}{s}+\frac{n}{t}=\frac{tm+sn}{st},$$
$$\frac{r}{s}\,\frac{m}{t}=\frac{rm}{st}.$$
Moreover, $S^{-1}M$ is also an $R$-module with scalar multiplication
$$r\,\frac{m}{s}=\frac{r}{1}\,\frac{m}{s}=\frac{rm}{s}.$$
It is straightforward to check that these operations are well-defined, that is, they give the same result for different choices of representatives of fractions.
The localization of a module can be equivalently defined by using tensor products:
$$S^{-1}M=S^{-1}R\otimes_{R}M.$$
The proof of equivalence (up to a canonical isomorphism) can be done by showing that the two definitions satisfy the same universal property.
=== Module properties ===
If M is a submodule of an R-module N, and S is a multiplicative set in R, one has $S^{-1}M\subseteq S^{-1}N$. This implies that, if $f\colon M\to N$ is an injective module homomorphism, then
$$S^{-1}R\otimes_{R}f\colon\quad S^{-1}R\otimes_{R}M\to S^{-1}R\otimes_{R}N$$
is also an injective homomorphism.
Since the tensor product is a right exact functor, this implies that localization by S maps exact sequences of R-modules to exact sequences of $S^{-1}R$-modules. In other words, localization is an exact functor, and $S^{-1}R$ is a flat R-module.
This flatness, and the fact that localization satisfies a universal property, make localization preserve many properties of modules and rings, and make it compatible with solutions of other universal properties. For example, the natural map
$$S^{-1}(M\otimes_{R}N)\to S^{-1}M\otimes_{S^{-1}R}S^{-1}N$$
is an isomorphism. If $M$ is a finitely presented module, the natural map
$$S^{-1}\operatorname{Hom}_{R}(M,N)\to\operatorname{Hom}_{S^{-1}R}(S^{-1}M,S^{-1}N)$$
is also an isomorphism.
If a module M is finitely generated over R, one has
$$S^{-1}(\operatorname{Ann}_{R}(M))=\operatorname{Ann}_{S^{-1}R}(S^{-1}M),$$
where $\operatorname{Ann}$ denotes the annihilator, that is, the ideal of the elements of the ring that map to zero all the elements of the module. In particular,
$$S^{-1}M=0\quad\iff\quad S\cap\operatorname{Ann}_{R}(M)\neq\emptyset,$$
that is, $S^{-1}M=0$ if and only if $tM=0$ for some $t\in S$.
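This criterion can be verified by brute force for $M=\mathbb{Z}/6\mathbb{Z}$ viewed as a $\mathbb{Z}$-module, with $\operatorname{Ann}_{\mathbb{Z}}(M)=6\mathbb{Z}$. In the sketch below (our own illustration), the multiplicative set is represented by its residues mod 6, which is harmless because the defining relation only uses products mod 6: localizing at the powers of 6 (residues {1, 0}) kills M, while localizing at the powers of 5 (residues {1, 5}) changes nothing.

```python
from itertools import product

def module_localization_size(n, S):
    """Number of elements of S^{-1}M for M = Z/n viewed as a Z-module.
    S is represented by its residues mod n (a finite set closed under
    multiplication mod n).  Pairs are identified by the defining relation
    (m1, s1) ~ (m2, s2)  iff  u*(s1*m2 - s2*m1) == 0 (mod n) for some u in S."""
    pairs = list(product(range(n), S))
    parent = {p: p for p in pairs}

    def find(p):  # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    for (m1, s1), (m2, s2) in product(pairs, pairs):
        if any(u * (s1 * m2 - s2 * m1) % n == 0 for u in S):
            parent[find((m1, s1))] = find((m2, s2))
    return len({find(p) for p in pairs})

# M = Z/6 over Z, with Ann(M) = 6Z.
# Powers of 6 have residues {1, 0} mod 6; S meets Ann(M), so S^{-1}M = 0:
assert module_localization_size(6, {1, 0}) == 1
# Powers of 5 have residues {1, 5}; S is disjoint from 6Z, so S^{-1}M != 0
# (5 is already invertible mod 6, so nothing changes):
assert module_localization_size(6, {1, 5}) == 6
```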
== Localization at primes ==
The definition of a prime ideal implies immediately that the complement $S=R\setminus\mathfrak{p}$ of a prime ideal $\mathfrak{p}$ in a commutative ring R is a multiplicative set. In this case, the localization $S^{-1}R$ is commonly denoted $R_{\mathfrak{p}}$. The ring $R_{\mathfrak{p}}$ is a local ring, that is called the local ring of R at $\mathfrak{p}$. This means that
$$\mathfrak{p}\,R_{\mathfrak{p}}=\mathfrak{p}\otimes_{R}R_{\mathfrak{p}}$$
is the unique maximal ideal of the ring $R_{\mathfrak{p}}$.
Analogously, one can define the localization of a module M at a prime ideal $\mathfrak{p}$ of R. Again, the localization $S^{-1}M$ is commonly denoted $M_{\mathfrak{p}}$.
Such localizations are fundamental for commutative algebra and algebraic geometry for several reasons. One is that local rings are often easier to study than general commutative rings, in particular because of Nakayama's lemma. However, the main reason is that many properties are true for a ring if and only if they are true for all its local rings. For example, a ring is regular if and only if all its local rings are regular local rings.
Properties of a ring that can be characterized on its local rings are called local properties, and are often the algebraic counterpart of geometric local properties of algebraic varieties, which are properties that can be studied by restriction to a small neighborhood of each point of the variety. (There is another concept of local property that refers to localization to Zariski open sets; see § Localization to Zariski open sets, below.)
Many local properties are a consequence of the fact that the module
$$\bigoplus_{\mathfrak{p}}R_{\mathfrak{p}}$$
is a faithfully flat module when the direct sum is taken over all prime ideals (or over all maximal ideals of R). See also Faithfully flat descent.
=== Examples of local properties ===
A property P of an R-module M is a local property if the following conditions are equivalent:
P holds for M.
P holds for all $M_{\mathfrak{p}}$, where $\mathfrak{p}$ is a prime ideal of R.
P holds for all $M_{\mathfrak{m}}$, where $\mathfrak{m}$ is a maximal ideal of R.
The following are local properties:
M is zero.
M is torsion-free (in the case where R is a commutative domain).
M is a flat module.
M is an invertible module (in the case where R is a commutative domain, and M is a submodule of the field of fractions of R).
$f\colon M\to N$ is injective (resp. surjective), where N is another R-module.
On the other hand, some properties are not local properties. For example, an infinite direct product of fields is neither an integral domain nor a Noetherian ring, while all its local rings are fields, and therefore Noetherian integral domains.
== Non-commutative case ==
Localizing non-commutative rings is more difficult. While the localization exists for every set S of prospective units, it might take a different form to the one described above. One condition which ensures that the localization is well behaved is the Ore condition.
One case for non-commutative rings where localization has a clear interest is for rings of differential operators. It has the interpretation, for example, of adjoining a formal inverse D−1 for a differentiation operator D. This is done in many contexts in methods for differential equations. There is now a large mathematical theory about it, named microlocalization, connecting with numerous other branches. The micro- tag is to do with connections with Fourier theory, in particular.
== See also ==
Local analysis
Localization of a category
Localization of a topological space
== References ==
== External links ==
Localization from MathWorld. | Wikipedia/Localization_(commutative_algebra) |
In algebraic geometry, the Nisnevich topology, sometimes called the completely decomposed topology, is a Grothendieck topology on the category of schemes which has been used in algebraic K-theory, A¹ homotopy theory, and the theory of motives. It was originally introduced by Yevsey Nisnevich, who was motivated by the theory of adeles.
== Definition ==
A morphism of schemes $f\colon Y\to X$ is called a Nisnevich morphism if it is an étale morphism such that for every (possibly non-closed) point x ∈ X, there exists a point y ∈ Y in the fiber f−1(x) such that the induced map of residue fields k(x) → k(y) is an isomorphism. Equivalently, f must be flat, unramified, locally of finite presentation, and for every point x ∈ X, there must exist a point y in the fiber f−1(x) such that k(x) → k(y) is an isomorphism.
A family of morphisms {uα : Xα → X} is a Nisnevich cover if each morphism in the family is étale and for every (possibly non-closed) point x ∈ X, there exists α and a point y ∈ Xα s.t. uα(y) = x and the induced map of residue fields k(x) → k(y) is an isomorphism. If the family is finite, this is equivalent to the morphism $\coprod u_{\alpha}$ from $\coprod X_{\alpha}$ to X being a Nisnevich morphism. The Nisnevich covers are the covering families of a pretopology on the category of schemes and morphisms of schemes. This generates a topology called the Nisnevich topology. The category of schemes with the Nisnevich topology is notated Nis.
The small Nisnevich site of X has as underlying category the same as the small étale site, that is to say, objects are schemes U with a fixed étale morphism U → X and the morphisms are morphisms of schemes compatible with the fixed maps to X. Admissible coverings are Nisnevich morphisms.
The big Nisnevich site of X has as underlying category schemes with a fixed map to X and morphisms the morphisms of X-schemes. The topology is the one given by Nisnevich morphisms.
The Nisnevich topology has several variants which are adapted to studying singular varieties. Covers in these topologies include resolutions of singularities or weaker forms of resolution.
The cdh topology allows proper birational morphisms as coverings.
The h topology allows De Jong's alterations as coverings.
The l′ topology allows morphisms as in the conclusion of Gabber's local uniformization theorem.
The cdh and l′ topologies are incomparable with the étale topology, and the h topology is finer than the étale topology.
=== Equivalent conditions for a Nisnevich cover ===
Assume the category consists of smooth schemes over a qcqs (quasi-compact and quasi-separated) scheme. The original definition due to Nisnevich, which is equivalent to the definition above, says that a family of morphisms $\{p_{\alpha}\colon U_{\alpha}\to X\}_{\alpha\in A}$ of schemes is a Nisnevich covering if
Every $p_{\alpha}$ is étale; and
For every field $k$, on the level of $k$-points, the (set-theoretic) coproduct $p_{k}\colon\coprod_{\alpha}U_{\alpha}(k)\to X(k)$ of all covering morphisms $p_{\alpha}$ is surjective.
Yet another equivalent condition for Nisnevich covers is due to Lurie: the Nisnevich topology is generated by all finite families of étale morphisms $\{p_{\alpha}\colon U_{\alpha}\to X\}_{\alpha\in A}$ such that there is a finite sequence of finitely presented closed subschemes
$$\varnothing=Z_{n+1}\subseteq Z_{n}\subseteq\cdots\subseteq Z_{1}\subseteq Z_{0}=X$$
such that for $0\leq m\leq n$, the morphism
$$\coprod_{\alpha\in A}p_{\alpha}^{-1}(Z_{m}-Z_{m+1})\to Z_{m}-Z_{m+1}$$
admits a section.
Notice that when evaluating these morphisms on $S$-points, this implies the map is a surjection. Conversely, taking the trivial sequence $Z_{0}=X$ gives the result in the opposite direction.
== Motivation ==
One of the key motivations for introducing the Nisnevich topology in motivic cohomology is the fact that a Zariski open cover $\pi\colon U\to X$ does not yield a resolution of Zariski sheaves
$$\cdots\to\mathbf{Z}_{tr}(U\times_{X}U)\to\mathbf{Z}_{tr}(U)\to\mathbf{Z}_{tr}(X)\to 0,$$
where $\mathbf{Z}_{tr}(Y)(Z):=\operatorname{Hom}_{cor}(Z,Y)$ is the representable functor over the category of presheaves with transfers. For the Nisnevich topology, the local rings are Henselian, and a finite cover of a Henselian ring is given by a product of Henselian rings, showing exactness.
== Local rings in the Nisnevich topology ==
If x is a point of a scheme X, then the local ring of x in the Nisnevich topology is the Henselization of the local ring of x in the Zariski topology. This differs from the étale topology, where the local rings are strict Henselizations. One of the important differences between the two cases can be seen when looking at a local ring $(R,\mathfrak{p})$ with residue field $\kappa$. In this case, the residue fields of the Henselization and strict Henselization differ:
$$(R,\mathfrak{p})^{h}\rightsquigarrow\kappa,\qquad(R,\mathfrak{p})^{sh}\rightsquigarrow\kappa^{sep},$$
so the residue field of the strict Henselization gives the separable closure of the original residue field $\kappa$.
== Examples of Nisnevich Covering ==
Consider the étale cover given by
$$\operatorname{Spec}(\mathbb{C}[x,t,t^{-1}]/(x^{2}-t))\to\operatorname{Spec}(\mathbb{C}[t,t^{-1}]).$$
If we look at the associated morphism of residue fields for the generic point of the base, we see that this is a degree 2 extension
$$\mathbb{C}(t)\to\frac{\mathbb{C}(t)[x]}{(x^{2}-t)}.$$
This implies that this étale cover is not Nisnevich. We can add the étale morphism
$$\mathbb{A}^{1}-\{0,1\}\to\mathbb{A}^{1}-\{0\}$$
to get a Nisnevich cover, since there is an isomorphism of points for the generic point of $\mathbb{A}^{1}-\{0\}$.
=== Conditional covering ===
If we take $\mathbb{A}^{1}$ as a scheme over a field $k$, then the covering given by
$$i\colon\mathbb{A}^{1}-\{a\}\hookrightarrow\mathbb{A}^{1},\qquad f\colon\mathbb{A}^{1}-\{0\}\to\mathbb{A}^{1},$$
where $i$ is the inclusion and $f(x)=x^{k}$, is Nisnevich if and only if $x^{k}=a$ has a solution over $k$. Otherwise, the covering cannot be a surjection on $k$-points. In this case, the covering is only an étale covering.
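Over a finite field, this solvability condition is easy to test. The sketch below (our own helper names; `p` denotes the prime of the field $\mathbb{F}_p$, while `k` keeps its role as the exponent) compares a brute-force search with the classical criterion that, for $a\neq 0$, $x^{k}=a$ is solvable in $\mathbb{F}_p$ if and only if $a^{(p-1)/\gcd(k,\,p-1)}=1$.

```python
from math import gcd

def has_kth_root(a, k, p):
    """Does x**k = a have a solution in F_p (p prime)?  Brute force."""
    return any(pow(x, k, p) == a % p for x in range(p))

def has_kth_root_criterion(a, k, p):
    """Same question via the classical criterion, valid for a != 0:
    x**k = a is solvable in F_p iff a**((p-1)/g) == 1 with g = gcd(k, p-1)."""
    g = gcd(k, p - 1)
    return pow(a, (p - 1) // g, p) == 1

# Over F_7 with k = 3, the nonzero cubes are exactly {1, 6}.
assert not has_kth_root(2, 3, 7)  # x^3 = 2 unsolvable: such a cover is only étale
assert has_kth_root(6, 3, 7)      # x^3 = 6 solvable: the cover is Nisnevich
for a in range(1, 7):
    assert has_kth_root(a, 3, 7) == has_kth_root_criterion(a, 3, 7)
```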
=== Zariski coverings ===
Every Zariski covering is Nisnevich, but the converse doesn't hold in general. This can be easily proven using any of the definitions, since for a Zariski cover the residue field maps are always isomorphisms, and by definition a Zariski cover gives a surjection on points. In addition, Zariski inclusions are always étale morphisms.
== Applications ==
Nisnevich introduced his topology to provide a cohomological interpretation of the class set of an affine group scheme, which was originally defined in adelic terms. He used it to partially prove a conjecture of Alexander Grothendieck and Jean-Pierre Serre which states that a rationally trivial torsor under a reductive group scheme over an integral regular Noetherian base scheme is locally trivial in the Zariski topology. One of the key properties of the Nisnevich topology is the existence of a descent spectral sequence. Let X be a Noetherian scheme of finite Krull dimension, and let Gn(X) be the Quillen K-groups of the category of coherent sheaves on X. If
G̃n(X) is the sheafification of these groups with respect to the Nisnevich topology (the "completely decomposed" topology, abbreviated cd), there is a convergent spectral sequence
{\displaystyle E_{2}^{p,q}=H^{p}(X_{\text{cd}},{\tilde {G}}_{q}^{\,{\text{cd}}})\Rightarrow G_{q-p}(X)}
for p ≥ 0, q ≥ 0, and p - q ≥ 0. If
ℓ is a prime number not equal to the characteristic of X, then there is an analogous convergent spectral sequence for K-groups with coefficients in Z/ℓZ.
The Nisnevich topology has also found important applications in algebraic K-theory, A¹ homotopy theory and the theory of motives.
== See also ==
Presheaf with transfers
Mixed motives (math)
A¹ homotopy theory
Henselian ring
== References ==
Nisnevich, Yevsey A. (1989). "The completely decomposed topology on schemes and associated descent spectral sequences in algebraic K-theory". In Jardine, J. F.; Snaith, V. P. (eds.). Algebraic K-theory: connections with geometry and topology. Proceedings of the NATO Advanced Study Institute held in Lake Louise, Alberta, December 7–11, 1987. NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences. Vol. 279. Dordrecht: Kluwer Academic Publishers Group. pp. 241–342, available at Nisnevich's website.
Levine, Marc (2008), Motivic Homotopy Theory (PDF) | Wikipedia/Nisnevich_topology |
In category theory, a branch of mathematics, a Grothendieck topology is a structure on a category C that makes the objects of C act like the open sets of a topological space. A category together with a choice of Grothendieck topology is called a site.
Grothendieck topologies axiomatize the notion of an open cover. Using the notion of covering provided by a Grothendieck topology, it becomes possible to define sheaves on a category and their cohomology. This was first done in algebraic geometry and algebraic number theory by Alexander Grothendieck to define the étale cohomology of a scheme. It has been used to define other cohomology theories since then, such as ℓ-adic cohomology, flat cohomology, and crystalline cohomology. While Grothendieck topologies are most often used to define cohomology theories, they have found other applications as well, such as to John Tate's theory of rigid analytic geometry.
There is a natural way to associate a site to an ordinary topological space, and Grothendieck's theory is loosely regarded as a generalization of classical topology. Under meager point-set hypotheses, namely sobriety, this is completely accurate—it is possible to recover a sober space from its associated site. However simple examples such as the indiscrete topological space show that not all topological spaces can be expressed using Grothendieck topologies. Conversely, there are Grothendieck topologies that do not come from topological spaces.
The term "Grothendieck topology" has changed in meaning. In Artin (1962) it meant what is now called a Grothendieck pretopology, and some authors still use this old meaning. Giraud (1964) modified the definition to use sieves rather than covers. Much of the time this does not make much difference, as each Grothendieck pretopology determines a unique Grothendieck topology, though quite different pretopologies can give the same topology.
== Overview ==
André Weil's famous Weil conjectures proposed that certain properties of equations with integral coefficients should be understood as geometric properties of the algebraic variety that they define. His conjectures postulated that there should be a cohomology theory of algebraic varieties that gives number-theoretic information about their defining equations. This cohomology theory was known as the "Weil cohomology", but using the tools he had available, Weil was unable to construct it.
In the early 1960s, Alexander Grothendieck introduced étale maps into algebraic geometry as algebraic analogues of local analytic isomorphisms in analytic geometry. He used étale coverings to define an algebraic analogue of the fundamental group of a topological space. Soon Jean-Pierre Serre noticed that some properties of étale coverings mimicked those of open immersions, and that consequently it was possible to make constructions that imitated the cohomology functor
H¹. Grothendieck saw that it would be possible to use Serre's idea to define a cohomology theory that he suspected would be the Weil cohomology. To define this cohomology theory, Grothendieck needed to replace the usual, topological notion of an open covering with one that would use étale coverings instead. Grothendieck also saw how to phrase the definition of covering abstractly; this is where the definition of a Grothendieck topology comes from.
== Definition ==
=== Motivation ===
The classical definition of a sheaf begins with a topological space
X. A sheaf associates information to the open sets of X. This information can be phrased abstractly by letting O(X) be the category whose objects are the open subsets U of X and whose morphisms are the inclusion maps V → U of open sets U and V of X. We will call such maps open immersions, just as in the context of schemes. Then a presheaf on X is a contravariant functor from O(X) to the category of sets, and a sheaf is a presheaf that satisfies the gluing axiom (here including the separation axiom). The gluing axiom is phrased in terms of pointwise covering, i.e., {Ui} covers U if and only if ⋃i Ui = U. In this definition, Ui is an open subset of X. Grothendieck topologies replace each Ui with an entire family of open subsets; in this example, Ui is replaced by the family of all open immersions Vij → Ui. Such a collection is called a sieve. Pointwise covering is replaced by the notion of a covering family; in the above example, the set of all {Vij → Ui}j as i varies is a covering family of U. Sieves and covering families can be axiomatized, and once this is done open sets and pointwise covering can be replaced by other notions that describe other properties of the space X.
=== Sieves ===
In a Grothendieck topology, the notion of a collection of open subsets of U stable under inclusion is replaced by the notion of a sieve. If c is any given object in C, a sieve on c is a subfunctor of the functor Hom(−, c); (this is the Yoneda embedding applied to c). In the case of O(X), a sieve S on an open set U selects a collection of open subsets of U that is stable under inclusion. More precisely, consider that for any open subset V of U, S(V) will be a subset of Hom(V, U), which has only one element, the open immersion V → U. Then V will be considered "selected" by S if and only if S(V) is nonempty. If W is a subset of V, then there is a morphism S(V) → S(W) given by composition with the inclusion W → V. If S(V) is non-empty, it follows that S(W) is also non-empty.
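In the O(X) case, the sieve condition reduces to downward closure of the selected family. A tiny executable model of this special case (the encoding of open sets as frozensets and the names `is_sieve`, `opens` are illustrative assumptions, not from the article):

```python
# Hypothetical finite model (not from the article): in O(X), a sieve on an
# open set U is a collection of open subsets of U stable under taking
# smaller open subsets -- if V is selected and W <= V, then W is selected.

def is_sieve(selected, opens, U):
    """Check downward closure: every open W contained in a selected V
    (with V <= U) must itself be selected."""
    return all(
        W in selected
        for V in selected
        for W in opens
        if W <= V and V <= U
    )

# Open sets of a tiny topological space, encoded as frozensets of points.
empty = frozenset()
a, ab = frozenset({"a"}), frozenset({"a", "b"})
opens = [empty, a, ab]

assert is_sieve({empty, a}, opens, ab)   # downward closed: a sieve on ab
assert not is_sieve({a}, opens, ab)      # missing the empty set: not a sieve
```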
If S is a sieve on X, and f: Y → X is a morphism, then left composition by f gives a sieve on Y called the pullback of S along f, denoted by f∗S. It is defined as the fibered product S ×Hom(−, X) Hom(−, Y) together with its natural embedding in Hom(−, Y). More concretely, for each object Z of C, f∗S(Z) = { g: Z → Y | fg ∈ S(Z) }, and f∗S inherits its action on morphisms by being a subfunctor of Hom(−, Y). In the classical example, the pullback of a collection {Vi} of subsets of U along an inclusion W → U is the collection {Vi∩W}.
=== Grothendieck topology ===
A Grothendieck topology J on a category C is a collection, for each object c of C, of distinguished sieves on c, denoted by J(c) and called covering sieves of c. This selection will be subject to certain axioms, stated below. Continuing the previous example, a sieve S on an open set U in O(X) will be a covering sieve if and only if the union of all the open sets V for which S(V) is nonempty equals U; in other words, if and only if S gives us a collection of open sets that cover U in the classical sense.
==== Axioms ====
The conditions we impose on a Grothendieck topology are:
(T 1) (Base change) If S is a covering sieve on X, and f: Y → X is a morphism, then the pullback f∗S is a covering sieve on Y.
(T 2) (Local character) Let S be a covering sieve on X, and let T be any sieve on X. Suppose that for each object Y of C and each arrow f: Y → X in S(Y), the pullback sieve f∗T is a covering sieve on Y. Then T is a covering sieve on X.
(T 3) (Identity) Hom(−, X) is a covering sieve on X for any object X in C.
The base change axiom corresponds to the idea that if {Ui} covers U, then {Ui ∩ V} should cover U ∩ V. The local character axiom corresponds to the idea that if {Ui} covers U and {Vij}j∈Ji covers Ui for each i, then the collection {Vij} for all i and j should cover U. Lastly, the identity axiom corresponds to the idea that any set is covered by itself via the identity map.
==== Grothendieck pretopologies ====
In fact, it is possible to put these axioms in another form where their geometric character is more apparent, assuming that the underlying category C contains certain fibered products. In this case, instead of specifying sieves, we can specify that certain collections of maps with a common codomain should cover their codomain. These collections are called covering families. If the collection of all covering families satisfies certain axioms, then we say that they form a Grothendieck pretopology. These axioms are:
(PT 0) (Existence of fibered products) For all objects X of C, and for all morphisms X0 → X that appear in some covering family of X, and for all morphisms Y → X, the fibered product X0 ×X Y exists.
(PT 1) (Stability under base change) For all objects X of C, all morphisms Y → X, and all covering families {Xα → X}, the family {Xα ×X Y → Y} is a covering family.
(PT 2) (Local character) If {Xα → X} is a covering family, and if for all α, {Xβα → Xα} is a covering family, then the family of composites {Xβα → Xα → X} is a covering family.
(PT 3) (Isomorphisms) If f: Y → X is an isomorphism, then {f} is a covering family.
For any pretopology, the collection of all sieves that contain a covering family from the pretopology is always a Grothendieck topology.
For categories with fibered products, there is a converse. Given a collection of arrows {Xα → X}, we construct a sieve S by letting S(Y) be the set of all morphisms Y → X that factor through some arrow Xα → X. This is called the sieve generated by {Xα → X}. Now choose a topology. Say that {Xα → X} is a covering family if and only if the sieve that it generates is a covering sieve for the given topology. It is easy to check that this defines a pretopology.
(PT 3) is sometimes replaced by a weaker axiom:
(PT 3') (Identity) If 1X : X → X is the identity arrow, then {1X} is a covering family.
(PT 3) implies (PT 3'), but not conversely. However, suppose that we have a collection of covering families that satisfies (PT 0) through (PT 2) and (PT 3'), but not (PT 3). These families generate a pretopology. The topology generated by the original collection of covering families is then the same as the topology generated by the pretopology, because the sieve generated by an isomorphism Y → X is Hom(−, X). Consequently, if we restrict our attention to topologies, (PT 3) and (PT 3') are equivalent.
== Sites and sheaves ==
Let C be a category and let J be a Grothendieck topology on C. The pair (C, J) is called a site.
A presheaf on a category is a contravariant functor from C to the category of all sets. Note that for this definition C is not required to have a topology. A sheaf on a site, however, should allow gluing, just like sheaves in classical topology. Consequently, we define a sheaf on a site to be a presheaf F such that for all objects X and all covering sieves S on X, the natural map Hom(Hom(−, X), F) → Hom(S, F), induced by the inclusion of S into Hom(−, X), is a bijection. Halfway in between a presheaf and a sheaf is the notion of a separated presheaf, where the natural map above is required to be only an injection, not a bijection, for all sieves S. A morphism of presheaves or of sheaves is a natural transformation of functors. The category of all sheaves on C is the topos defined by the site (C, J).
Using the Yoneda lemma, it is possible to show that a presheaf on the category O(X) is a sheaf on the topology defined above if and only if it is a sheaf in the classical sense.
Sheaves on a pretopology have a particularly simple description: For each covering family {Xα → X}, the diagram
{\displaystyle F(X)\rightarrow \prod _{\alpha \in A}F(X_{\alpha }){{{} \atop \longrightarrow } \atop {\longrightarrow \atop {}}}\prod _{\alpha ,\beta \in A}F(X_{\alpha }\times _{X}X_{\beta })}
must be an equalizer. For a separated presheaf, the first arrow need only be injective.
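For the classical presheaf of functions on a set, the equalizer condition is just the familiar statement that local functions agreeing on overlaps glue uniquely. A small sketch of that special case (the dict encoding and the name `glue` are illustrative assumptions, not from the article):

```python
# Illustrative sketch (not from the article): for functions on a finite set,
# the equalizer condition says a family of local sections agreeing on the
# overlaps X_a ∩ X_b glues to a unique function on X.

def glue(sections, cover):
    """Glue compatible local functions (dicts point -> value); None if the
    sections disagree on some overlap."""
    glued = {}
    for piece, s in zip(cover, sections):
        for p in piece:
            if p in glued and glued[p] != s[p]:
                return None  # disagreement on an overlap: not compatible
            glued[p] = s[p]
    return glued

cover = [{"x", "y"}, {"y", "z"}]
s1 = {"x": 1, "y": 2}
s2 = {"y": 2, "z": 3}
assert glue([s1, s2], cover) == {"x": 1, "y": 2, "z": 3}  # compatible: glues
assert glue([s1, {"y": 9, "z": 3}], cover) is None        # disagree at y
```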
Similarly, one can define presheaves and sheaves of abelian groups, rings, modules, and so on. One can require either that a presheaf F is a contravariant functor to the category of abelian groups (or rings, or modules, etc.), or that F be an abelian group (ring, module, etc.) object in the category of all contravariant functors from C to the category of sets. These two definitions are equivalent.
== Examples of sites ==
=== The discrete and indiscrete topologies ===
Let C be any category. To define the discrete topology, we declare all sieves to be covering sieves. If C has all fibered products, this is equivalent to declaring all families to be covering families. To define the indiscrete topology, also known as the coarse or chaotic topology, we declare only the sieves of the form Hom(−, X) to be covering sieves. The indiscrete topology is generated by the pretopology that has only isomorphisms for covering families. A sheaf on the indiscrete site is the same thing as a presheaf.
=== The canonical topology ===
Let C be any category. The Yoneda embedding gives a functor Hom(−, X) for each object X of C. The canonical topology is the biggest (finest) topology such that every representable presheaf, i.e. presheaf of the form Hom(−, X), is a sheaf. A covering sieve or covering family for this site is said to be strictly universally epimorphic because it consists of the legs of a colimit cone (under the full diagram on the domains of its constituent morphisms) and these colimits are stable under pullbacks along morphisms in C. A topology that is less fine than the canonical topology, that is, for which every covering sieve is strictly universally epimorphic, is called subcanonical. Subcanonical sites are exactly the sites for which every presheaf of the form Hom(−, X) is a sheaf. Most sites encountered in practice are subcanonical.
=== Small site associated to a topological space ===
We repeat the example that we began with above. Let X be a topological space. We defined O(X) to be the category whose objects are the open sets of X and whose morphisms are inclusions of open sets. Note that for an open set U and a sieve S on U, the set S(V) contains either zero or one element for every open set V. The covering sieves on an object U of O(X) are those sieves S satisfying the following condition:
If W is the union of all the sets V such that S(V) is non-empty, then W = U.
This notion of cover matches the usual notion in point-set topology.
This topology can also naturally be expressed as a pretopology. We say that a family of inclusions {Vα ⊆ U} is a covering family if and only if the union ⋃ Vα equals U. This site is called the small site associated to a topological space X.
=== Big site associated to a topological space ===
Let Spc be the category of all topological spaces. Given any family of functions {uα : Vα → X}, we say that it is a surjective family or that the morphisms uα are jointly surjective if ⋃ uα(Vα) equals X. We define a pretopology on Spc by taking the covering families to be surjective families all of whose members are open immersions. Let S be a sieve on Spc. S is a covering sieve for this topology if and only if:
For all Y and every morphism f : Y → X in S(Y), there exists a V and a g : V → X such that g is an open immersion, g is in S(V), and f factors through g.
If W is the union of all the sets f(Y), where f : Y → X is in S(Y), then W = X.
Fix a topological space X. Consider the comma category Spc/X of topological spaces with a fixed continuous map to X. The topology on Spc induces a topology on Spc/X. The covering sieves and covering families are almost exactly the same; the only difference is that now all the maps involved commute with the fixed maps to X. This is the big site associated to a topological space X. Notice that Spc is the big site associated to the one point space. This site was first considered by Jean Giraud.
=== The big and small sites of a manifold ===
Let M be a manifold. M has a category of open sets O(M) because it is a topological space, and it gets a topology as in the above example. For two open sets U and V of M, the fiber product U ×M V is the open set U ∩ V, which is still in O(M). This means that the topology on O(M) is defined by a pretopology, the same pretopology as before.
Let Mfd be the category of all manifolds and continuous maps. (Or smooth manifolds and smooth maps, or real analytic manifolds and analytic maps, etc.) Mfd is a subcategory of Spc, and open immersions are continuous (or smooth, or analytic, etc.), so Mfd inherits a topology from Spc. This lets us construct the big site of the manifold M as the site Mfd/M. We can also define this topology using the same pretopology we used above. Notice that to satisfy (PT 0), we need to check that for any continuous map of manifolds X → Y and any open subset U of Y, the fibered product U ×Y X is in Mfd/M. This is just the statement that the preimage of an open set is open. Notice, however, that not all fibered products exist in Mfd because the preimage of a smooth map at a critical value need not be a manifold.
=== Topologies on the category of schemes ===
The category of schemes, denoted Sch, has a tremendous number of useful topologies. A complete understanding of some questions may require examining a scheme using several different topologies. All of these topologies have associated small and big sites. The big site is formed by taking the entire category of schemes and their morphisms, together with the covering sieves specified by the topology. The small site over a given scheme is formed by only taking the objects and morphisms that are part of a cover of the given scheme.
The most elementary of these is the Zariski topology. Let X be a scheme. X has an underlying topological space, and this topological space determines a Grothendieck topology. The Zariski topology on Sch is generated by the pretopology whose covering families are jointly surjective families of scheme-theoretic open immersions. The covering sieves S for Zar are characterized by the following two properties:
For all Y and every morphism f : Y → X in S(Y), there exists a V and a g : V → X such that g is an open immersion, g is in S(V), and f factors through g.
If W is the union of all the sets f(Y), where f : Y → X is in S(Y), then W = X.
Despite their outward similarities, the topology on Zar is not the restriction of the topology on Spc! This is because there are morphisms of schemes that are topologically open immersions but that are not scheme-theoretic open immersions. For example, let A be a non-reduced ring and let N be its ideal of nilpotents. The quotient map A → A/N induces a map Spec A/N → Spec A, which is the identity on underlying topological spaces. To be a scheme-theoretic open immersion it must also induce an isomorphism on structure sheaves, which this map does not do. In fact, this map is a closed immersion.
The étale topology is finer than the Zariski topology. It was the first Grothendieck topology to be closely studied. Its covering families are jointly surjective families of étale morphisms. It is finer than the Nisnevich topology, but neither finer nor coarser than the cdh and l′ topologies.
There are two flat topologies, the fppf topology and the fpqc topology. fppf stands for fidèlement plate de présentation finie, and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat, of finite presentation, and is quasi-finite. fpqc stands for fidèlement plate et quasi-compacte, and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat. In both categories, a covering family is defined to be a family that is a cover on Zariski open subsets. In the fpqc topology, any faithfully flat and quasi-compact morphism is a cover. These topologies are closely related to descent. The fpqc topology is finer than all the topologies mentioned above, and it is very close to the canonical topology.
Grothendieck introduced crystalline cohomology to study the p-torsion part of the cohomology of characteristic p varieties. In the crystalline topology, which is the basis of this theory, the underlying category has objects given by infinitesimal thickenings together with divided power structures. Crystalline sites are examples of sites with no final object.
== Continuous and cocontinuous functors ==
There are two natural types of functors between sites. They are given by functors that are compatible with the topology in a certain sense.
=== Continuous functors ===
If (C, J) and (D, K) are sites and u : C → D is a functor, then u is continuous if for every sheaf F on D with respect to the topology K, the presheaf Fu is a sheaf with respect to the topology J. Continuous functors induce functors between the corresponding topoi by sending a sheaf F to Fu. These functors are called pushforwards. If C̃ and D̃ denote the topoi associated to C and D, then the pushforward functor is u_s : D̃ → C̃.
u_s admits a left adjoint u^s called the pullback. u^s need not preserve limits, even finite limits.
In the same way, u sends a sieve on an object X of C to a sieve on the object uX of D. A continuous functor sends covering sieves to covering sieves. If J is the topology defined by a pretopology, and if u commutes with fibered products, then u is continuous if and only if it sends covering sieves to covering sieves and if and only if it sends covering families to covering families. In general, it is not sufficient for u to send covering sieves to covering sieves (see SGA IV 3, Exemple 1.9.3).
=== Cocontinuous functors ===
Again, let (C, J) and (D, K) be sites and v : C → D be a functor. If X is an object of C and R is a sieve on vX, then R can be pulled back to a sieve S as follows: A morphism f : Z → X is in S if and only if v(f) : vZ → vX is in R. This defines a sieve. v is cocontinuous if and only if for every object X of C and every covering sieve R of vX, the pullback S of R is a covering sieve on X.
Composition with v sends a presheaf F on D to a presheaf Fv on C, but if v is cocontinuous, this need not send sheaves to sheaves. However, this functor on presheaf categories, usually denoted v̂^*, admits a right adjoint v̂_*. Then v is cocontinuous if and only if v̂_* sends sheaves to sheaves, that is, if and only if it restricts to a functor v_* : C̃ → D̃. In this case, the composite of v̂^* with the associated sheaf functor is a left adjoint of v_* denoted v^*. Furthermore, v^* preserves finite limits, so the adjoint functors v^* and v_* determine a geometric morphism of topoi C̃ → D̃.
=== Morphisms of sites ===
A continuous functor u : C → D is a morphism of sites D → C (not C → D) if u^s preserves finite limits. In this case, u^s and u_s determine a geometric morphism of topoi C̃ → D̃. The reasoning behind the convention that a continuous functor C → D is said to determine a morphism of sites in the opposite direction is that this agrees with the intuition coming from the case of topological spaces. A continuous map of topological spaces X → Y determines a continuous functor O(Y) → O(X). Since the original map on topological spaces is said to send X to Y, the morphism of sites is said to as well.
A particular case of this happens when a continuous functor admits a left adjoint. Suppose that u : C → D and v : D → C are functors with u right adjoint to v. Then u is continuous if and only if v is cocontinuous, and when this happens, u_s is naturally isomorphic to v_* and u^s is naturally isomorphic to v^*. In particular, u is a morphism of sites.
== See also ==
Fibered category
Lawvere–Tierney topology
== Notes ==
== References ==
Artin, Michael (1962). Grothendieck topologies. Notes on a Seminar Spring 1962. Department of Mathematics, Harvard University. OCLC 680377057. Zbl 0208.48701.
Demazure, Michel; Grothendieck, Alexandre, eds. (1970). Séminaire de Géométrie Algébrique du Bois Marie — 1962–64 — Schémas en groupes — (SGA 3) vol. 1. Lecture notes in mathematics (in French). Vol. 151. Springer. pp. xv+564. Zbl 0212.52810.
Artin, Michael (1972). Alexandre Grothendieck; Jean-Louis Verdier (eds.). Théorie des Topos et Cohomologie Etale des Schémas. Lecture notes in mathematics (in French). Vol. 269. Springer. xix+525. doi:10.1007/BFb0081551. ISBN 978-3-540-37549-4.
Giraud, Jean (1964), "Analysis situs", Séminaire Bourbaki, 1962/63. Fasc. 3, Paris: Secrétariat mathématique, MR 0193122
Shatz, Stephen S. (1972). Profinite groups, arithmetic, and geometry. Annals of Mathematics Studies. Vol. 67. Princeton University Press. ISBN 0-691-08017-8. MR 0347778. Zbl 0236.12002.
Nisnevich, Yevsey A. (2012) [1989]. "The completely decomposed topology on schemes and associated descent spectral sequences in algebraic K-theory". In Jardine, J. F.; Snaith, V. P. (eds.). Algebraic K-theory: connections with geometry and topology. Proceedings of the NATO Advanced Study Institute held in Lake Louise, Alberta, December 7–11, 1987. NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences. Vol. 279. Springer. pp. 241–342. doi:10.1007/978-94-009-2399-7_11. ISBN 978-94-009-2399-7. Zbl 0715.14009.
== External links ==
The birthday of Grothendieck topologies
The birthday of Grothendieck topologies (non-archived version) | Wikipedia/Grothendieck_topology |
In mathematics, and more specifically in ring theory, an ideal of a ring is a special subset of its elements. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any integer (even or odd) results in an even number; these closure and absorption properties are the defining properties of an ideal. An ideal can be used to construct a quotient ring in a way similar to how, in group theory, a normal subgroup can be used to construct a quotient group.
Among the integers, the ideals correspond one-for-one with the non-negative integers: in this ring, every ideal is a principal ideal consisting of the multiples of a single non-negative number. However, in other rings, the ideals may not correspond directly to the ring elements, and certain properties of integers, when generalized to rings, attach more naturally to the ideals than to the elements of the ring. For instance, the prime ideals of a ring are analogous to prime numbers, and the Chinese remainder theorem can be generalized to ideals. There is a version of unique prime factorization for the ideals of a Dedekind domain (a type of ring important in number theory).
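The correspondence for the integers can be made concrete: the ideal generated by any finite set of integers consists of the multiples of their gcd. A short sketch (the helper `generator` is an illustrative name, not from the article):

```python
# Sketch (not from the article): in Z, the ideal (g1, ..., gn) generated by
# finitely many integers is principal, generated by gcd(g1, ..., gn).
from functools import reduce
from math import gcd

def generator(gens):
    """Non-negative generator of the ideal (g1, ..., gn) in Z."""
    return reduce(gcd, gens, 0)

assert generator([4, 6]) == 2   # 4Z + 6Z = 2Z
assert generator([5, 7]) == 1   # coprime generators give the unit ideal
assert generator([0]) == 0      # the zero ideal
```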
The related, but distinct, concept of an ideal in order theory is derived from the notion of ideal in ring theory. A fractional ideal is a generalization of an ideal, and the usual ideals are sometimes called integral ideals for clarity.
== History ==
Ernst Kummer invented the concept of ideal numbers to serve as the "missing" factors in number rings in which unique factorization fails; here the word "ideal" is in the sense of existing in imagination only, in analogy with "ideal" objects in geometry such as points at infinity.
In 1876, Richard Dedekind replaced Kummer's undefined concept by concrete sets of numbers, sets that he called ideals, in the third edition of Dirichlet's book Vorlesungen über Zahlentheorie, to which Dedekind had added many supplements.
Later the notion was extended beyond number rings to the setting of polynomial rings and other commutative rings by David Hilbert and especially Emmy Noether.
== Definitions ==
Given a ring R, a left ideal is a subset I of R that is a subgroup of the additive group of R that "absorbs multiplication from the left by elements of R"; that is, I is a left ideal if it satisfies the following two conditions:
(I, +) is a subgroup of (R, +),
For every r ∈ R and every x ∈ I, the product rx is in I.
In other words, a left ideal is a left submodule of R, considered as a left module over itself.
A right ideal is defined similarly, with the condition rx ∈ I replaced by xr ∈ I. A two-sided ideal is a left ideal that is also a right ideal.
If the ring is commutative, the three definitions are the same, and one talks simply of an ideal. In the non-commutative case, "ideal" is often used instead of "two-sided ideal".
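The two defining conditions can be sanity-checked on a finite sample of a concrete example, the ideal 3Z in the commutative ring Z (the predicate `in_ideal` and the sample range are illustrative assumptions, not from the article):

```python
# Finite-sample check (illustrative, not from the article): the multiples of 3
# in Z satisfy both ideal conditions -- (I, +) is a subgroup (here checked via
# closure under subtraction) and I absorbs multiplication by any ring element.

def in_ideal(x: int) -> bool:
    """Membership test for the ideal 3Z."""
    return x % 3 == 0

sample = range(-9, 10)

# Subgroup condition: the difference of two members is a member.
assert all(in_ideal(x - y) for x in sample if in_ideal(x)
                           for y in sample if in_ideal(y))
# Absorption condition: rx lies in the ideal for every ring element r.
assert all(in_ideal(r * x) for r in sample for x in sample if in_ideal(x))
```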
If I is a left, right or two-sided ideal, the relation x ∼ y if and only if x − y ∈ I is an equivalence relation on R, and the set of equivalence classes forms a left, right or bimodule denoted R/I and called the quotient of R by I. (It is an instance of a congruence relation and is a generalization of modular arithmetic.)
If the ideal I is two-sided, R/I is a ring, and the function R → R/I that associates to each element of R its equivalence class is a surjective ring homomorphism that has the ideal as its kernel. Conversely, the kernel of a ring homomorphism is a two-sided ideal. Therefore, the two-sided ideals are exactly the kernels of ring homomorphisms.
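A minimal model of this quotient construction for Z and the ideal 6Z, with cosets represented by canonical remainders (the names `pi` and `N` are illustrative, not from the article):

```python
# Minimal model (illustrative, not from the article): the quotient Z/6Z.
# The map r |-> r mod 6 is a surjective ring homomorphism Z -> Z/6Z whose
# kernel is exactly the ideal 6Z.

N = 6

def pi(r: int) -> int:
    """Quotient map Z -> Z/NZ, sending r to its coset representative."""
    return r % N

sample = range(-12, 13)
# Ring homomorphism: pi respects addition and multiplication.
assert all(pi(a + b) == (pi(a) + pi(b)) % N for a in sample for b in sample)
assert all(pi(a * b) == (pi(a) * pi(b)) % N for a in sample for b in sample)
# Kernel of pi is precisely the ideal (6).
assert all((pi(a) == 0) == (a % 6 == 0) for a in sample)
```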
=== Note on convention ===
By convention, a ring has the multiplicative identity. But some authors do not require a ring to have the multiplicative identity; i.e., for them, a ring is a rng. For a rng R, a left ideal I is a subrng with the additional property that rx is in I for every r ∈ R and every x ∈ I. (Right and two-sided ideals are defined similarly.) For a ring, an ideal I (say a left ideal) is rarely a subring; since a subring shares the same multiplicative identity with the ambient ring R, if I were a subring, then for every r ∈ R we would have r = r1 ∈ I; i.e., I = R.
The notion of an ideal does not involve associativity; thus, an ideal is also defined for non-associative rings (often without the multiplicative identity) such as a Lie algebra.
== Examples and properties ==
(For the sake of brevity, some results are stated only for left ideals but are usually also true for right ideals with appropriate notation changes.)
In a ring R, the set R itself forms a two-sided ideal of R called the unit ideal. It is often also denoted by (1) since it is precisely the two-sided ideal generated (see below) by the unity 1R. Also, the set {0R} consisting of only the additive identity 0R forms a two-sided ideal called the zero ideal and is denoted by (0). Every (left, right or two-sided) ideal contains the zero ideal and is contained in the unit ideal.
A (left, right or two-sided) ideal that is not the unit ideal is called a proper ideal (as it is a proper subset). Note: a left ideal 𝔞 is proper if and only if it does not contain a unit element, since if u ∈ 𝔞 is a unit element, then r = (ru⁻¹)u ∈ 𝔞 for every r ∈ R. Typically there are plenty of proper ideals. In fact, if R is a skew-field, then (0), (1) are its only ideals, and conversely: that is, a nonzero ring R is a skew-field if (0), (1) are its only left (or right) ideals. (Proof: if x is a nonzero element, then the principal left ideal Rx (see below) is nonzero and thus Rx = (1); i.e., yx = 1 for some nonzero y. Likewise, zy = 1 for some nonzero z. Then z = z(yx) = (zy)x = x.)
The even integers form an ideal in the ring ℤ of all integers, since the sum of any two even integers is even, and the product of any integer with an even integer is also even; this ideal is usually denoted by 2ℤ. More generally, the set of all integers divisible by a fixed integer n is an ideal denoted nℤ. In fact, every nonzero ideal of the ring ℤ is generated by its smallest positive element, as a consequence of Euclidean division, so ℤ is a principal ideal domain.
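In particular, the ideal generated by several integers n₁, …, nₖ equals the principal ideal generated by gcd(n₁, …, nₖ). A short Python check of this (the helper name is illustrative):

```python
from math import gcd
from functools import reduce

def in_ideal(x, gens):
    """Membership test for the ideal of Z generated by gens:
    x lies in (n1, ..., nk) iff gcd(n1, ..., nk) divides x."""
    g = reduce(gcd, gens)
    return x % g == 0

# The ideal (12, 18) of Z equals (6), its smallest positive element:
gens = [12, 18]
g = reduce(gcd, gens)
assert g == 6
# Every Z-linear combination of the generators is a multiple of 6 ...
for a in range(-10, 11):
    for b in range(-10, 11):
        assert in_ideal(12 * a + 18 * b, [g])
# ... and 6 itself is such a combination (Bezout): 6 = 12*(-1) + 18*1.
assert 12 * (-1) + 18 * 1 == 6
```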
The set of all polynomials with real coefficients that are divisible by the polynomial x² + 1 is an ideal in the ring ℝ[x] of all polynomials with real coefficients.
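Membership in this ideal is just divisibility by x² + 1, which can be tested with a polynomial remainder. A quick SymPy sketch of the two closure properties (sum of two members, absorption of multiplication by the ring); the helper name is illustrative:

```python
from sympy import symbols, rem

x = symbols('x')
m = x**2 + 1  # the generator of the ideal

def in_ideal(p):
    """p lies in (x^2 + 1) iff the remainder on division by x^2 + 1 is 0."""
    return rem(p, m, x) == 0

p1 = (x**2 + 1) * (x - 3)       # a member of the ideal
p2 = (x**2 + 1) * (2 * x**2)    # another member
q = 5 * x**3 - 7                # an arbitrary ring element

assert in_ideal(p1) and in_ideal(p2)
assert in_ideal(p1 + p2)    # closed under addition
assert in_ideal(q * p1)     # absorbs multiplication by the ring
assert not in_ideal(x + 1)  # x + 1 is not a multiple of x^2 + 1
```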
Take a ring R and a positive integer n. For each 1 ≤ i ≤ n, the set of all n × n matrices with entries in R whose i-th row is zero is a right ideal in the ring Mₙ(R) of all n × n matrices with entries in R. It is not a left ideal. Similarly, for each 1 ≤ j ≤ n, the set of all n × n matrices whose j-th column is zero is a left ideal but not a right ideal.
The ring C(ℝ) of all continuous functions f from ℝ to ℝ under pointwise multiplication contains the ideal of all continuous functions f such that f(1) = 0. Another ideal in C(ℝ) is given by those functions that vanish for large enough arguments, i.e. those continuous functions f for which there exists a number L > 0 such that f(x) = 0 whenever |x| > L.
A ring is called a simple ring if it is nonzero and has no two-sided ideals other than (0) and (1). Thus, a skew-field is simple, and a simple commutative ring is a field. The matrix ring over a skew-field is a simple ring.
If f : R → S is a ring homomorphism, then the kernel ker(f) = f⁻¹(0S) is a two-sided ideal of R. By definition, f(1R) = 1S, and thus if S is not the zero ring (so 1S ≠ 0S), then ker(f) is a proper ideal. More generally, for each left ideal I of S, the pre-image f⁻¹(I) is a left ideal. If I is a left ideal of R, then f(I) is a left ideal of the subring f(R) of S: unless f is surjective, f(I) need not be an ideal of S; see also § Extension and contraction of an ideal.
Ideal correspondence: Given a surjective ring homomorphism f : R → S, there is a bijective order-preserving correspondence between the left (resp. right, two-sided) ideals of R containing the kernel of f and the left (resp. right, two-sided) ideals of S: the correspondence is given by I ↦ f(I) and the pre-image J ↦ f⁻¹(J). Moreover, for commutative rings, this bijective correspondence restricts to prime ideals, maximal ideals, and radical ideals (see the Types of ideals section for the definitions of these ideals).
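As a small concrete check of this correspondence, take the surjection ℤ → ℤ/12ℤ: the ideals of ℤ containing the kernel 12ℤ are the ideals dℤ with d dividing 12, and their images are exactly the ideals of ℤ/12ℤ. The following Python sketch enumerates all ideals of ℤ/12ℤ by brute force and compares them with those images (names are illustrative):

```python
from itertools import chain, combinations

N = 12
ring = list(range(N))

def is_ideal(subset):
    """Brute-force ideal test in the commutative ring Z/NZ."""
    s = set(subset)
    if 0 not in s:
        return False
    return all((a - b) % N in s for a in s for b in s) and \
           all((r * a) % N in s for r in ring for a in s)

def powerset(it):
    xs = list(it)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

# All ideals of Z/12Z, found by checking every subset:
ideals = {frozenset(s) for s in powerset(ring) if is_ideal(s)}

# Images of the ideals dZ of Z containing 12Z, i.e. those with d | 12:
divisors = [d for d in range(1, N + 1) if N % d == 0]
images = {frozenset((d * k) % N for k in range(N)) for d in divisors}

assert ideals == images   # the correspondence is a bijection
assert len(ideals) == 6   # one ideal per divisor of 12: 1, 2, 3, 4, 6, 12
```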
If M is a left R-module and S ⊂ M a subset, then the annihilator Ann_R(S) = {r ∈ R ∣ rs = 0 for all s ∈ S} of S is a left ideal. Given ideals 𝔞, 𝔟 of a commutative ring R, the R-annihilator of (𝔟 + 𝔞)/𝔞 is an ideal of R called the ideal quotient of 𝔞 by 𝔟 and is denoted by (𝔞 : 𝔟); it is an instance of the idealizer in commutative algebra.
Let 𝔞ᵢ, i ∈ S, be an ascending chain of left ideals in a ring R; i.e., S is a totally ordered set and 𝔞ᵢ ⊂ 𝔞ⱼ for each i < j. Then the union ⋃_{i∈S} 𝔞ᵢ is a left ideal of R. (Note: this fact remains true even if R is without the unity 1.)
The above fact together with Zorn's lemma proves the following: if E ⊂ R is a possibly empty subset and 𝔞₀ ⊂ R is a left ideal that is disjoint from E, then there is an ideal that is maximal among the ideals containing 𝔞₀ and disjoint from E. (Again this is still valid if the ring R lacks the unity 1.) When R ≠ 0, taking 𝔞₀ = (0) and E = {1}, in particular, there exists a left ideal that is maximal among proper left ideals (often simply called a maximal left ideal); see Krull's theorem for more.
An arbitrary union of ideals need not be an ideal, but the following is still true: given a possibly empty subset X of R, there is a smallest left ideal containing X, called the left ideal generated by X and denoted by RX. Such an ideal exists since it is the intersection of all left ideals containing X. Equivalently, RX is the set of all the (finite) left R-linear combinations of elements of X over R:
RX = {r₁x₁ + ⋯ + rₙxₙ ∣ n ∈ ℕ, rᵢ ∈ R, xᵢ ∈ X}
(since such a span is the smallest left ideal containing X). A right (resp. two-sided) ideal generated by X is defined in a similar way. For "two-sided", one has to use linear combinations from both sides; i.e.,
RXR = {r₁x₁s₁ + ⋯ + rₙxₙsₙ ∣ n ∈ ℕ, rᵢ ∈ R, sᵢ ∈ R, xᵢ ∈ X}.
A left (resp. right, two-sided) ideal generated by a single element x is called the principal left (resp. right, two-sided) ideal generated by x and is denoted by Rx (resp. xR, RxR). The principal two-sided ideal RxR is often also denoted by (x). If X = {x₁, …, xₙ} is a finite set, then RXR is also written as (x₁, …, xₙ).
There is a bijective correspondence between ideals and congruence relations (equivalence relations that respect the ring structure) on the ring: Given an ideal I of a ring R, let x ∼ y if x − y ∈ I. Then ∼ is a congruence relation on R. Conversely, given a congruence relation ∼ on R, let I = {x ∈ R : x ∼ 0}. Then I is an ideal of R.
== Types of ideals ==
To simplify the description, all rings here are assumed to be commutative. The non-commutative case is discussed in detail in the respective articles.
Ideals are important because they appear as kernels of ring homomorphisms and allow one to define factor rings. Different types of ideals are studied because they can be used to construct different types of factor rings.
Maximal ideal: A proper ideal I is called a maximal ideal if there exists no other proper ideal J with I a proper subset of J. The factor ring of a maximal ideal is a simple ring in general and is a field for commutative rings.
Minimal ideal: A nonzero ideal is called minimal if it contains no other nonzero ideal.
Zero ideal: the ideal {0}.
Unit ideal: the whole ring (being the ideal generated by 1).
Prime ideal: A proper ideal I is called a prime ideal if for any a and b in R, if ab is in I, then at least one of a and b is in I. The factor ring of a prime ideal is a prime ring in general and is an integral domain for commutative rings.
Radical ideal or semiprime ideal: A proper ideal I is called radical or semiprime if for any a in R, if aⁿ is in I for some n, then a is in I. The factor ring of a radical ideal is a semiprime ring for general rings, and is a reduced ring for commutative rings.
Primary ideal: An ideal I is called a primary ideal if for all a and b in R, if ab is in I, then at least one of a and bⁿ is in I for some natural number n. Every prime ideal is primary, but not conversely. A semiprime primary ideal is prime.
Principal ideal: An ideal generated by one element.
Finitely generated ideal: This type of ideal is finitely generated as a module.
Primitive ideal: A left primitive ideal is the annihilator of a simple left module.
Irreducible ideal: An ideal is said to be irreducible if it cannot be written as an intersection of ideals that properly contain it.
Comaximal ideals: Two ideals I, J are said to be comaximal if x + y = 1 for some x ∈ I and y ∈ J.
Regular ideal: This term has multiple uses. See the article for a list.
Nil ideal: An ideal is a nil ideal if each of its elements is nilpotent.
Nilpotent ideal: Some power of it is zero.
Parameter ideal: an ideal generated by a system of parameters.
Perfect ideal: A proper ideal I in a Noetherian ring R is called a perfect ideal if its grade equals the projective dimension of the associated quotient ring, grade(I) = proj dim(R/I). A perfect ideal is unmixed.
Unmixed ideal: A proper ideal I in a Noetherian ring R is called an unmixed ideal (in height) if the height of I is equal to the height of every associated prime P of R/I. (This is stronger than saying that R/I is equidimensional. See also equidimensional ring.)
Two other important terms using "ideal" are not always ideals of their ring. See their respective articles for details:
Fractional ideal: This is usually defined when R is a commutative domain with quotient field K. Despite their names, fractional ideals are R-submodules of K with a special property. If a fractional ideal is contained entirely in R, then it is truly an ideal of R.
Invertible ideal: Usually an invertible ideal A is defined as a fractional ideal for which there is another fractional ideal B such that AB = BA = R. Some authors may also apply "invertible ideal" to ordinary ring ideals A and B with AB = BA = R in rings other than domains.
== Ideal operations ==
The sum and product of ideals are defined as follows. For 𝔞 and 𝔟, left (resp. right) ideals of a ring R, their sum is
𝔞 + 𝔟 := {a + b ∣ a ∈ 𝔞 and b ∈ 𝔟},
which is a left (resp. right) ideal, and, if 𝔞, 𝔟 are two-sided,
𝔞𝔟 := {a₁b₁ + ⋯ + aₙbₙ ∣ aᵢ ∈ 𝔞 and bᵢ ∈ 𝔟, i = 1, 2, …, n; for n = 1, 2, …},
i.e. the product is the ideal generated by all products of the form ab with a in 𝔞 and b in 𝔟.
Note 𝔞 + 𝔟 is the smallest left (resp. right) ideal containing both 𝔞 and 𝔟 (or the union 𝔞 ∪ 𝔟), while the product 𝔞𝔟 is contained in the intersection of 𝔞 and 𝔟.
The distributive law holds for two-sided ideals 𝔞, 𝔟, 𝔠:
𝔞(𝔟 + 𝔠) = 𝔞𝔟 + 𝔞𝔠,
(𝔞 + 𝔟)𝔠 = 𝔞𝔠 + 𝔟𝔠.
If a product is replaced by an intersection, a partial distributive law holds:
𝔞 ∩ (𝔟 + 𝔠) ⊃ (𝔞 ∩ 𝔟) + (𝔞 ∩ 𝔠),
where the equality holds if 𝔞 contains 𝔟 or 𝔠.
Remark: The sum and the intersection of ideals is again an ideal; with these two operations as join and meet, the set of all ideals of a given ring forms a complete modular lattice. The lattice is not, in general, a distributive lattice. The three operations of intersection, sum (or join), and product make the set of ideals of a commutative ring into a quantale.
If 𝔞, 𝔟 are ideals of a commutative ring R, then 𝔞 ∩ 𝔟 = 𝔞𝔟 in the following two cases (at least):
𝔞 + 𝔟 = (1);
𝔞 is generated by elements that form a regular sequence modulo 𝔟.
(More generally, the difference between a product and an intersection of ideals is measured by the Tor functor: Tor₁^R(R/𝔞, R/𝔟) = (𝔞 ∩ 𝔟)/𝔞𝔟.)
An integral domain is called a Dedekind domain if for each pair of ideals 𝔞 ⊂ 𝔟, there is an ideal 𝔠 such that 𝔞 = 𝔟𝔠. It can then be shown that every nonzero ideal of a Dedekind domain can be uniquely written as a product of maximal ideals, a generalization of the fundamental theorem of arithmetic.
== Examples of ideal operations ==
In ℤ we have (n) ∩ (m) = lcm(n, m)ℤ, since (n) ∩ (m) is the set of integers that are divisible by both n and m.
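In ℤ all four operations are elementary arithmetic: (n) + (m) = gcd(n, m)ℤ, (n) ∩ (m) = lcm(n, m)ℤ, and (n)(m) = nmℤ, so the containment of the product in the intersection is just the fact that lcm(n, m) divides nm. A Python spot-check over a finite window of integers:

```python
from math import gcd, lcm   # math.lcm requires Python 3.9+

n, m = 12, 18
window = range(-300, 301)

idn = {k for k in window if k % n == 0}   # a window onto the ideal (12)
idm = {k for k in window if k % m == 0}   # a window onto the ideal (18)

# (n) ∩ (m) = lcm(n, m)Z: common elements are exactly multiples of the lcm.
assert idn & idm == {k for k in window if k % lcm(n, m) == 0}

# (n) + (m) = gcd(n, m)Z: every sum a + b is a multiple of the gcd ...
g = gcd(n, m)
assert all((a + b) % g == 0 for a in idn for b in idm)
# ... and the gcd itself is such a sum (here 6 = 18 - 12).
assert (18 - 12) in {a + b for a in idn for b in idm}

# (n)(m) = nmZ is contained in the intersection, since lcm(n, m) | nm.
assert (n * m) % lcm(n, m) == 0
```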
Let R = ℂ[x, y, z, w] and let 𝔞 = (z, w), 𝔟 = (x + z, y + w), 𝔠 = (x + z, w). Then:
𝔞 + 𝔟 = (z, w, x + z, y + w) = (x, y, z, w) and 𝔞 + 𝔠 = (z, w, x)
𝔞𝔟 = (z(x + z), z(y + w), w(x + z), w(y + w)) = (z² + xz, zy + wz, wx + wz, wy + w²)
𝔞𝔠 = (xz + z², zw, xw + zw, w²)
𝔞 ∩ 𝔟 = 𝔞𝔟, while 𝔞 ∩ 𝔠 = (w, xz + z²) ≠ 𝔞𝔠.
In the first computation, we see the general pattern for taking the sum of two finitely generated ideals: it is the ideal generated by the union of their generators. In the last three we observe that products and intersections agree whenever the two ideals intersect in the zero ideal. These computations can be checked using Macaulay2.
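The inequality 𝔞 ∩ 𝔠 ≠ 𝔞𝔠 can also be spot-checked with a Gröbner-basis membership test: w lies in both 𝔞 = (z, w) and 𝔠 = (x + z, w), hence in the intersection, but not in the product 𝔞𝔠. A SymPy sketch (an alternative tool to the Macaulay2 computation mentioned above):

```python
from sympy import symbols, groebner

x, y, z, w = symbols('x y z w')

# Generators of the product ideal ac = (z, w)(x + z, w):
ac = [z * (x + z), z * w, w * (x + z), w**2]
G = groebner(ac, x, y, z, w, order='lex')

# w is in the intersection a ∩ c (it is a generator of both a and c),
# but it is NOT in the product ideal ac:
assert not G.contains(w)

# Sanity check: each generator of ac is, of course, in ac.
assert all(G.contains(gen) for gen in ac)
```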
== Radical of a ring ==
Ideals appear naturally in the study of modules, especially in the form of a radical.
For simplicity, we work with commutative rings but, with some changes, the results are also true for non-commutative rings.
Let R be a commutative ring. By definition, a primitive ideal of R is the annihilator of a (nonzero) simple R-module. The Jacobson radical J = Jac(R) of R is the intersection of all primitive ideals. Equivalently,
J = ⋂_{𝔪 maximal ideal} 𝔪.
Indeed, if M is a simple module and x is a nonzero element in M, then Rx = M and R/Ann(M) = R/Ann(x) ≃ M, meaning Ann(M) is a maximal ideal. Conversely, if 𝔪 is a maximal ideal, then 𝔪 is the annihilator of the simple R-module R/𝔪. There is also another characterization (the proof is not hard):
J = {x ∈ R ∣ 1 − yx is a unit element for every y ∈ R}.
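This characterization is easy to test in a small finite ring. For instance, the only maximal ideal of ℤ/8ℤ is (2) = {0, 2, 4, 6}, so that set is the Jacobson radical, and the criterion "1 − yx is a unit for every y" recovers it exactly. A Python sketch:

```python
from math import gcd

N = 8
ring = range(N)
units = {u for u in ring if gcd(u, N) == 1}  # units of Z/8Z are the odd classes

# Jacobson radical via the characterization J = {x | 1 - yx is a unit for all y}:
J = {x for x in ring if all((1 - y * x) % N in units for y in ring)}

# The unique maximal ideal of Z/8Z is (2) = {0, 2, 4, 6}, and J equals it:
assert J == {0, 2, 4, 6}
```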
For a not-necessarily-commutative ring, it is a general fact that 1 − yx is a unit element if and only if 1 − xy is (see the link), and so this last characterization shows that the radical can be defined both in terms of left and right primitive ideals.
The following simple but important fact (Nakayama's lemma) is built into the definition of the Jacobson radical: if M is a module such that JM = M, then M does not admit a maximal submodule, since if there were a maximal submodule L ⊊ M, then J · (M/L) = 0 and so M = JM ⊂ L ⊊ M, a contradiction. Since a nonzero finitely generated module admits a maximal submodule, in particular one has:
If JM = M and M is finitely generated, then M = 0.
A maximal ideal is a prime ideal, and so one has
nil(R) = ⋂_{𝔭 prime ideal} 𝔭 ⊂ Jac(R),
where the intersection on the left is called the nilradical of R. As it turns out, nil(R) is also the set of nilpotent elements of R.
If R is an Artinian ring, then Jac(R) is nilpotent and nil(R) = Jac(R). (Proof: first note the DCC implies Jⁿ = Jⁿ⁺¹ for some n. If (DCC) 𝔞 ⊋ Ann(Jⁿ) is an ideal properly minimal over the latter, then J · (𝔞/Ann(Jⁿ)) = 0. That is, Jⁿ𝔞 = Jⁿ⁺¹𝔞 = 0, a contradiction.)
== Extension and contraction of an ideal ==
Let A and B be two commutative rings, and let f : A → B be a ring homomorphism. If 𝔞 is an ideal in A, then f(𝔞) need not be an ideal in B (e.g. take f to be the inclusion of the ring of integers ℤ into the field of rationals ℚ). The extension 𝔞ᵉ of 𝔞 in B is defined to be the ideal in B generated by f(𝔞). Explicitly,
𝔞ᵉ = {∑ yᵢ f(xᵢ) : xᵢ ∈ 𝔞, yᵢ ∈ B}.
If 𝔟 is an ideal of B, then f⁻¹(𝔟) is always an ideal of A, called the contraction 𝔟ᶜ of 𝔟 to A.
Assuming f : A → B is a ring homomorphism, 𝔞 is an ideal in A, and 𝔟 is an ideal in B, then:
𝔟 is prime in B ⇒ 𝔟ᶜ is prime in A.
𝔞ᵉᶜ ⊇ 𝔞
𝔟ᶜᵉ ⊆ 𝔟
It is false, in general, that 𝔞 being prime (or maximal) in A implies that 𝔞ᵉ is prime (or maximal) in B. Many classic examples of this stem from algebraic number theory. For example, consider the embedding ℤ → ℤ[i]. In B = ℤ[i], the element 2 factors as 2 = (1 + i)(1 − i), where (one can show) neither of 1 + i, 1 − i is a unit in B. So (2)ᵉ is not prime in B (and therefore not maximal either). Indeed, (1 ± i)² = ±2i shows that (1 + i) = ((1 − i) − (1 − i)²) and (1 − i) = ((1 + i) − (1 + i)²), and therefore (2)ᵉ = (1 + i)².
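The Gaussian-integer identities used here can be verified directly with Python's built-in complex numbers (standing in for elements of ℤ[i]):

```python
# Identities behind the example (2)^e = (1 + i)^2 in Z[i].
u, v = complex(1, 1), complex(1, -1)   # 1 + i and 1 - i

assert u * v == 2                      # 2 = (1 + i)(1 - i)
assert u * u == complex(0, 2)          # (1 + i)^2 = 2i
assert v * v == complex(0, -2)         # (1 - i)^2 = -2i
assert v - v * v == u                  # (1 + i) = (1 - i) - (1 - i)^2
assert u - u * u == v                  # (1 - i) = (1 + i) - (1 + i)^2
```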
On the other hand, if f is surjective and 𝔞 ⊇ ker f, then:
𝔞ᵉᶜ = 𝔞 and 𝔟ᶜᵉ = 𝔟.
𝔞 is a prime ideal in A ⇔ 𝔞ᵉ is a prime ideal in B.
𝔞 is a maximal ideal in A ⇔ 𝔞ᵉ is a maximal ideal in B.
Remark: Let K be a field extension of L, and let B and A be the rings of integers of K and L, respectively. Then B is an integral extension of A, and we let f be the inclusion map from A to B. The behaviour of a prime ideal 𝔞 = 𝔭 of A under extension is one of the central problems of algebraic number theory.
The following is sometimes useful: a prime ideal 𝔭 is a contraction of a prime ideal if and only if 𝔭 = 𝔭ᵉᶜ. (Proof: Assuming the latter, note 𝔭ᵉB_𝔭 = B_𝔭 ⇒ 𝔭ᵉ intersects A − 𝔭, a contradiction. Now, the prime ideals of B_𝔭 correspond to those in B that are disjoint from A − 𝔭. Hence, there is a prime ideal 𝔮 of B, disjoint from A − 𝔭, such that 𝔮B_𝔭 is a maximal ideal containing 𝔭ᵉB_𝔭. One then checks that 𝔮 lies over 𝔭. The converse is obvious.)
== Generalizations ==
Ideals can be generalized to any monoid object (R, ⊗), where R is the object where the monoid structure has been forgotten. A left ideal of R is a subobject I that "absorbs multiplication from the left by elements of R"; that is, I is a left ideal if it satisfies the following two conditions:
I is a subobject of R
For every r ∈ (R, ⊗) and every x ∈ (I, ⊗), the product r ⊗ x is in (I, ⊗).
A right ideal is defined with the condition "r ⊗ x ∈ (I, ⊗)" replaced by "x ⊗ r ∈ (I, ⊗)". A two-sided ideal is a left ideal that is also a right ideal, and is sometimes simply called an ideal. When R is a commutative monoid object, the definitions of left, right, and two-sided ideal coincide, and the term ideal is used alone.
== See also ==
Modular arithmetic
Noether isomorphism theorem
Boolean prime ideal theorem
Ideal theory
Ideal (order theory)
Ideal norm
Splitting of prime ideals in Galois extensions
Ideal sheaf
== Notes ==
== References ==
== External links ==
Levinson, Jake (July 14, 2014). "The Geometric Interpretation for Extension of Ideals?". Stack Exchange.
In mathematics, a base (or basis; pl.: bases) for the topology τ of a topological space (X, τ) is a family 𝓑 of open subsets of X such that every open set of the topology is equal to the union of some sub-family of 𝓑. For example, the set of all open intervals in the real number line ℝ is a basis for the Euclidean topology on ℝ because every open interval is an open set, and also every open subset of ℝ can be written as a union of some family of open intervals.
Bases are ubiquitous throughout topology. The sets in a base for a topology, which are called basic open sets, are often easier to describe and use than arbitrary open sets. Many important topological definitions such as continuity and convergence can be checked using only basic open sets instead of arbitrary open sets. Some topologies have a base of open sets with specific useful properties that may make checking such topological definitions easier.
Not all families of subsets of a set X form a base for a topology on X. Under some conditions detailed below, a family of subsets will form a base for a (unique) topology on X, obtained by taking all possible unions of subfamilies. Such families of sets are very frequently used to define topologies. A weaker notion related to bases is that of a subbase for a topology. Bases for topologies are also closely related to neighborhood bases.
== Definition and basic properties ==
Given a topological space (X, τ), a base (or basis) for the topology τ (also called a base for X if the topology is understood) is a family 𝓑 ⊆ τ of open sets such that every open set of the topology can be represented as the union of some subfamily of 𝓑. The elements of 𝓑 are called basic open sets.
Equivalently, a family 𝓑 of subsets of X is a base for the topology τ if and only if 𝓑 ⊆ τ and for every open set U in X and point x ∈ U there is some basic open set B ∈ 𝓑 such that x ∈ B ⊆ U.
For example, the collection of all open intervals in the real line forms a base for the standard topology on the real numbers. More generally, in a metric space M the collection of all open balls about points of M forms a base for the topology.
In general, a topological space (X, τ) can have many bases. The whole topology τ is always a base for itself (that is, τ is a base for τ). For the real line, the collection of all open intervals is a base for the topology. So is the collection of all open intervals with rational endpoints, or the collection of all open intervals with irrational endpoints, for example. Note that two different bases need not have any basic open set in common. One of the topological properties of a space X is the minimum cardinality of a base for its topology, called the weight of X and denoted w(X). From the examples above, the real line has countable weight.
If ℬ is a base for the topology τ of a space X, it satisfies the following properties:
(B1) The elements of ℬ cover X, i.e., every point x ∈ X belongs to some element of ℬ.
(B2) For every B₁, B₂ ∈ ℬ and every point x ∈ B₁ ∩ B₂, there exists some B₃ ∈ ℬ such that x ∈ B₃ ⊆ B₁ ∩ B₂.
Property (B1) corresponds to the fact that X is an open set; property (B2) corresponds to the fact that B₁ ∩ B₂ is an open set.
Conversely, suppose X is just a set without any topology and ℬ is a family of subsets of X satisfying properties (B1) and (B2). Then ℬ is a base for the topology that it generates. More precisely, let τ be the family of all subsets of X that are unions of subfamilies of ℬ. Then τ is a topology on X and ℬ is a base for τ.
(Sketch: τ defines a topology because it is stable under arbitrary unions by construction, it is stable under finite intersections by (B2), it contains X by (B1), and it contains the empty set as the union of the empty subfamily of ℬ. The family ℬ is then a base for τ by construction.) Such families of sets are a very common way of defining a topology.
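To make the construction concrete, here is a small brute-force sketch in Python (the base on X = {1, 2, 3} is an invented toy example, not from the article): it forms all unions of subfamilies of a base satisfying (B1) and (B2) and checks that the result is a topology.

```python
from itertools import combinations

def unions_of_subfamilies(base):
    """All unions of subfamilies of `base`; the empty subfamily gives the empty set."""
    sets = [frozenset(b) for b in base]
    opens = set()
    for r in range(len(sets) + 1):
        for sub in combinations(sets, r):
            opens.add(frozenset().union(*sub))
    return opens

# A toy base on X = {1, 2, 3} satisfying (B1) and (B2)
base = [{1}, {1, 2}, {1, 2, 3}]
tau = unions_of_subfamilies(base)

X = frozenset({1, 2, 3})
assert frozenset() in tau and X in tau                  # contains the empty set and X
assert all((a | b) in tau for a in tau for b in tau)    # closed under unions
assert all((a & b) in tau for a in tau for b in tau)    # closed under intersections
```

Here the generated topology is {∅, {1}, {1, 2}, {1, 2, 3}}; on a finite set, closure under pairwise unions and intersections suffices to verify the topology axioms.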
In general, if X is a set and ℬ is an arbitrary collection of subsets of X, there is a (unique) smallest topology τ on X containing ℬ. (This topology is the intersection of all topologies on X containing ℬ.) The topology τ is called the topology generated by ℬ, and ℬ is called a subbase for τ.
The topology τ consists of X together with all arbitrary unions of finite intersections of elements of ℬ (see the article about subbase). Now, if ℬ also satisfies properties (B1) and (B2), the topology generated by ℬ can be described in a simpler way without having to take intersections: τ is the set of all unions of elements of ℬ (and ℬ is a base for τ in that case).
There is often an easy way to check condition (B2). If the intersection of any two elements of ℬ is itself an element of ℬ or is empty, then condition (B2) is automatically satisfied (by taking B₃ = B₁ ∩ B₂). For example, the Euclidean topology on the plane admits as a base the set of all open rectangles with horizontal and vertical sides, and a nonempty intersection of two such basic open sets is also a basic open set. But another base for the same topology is the collection of all open disks; and here the full (B2) condition is necessary.
An example of a collection of open sets that is not a base is the set S of all semi-infinite intervals of the forms (−∞, a) and (a, ∞) with a ∈ ℝ. The topology generated by S contains all open intervals (a, b) = (−∞, b) ∩ (a, ∞), hence S generates the standard topology on the real line. But S is only a subbase for the topology, not a base: a finite open interval (a, b) does not contain any element of S (equivalently, property (B2) does not hold).
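The contrast between a subbase and a base can be seen on a finite toy example (hypothetical, invented for illustration): generating a topology from a subbase requires taking finite intersections first, and the resulting open set {2} below is not a union of subbase members.

```python
from itertools import combinations

def generated_topology(X, subbase):
    """Unions of finite intersections of subbase members.
    The empty intersection is X; the empty union is the empty set."""
    S = [frozenset(s) for s in subbase]
    X = frozenset(X)
    inters = {X}
    for r in range(1, len(S) + 1):
        for sub in combinations(S, r):
            inters.add(X.intersection(*sub))
    opens = set()
    members = list(inters)
    for r in range(len(members) + 1):
        for sub in combinations(members, r):
            opens.add(frozenset().union(*sub))
    return opens

X = {1, 2, 3}
S = [{1, 2}, {2, 3}]                 # a subbase, not a base
tau = generated_topology(X, S)
assert frozenset({2}) in tau         # {2} = {1,2} ∩ {2,3} is open...
# ...but {2} is not a union of members of S, so S fails to be a base.
```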
== Examples ==
The set Γ of all open intervals in ℝ forms a basis for the Euclidean topology on ℝ.
A non-empty family of subsets of a set X that is closed under finite intersections of two or more sets, which is called a π-system on X, is necessarily a base for a topology on X if and only if it covers X. By definition, every σ-algebra, every filter (and so in particular, every neighborhood filter), and every topology is a covering π-system and so also a base for a topology. In fact, if Γ is a filter on X then { ∅ } ∪ Γ is a topology on X and Γ is a basis for it. A base for a topology does not have to be closed under finite intersections and many are not. But nevertheless, many topologies are defined by bases that are also closed under finite intersections. For example, each of the following families of subsets of ℝ is closed under finite intersections and so each forms a basis for some topology on ℝ:
The set Γ of all bounded open intervals in ℝ generates the usual Euclidean topology on ℝ.
The set Σ of all bounded closed intervals in ℝ generates the discrete topology on ℝ, and so the Euclidean topology is a subset of this topology. This is despite the fact that Γ is not a subset of Σ. Consequently, the topology generated by Γ, which is the Euclidean topology on ℝ, is coarser than the topology generated by Σ. In fact, it is strictly coarser because Σ contains non-empty compact sets, which are never open in the Euclidean topology.
The set Γ_ℚ of all intervals in Γ whose endpoints are both rational numbers generates the same topology as Γ. This remains true if each instance of the symbol Γ is replaced by Σ.
Σ∞ = { [r, ∞) : r ∈ ℝ } generates a topology that is strictly coarser than the topology generated by Σ. No element of Σ∞ is open in the Euclidean topology on ℝ.
Γ∞ = { (r, ∞) : r ∈ ℝ } generates a topology that is strictly coarser than both the Euclidean topology and the topology generated by Σ∞. The sets Σ∞ and Γ∞ are disjoint, but nevertheless Γ∞ is a subset of the topology generated by Σ∞.
=== Objects defined in terms of bases ===
The order topology on a totally ordered set admits a collection of open-interval-like sets as a base.
In a metric space the collection of all open balls forms a base for the topology.
The discrete topology has the collection of all singletons as a base.
A second-countable space is one that has a countable base.
The Zariski topology on the spectrum of a ring has a base consisting of open sets that have specific useful properties. For the usual base for this topology, every finite intersection of basic open sets is a basic open set.
The Zariski topology of ℂⁿ is the topology that has the algebraic sets as closed sets. It has a base formed by the set complements of algebraic hypersurfaces.
The Zariski topology of the spectrum of a ring (the set of the prime ideals) has a base such that each element consists of all prime ideals that do not contain a given element of the ring.
== Theorems ==
A topology τ₂ is finer than a topology τ₁ if and only if for each x ∈ X and each basic open set B of τ₁ containing x, there is a basic open set of τ₂ containing x and contained in B.
If ℬ₁, …, ℬₙ are bases for the topologies τ₁, …, τₙ, then the collection of all set products B₁ × ⋯ × Bₙ with each Bᵢ ∈ ℬᵢ is a base for the product topology τ₁ × ⋯ × τₙ. In the case of an infinite product, this still applies, except that all but finitely many of the base elements must be the entire space.
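The finite-product statement can be checked mechanically on tiny spaces; a toy sketch (the two bases below are invented for the example):

```python
from itertools import product, combinations

def unions(family):
    """All unions of subfamilies of `family`."""
    sets = [frozenset(s) for s in family]
    out = set()
    for r in range(len(sets) + 1):
        for sub in combinations(sets, r):
            out.add(frozenset().union(*sub))
    return out

# Bases for two tiny topologies
B1 = [{"a"}, {"a", "b"}]
B2 = [{1}, {1, 2}]

# Base for the product: all set products B × B' with B in B1, B' in B2
product_base = [frozenset(product(b1, b2)) for b1 in B1 for b2 in B2]
tau_product = unions(product_base)

# The basic "rectangle" {("a", 1)} = {"a"} × {1} is open in the product
assert frozenset({("a", 1)}) in tau_product
```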
Let ℬ be a base for X and let Y be a subspace of X. Then if we intersect each element of ℬ with Y, the resulting collection of sets is a base for the subspace Y.
If a function f : X → Y maps every basic open set of X into an open set of Y, it is an open map. Similarly, if every preimage of a basic open set of Y is open in X, then f is continuous.
ℬ is a base for a topological space X if and only if, for every point x ∈ X, the subcollection of elements of ℬ that contain x forms a local base at x.
== Base for the closed sets ==
Closed sets are equally adept at describing the topology of a space. There is, therefore, a dual notion of a base for the closed sets of a topological space. Given a topological space X, a family 𝒞 of closed sets forms a base for the closed sets if and only if for each closed set A and each point x not in A there exists an element of 𝒞 containing A but not containing x.
A family 𝒞 is a base for the closed sets of X if and only if its dual in X, that is, the family { X ∖ C : C ∈ 𝒞 } of complements of members of 𝒞, is a base for the open sets of X.
Let 𝒞 be a base for the closed sets of X. Then:
⋂ 𝒞 = ∅.
For each C₁, C₂ ∈ 𝒞, the union C₁ ∪ C₂ is the intersection of some subfamily of 𝒞 (that is, for any x ∈ X not in C₁ or C₂ there is some C₃ ∈ 𝒞 containing C₁ ∪ C₂ and not containing x).
Any collection of subsets of a set X satisfying these properties forms a base for the closed sets of a topology on X. The closed sets of this topology are precisely the intersections of members of 𝒞.
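The duality between bases for the closed sets and bases for the open sets can be checked by brute force on a finite example (the closed base below is invented for illustration):

```python
from itertools import combinations

X = frozenset({1, 2, 3})
closed_base = [frozenset({2, 3}), frozenset({3}), frozenset()]

# Closed sets: intersections of subfamilies (the empty intersection is X)
closed = {X}
for r in range(1, len(closed_base) + 1):
    for sub in combinations(closed_base, r):
        closed.add(X.intersection(*sub))

# Open sets: unions of complements of the closed base members
open_base = [X - c for c in closed_base]
opens = set()
for r in range(len(open_base) + 1):
    for sub in combinations(open_base, r):
        opens.add(frozenset().union(*sub))

# Duality: the open sets are exactly the complements of the closed sets
assert opens == {X - c for c in closed}
```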
In some cases it is more convenient to use a base for the closed sets rather than the open ones. For example, a space is completely regular if and only if the zero sets form a base for the closed sets. Given any topological space X, the zero sets form the base for the closed sets of some topology on X. This topology will be the finest completely regular topology on X coarser than the original one. In a similar vein, the Zariski topology on Aⁿ is defined by taking the zero sets of polynomial functions as a base for the closed sets.
== Weight and character ==
We shall work with notions established in (Engelking 1989, p. 12, pp. 127-128).
Fix a topological space X. Here, a network is a family 𝒩 of sets for which, for all points x and open neighbourhoods U containing x, there exists B in 𝒩 for which x ∈ B ⊆ U. Note that, unlike a basis, the sets in a network need not be open.
We define the weight, w(X), as the minimum cardinality of a basis; the network weight, nw(X), as the minimum cardinality of a network; the character of a point, χ(x, X), as the minimum cardinality of a neighbourhood basis for x in X; and the character of X to be χ(X) ≜ sup{ χ(x, X) : x ∈ X }.
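On a finite space the weight can be computed by brute force, trying candidate bases in increasing order of size; a hedged sketch (the example topology is invented for illustration):

```python
from itertools import combinations

def is_base(tau, cand):
    """cand ⊆ tau is a base iff every open set is the union of its members below it."""
    return all(
        U == frozenset().union(*[B for B in cand if B <= U])
        for U in tau
    )

def weight(tau):
    """Minimum cardinality of a base for the finite topology `tau`."""
    opens = [frozenset(U) for U in tau]
    for k in range(len(opens) + 1):
        for cand in combinations(opens, k):
            if is_base(opens, cand):
                return k

# Toy topology on {1, 2, 3}
tau = [set(), {1}, {1, 2}, {1, 2, 3}]
assert weight(tau) == 3   # the empty set is the empty union, so 3 basic sets suffice
```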
The point of computing the character and weight is to be able to tell what sort of bases and local bases can exist. We have the following facts:
nw(X) ≤ w(X).
if X is discrete, then w(X) = nw(X) = |X|.
if X is Hausdorff, then nw(X) is finite if and only if X is finite and discrete.
if ℬ is a basis of X, then there is a basis ℬ′ ⊆ ℬ of size |ℬ′| ≤ w(X).
if 𝒩 is a neighbourhood basis for x in X, then there is a neighbourhood basis 𝒩′ ⊆ 𝒩 of size |𝒩′| ≤ χ(x, X).
if f : X → Y is a continuous surjection, then nw(Y) ≤ w(X). (Simply consider the Y-network fℬ ≜ { f(U) : U ∈ ℬ } for each basis ℬ of X.)
if (X, τ) is Hausdorff, then there exists a weaker Hausdorff topology (X, τ′) such that w(X, τ′) ≤ nw(X, τ). So a fortiori, if X is also compact, then such topologies coincide and hence, combined with the first fact, nw(X) = w(X).
if f : X → Y is a continuous surjective map from a compact metrizable space to a Hausdorff space, then Y is compact metrizable.
The last fact follows from f(X) being compact Hausdorff, and hence nw(f(X)) = w(f(X)) ≤ w(X) ≤ ℵ₀ (since compact metrizable spaces are necessarily second countable), together with the fact that compact Hausdorff spaces are metrizable exactly when they are second countable. (An application of this, for instance, is that every path in a Hausdorff space is compact metrizable.)
=== Increasing chains of open sets ===
Using the above notation, suppose that w(X) ≤ κ for some infinite cardinal κ. Then there does not exist a strictly increasing sequence of open sets (equivalently, a strictly decreasing sequence of closed sets) of length κ⁺ or longer.
To see this (without the axiom of choice), fix a basis of open sets {U_ξ}_{ξ ∈ κ}, and suppose per contra that {V_ξ}_{ξ ∈ κ⁺} were a strictly increasing sequence of open sets. This means that for all α < κ⁺,
V_α ∖ ⋃_{ξ<α} V_ξ ≠ ∅.
For x ∈ V_α ∖ ⋃_{ξ<α} V_ξ, we may use the basis to find some U_γ with x ∈ U_γ ⊆ V_α. In this way we may well-define a map f : κ⁺ → κ mapping each α to the least γ for which U_γ ⊆ V_α and U_γ meets V_α ∖ ⋃_{ξ<α} V_ξ.
This map is injective: otherwise there would be α < β with f(α) = f(β) = γ, which would imply that U_γ ⊆ V_α but also that U_γ meets V_β ∖ ⋃_{ξ<β} V_ξ ⊆ V_β ∖ V_α, a contradiction. But such an injection would show that κ⁺ ≤ κ, a contradiction.
== See also ==
Esenin-Volpin's theorem
Gluing axiom
Neighbourhood system
== Notes ==
== References ==
== Bibliography ==
Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline (or offshoot) of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
== Overview ==
In physics, different theories based on mathematical models provide very precise predictions on how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression, or is too complicated. In such cases, numerical approximations are required. Computational physics is the subject that deals with these numerical approximations: the approximation of the solution is written as a finite (and typically large) number of simple mathematical operations (algorithm), and a computer is used to perform these operations and compute an approximated solution and respective error.
=== Status in physics ===
There is a debate about the status of computation within the scientific method. Sometimes it is regarded as more akin to theoretical physics; some others regard computer simulation as "computer experiments", yet still others consider it an intermediate or different branch between theoretical and experimental physics, a third way that supplements theory and experiment. While computers can be used in experiments for the measurement and recording (and storage) of data, this clearly does not constitute a computational approach.
== Challenges in computational physics ==
Computational physics problems are in general very difficult to solve exactly. This is due to several (mathematical) reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. For example, even apparently simple problems, such as calculating the wavefunction of an electron orbiting an atom in a strong electric field (Stark effect), may require great effort to formulate a practical algorithm (if one can be found); other cruder or brute-force techniques, such as graphical methods or root finding, may be required. On the more advanced side, mathematical perturbation theory is also sometimes used (a working is shown for this particular example here). In addition, the computational cost and computational complexity for many-body problems (and their classical counterparts) tend to grow quickly. A macroscopic system typically has of the order of 10²³ constituent particles, so simulating it directly is a problem in itself. Solving quantum mechanical problems is generally of exponential order in the size of the system, and for classical N-body problems it is of order N². Finally, many physical systems are inherently nonlinear at best, and at worst chaotic: this means it can be difficult to ensure any numerical errors do not grow to the point of rendering the 'solution' useless.
== Methods and algorithms ==
Because computational physics addresses a broad class of problems, it is generally divided according to the different mathematical problems it numerically solves, or the methods it applies. Among them, one can consider:
root finding (using e.g. Newton-Raphson method)
system of linear equations (using e.g. LU decomposition)
ordinary differential equations (using e.g. Runge–Kutta methods)
integration (using e.g. Romberg method and Monte Carlo integration)
partial differential equations (using e.g. finite difference method and relaxation method)
matrix eigenvalue problem (using e.g. Jacobi eigenvalue algorithm and power iteration)
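As a minimal illustration of the first item in the list, here is a sketch of the Newton–Raphson iteration (the stopping tolerance and the test function are invented for the example):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f via the iteration x_{n+1} = x_n - f(x_n)/df(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: the positive root of x^2 - 2 is sqrt(2)
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
assert abs(root - 2 ** 0.5) < 1e-10
```

The method converges quadratically near a simple root, which is why it appears so often as the default root-finding workhorse.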
All these methods (and several others) are used to calculate physical properties of the modeled systems.
Computational physics also borrows a number of ideas from computational chemistry - for example, the density functional theory used by computational solid state physicists to calculate properties of solids is basically the same as that used by chemists to calculate the properties of molecules.
Furthermore, computational physics encompasses the tuning of the software/hardware structure to solve the problems, as the problems can be very demanding in processing power or in memory.
== Divisions ==
It is possible to find a corresponding computational branch for every major field in physics:
Computational mechanics consists of computational fluid dynamics (CFD), computational solid mechanics and computational contact mechanics.
Computational electrodynamics is the process of modeling the interaction of electromagnetic fields with physical objects and the environment. One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics.
Computational chemistry is a rapidly growing field that was developed due to the quantum many-body problem.
Computational solid state physics is a very important division of computational physics dealing directly with material science.
Computational statistical mechanics is a field related to computational condensed matter which deals with the simulation of models and theories (such as percolation and spin models) that are difficult to solve otherwise.
Computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, (particularly through the use of agent based modeling and cellular automata) it also concerns itself with (and finds application in, through the use of its techniques) in the social sciences, network theory, and mathematical models for the propagation of disease (most notably, the SIR Model) and the spread of forest fires.
Numerical relativity is a (relatively) new field interested in finding numerical solutions to the field equations of both special relativity and general relativity.
Computational particle physics deals with problems motivated by particle physics.
Computational astrophysics is the application of these techniques and methods to astrophysical problems and phenomena.
Computational biophysics is a branch of biophysics and computational biology itself, applying methods of computer science and physics to large complex biological problems.
== Applications ==
Due to the broad class of problems computational physics deals with, it is an essential component of modern research in different areas of physics, namely: accelerator physics, astrophysics, general theory of relativity (through numerical relativity), fluid mechanics (computational fluid dynamics), lattice field theory/lattice gauge theory (especially lattice quantum chromodynamics), plasma physics (see plasma modeling), simulating physical systems (using e.g. molecular dynamics), nuclear engineering computer codes, protein structure prediction, weather prediction, solid state physics, soft condensed matter physics, hypervelocity impact physics, etc.
Computational solid state physics, for example, uses density functional theory to calculate properties of solids, a method similar to that used by chemists to study molecules. Other quantities of interest in solid state physics, such as the electronic band structure, magnetic properties and charge densities can be calculated by this and several methods, including the Luttinger-Kohn/k.p method and ab-initio methods.
On top of advanced physics software, there are also a myriad of tools of analytics available for beginning students of physics such as the PASCO Capstone software.
== See also ==
Advanced Simulation Library
CECAM - Centre européen de calcul atomique et moléculaire
Division of Computational Physics (DCOMP) of the American Physical Society
Important publications in computational physics
List of quantum chemistry and solid-state physics software
Mathematical and theoretical physics
Open Source Physics, computational physics libraries and pedagogical tools
Timeline of computational physics
Car–Parrinello molecular dynamics
== References ==
== Further reading ==
A.K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009)
International Journal of Modern Physics C (IJMPC): Physics and Computers Archived 2004-11-03 at the Wayback Machine, World Scientific
Steven E. Koonin, Computational Physics, Addison-Wesley (1986)
T. Pang, An Introduction to Computational Physics, Cambridge University Press (2010)
B. Stickler, E. Schachinger, Basic concepts in computational physics, Springer Verlag (2013). ISBN 9783319024349.
E. Winsberg, Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010.
== External links ==
C20 IUPAP Commission on Computational Physics Archived 2015-11-15 at the Wayback Machine
American Physical Society: Division of Computational Physics
Institute of Physics: Computational Physics Group Archived 2015-02-13 at the Wayback Machine
SciDAC: Scientific Discovery through Advanced Computing
Open Source Physics
SCINET Scientific Software Framework
Computational Physics Course with YouTube videos
Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation.
It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description:
TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, algorithmic game theory, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.
== History ==
While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements could be proved or disproved.
Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory.
Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below:
== Topics ==
=== Algorithms ===
An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.
An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
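Euclid's greatest-common-divisor procedure is a standard illustration of such a finite list of well-defined instructions (the example is generic, not specific to this article): from the initial state (a, b) it passes through finitely many successive states and terminates with the answer.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b) until b = 0."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21
```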
=== Automata theory ===
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of mathematics and also of computer science). Automata comes from the Greek word αὐτόματα meaning "self-acting".
Automata theory is the study of self-operating virtual machines that help in the logical understanding of the process from input to output, with or without intermediate stages of computation.
=== Coding theory ===
Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error correction and more recently also for network coding. Codes are studied by various scientific disciplines – such as information theory, electrical engineering, mathematics, and computer science – for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
=== Computational complexity theory ===
Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
=== Computational geometry ===
Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry.
The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.
Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).
=== Computational learning theory ===
Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that have never been seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance, such as minimizing the number of mistakes made on new samples.
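As an illustration of the supervised-learning setup described above, here is a minimal sketch (not from any particular system) in which a 1-nearest-neighbour classifier is induced from labeled samples; the mushroom-style feature vectors and labels are invented for the example.

```python
# A toy supervised learner: induce a 1-nearest-neighbour classifier
# from labeled samples. Data below is invented for illustration.

def train_1nn(samples, labels):
    """Return a classifier function induced from labeled samples."""
    def classify(x):
        # Label of the training sample closest to x (squared Euclidean distance).
        best = min(range(len(samples)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(samples[i], x)))
        return labels[best]
    return classify

# Features might encode, e.g., cap diameter and stem height.
samples = [(2.0, 3.0), (8.0, 9.0), (1.5, 2.5), (9.0, 8.5)]
labels = ["edible", "poisonous", "edible", "poisonous"]
classify = train_1nn(samples, labels)
print(classify((2.2, 3.1)))  # a previously unseen sample -> "edible"
```

The classifier generalizes to unseen samples, which is exactly the property the theory seeks to quantify (e.g., bounds on the number of mistakes on new data).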
=== Computational number theory ===
Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number theoretic computations. The best known problem in the field is integer factorization.
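The trial-division sketch below illustrates the factorization problem; practical algorithms (such as the general number field sieve) are far more sophisticated, and the hardness of the problem for large inputs underpins much of modern cryptography.

```python
# Integer factorization by trial division (illustrative only).

def factorize(n):
    """Return the prime factors of n > 1 (with multiplicity), smallest first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is prime
        factors.append(n)
    return factors

print(factorize(360))  # [2, 2, 2, 3, 3, 5]
```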
=== Cryptography ===
Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and that are related to various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
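The one-time pad mentioned above can be sketched in a few lines; the XOR cipher below is information-theoretically secure only under the stated assumptions (a truly random key, as long as the message, never reused).

```python
# One-time pad sketch: XOR the message with a uniformly random key
# of the same length. Security holds only if the key is never reused.

import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "key must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # one-time, uniformly random key
ciphertext = otp_encrypt(message, key)
# Decryption is the same XOR operation:
assert otp_encrypt(ciphertext, key) == message
```

The practical difficulty, as the text notes, is key management: distributing and protecting a fresh key as long as every message is usually harder than deploying a computationally secure scheme.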
=== Data structures ===
A data structure is a particular way of organizing data in a computer so that it can be used efficiently.
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes for retrieving small percentages of their data, and both compilers and databases use dynamic hash tables as lookup tables.
Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in secondary memory.
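As an illustrative sketch (with invented data), the contrast between hash-based lookup and a sorted index can be seen with Python's built-in dict and the standard-library bisect module: the hash table gives expected O(1) exact-key lookup, while the sorted index also supports range queries.

```python
# Hash-based lookup table vs. a sorted index (B-tree-like in spirit).

import bisect

records = {"alice": 30, "bob": 25, "carol": 35}   # hash table: O(1) expected
assert records["bob"] == 25

keys = sorted(records)                             # sorted index
# Range query: all keys in the half-open range ["b", "d"),
# located by binary search in O(log n).
lo = bisect.bisect_left(keys, "b")
hi = bisect.bisect_left(keys, "d")
print(keys[lo:hi])  # ['bob', 'carol']
```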
=== Distributed computation ===
Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications, and blockchain networks like Bitcoin.
A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency.
=== Information-based complexity ===
Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems such as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration.
=== Formal methods ===
Formal methods are mathematically based techniques for the specification, development, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.
Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
=== Information theory ===
Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis.
Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information.
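As a small illustration of the quantification of information, Shannon entropy gives the limit, in bits per symbol, for lossless compression of a memoryless source:

```python
# Shannon entropy H(X) = -sum_x p(x) * log2 p(x), in bits per symbol.

import math

def entropy(probs):
    """Entropy of a discrete distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(entropy([0.25] * 4))   # 2.0 bits: four equally likely outcomes
```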
=== Machine learning ===
Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using it to make predictions or decisions, rather than following only explicitly programmed instructions.
Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field."
=== Natural computation ===
=== Parallel computation ===
Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
The maximum possible speed-up of a single program as a result of parallelization is given by Amdahl's law.
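Amdahl's law can be stated concretely: if a fraction p of a program's work can be parallelized across n processors, the overall speed-up is 1 / ((1 - p) + p/n), which never exceeds 1/(1 - p) no matter how many processors are used. A minimal sketch:

```python
# Amdahl's law: speed-up of a program whose parallelizable fraction is p,
# run on n processors. The serial fraction (1 - p) bounds the speed-up.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.95, 8))      # ~ 5.93 on 8 processors
print(amdahl_speedup(0.95, 10**6))  # approaches the limit 1 / 0.05 = 20
```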
=== Programming language theory and program semantics ===
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of theoretical computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved; evaluating a syntactically illegal string results in non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, hence creating a model of computation.
=== Quantum computation ===
A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.
Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.
=== Symbolic computation ===
Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered distinct fields, because scientific computing is usually based on numerical computation with approximate floating-point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are thus manipulated as symbols (hence the name symbolic computation).
Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, a large set of routines to perform usual operations, like simplification of expressions, differentiation using chain rule, polynomial factorization, indefinite integration, etc.
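As a toy illustration of exact symbolic manipulation (vastly simpler than a real computer algebra system), the sketch below differentiates expression trees built from sums and products, returning an exact expression rather than a numerical approximation:

```python
# Toy symbolic differentiation over expression trees given as nested
# tuples ("+", a, b) and ("*", a, b); strings are variables, other
# atoms are constants.

def diff(expr, var):
    if expr == var:                      # d/dx x = 1
        return 1
    if not isinstance(expr, tuple):      # constant
        return 0
    op, a, b = expr
    if op == "+":                        # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                        # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x * x) = 1*x + x*1  (i.e., 2x before simplification)
print(diff(("*", "x", "x"), "x"))  # ('+', ('*', 1, 'x'), ('*', 'x', 1))
```

A real computer algebra system adds, among much else, simplification of such results, which is where most of the engineering complexity lies.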
=== Very-large-scale integration ===
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip.
== Organizations ==
European Association for Theoretical Computer Science
SIGACT
Simons Institute for the Theory of Computing
== Journals and newsletters ==
Discrete Mathematics and Theoretical Computer Science
Information and Computation
Theory of Computing (open access journal)
Formal Aspects of Computing
Journal of the ACM
SIAM Journal on Computing (SICOMP)
SIGACT News
Theoretical Computer Science
Theory of Computing Systems
TheoretiCS (open access journal)
International Journal of Foundations of Computer Science
Chicago Journal of Theoretical Computer Science (open access journal)
Foundations and Trends in Theoretical Computer Science
Journal of Automata, Languages and Combinatorics
Acta Informatica
Fundamenta Informaticae
ACM Transactions on Computation Theory
Computational Complexity
Journal of Complexity
ACM Transactions on Algorithms
Information Processing Letters
Open Computer Science (open access journal)
== Conferences ==
Annual ACM Symposium on Theory of Computing (STOC)
Annual IEEE Symposium on Foundations of Computer Science (FOCS)
Innovations in Theoretical Computer Science (ITCS)
Mathematical Foundations of Computer Science (MFCS)
International Computer Science Symposium in Russia (CSR)
ACM–SIAM Symposium on Discrete Algorithms (SODA)
IEEE Symposium on Logic in Computer Science (LICS)
Computational Complexity Conference (CCC)
International Colloquium on Automata, Languages and Programming (ICALP)
Annual Symposium on Computational Geometry (SoCG)
ACM Symposium on Principles of Distributed Computing (PODC)
ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)
Annual Conference on Learning Theory (COLT)
International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM)
Symposium on Theoretical Aspects of Computer Science (STACS)
European Symposium on Algorithms (ESA)
Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX)
Workshop on Randomization and Computation (RANDOM)
International Symposium on Algorithms and Computation (ISAAC)
International Symposium on Fundamentals of Computation Theory (FCT)
International Workshop on Graph-Theoretic Concepts in Computer Science (WG)
== See also ==
Formal science
Unsolved problems in computer science
Sun–Ni law
== Notes ==
== Further reading ==
Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theoretical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1. Covers theory of computation, but also program semantics and quantification theory. Aimed at graduate students.
== External links ==
SIGACT directory of additional theory links (archived 15 July 2017)
Theory Matters Wiki Theoretical Computer Science (TCS) Advocacy Wiki
List of academic conferences in the area of theoretical computer science at confsearch
Theoretical Computer Science – StackExchange, a Question and Answer site for researchers in theoretical computer science
Computer Science Animated
Theory of computation at the Massachusetts Institute of Technology
In the foundations of mathematics, von Neumann–Bernays–Gödel set theory (NBG) is an axiomatic set theory that is a conservative extension of Zermelo–Fraenkel–choice set theory (ZFC). NBG introduces the notion of class, which is a collection of sets defined by a formula whose quantifiers range only over sets. NBG can define classes that are larger than sets, such as the class of all sets and the class of all ordinals. Morse–Kelley set theory (MK) allows classes to be defined by formulas whose quantifiers range over classes. NBG is finitely axiomatizable, while ZFC and MK are not.
A key theorem of NBG is the class existence theorem, which states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. This class is built by mirroring the step-by-step construction of the formula with classes. Since all set-theoretic formulas are constructed from two kinds of atomic formulas (membership and equality) and finitely many logical symbols, only finitely many axioms are needed to build the classes satisfying them. This is why NBG is finitely axiomatizable. Classes are also used for other constructions, for handling the set-theoretic paradoxes, and for stating the axiom of global choice, which is stronger than ZFC's axiom of choice.
John von Neumann introduced classes into set theory in 1925. The primitive notions of his theory were function and argument. Using these notions, he defined class and set. Paul Bernays reformulated von Neumann's theory by taking class and set as primitive notions. Kurt Gödel simplified Bernays' theory for his relative consistency proof of the axiom of choice and the generalized continuum hypothesis.
== Classes in set theory ==
=== The uses of classes ===
Classes have several uses in NBG:
They produce a finite axiomatization of set theory.
They are used to state a "very strong form of the axiom of choice"—namely, the axiom of global choice: There exists a global choice function G defined on the class of all nonempty sets such that G(x) ∈ x for every nonempty set x. This is stronger than ZFC's axiom of choice: For every set s of nonempty sets, there exists a choice function f defined on s such that f(x) ∈ x for all x ∈ s.
The set-theoretic paradoxes are handled by recognizing that some classes cannot be sets. For example, assume that the class Ord of all ordinals is a set. Then Ord is a transitive set well-ordered by ∈. So, by definition, Ord is an ordinal. Hence, Ord ∈ Ord, which contradicts ∈ being a well-ordering of Ord. Therefore, Ord is not a set. A class that is not a set is called a proper class; Ord is a proper class.
Proper classes are useful in constructions. In his proof of the relative consistency of the axiom of global choice and the generalized continuum hypothesis, Gödel used proper classes to build the constructible universe. He constructed a function on the class of all ordinals that, for each ordinal, builds a constructible set by applying a set-building operation to previously constructed sets. The constructible universe is the image of this function.
=== Axiom schema versus class existence theorem ===
Once classes are added to the language of ZFC, it is easy to transform ZFC into a set theory with classes. First, the axiom schema of class comprehension is added. This axiom schema states: For every formula φ(x₁, …, xₙ) that quantifies only over sets, there exists a class A consisting of the n-tuples satisfying the formula—that is,
∀x₁ ⋯ ∀xₙ [(x₁, …, xₙ) ∈ A ⟺ φ(x₁, …, xₙ)].
Then the axiom schema of replacement is replaced by a single axiom that uses a class. Finally, ZFC's axiom of extensionality is modified to handle classes: If two classes have the same elements, then they are identical. The other axioms of ZFC are not modified.
This theory is not finitely axiomatized. ZFC's replacement schema has been replaced by a single axiom, but the axiom schema of class comprehension has been introduced.
To produce a theory with finitely many axioms, the axiom schema of class comprehension is first replaced with finitely many class existence axioms. Then these axioms are used to prove the class existence theorem, which implies every instance of the axiom schema. The proof of this theorem requires only seven class existence axioms, which are used to convert the construction of a formula into the construction of a class satisfying the formula.
== Axiomatization of NBG ==
=== Classes and sets ===
NBG has two types of objects: classes and sets. Intuitively, every set is also a class. There are two ways to axiomatize this. Bernays used many-sorted logic with two sorts: classes and sets. Gödel avoided sorts by introducing primitive predicates: ℭ𝔩𝔰(A) for "A is a class" and 𝔐(A) for "A is a set" (in German, "set" is Menge). He also introduced axioms stating that every set is a class and that if class A is a member of a class, then A is a set. Using predicates is the standard way to eliminate sorts. Elliott Mendelson modified Gödel's approach by having everything be a class and defining the set predicate M(A) as ∃C(A ∈ C). This modification eliminates Gödel's class predicate and his two axioms.
Bernays' two-sorted approach may appear more natural at first, but it creates a more complex theory. In Bernays' theory, every set has two representations: one as a set and the other as a class. Also, there are two membership relations: the first, denoted by "∈", is between two sets; the second, denoted by "η", is between a set and a class. This redundancy is required by many-sorted logic because variables of different sorts range over disjoint subdomains of the domain of discourse.
The differences between these two approaches do not affect what can be proved, but they do affect how statements are written. In Gödel's approach, A ∈ C, where A and C are classes, is a valid statement. In Bernays' approach this statement has no meaning. However, if A is a set, there is an equivalent statement: Define "set a represents class A" if they have the same sets as members—that is, ∀x(x ∈ a ⟺ x η A). The statement a η C, where set a represents class A, is equivalent to Gödel's A ∈ C.
The approach adopted in this article is that of Gödel with Mendelson's modification. This means that NBG is an axiomatic system in first-order predicate logic with equality, and its only primitive notions are class and the membership relation.
=== Definitions and axioms of extensionality and pairing ===
A set is a class that belongs to at least one class: A is a set if and only if ∃C(A ∈ C).
A class that is not a set is called a proper class: A is a proper class if and only if ∀C(A ∉ C).
Therefore, every class is either a set or a proper class, and no class is both.
Gödel introduced the convention that uppercase variables range over classes, while lowercase variables range over sets. Gödel also used names that begin with an uppercase letter to denote particular classes, including functions and relations defined on the class of all sets. Gödel's convention is used in this article.
The following axioms and definitions are needed for the proof of the class existence theorem.
Axiom of extensionality. If two classes have the same elements, then they are identical.
∀A ∀B [∀x(x ∈ A ⟺ x ∈ B) ⟹ A = B]
This axiom generalizes ZFC's axiom of extensionality to classes.
Axiom of pairing. If x and y are sets, then there exists a set p whose only members are x and y.
∀x ∀y ∃p ∀z [z ∈ p ⟺ (z = x ∨ z = y)]
As in ZFC, the axiom of extensionality implies the uniqueness of the set p, which allows us to introduce the notation {x, y}.
Ordered pairs are defined by:
(x, y) = {{x}, {x, y}}
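The characteristic property of this definition, that (x, y) = (u, v) exactly when x = u and y = v, can be checked concretely by modeling Kuratowski pairs with Python frozensets (an illustration outside NBG itself):

```python
# Kuratowski ordered pairs (x, y) = {{x}, {x, y}}, modeled with frozensets.

def pair(x, y):
    return frozenset({frozenset({x}), frozenset({x, y})})

# (x, y) = (u, v) holds exactly when x = u and y = v:
assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)                   # order matters, unlike {1, 2} = {2, 1}
assert pair(1, 1) == frozenset({frozenset({1})})  # {{1}, {1, 1}} = {{1}}
```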
Tuples are defined inductively using ordered pairs:
(x₁) = x₁,
For n > 1: (x₁, …, xₙ₋₁, xₙ) = ((x₁, …, xₙ₋₁), xₙ).
=== Class existence axioms and axiom of regularity ===
Class existence axioms will be used to prove the class existence theorem: For every formula in n free set variables that quantifies only over sets, there exists a class of n-tuples that satisfy it. The following example starts with two classes that are functions and builds a composite function. This example illustrates the techniques that are needed to prove the class existence theorem, which lead to the class existence axioms that are needed.
The class existence axioms are divided into two groups: axioms handling language primitives and axioms handling tuples. There are four axioms in the first group and three axioms in the second group.
Axioms for handling language primitives:
Membership. There exists a class E containing all the ordered pairs whose first component is a member of the second component.
∃E ∀x ∀y [(x, y) ∈ E ⟺ x ∈ y]
Intersection (conjunction). For any two classes A and B, there is a class C consisting precisely of the sets that belong to both A and B.
∀A ∀B ∃C ∀x [x ∈ C ⟺ (x ∈ A ∧ x ∈ B)]
Complement (negation). For any class A, there is a class B consisting precisely of the sets not belonging to A.
∀A ∃B ∀x [x ∈ B ⟺ ¬(x ∈ A)]
Domain (existential quantifier). For any class A, there is a class B consisting precisely of the first components of the ordered pairs of A.
∀A ∃B ∀x [x ∈ B ⟺ ∃y((x, y) ∈ A)]
By the axiom of extensionality, class C in the intersection axiom and class B in the complement and domain axioms are unique. They will be denoted by A ∩ B, ∁A, and Dom(A), respectively.
The first three axioms imply the existence of the empty class and the class of all sets: The membership axiom implies the existence of a class E. The intersection and complement axioms imply the existence of E ∩ ∁E, which is empty. By the axiom of extensionality, this class is unique; it is denoted by ∅. The complement of ∅ is the class V of all sets, which is also unique by extensionality. The set predicate M(A), which was defined as ∃C(A ∈ C), is now redefined as A ∈ V to avoid quantifying over classes.
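These axioms already yield further Boolean operations on classes; for example, the union of two classes is definable from intersection and complement via De Morgan's law:

```latex
A \cup B \,=\, \complement(\complement A \cap \complement B)
```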
Axioms for handling tuples:
Product by V. For any class A, there is a class B consisting of the ordered pairs whose first component belongs to A.
∀A ∃B ∀u [u ∈ B ⟺ ∃x ∃y (u = (x, y) ∧ x ∈ A)]
Circular permutation. For any class A, there is a class B whose 3-tuples are obtained by applying the circular permutation (y, z, x) ↦ (x, y, z) to the 3-tuples of A.
∀A ∃B ∀x ∀y ∀z [(x, y, z) ∈ B ⟺ (y, z, x) ∈ A]
Transposition. For any class A, there is a class B whose 3-tuples are obtained by transposing the last two components of the 3-tuples of A.
∀A ∃B ∀x ∀y ∀z [(x, y, z) ∈ B ⟺ (x, z, y) ∈ A]
By extensionality, the product by V axiom implies the existence of a unique class, which is denoted by A × V. This axiom is used to define the class Vⁿ of all n-tuples: V¹ = V and Vⁿ⁺¹ = Vⁿ × V. If A is a class, extensionality implies that A ∩ Vⁿ is the unique class consisting of the n-tuples of A. For example, the membership axiom produces a class E that may contain elements that are not ordered pairs, while the intersection E ∩ V² contains only the ordered pairs of E.
The circular permutation and transposition axioms do not imply the existence of unique classes because they specify only the 3-tuples of class B. By specifying the 3-tuples, these axioms also specify the n-tuples for n ≥ 4 since:

(x_1, …, x_{n−2}, x_{n−1}, x_n) = ((x_1, …, x_{n−2}), x_{n−1}, x_n).
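The reduction of longer tuples to nested ordered pairs can be checked concretely. The sketch below is an illustrative assumption, encoding ordered pairs as Kuratowski sets over Python frozensets and n-tuples by left nesting; it confirms for n = 4 that a 4-tuple equals the tuple built from its (n−2)-tuple prefix and last two components.

```python
def pair(x, y):
    # Kuratowski ordered pair: (x, y) = {{x}, {x, y}}
    return frozenset([frozenset([x]), frozenset([x, y])])

def tuple_n(*xs):
    # n-tuples via left nesting: (x_1, ..., x_n) = ((x_1, ..., x_{n-1}), x_n)
    t = xs[0]
    for x in xs[1:]:
        t = pair(t, x)
    return t

# With n = 4: (x_1, x_2, x_3, x_4) = ((x_1, x_2), x_3, x_4)
assert tuple_n(1, 2, 3, 4) == tuple_n(tuple_n(1, 2), 3, 4)
```

The same identity holds for any n ≥ 4, which is why specifying the 3-tuples of a class determines its longer tuples as well.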
The axioms for handling tuples and the domain axiom imply the following lemma, which is used in the proof of the class existence theorem.
One more axiom is needed to prove the class existence theorem: the axiom of regularity. Since the existence of the empty class has been proved, the usual statement of this axiom is given.
Axiom of regularity. Every nonempty set has at least one element with which it has no element in common.
∀a [a ≠ ∅ ⟹ ∃u (u ∈ a ∧ u ∩ a = ∅)].
This axiom implies that a set cannot belong to itself: Assume that x ∈ x and let a = {x}. Then x ∩ a ≠ ∅ since x ∈ x ∩ a. This contradicts the axiom of regularity because x is the only element in a. Therefore, x ∉ x.
The axiom of regularity also prohibits infinite descending membership sequences of sets:

⋯ ∈ x_{n+1} ∈ x_n ∈ ⋯ ∈ x_1 ∈ x_0.
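For hereditarily finite sets the regularity condition can be checked mechanically. The following Python sketch is only an illustration of the axiom's statement (sets modeled as frozensets, an assumption, not part of NBG): it searches a nonempty set for an element disjoint from it.

```python
# Hereditarily finite sets modeled as frozensets.
empty = frozenset()
one = frozenset([empty])        # {∅}
two = frozenset([empty, one])   # {∅, {∅}}

def regular_witness(a):
    # Find some u ∈ a with u ∩ a = ∅, the element regularity guarantees.
    for u in a:
        if not (u & a):
            return u
    return None

a = frozenset([one, two])
w = regular_witness(a)
assert w == one and not (w & a)  # one ∩ a = ∅, since ∅ ∉ a

# Note: a Python frozenset can never contain itself (its hash would be
# circular), so x ∈ x is unrepresentable here, mirroring regularity.
```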
Gödel stated regularity for classes rather than for sets in his 1940 monograph, which was based on lectures given in 1938. In 1939, he proved that regularity for sets implies regularity for classes.
=== Class existence theorem ===
The theorem's proof will be done in two steps:
Transformation rules are used to transform the given formula φ into an equivalent formula that simplifies the inductive part of the proof. For example, the only logical symbols in the transformed formula are ¬, ∧, and ∃, so the induction handles logical symbols with just three cases.
The class existence theorem is proved inductively for transformed formulas. Guided by the structure of the transformed formula, the class existence axioms are used to produce the unique class of n-tuples satisfying the formula.
Transformation rules. In rules 1 and 2 below, Δ and Γ denote set or class variables. These two rules eliminate all occurrences of class variables before an ∈ and all occurrences of equality. Each time rule 1 or 2 is applied to a subformula, i is chosen so that z_i differs from the other variables in the current formula. The three rules are repeated until there are no subformulas to which they can be applied. This produces a formula that is built only with ¬, ∧, ∃, ∈, set variables, and class variables Y_k where Y_k does not appear before an ∈.
1. Y_k ∈ Γ is transformed into ∃z_i (z_i = Y_k ∧ z_i ∈ Γ).
2. Extensionality is used to transform Δ = Γ into ∀z_i (z_i ∈ Δ ⟺ z_i ∈ Γ).
3. Logical identities are used to transform subformulas containing ∨, ⟹, ⟺, and ∀ to subformulas that only use ¬, ∧, and ∃.
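The rewriting of ∨, ⟹, ⟺, and ∀ into ¬, ∧, and ∃ is mechanical. A minimal Python sketch, assuming formulas are represented as nested tuples (an illustrative encoding, not from the source):

```python
def transform(f):
    # Rewrite a formula so its only logical symbols are 'not', 'and', 'exists'.
    # Atoms are strings; compound formulas are tuples such as ('or', p, q),
    # ('implies', p, q), ('iff', p, q), ('forall', v, p), ('exists', v, p).
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'or':                        # p ∨ q  =  ¬(¬p ∧ ¬q)
        p, q = transform(f[1]), transform(f[2])
        return ('not', ('and', ('not', p), ('not', q)))
    if op == 'implies':                   # p ⟹ q  =  ¬(p ∧ ¬q)
        p, q = transform(f[1]), transform(f[2])
        return ('not', ('and', p, ('not', q)))
    if op == 'iff':                       # p ⟺ q  =  (p ⟹ q) ∧ (q ⟹ p)
        return ('and', transform(('implies', f[1], f[2])),
                       transform(('implies', f[2], f[1])))
    if op == 'forall':                    # ∀v p  =  ¬∃v ¬p
        return ('not', ('exists', f[1], ('not', transform(f[2]))))
    if op == 'not':
        return ('not', transform(f[1]))
    if op == 'and':
        return ('and', transform(f[1]), transform(f[2]))
    if op == 'exists':
        return ('exists', f[1], transform(f[2]))
    raise ValueError(op)

assert transform(('or', 'p', 'q')) == ('not', ('and', ('not', 'p'), ('not', 'q')))
```

Each rewrite uses a standard logical identity, so the output formula is equivalent to the input.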
Transformation rules: bound variables. Consider the composite function formula of example 1 with its free set variables replaced by x_1 and x_2:

∃t [(x_1, t) ∈ F ∧ (t, x_2) ∈ G].
The inductive proof will remove ∃t, which produces the formula (x_1, t) ∈ F ∧ (t, x_2) ∈ G. However, since the class existence theorem is stated for subscripted variables, this formula does not have the form expected by the induction hypothesis. This problem is solved by replacing the variable t with x_3. Bound variables within nested quantifiers are handled by increasing the subscript by one for each successive quantifier. This leads to rule 4, which must be applied after the other rules since rules 1 and 2 produce quantified variables.
4. If a formula contains no free set variables other than x_1, …, x_n, then bound variables that are nested within q quantifiers are replaced with x_{n+q}. These variables have (quantifier) nesting depth q.
Proof of the class existence theorem. The proof starts by applying the transformation rules to the given formula to produce a transformed formula. Since this formula is equivalent to the given formula, the proof is completed by proving the class existence theorem for transformed formulas.
Gödel pointed out that the class existence theorem "is a metatheorem, that is, a theorem about the system [NBG], not in the system …" It is a theorem about NBG because it is proved in the metatheory by induction on NBG formulas. Also, its proof—instead of invoking finitely many NBG axioms—inductively describes how to use NBG axioms to construct a class satisfying a given formula. For every formula, this description can be turned into a constructive existence proof that is in NBG. Therefore, this metatheorem can generate the NBG proofs that replace uses of NBG's class existence theorem.
A recursive computer program succinctly captures the construction of a class from a given formula. The definition of this program does not depend on the proof of the class existence theorem. However, the proof is needed to prove that the class constructed by the program satisfies the given formula and is built using the axioms. This program is written in pseudocode that uses a Pascal-style case statement.
function Class(φ, n)
    input:  φ is a transformed formula of the form φ(x_1, …, x_n, Y_1, …, Y_m);
            n specifies that a class of n-tuples is returned.
    output: class A of n-tuples satisfying
            ∀x_1 ⋯ ∀x_n [(x_1, …, x_n) ∈ A ⟺ φ(x_1, …, x_n, Y_1, …, Y_m)].
begin
    case φ of
        x_i ∈ x_j:    return E_{i,j,n};                // E_{i,j,n}   = {(x_1, …, x_n) : x_i ∈ x_j}
        x_i ∈ Y_k:    return E_{i,Y_k,n};              // E_{i,Y_k,n} = {(x_1, …, x_n) : x_i ∈ Y_k}
        ¬ψ:           return ∁_{V^n} Class(ψ, n);      // ∁_{V^n} Class(ψ, n) = V^n ∖ Class(ψ, n)
        ψ_1 ∧ ψ_2:    return Class(ψ_1, n) ∩ Class(ψ_2, n);
        ∃x_{n+1}(ψ):  return Dom(Class(ψ, n+1));       // x_{n+1} is free in ψ; Class(ψ, n+1)
                                                       //   returns a class of (n+1)-tuples
    end
end
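The recursion can actually be executed over a toy finite universe. The Python sketch below is an illustrative assumption (a four-element universe of hereditarily finite sets as frozensets, n-tuples as Python tuples, and quantifiers ranging only over that universe); it mirrors the case analysis of Class.

```python
from itertools import product

# Toy universe of four hereditarily finite sets (illustrative assumption).
empty = frozenset()
one = frozenset([empty])
V = [empty, one, frozenset([one]), frozenset([empty, one])]

def Class(phi, n):
    # Return the set of n-tuples of V satisfying phi, mirroring the recursion.
    # phi is ('in', i, j) for x_i ∈ x_j, ('inY', i, Y) for x_i ∈ Y,
    # ('not', psi), ('and', psi1, psi2), or ('exists', psi) binding x_{n+1}.
    Vn = set(product(V, repeat=n))
    op = phi[0]
    if op == 'in':                       # plays the role of E_{i,j,n}
        i, j = phi[1], phi[2]
        return {t for t in Vn if t[i - 1] in t[j - 1]}
    if op == 'inY':                      # plays the role of E_{i,Y_k,n}
        i, Y = phi[1], phi[2]
        return {t for t in Vn if t[i - 1] in Y}
    if op == 'not':                      # complement relative to V^n
        return Vn - Class(phi[1], n)
    if op == 'and':                      # intersection
        return Class(phi[1], n) & Class(phi[2], n)
    if op == 'exists':                   # Dom: drop the last component
        return {t[:-1] for t in Class(phi[1], n + 1)}
    raise ValueError(op)

# "x_1 is nonempty": ∃x_2 (x_2 ∈ x_1)
nonempty = Class(('exists', ('in', 2, 1)), 1)
assert all(t[0] != empty for t in nonempty)
```

The Dom case drops the last component because an (n+1)-tuple is the pair ((x_1, …, x_n), x_{n+1}), so its domain element is the underlying n-tuple.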
Let φ be the formula of example 2. The function call A = Class(φ, 1) generates the class A, which is compared below with φ. This shows that the construction of the class A mirrors the construction of its defining formula φ.
φ = ∃x_2 (x_2 ∈ x_1 ∧ ¬∃x_3 (x_3 ∈ x_2)) ∧ ∃x_2 (x_2 ∈ x_1 ∧ ∃x_3 (x_3 ∈ x_2 ∧ ¬∃x_4 (x_4 ∈ x_3)))
A = Dom(E_{2,1,2} ∩ ∁_{V^2} Dom(E_{3,2,3})) ∩ Dom(E_{2,1,2} ∩ Dom(E_{3,2,3} ∩ ∁_{V^3} Dom(E_{4,3,4})))
=== Extending the class existence theorem ===
Gödel extended the class existence theorem to formulas φ containing relations over classes (such as Y_1 ⊆ Y_2 and the unary relation M(Y_1)), special classes (such as Ord), and operations (such as (x_1, x_2) and x_1 ∩ Y_1). To extend the class existence theorem, the formulas defining relations, special classes, and operations must quantify only over sets. Then φ can be transformed into an equivalent formula satisfying the hypothesis of the class existence theorem.
The following definitions specify how formulas define relations, special classes, and operations:
A relation R is defined by: R(Z_1, …, Z_k) ⟺ ψ_R(Z_1, …, Z_k).
A special class C is defined by: u ∈ C ⟺ ψ_C(u).
An operation P is defined by: u ∈ P(Z_1, …, Z_k) ⟺ ψ_P(u, Z_1, …, Z_k).
A term is defined by:
Variables and special classes are terms.
If P is an operation with k arguments and Γ_1, …, Γ_k are terms, then P(Γ_1, …, Γ_k) is a term.
The following transformation rules eliminate relations, special classes, and operations. Each time rule 2b, 3b, or 4 is applied to a subformula, i is chosen so that z_i differs from the other variables in the current formula. The rules are repeated until there are no subformulas to which they can be applied. Γ_1, …, Γ_k, Γ, and Δ denote terms.
1. A relation R(Z_1, …, Z_k) is replaced by its defining formula ψ_R(Z_1, …, Z_k).
2. Let ψ_C(u) be the defining formula for the special class C.
3. Let ψ_P(u, Z_1, …, Z_k) be the defining formula for the operation P(Z_1, …, Z_k).
4. Extensionality is used to transform Δ = Γ into ∀z_i (z_i ∈ Δ ⟺ z_i ∈ Γ).
=== Set axioms ===
The axioms of pairing and regularity, which were needed for the proof of the class existence theorem, have been given above. NBG contains four other set axioms. Three of these axioms deal with class operations being applied to sets.
Definition. F is a function if F ⊆ V^2 ∧ ∀x ∀y ∀z [(x, y) ∈ F ∧ (x, z) ∈ F ⟹ y = z].
In set theory, the definition of a function does not require specifying the domain or codomain of the function (see Function (set theory)). NBG's definition of function generalizes ZFC's definition from a set of ordered pairs to a class of ordered pairs.
ZFC's definitions of the set operations of image, union, and power set are also generalized to class operations. The image of class A under the function F is F[A] = {y : ∃x (x ∈ A ∧ (x, y) ∈ F)}. This definition does not require that A ⊆ Dom(F). The union of class A is ∪A = {x : ∃y (x ∈ y ∧ y ∈ A)}. The power class of A is 𝒫(A) = {x : x ⊆ A}.
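For finite sets these three class operations can be computed directly from their definitions. A minimal Python sketch, assuming functions are represented as sets of (x, y) pairs (an illustrative encoding):

```python
from itertools import chain, combinations

def image(F, A):
    # F[A] = {y : ∃x (x ∈ A ∧ (x, y) ∈ F)}; A need not lie inside Dom(F).
    return {y for (x, y) in F if x in A}

def union(A):
    # ∪A = {x : ∃y (x ∈ y ∧ y ∈ A)}
    return {x for y in A for x in y}

def power(a):
    # P(a) = {x : x ⊆ a}, with subsets returned as frozensets
    elems = list(a)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))}

F = {(1, 'a'), (2, 'b')}
assert image(F, {1, 3}) == {'a'}     # 3 ∉ Dom(F) is allowed
assert union({frozenset({1, 2}), frozenset({2, 3})}) == {1, 2, 3}
assert len(power({1, 2})) == 4
```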
The extended version of the class existence theorem implies the existence of these classes. The axioms of replacement, union, and power set imply that when these operations are applied to sets, they produce sets.
Axiom of replacement. If F is a function and a is a set, then F[a], the image of a under F, is a set.

∀F ∀a [F is a function ⟹ ∃b ∀y (y ∈ b ⟺ ∃x (x ∈ a ∧ (x, y) ∈ F))].
Not having the requirement A ⊆ Dom(F) in the definition of F[A] produces a stronger axiom of replacement, which is used in the following proof.
Axiom of union. If a is a set, then there is a set containing ∪a.

∀a ∃b ∀x [∃y (x ∈ y ∧ y ∈ a) ⟹ x ∈ b].
Axiom of power set. If a is a set, then there is a set containing 𝒫(a).

∀a ∃b ∀x (x ⊆ a ⟹ x ∈ b).
Axiom of infinity. There exists a nonempty set a such that for all x in a, there exists a y in a such that x is a proper subset of y.

∃a [∃u (u ∈ a) ∧ ∀x (x ∈ a ⟹ ∃y (y ∈ a ∧ x ⊂ y))].
The axioms of infinity and replacement prove the existence of the empty set. In the discussion of the class existence axioms, the existence of the empty class ∅ was proved. We now prove that ∅ is a set. Let function F = ∅ and let a be the set given by the axiom of infinity. By replacement, the image of a under F, which equals ∅, is a set.
NBG's axiom of infinity is implied by ZFC's axiom of infinity:

∃a [∅ ∈ a ∧ ∀x (x ∈ a ⟹ x ∪ {x} ∈ a)].
The first conjunct of ZFC's axiom, ∅ ∈ a, implies the first conjunct of NBG's axiom. The second conjunct of ZFC's axiom, ∀x (x ∈ a ⟹ x ∪ {x} ∈ a), implies the second conjunct of NBG's axiom since x ⊂ x ∪ {x}.
To prove ZFC's axiom of infinity from NBG's axiom of infinity requires some of the other NBG axioms (see Weak axiom of infinity).
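The key step x ⊂ x ∪ {x} can be checked concretely on the first few von Neumann naturals. A small Python illustration (frozensets as sets, an encoding assumption only):

```python
def succ(x):
    # von Neumann successor: x ∪ {x}
    return x | frozenset([x])

x = frozenset()                 # ∅
for _ in range(4):
    y = succ(x)
    assert x < y                # x is a proper subset of x ∪ {x}
    x = y
```

The inclusion is proper because x ∈ x ∪ {x} while x ∉ x (by regularity; in Python, a frozenset cannot contain itself at all).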
=== Axiom of global choice ===
The class concept allows NBG to have a stronger axiom of choice than ZFC. A choice function is a function f defined on a set s of nonempty sets such that f(x) ∈ x for all x ∈ s. ZFC's axiom of choice states that there exists a choice function for every set of nonempty sets. A global choice function is a function G defined on the class of all nonempty sets such that G(x) ∈ x for every nonempty set x. The axiom of global choice states that there exists a global choice function. This axiom implies ZFC's axiom of choice since for every set s of nonempty sets, G|_s (the restriction of G to s) is a choice function for s.
In 1964, William B. Easton proved that global choice is stronger than the axiom of choice by using forcing to construct a model that satisfies the axiom of choice and all the axioms of NBG except the axiom of global choice. The axiom of global choice is equivalent to every class having a well-ordering, while ZFC's axiom of choice is equivalent to every set having a well-ordering.
Axiom of global choice. There exists a function that chooses an element from every nonempty set.

∃G [G is a function ∧ ∀x (x ≠ ∅ ⟹ ∃y (y ∈ x ∧ (x, y) ∈ G))].
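The restriction argument is easy to see in a toy setting where a uniform rule such as "take the minimum" serves as a stand-in global choice function. A Python sketch under that assumption (illustrative only; a true global choice function on all sets cannot be defined this way):

```python
def G(x):
    # Stand-in "global" choice function for nonempty sets of comparable
    # elements: pick the minimum, so G(x) ∈ x always holds.
    return min(x)

s = [frozenset({3, 1}), frozenset({7, 2})]   # a set of nonempty sets
restriction = {x: G(x) for x in s}           # G|_s, the restriction of G to s
assert all(restriction[x] in x for x in s)   # G|_s is a choice function for s
```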
== History ==
=== Von Neumann's 1925 axiom system ===
Von Neumann published an introductory article on his axiom system in 1925. In 1928, he provided a detailed treatment of his system. Von Neumann based his axiom system on two domains of primitive objects: functions and arguments. These domains overlap—objects that are in both domains are called argument-functions. Functions correspond to classes in NBG, and argument-functions correspond to sets. Von Neumann's primitive operation is function application, denoted by [a, x] rather than a(x) where a is a function and x is an argument. This operation produces an argument. Von Neumann defined classes and sets using functions and argument-functions that take only two values, A and B. He defined x ∈ a if [a, x] ≠ A.
Von Neumann's work in set theory was influenced by Georg Cantor's articles, Ernst Zermelo's 1908 axioms for set theory, and the 1922 critiques of Zermelo's set theory that were given independently by Abraham Fraenkel and Thoralf Skolem. Both Fraenkel and Skolem pointed out that Zermelo's axioms cannot prove the existence of the set {Z0, Z1, Z2, ...} where Z0 is the set of natural numbers and Zn+1 is the power set of Zn. They then introduced the axiom of replacement, which would guarantee the existence of such sets. However, they were reluctant to adopt this axiom: Fraenkel stated "that Replacement was too strong an axiom for 'general set theory'", while "Skolem only wrote that 'we could introduce' Replacement".
Von Neumann worked on the problems of Zermelo set theory and provided solutions for some of them:
A theory of ordinals
Problem: Cantor's theory of ordinal numbers cannot be developed in Zermelo set theory because it lacks the axiom of replacement.
Solution: Von Neumann recovered Cantor's theory by defining the ordinals using sets that are well-ordered by the ∈-relation, and by using the axiom of replacement to prove key theorems about the ordinals, such as every well-ordered set is order-isomorphic with an ordinal. In contrast to Fraenkel and Skolem, von Neumann emphasized how important the replacement axiom is for set theory: "In fact, I believe that no theory of ordinals is possible at all without this axiom."
A criterion identifying classes that are too large to be sets
Problem: Zermelo did not provide such a criterion. His set theory avoids the large classes that lead to the paradoxes, but it leaves out many sets, such as the one mentioned by Fraenkel and Skolem.
Solution: Von Neumann introduced the criterion: A class is too large to be a set if and only if it can be mapped onto the class V of all sets. Von Neumann realized that the set-theoretic paradoxes could be avoided by not allowing such large classes to be members of any class. Combining this restriction with his criterion, he obtained his axiom of limitation of size: A class C is not a member of any class if and only if C can be mapped onto V.
Finite axiomatization
Problem: Zermelo had used the imprecise concept of "definite propositional function" in his axiom of separation.
Solutions: Skolem introduced the axiom schema of separation that was later used in ZFC, and Fraenkel introduced an equivalent solution. However, Zermelo rejected both approaches "particularly because they implicitly involve the concept of natural number which, in Zermelo's view, should be based upon set theory." Von Neumann avoided axiom schemas by formalizing the concept of "definite propositional function" with his functions, whose construction requires only finitely many axioms. This led to his set theory having finitely many axioms. In 1961, Richard Montague proved that ZFC cannot be finitely axiomatized.
The axiom of regularity
Problem: Zermelo set theory starts with the empty set and an infinite set, and iterates the axioms of pairing, union, power set, separation, and choice to generate new sets. However, it does not restrict sets to these. For example, it allows sets that are not well-founded, such as a set x satisfying x ∈ x.
Solutions: Fraenkel introduced an axiom to exclude these sets. Von Neumann analyzed Fraenkel's axiom and stated that it was not "precisely formulated", but it would approximately say: "Besides the sets ... whose existence is absolutely required by the axioms, there are no further sets." Von Neumann proposed the axiom of regularity as a way to exclude non-well-founded sets, but did not include it in his axiom system. In 1930, Zermelo became the first to publish an axiom system that included regularity.
=== Von Neumann's 1929 axiom system ===
In 1929, von Neumann published an article containing the axioms that would lead to NBG. This article was motivated by his concern about the consistency of the axiom of limitation of size. He stated that this axiom "does a lot, actually too much." Besides implying the axioms of separation and replacement, and the well-ordering theorem, it also implies that any class whose cardinality is less than that of V is a set. Von Neumann thought that this last implication went beyond Cantorian set theory and concluded: "We must therefore discuss whether its [the axiom's] consistency is not even more problematic than an axiomatization of set theory that does not go beyond the necessary Cantorian framework."
Von Neumann started his consistency investigation by introducing his 1929 axiom system, which contains all the axioms of his 1925 axiom system except the axiom of limitation of size. He replaced this axiom with two of its consequences, the axiom of replacement and a choice axiom. Von Neumann's choice axiom states: "Every relation R has a subclass that is a function with the same domain as R."
Let S be von Neumann's 1929 axiom system. Von Neumann introduced the axiom system S + Regularity (which consists of S and the axiom of regularity) to demonstrate that his 1925 system is consistent relative to S. He proved:
If S is consistent, then S + Regularity is consistent.
S + Regularity implies the axiom of limitation of size. Since this is the only axiom of his 1925 axiom system that S + Regularity does not have, S + Regularity implies all the axioms of his 1925 system.
These results imply: If S is consistent, then von Neumann's 1925 axiom system is consistent. Proof: If S is consistent, then S + Regularity is consistent (result 1). Using proof by contradiction, assume that the 1925 axiom system is inconsistent, or equivalently: the 1925 axiom system implies a contradiction. Since S + Regularity implies the axioms of the 1925 system (result 2), S + Regularity also implies a contradiction. However, this contradicts the consistency of S + Regularity. Therefore, if S is consistent, then von Neumann's 1925 axiom system is consistent.
Since S is his 1929 axiom system, von Neumann's 1925 axiom system is consistent relative to his 1929 axiom system, which is closer to Cantorian set theory. The major differences between Cantorian set theory and the 1929 axiom system are classes and von Neumann's choice axiom. The axiom system S + Regularity was modified by Bernays and Gödel to produce the equivalent NBG axiom system.
=== Bernays' axiom system ===
In 1929, Paul Bernays started modifying von Neumann's new axiom system by taking classes and sets as primitives. He published his work in a series of articles appearing from 1937 to 1954. Bernays stated that:
The purpose of modifying the von Neumann system is to remain nearer to the structure of the original Zermelo system and to utilize at the same time some of the set-theoretic concepts of the Schröder logic and of Principia Mathematica which have become familiar to logicians. As will be seen, a considerable simplification results from this arrangement.
Bernays handled sets and classes in a two-sorted logic and introduced two membership primitives: one for membership in sets and one for membership in classes. With these primitives, he rewrote and simplified von Neumann's 1929 axioms. Bernays also included the axiom of regularity in his axiom system.
=== Gödel's axiom system (NBG) ===
In 1931, Bernays sent a letter containing his set theory to Kurt Gödel. Gödel simplified Bernays' theory by making every set a class, which allowed him to use just one sort and one membership primitive. He also weakened some of Bernays' axioms and replaced von Neumann's choice axiom with the equivalent axiom of global choice. Gödel used his axioms in his 1940 monograph on the relative consistency of global choice and the generalized continuum hypothesis.
Several reasons have been given for Gödel choosing NBG for his monograph:
Gödel gave a mathematical reason—NBG's global choice produces a stronger consistency theorem: "This stronger form of the axiom [of choice], if consistent with the other axioms, implies, of course, that a weaker form is also consistent."
Robert Solovay conjectured: "My guess is that he [Gödel] wished to avoid a discussion of the technicalities involved in developing the rudiments of model theory within axiomatic set theory."
Kenneth Kunen gave a reason for Gödel avoiding this discussion: "There is also a much more combinatorial approach to L [the constructible universe], developed by ... [Gödel in his 1940 monograph] in an attempt to explain his work to non-logicians. ... This approach has the merit of removing all vestiges of logic from the treatment of L."
Charles Parsons provided a philosophical reason for Gödel's choice: "This view [that 'property of set' is a primitive of set theory] may be reflected in Gödel's choice of a theory with class variables as the framework for ... [his monograph]."
Gödel's achievement together with the details of his presentation led to the prominence that NBG would enjoy for the next two decades. In 1963, Paul Cohen obtained his independence results for ZF with the help of some tools that Gödel had developed for his relative consistency proofs for NBG. Later, ZFC became more popular than NBG. This was caused by several factors, including the extra work required to handle forcing in NBG, Cohen's 1966 presentation of forcing, which used ZF, and the proof that NBG is a conservative extension of ZFC.
== NBG, ZFC, and MK ==
NBG is not logically equivalent to ZFC because its language is more expressive: it can make statements about classes, which cannot be made in ZFC. However, NBG and ZFC imply the same statements about sets. Therefore, NBG is a conservative extension of ZFC. NBG implies theorems that ZFC does not imply, but since NBG is a conservative extension, these theorems must involve proper classes. For example, it is a theorem of NBG that the axiom of global choice implies that the proper class V can be well-ordered and that every proper class can be put into one-to-one correspondence with V.
One consequence of conservative extension is that ZFC and NBG are equiconsistent. Proving this uses the principle of explosion: from a contradiction, everything is provable. Assume that either ZFC or NBG is inconsistent. Then the inconsistent theory implies the contradictory statements ∅ = ∅ and ∅ ≠ ∅, which are statements about sets. By the conservative extension property, the other theory also implies these statements. Therefore, it is also inconsistent. So although NBG is more expressive, it is equiconsistent with ZFC. This result together with von Neumann's 1929 relative consistency proof implies that his 1925 axiom system with the axiom of limitation of size is equiconsistent with ZFC. This completely resolves von Neumann's concern about the relative consistency of this powerful axiom since ZFC is within the Cantorian framework.
Even though NBG is a conservative extension of ZFC, a theorem may have a shorter and more elegant proof in NBG than in ZFC (or vice versa). For a survey of known results of this nature, see Pudlák 1998.
Morse–Kelley set theory has an axiom schema of class comprehension that includes formulas whose quantifiers range over classes. MK is a stronger theory than NBG because MK proves the consistency of NBG, while Gödel's second incompleteness theorem implies that NBG cannot prove the consistency of NBG.
For a discussion of some ontological and other philosophical issues posed by NBG, especially when contrasted with ZFC and MK, see Appendix C of Potter 2004.
=== Models ===
ZFC, NBG, and MK have models describable in terms of the cumulative hierarchy Vα and the constructible hierarchy Lα. Let V include an inaccessible cardinal κ, let X ⊆ Vκ, and let Def(X) denote the class of first-order definable subsets of X with parameters. In symbols, where "(X, ∈)" denotes the model with domain X and relation ∈, and "⊨" denotes the satisfaction relation:

Def(X) := { {x ∣ x ∈ X and (X, ∈) ⊨ φ(x, y_1, …, y_n)} : φ is a first-order formula and y_1, …, y_n ∈ X }.
Then:
(Vκ, ∈) and (Lκ, ∈) are models of ZFC.
(Vκ, Vκ+1, ∈) is a model of MK where Vκ consists of the sets of the model and Vκ+1 consists of the classes of the model. Since a model of MK is a model of NBG, this model is also a model of NBG.
(Vκ, Def(Vκ), ∈) is a model of Mendelson's version of NBG, which replaces NBG's axiom of global choice with ZFC's axiom of choice. The axioms of ZFC are true in this model because (Vκ, ∈) is a model of ZFC. In particular, ZFC's axiom of choice holds, but NBG's global choice may fail. NBG's class existence axioms are true in this model because the classes whose existence they assert can be defined by first-order definitions. For example, the membership axiom holds since the class
E is defined by:

E = {x ∈ Vκ : (Vκ, ∈) ⊨ ∃u ∃v [x = (u, v) ∧ u ∈ v]}.
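As an illustration of this definable-class construction, the following sketch computes a finite analogue of the membership class E inside the tiny stage V₄ of the cumulative hierarchy, representing hereditarily finite sets as frozensets and ordered pairs as Kuratowski pairs. The helper names (`v_stage`, `kuratowski`) are ours, chosen for the sketch, not standard notation.

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite set, as frozensets."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def v_stage(n):
    """Stage V_n of the cumulative hierarchy: V_0 = {}, V_{k+1} = P(V_k)."""
    stage = set()
    for _ in range(n):
        stage = set(powerset(stage))
    return stage

def kuratowski(u, v):
    """Kuratowski ordered pair (u, v) = {{u}, {u, v}}."""
    return frozenset({frozenset({u}), frozenset({u, v})})

V = v_stage(4)  # the 16 hereditarily finite sets of rank < 4
# Finite analogue of E = {x : x = (u, v) for some u, v with u in v}
E = {x for x in V
     for u in V for v in V
     if x == kuratowski(u, v) and u in v}
```

For example, the pair (∅, {∅}) lands in E because ∅ ∈ {∅}, while ∅ itself does not, since a Kuratowski pair is never empty.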
(Lκ, Lκ+, ∈), where κ+ is the successor cardinal of κ, is a model of NBG. NBG's class existence axioms are true in (Lκ, Lκ+, ∈). For example, the membership axiom holds since the class
E is defined by:

E = {x ∈ Lκ : (Lκ, ∈) ⊨ ∃u ∃v [x = (u, v) ∧ u ∈ v]}.
So E ∈ 𝒫(Lκ). In his proof that GCH is true in L, Gödel proved that 𝒫(Lκ) ⊆ Lκ+. Therefore, E ∈ Lκ+, so the membership axiom is true in (Lκ, Lκ+, ∈). Likewise, the other class existence axioms are true. The axiom of global choice is true because Lκ is well-ordered by the restriction of Gödel's function (which maps the class of ordinals to the constructible sets) to the ordinals less than κ. Therefore, (Lκ, Lκ+, ∈) is a model of NBG.
If M is a nonstandard model of ZFC, then (M, Def(M)) ⊨ GB + Δ¹₁-CA is equivalent to "there exists an X such that (M, X) ⊨ GB + Δ¹₁-CA", where Def(M) is the set of subsets of M that are definable over M. This provides a second-order part for extending a given first-order nonstandard model of ZFC to a nonstandard model of GB, if there is such an extension at all.
== Category theory ==
The ontology of NBG provides scaffolding for speaking about "large objects" without risking paradox. For instance, in some developments of category theory, a "large category" is defined as one whose objects and morphisms make up a proper class. On the other hand, a "small category" is one whose objects and morphisms are members of a set. Thus, we can speak of the "category of all sets" or "category of all small categories" without risking paradox since NBG supports large categories.
However, NBG does not support a "category of all categories" since large categories would be members of it and NBG does not allow proper classes to be members of anything. An ontological extension that enables us to talk formally about such a "category" is the conglomerate, which is a collection of classes. Then the "category of all categories" is defined by its objects: the conglomerate of all categories; and its morphisms: the conglomerate of all morphisms from A to B where A and B are objects. On whether an ontology including classes as well as sets is adequate for category theory, see Muller 2001.
== Notes ==
== References ==
== Bibliography ==
Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990), Abstract and Concrete Categories (The Joy of Cats) (1st ed.), New York: Wiley & Sons, ISBN 978-0-471-60922-3.
Adámek, Jiří; Herrlich, Horst; Strecker, George E. (2004) [1990], Abstract and Concrete Categories (The Joy of Cats) (Dover ed.), New York: Dover Publications, ISBN 978-0-486-46934-8.
Bernays, Paul (1937), "A System of Axiomatic Set Theory—Part I", The Journal of Symbolic Logic, 2 (1): 65–77, doi:10.2307/2268862, JSTOR 2268862.
Bernays, Paul (1941), "A System of Axiomatic Set Theory—Part II", The Journal of Symbolic Logic, 6 (1): 1–17, doi:10.2307/2267281, JSTOR 2267281, S2CID 250344277.
Bernays, Paul (1991), Axiomatic Set Theory (2nd Revised ed.), Dover Publications, ISBN 978-0-486-66637-2.
Bourbaki, Nicolas (2004), Elements of Mathematics: Theory of Sets, Springer, ISBN 978-3-540-22525-6.
Chuaqui, Rolando (1981), Axiomatic Set Theory: Impredicative Theories of Classes, North-Holland, ISBN 0-444-86178-5.
Cohen, Paul (1963), "The Independence of the Continuum Hypothesis", Proceedings of the National Academy of Sciences of the United States of America, 50 (6): 1143–1148, Bibcode:1963PNAS...50.1143C, doi:10.1073/pnas.50.6.1143, PMC 221287, PMID 16578557.
Cohen, Paul (1966), Set Theory and the Continuum Hypothesis, W. A. Benjamin.
Cohen, Paul (2008), Set Theory and the Continuum Hypothesis, Dover Publications, ISBN 978-0-486-46921-8.
Dawson, John W. (1997), Logical dilemmas: The life and work of Kurt Gödel, Wellesley, MA: AK Peters.
Easton, William B. (1964), Powers of Regular Cardinals (PhD thesis), Princeton University.
Felgner, Ulrich (1971), "Comparison of the axioms of local and universal choice" (PDF), Fundamenta Mathematicae, 71: 43–62, doi:10.4064/fm-71-1-43-62.
Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought (2nd revised ed.), Basel, Switzerland: Birkhäuser, ISBN 978-3-7643-8349-7.
Gödel, Kurt (1940), The Consistency of the Axiom of Choice and of the Generalized Continuum Hypothesis with the Axioms of Set Theory (Revised ed.), Princeton University Press, ISBN 978-0-691-07927-1.
Gödel, Kurt (2008), The Consistency of the Axiom of Choice and of the Generalized Continuum Hypothesis with the Axioms of Set Theory, with a foreword by Laver, Richard (Paperback ed.), Ishi Press, ISBN 978-0-923891-53-4.
Gödel, Kurt (1986), Collected Works, Volume 1: Publications 1929–1936, Oxford University Press, ISBN 978-0-19-514720-9.
Gödel, Kurt (1990), Collected Works, Volume 2: Publications 1938–1974, Oxford University Press, ISBN 978-0-19-514721-6.
Gödel, Kurt (2003), Collected Works, Volume 4: Correspondence A–G, Oxford University Press, ISBN 978-0-19-850073-5.
Gray, Robert (1991), "Computer programs and mathematical proofs", The Mathematical Intelligencer, 13 (4): 45–48, doi:10.1007/BF03028342, S2CID 121229549.
Hallett, Michael (1984), Cantorian Set Theory and Limitation of Size (Hardcover ed.), Oxford: Clarendon Press, ISBN 978-0-19-853179-1.
Hallett, Michael (1986), Cantorian Set Theory and Limitation of Size (Paperback ed.), Oxford: Clarendon Press, ISBN 978-0-19-853283-5.
Kanamori, Akihiro (2009b), The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings, Springer, ISBN 978-3-540-88867-3.
Kanamori, Akihiro (2009), "Bernays and Set Theory" (PDF), Bulletin of Symbolic Logic, 15 (1): 43–69, doi:10.2178/bsl/1231081769, JSTOR 25470304, S2CID 15567244.
Kanamori, Akihiro (2012), "In Praise of Replacement" (PDF), Bulletin of Symbolic Logic, 18 (1): 46–90, doi:10.2178/bsl/1327328439, JSTOR 41472440, S2CID 18951854.
Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs (Hardcover ed.), North-Holland, ISBN 978-0-444-86839-8.
Kunen, Kenneth (2012), Set Theory: An Introduction to Independence Proofs (Paperback ed.), North-Holland, ISBN 978-0-444-56402-3.
Mendelson, Elliott (1997), An Introduction to Mathematical Logic (4th ed.), London: Chapman and Hall/CRC, ISBN 978-0-412-80830-2. - Pp. 225–86 contain the classic textbook treatment of NBG, showing how it does what we expect of set theory, by grounding relations, order theory, ordinal numbers, transfinite numbers, etc.
Mirimanoff, Dmitry (1917), "Les antinomies de Russell et de Burali-Forti et le problème fondamental de la théorie des ensembles", L'Enseignement Mathématique, 19: 37–52.
Montague, Richard (1961), "Semantic Closure and Non-Finite Axiomatizability I", in Buss, Samuel R. (ed.), Infinitistic Methods: Proceedings of the Symposium on Foundations of Mathematics, Pergamon Press, pp. 45–69.
Mostowski, Andrzej (1950), "Some impredicative definitions in the axiomatic set theory" (PDF), Fundamenta Mathematicae, 37: 111–124, doi:10.4064/fm-37-1-111-124.
Muller, F. A. (1 September 2001), "Sets, classes, and categories" (PDF), British Journal for the Philosophy of Science, 52 (3): 539–73, doi:10.1093/bjps/52.3.539.
Müller, Gurt, ed. (1976), Sets and Classes: On the Work of Paul Bernays, Studies in Logic and the Foundations of Mathematics Volume 84, Amsterdam: North Holland, ISBN 978-0-7204-2284-9.
Potter, Michael (2004), Set Theory and Its Philosophy: A Critical Introduction (Hardcover ed.), Oxford University Press, ISBN 978-0-19-926973-0.
Potter, Michael (2004p), Set Theory and Its Philosophy: A Critical Introduction (Paperback ed.), Oxford University Press, ISBN 978-0-19-927041-5.
Pudlák, Pavel (1998), "The Lengths of Proofs" (PDF), in Buss, Samuel R. (ed.), Handbook of Proof Theory, Elsevier, pp. 547–637, ISBN 978-0-444-89840-1.
Smullyan, Raymond M.; Fitting, Melvin (2010) [Revised and corrected edition: first published in 1996 by Oxford University Press], Set Theory and the Continuum Problem, Dover, ISBN 978-0-486-47484-7.
Solovay, Robert M. (1990), "Introductory note to 1938, 1939, 1939a and 1940", Kurt Gödel Collected Works, Volume 2: Publications 1938–1974, Oxford University Press, pp. 1–25, ISBN 978-0-19-514721-6.
von Neumann, John (1923), "Zur Einführung der transfiniten Zahlen", Acta Litt. Acad. Sc. Szeged X., 1: 199–208.
English translation: van Heijenoort, Jean (2002a) [1967], "On the introduction of transfinite numbers", From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (Fourth Printing ed.), Harvard University Press, pp. 346–354, ISBN 978-0-674-32449-7.
English translation: van Heijenoort, Jean (2002b) [1967], "An axiomatization of set theory", From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (Fourth Printing ed.), Harvard University Press, pp. 393–413, ISBN 978-0-674-32449-7.
von Neumann, John (1925), "Eine Axiomatisierung der Mengenlehre", Journal für die Reine und Angewandte Mathematik, 154: 219–240.
von Neumann, John (1928), "Die Axiomatisierung der Mengenlehre", Mathematische Zeitschrift, 27: 669–752, doi:10.1007/bf01171122, S2CID 123492324.
von Neumann, John (1929), "Über eine Widerspruchsfreiheitsfrage in der axiomatischen Mengenlehre", Journal für die Reine und Angewandte Mathematik, 160: 227–241.
== External links ==
"von Neumann-Bernays-Gödel set theory". PlanetMath.
Szudzik, Matthew. "von Neumann-Bernays-Gödel Set Theory". MathWorld. | Wikipedia/Von_Neumann–Bernays–Gödel_set_theory |
A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and for accelerating computer graphics. It may be present as a discrete video card or embedded in motherboards, mobile phones, personal computers, workstations, and game consoles. GPUs were later found to be useful for non-graphics calculations involving embarrassingly parallel problems due to their parallel structure. Their ability to rapidly perform vast numbers of calculations has led to their adoption in diverse fields, including artificial intelligence (AI), where they excel at data-intensive and computationally demanding tasks. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
== History ==
=== 1970s ===
Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.
A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito.
The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer). 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU.
=== 1980s ===
The NEC μPD7220 was the first implementation of a personal computer graphics display processor as a single large-scale integration (LSI) integrated circuit chip. This enabled the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology. It became the best-known GPU until the mid-1980s. It was the first fully integrated VLSI (very large-scale integration) metal–oxide–semiconductor (NMOS) graphics display processor for PCs, supported up to 1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units. The Williams Electronics arcade games Robotron 2084, Joust, Sinistar, and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps.
In 1984, Hitachi released the ARTC HD63484, the first major CMOS graphics processor for personal computers. The ARTC could display up to 4K resolution when in monochrome mode. It was used in a number of graphics cards and terminals during the late 1980s. In 1985, the Amiga was released with a custom graphics chip including a blitter for bitmap manipulation, line drawing, and area fill. It also included a coprocessor with its own simple instruction set, that was capable of manipulating graphics hardware registers in sync with the video beam (e.g. for per-scanline palette switches, sprite multiplexing, and hardware windowing), or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first fully programmable graphics processor. It could run general-purpose code but also had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture ("TIGA") Windows accelerator cards.
In 1987, the IBM 8514 graphics system was released. It was one of the first video cards for IBM PC compatibles that implemented fixed-function 2D primitives in electronic hardware. Sharp's X68000, released in 1987, used a custom graphics chipset with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields. It served as a development machine for Capcom's CP System arcade board. Fujitsu's FM Towns computer, released in 1989, had support for a 16,777,216 color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System.
IBM introduced its proprietary Video Graphics Array (VGA) display standard in 1987, with a maximum resolution of 640×480 pixels. In November 1988, NEC Home Electronics announced its creation of the Video Electronics Standards Association (VESA) to develop and promote a Super VGA (SVGA) computer display standard as a successor to VGA. Super VGA enabled graphics display resolutions up to 800×600 pixels, a 56% increase.
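The quoted 56% figure follows directly from the pixel counts of the two standards:

```python
vga = 640 * 480     # 307,200 pixels
svga = 800 * 600    # 480,000 pixels
increase = svga / vga - 1
print(f"{increase:.0%}")  # 56%
```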
=== 1990s ===
In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised. The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. Fixed-function Windows accelerators surpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market.
Throughout the 1990s, 2D GUI acceleration evolved. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces (APIs) arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x, and their later DirectDraw interface for hardware acceleration of 2D games in Windows 95 and later.
In the early and mid-1990s, real-time 3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22, and Sega Model 2, and in fifth-generation video game consoles such as the Saturn, PlayStation, and Nintendo 64. Arcade systems such as the Sega Model 2 and the SGI Onyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before these features appeared in consumer graphics cards. Another early example is the Super FX chip, a RISC-based on-cartridge graphics chip used in some SNES games, notably Doom and Star Fox. Some systems used DSPs to accelerate transformations. Fujitsu, which worked on the Sega Model 2 arcade system, began integrating T&L into a single LSI solution for use in home computers in 1995; the result, the Fujitsu Pinolite, was the first 3D geometry processor for personal computers, released in 1997. The first hardware T&L GPU on home video game consoles was the Nintendo 64's Reality Coprocessor, released in 1996. In 1997, Mitsubishi released the 3Dpro/2MP, a GPU capable of transformation and lighting, for workstations and Windows NT desktops; ATI used it for its FireGL 4000 graphics card, released in 1997.
The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994.
In the PC world, notable failed attempts at low-cost 3D graphics chips included the S3 ViRGE, ATI Rage, and Matrox Mystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D graphical user interface (GUI) acceleration entirely) such as the PowerVR and the 3dfx Voodoo. However, as manufacturing technology continued to progress, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were among the first to do this well. In 1997, Rendition collaborated with Hercules and Fujitsu on a "Thriller Conspiracy" project, which combined a Fujitsu FXG-1 Pinolite geometry processor with a Vérité V2200 core to create a graphics card with a full T&L engine years before Nvidia's GeForce 256. This card, designed to reduce the load placed upon the system's CPU, never made it to market. Nvidia's RIVA 128 was one of the first consumer-facing GPUs to integrate 3D and 2D processing on a single chip.
OpenGL was introduced in the early 1990s by Silicon Graphics (SGI) as a professional graphics API, with proprietary hardware support for 3D rasterization. In 1994, Microsoft acquired Softimage, the dominant CGI movie-production tool used for early CGI hits such as Jurassic Park, Terminator 2, and Titanic. With that deal came a strategic relationship with SGI and a commercial license of their OpenGL libraries, enabling Microsoft to port the API to the Windows NT OS but not to the upcoming release of Windows 95. Although it was little known at the time, SGI had contracted with Microsoft to transition from Unix to the forthcoming Windows NT OS; the deal, signed in 1995, was not announced publicly until 1998. In the intervening period, Microsoft worked closely with SGI to port OpenGL to Windows NT. In that era, OpenGL had no standard driver model under which competing hardware accelerators could compete on the basis of support for higher-level 3D texturing and lighting functionality. In 1994, Microsoft announced DirectX 1.0 and support for gaming in the forthcoming Windows 95 consumer OS. In 1995, Microsoft announced the acquisition of the UK-based RenderMorphics Ltd and the Direct3D driver model for the acceleration of consumer 3D graphics. The Direct3D driver model shipped with DirectX 2.0 in 1996. It included standards and specifications for 3D chip makers to compete in supporting 3D texture, lighting, and Z-buffering. ATI, later acquired by AMD, began development of the first Direct3D GPUs. Nvidia quickly pivoted from a failed deal with Sega in 1996 to aggressively embracing Direct3D. In this era, Microsoft merged their internal Direct3D and OpenGL teams and worked closely with SGI to unify driver standards for both industrial and consumer 3D graphics hardware accelerators. Microsoft ran annual events for 3D chip makers, called "Meltdowns", to test their 3D hardware and drivers against both Direct3D and OpenGL.
It was during this period of strong Microsoft influence over 3D standards that 3D accelerator cards moved beyond being simple rasterizers to become more powerful general-purpose processors, as support for hardware-accelerated texture mapping, lighting, Z-buffering, and compute created the modern GPU. During this period, the same Microsoft team responsible for Direct3D and OpenGL driver standardization introduced its own Microsoft 3D chip design, called Talisman. Details of this era are documented extensively in the books "Game of X" v.1 and v.2 by Rusel DeMaria, "Renegades of the Empire" by Michael Drummond, "Opening the Xbox" by Dean Takahashi, and "Masters of Doom" by David Kushner. The Nvidia GeForce 256 (also known as NV10) was the first consumer-level card with hardware-accelerated T&L. While the OpenGL API provided software support for texture mapping and lighting, the first 3D hardware acceleration for these features arrived with the first Direct3D-accelerated consumer GPUs.
=== 2000s ===
NVIDIA released the GeForce 256, marketed as the world's first GPU, integrating transform and lighting engines for advanced 3D graphics rendering. Nvidia was first to produce a chip capable of programmable shading: the GeForce 3. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. Used in the Xbox console, this chip competed with the one in the PlayStation 2, which used a custom vector unit for hardware-accelerated vertex processing (commonly referred to as VU0/VU1). The earliest incarnations of shader execution engines used in the Xbox were not general-purpose and could not execute arbitrary pixel code. Vertices and pixels were processed by different units, each with its own resources, and pixel shaders had tighter constraints (because they execute at higher frequencies than vertex shaders). Pixel shading engines were more akin to a highly customizable function block and did not truly "run" a program. Many of these disparities between vertex and pixel shading were not addressed until the Unified Shader Model.
In October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. Pixel shading is often used for bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.
With the introduction of the Nvidia GeForce 8 series and new generic stream processing units, GPUs became more generalized computing devices. Parallel GPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU for general purpose computing on GPU, has found applications in fields as diverse as machine learning, oil exploration, scientific image processing, linear algebra, statistics, 3D reconstruction, and stock options pricing. GPGPU was the precursor to what is now called a compute shader (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader. This entails some overheads since units like the scan converter are involved where they are not needed (nor are triangle manipulations even a concern—except to invoke the pixel shader).
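The texture-map workaround described above can be caricatured in plain Python (an analogy only, not a real graphics API): general-purpose data is packed into a 2D "texture", and drawing a full-screen quad amounts to invoking a per-pixel "shader" function once per texel.

```python
def run_pixel_shader(texture, shader):
    """Invoke the 'pixel shader' once per texel, as if rasterizing a
    full-screen quad over the bound texture."""
    height, width = len(texture), len(texture[0])
    return [[shader(texture, x, y) for x in range(width)]
            for y in range(height)]

# General-purpose data disguised as a 4x4 "texture"
data = [[float(4 * y + x) for x in range(4)] for y in range(4)]
# The shader body is an arbitrary computation; here, scale each value
result = run_pixel_shader(data, lambda tex, x, y: 2.0 * tex[y][x])
```

The overhead the text mentions shows up in this analogy as the fixed per-pixel invocation machinery, which runs whether or not the computation has anything to do with rasterization.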
Nvidia's CUDA platform, first introduced in 2007, was the earliest widely adopted programming model for GPU computing. OpenCL is an open standard defined by the Khronos Group that allows for the development of code for both GPUs and CPUs with an emphasis on portability. OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report in 2011 by Evans Data, OpenCL had become the second most popular HPC tool.
=== 2010s ===
In 2010, Nvidia partnered with Audi to power their cars' dashboards, using the Tegra GPU to provide increased functionality to cars' navigation and entertainment systems. Advances in GPU technology in cars helped advance self-driving technology. AMD's Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M series discrete GPUs for mobile devices. The Kepler line of graphics cards by Nvidia was released in 2012 and was used in Nvidia's 600 and 700 series cards. A feature of this GPU microarchitecture was GPU Boost, a technology that adjusts the clock speed of a video card, increasing or decreasing it according to its power draw. The Kepler microarchitecture was manufactured on TSMC's 28 nm process.
The PS4 and Xbox One were released in 2013; they both use GPUs based on AMD's Radeon HD 7850 and 7790. Nvidia's Kepler line of GPUs was followed by the Maxwell line, manufactured on the same process. Nvidia's 28 nm chips were manufactured by TSMC in Taiwan using the 28 nm process. Compared to the 40 nm technology from the past, this manufacturing process allowed a 20 percent boost in performance while drawing less power. Virtual reality headsets have high system requirements; manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release. Cards based on the Pascal microarchitecture were released in 2016. The GeForce 10 series of cards are of this generation of graphics cards. They are made using the 16 nm manufacturing process which improves upon previous microarchitectures. Nvidia released one non-consumer card under the new Volta architecture, the Titan V. Changes from the Titan XP, Pascal's high-end card, include an increase in the number of CUDA cores, the addition of tensor cores, and HBM2. Tensor cores are designed for deep learning, while high-bandwidth memory is on-die, stacked, lower-clocked memory that offers an extremely wide memory bus. To emphasize that the Titan V is not a gaming card, Nvidia removed the "GeForce GTX" suffix it adds to consumer gaming cards.
In 2018, Nvidia launched the RTX 20 series GPUs that added ray-tracing cores to GPUs, improving their performance on lighting effects. Polaris 11 and Polaris 10 GPUs from AMD are fabricated by a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards. AMD also released the Vega GPU series for the high end market as a competitor to Nvidia's high end Pascal cards, also featuring HBM2 like the Titan V.
In 2019, AMD released the successor to their Graphics Core Next (GCN) microarchitecture/instruction set. Dubbed RDNA, the first product featuring it was the Radeon RX 5000 series of video cards. The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled the Radeon RX 6000 series, its RDNA 2 graphics cards with support for hardware-accelerated ray tracing. The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT. The RX 6700 XT, which is based on Navi 22, was launched in early 2021.
The PlayStation 5 and Xbox Series X and Series S were released in 2020; they both use GPUs based on the RDNA 2 microarchitecture with incremental improvements and different GPU configurations in each system's implementation.
Intel first entered the GPU market in the late 1990s, but produced lackluster 3D accelerators compared to the competition at the time. Rather than attempting to compete with the high-end manufacturers Nvidia and ATI/AMD, they began integrating Intel Graphics Technology GPUs into motherboard chipsets, beginning with the Intel 810 for the Pentium III, and later into CPUs. They began with the Intel Atom 'Pineview' laptop processor in 2009, continuing in 2010 with desktop processors in the first generation of the Intel Core line and with contemporary Pentiums and Celerons. This resulted in a large nominal market share, as the majority of computers with an Intel CPU also featured this embedded graphics processor. These generally lagged behind discrete processors in performance. Intel re-entered the discrete GPU market in 2022 with its Arc series, which competed with the then-current GeForce 30 series and Radeon 6000 series cards at competitive prices.
=== 2020s ===
In the 2020s, GPUs have been increasingly used for calculations involving embarrassingly parallel problems, such as the training of neural networks on the enormous datasets needed for large language models. Specialized processing cores on some modern workstation GPUs are dedicated to deep learning, since they offer significant FLOPS performance increases by operating on small matrix tiles (e.g. 4×4 matrix multiply–accumulate operations), yielding hardware performance of up to 128 TFLOPS in some applications. These tensor cores are expected to appear in consumer cards as well.
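A minimal sketch of the primitive such tensor cores execute, a fused matrix multiply–accumulate D = A·B + C on small tiles; the 4×4 shape and the example values are illustrative, not tied to any specific hardware:

```python
def mma_4x4(A, B, C):
    """Fused multiply-accumulate on 4x4 tiles: D = A @ B + C.
    Executing one such tile operation per step is the tensor-core
    primitive that deep-learning workloads are built from."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) + C[i][j]
             for j in range(4)]
            for i in range(4)]

A = [[1.0] * 4 for _ in range(4)]
B = [[1.0] * 4 for _ in range(4)]
C = [[0.5] * 4 for _ in range(4)]
D = mma_4x4(A, B, C)  # each entry: sum of four 1*1 products, plus 0.5
```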
== GPU companies ==
Many companies have produced GPUs under a number of brand names. In 2009, Intel, Nvidia, and AMD/ATI were the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition, Matrox produces GPUs. Chinese companies such as Jingjia Micro have also produced GPUs for the domestic market although in terms of worldwide sales, they still lag behind market leaders.
Modern smartphones use mostly Adreno GPUs from Qualcomm, PowerVR GPUs from Imagination Technologies, and Mali GPUs from ARM.
== Computational functions ==
Modern GPUs have traditionally used most of their transistors to do calculations related to 3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000–HD7000 lack dedicated 2D acceleration; it is emulated by 3D hardware. GPUs were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons. Later, dedicated hardware was added to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations that are supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces.
Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in the semiconductor device fabrication, the clock signal frequency, and the number and size of various on-chip memory caches. Performance is also affected by the number of streaming multiprocessors (SMs) for Nvidia GPUs, compute units (CUs) for AMD GPUs, or Xe cores for Intel discrete GPUs, which describe the number of on-silicon processor core units within the GPU chip that perform the core calculations, typically working in parallel with the other SMs/CUs on the GPU. GPU performance is typically measured in floating-point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate.
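Such peak FLOPS figures are usually derived from core count and clock rate; a rough back-of-envelope sketch, where the core count, clock, and the 2-FLOPs-per-cycle FMA assumption are hypothetical rather than a specific product:

```python
def peak_tflops(shader_cores, clock_ghz, flops_per_core_per_cycle=2):
    """Theoretical peak throughput in TFLOPS.
    The default of 2 FLOPs/core/cycle assumes one fused multiply-add
    (FMA) retired per core per clock."""
    return shader_cores * clock_ghz * flops_per_core_per_cycle / 1000.0

# Hypothetical card: 2560 shader cores at 1.8 GHz
print(round(peak_tflops(2560, 1.8), 3))  # 9.216
```

This is exactly why such numbers are only an estimated measure: real workloads rarely keep every core retiring an FMA every cycle, so delivered throughput falls below the theoretical peak.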
=== GPU accelerated video decoding and encoding ===
Most GPUs made since 1995 support the YUV color space and hardware overlays, important for digital video playback, and many GPUs made since 2000 also support MPEG primitives such as motion compensation and iDCT. This hardware-accelerated video decoding, in which portions of the video decoding process and video post-processing are offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding", or "GPU hardware assisted video decoding".
Recent graphics cards decode high-definition video on the card, offloading the central processing unit. The most common APIs for GPU accelerated video decoding are DxVA for Microsoft Windows operating systems and VDPAU, VAAPI, XvMC, and XvBA for Linux-based and UNIX-like operating systems. All except XvMC are capable of decoding videos encoded with MPEG-1, MPEG-2, MPEG-4 ASP (MPEG-4 Part 2), MPEG-4 AVC (H.264 / DivX 6), VC-1, WMV3/WMV9, Xvid / OpenDivX (DivX 4), and DivX 5 codecs, while XvMC is only capable of decoding MPEG-1 and MPEG-2.
There are several dedicated hardware video decoding and encoding solutions.
==== Video decoding processes that can be accelerated ====
Video decoding processes that can be accelerated by modern GPU hardware are:
Motion compensation (mocomp)
Inverse discrete cosine transform (iDCT)
Inverse telecine 3:2 and 2:2 pull-down correction
Inverse modified discrete cosine transform (iMDCT)
In-loop deblocking filter
Intra-frame prediction
Inverse quantization (IQ)
Variable-length decoding (VLD), more commonly known as slice-level acceleration
Spatial-temporal deinterlacing and automatic interlace/progressive source detection
Bitstream processing (Context-adaptive variable-length coding/Context-adaptive binary arithmetic coding) and perfect pixel positioning
These operations also have applications in video editing, encoding, and transcoding.
=== 2D graphics APIs ===
Earlier GPUs may support one or more 2D graphics APIs for 2D acceleration, such as GDI and DirectDraw.
=== 3D graphics APIs ===
A GPU can support one or more 3D graphics APIs, such as DirectX, Metal, OpenGL, OpenGL ES, or Vulkan.
== GPU forms ==
=== Terminology ===
In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output. In 1994, Sony used the term (now standing for graphics processing unit) in reference to the PlayStation console's Toshiba-designed Sony GPU. The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU". It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002. In 2023, the AMD Alveo MA35D, built on a 5 nm process, featured dual VPUs.
In personal computers, there are two main forms of GPUs. Each has many synonyms:
Dedicated graphics also called discrete graphics.
Integrated graphics also called shared graphics solutions, integrated graphics processors (IGP), or unified memory architecture (UMA).
==== Usage-specific GPU ====
Most GPUs are designed for a specific use case, such as real-time 3D graphics or other mass calculations:
Gaming
GeForce GTX, RTX
Nvidia Titan
Radeon HD, R5, R7, R9, RX, Vega and Navi series
Radeon VII
Intel Arc
Cloud Gaming
Nvidia GRID
Radeon Sky
Workstation
Nvidia Quadro
Nvidia RTX
AMD FirePro
AMD Radeon Pro
Intel Arc Pro
Cloud Workstation
Nvidia Tesla
AMD FireStream
Artificial Intelligence training and Cloud
Nvidia Tesla
AMD Radeon Instinct
Automated/Driverless car
Nvidia Drive PX
=== Dedicated graphics processing unit ===
Dedicated graphics processing units use RAM that is dedicated to the GPU rather than relying on the computer’s main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card (see GDDR). Sometimes systems with dedicated discrete GPUs were called "DIS" systems as opposed to "UMA" systems (see next section).
Dedicated GPUs are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.
Graphics cards with dedicated GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP). They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available.
Technologies such as Scan-Line Interleave by 3dfx, SLI and NVLink by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them. Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video (processing multiple videos at once) and 3D rendering, for VFX, GPGPU workloads and for simulations, and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs.
=== Integrated graphics processing unit ===
Integrated graphics processing units (IGPU), integrated graphics, shared graphics solutions, integrated graphics processors (IGP), or unified memory architectures (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset, or on the same die (integrated circuit) with the CPU (like AMD APU or Intel HD Graphics). On certain motherboards, AMD's IGPs can use dedicated sideport memory: a separate fixed block of high performance memory that is dedicated for use by the GPU. As of early 2007, computers with integrated graphics accounted for about 90% of all PC shipments. They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004. However, modern integrated graphics processors such as AMD Accelerated Processing Unit and Intel Graphics Technology (HD, UHD, Iris, Iris Pro, Iris Plus, and Xe-LP) can handle 2D graphics or low-stress 3D graphics.
Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between its VRAM and GPU core. This memory bus bandwidth can limit the performance of the GPU, though multi-channel memory can mitigate this deficiency. Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.
On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics, modern Intel processors with integrated graphics, Apple processors, the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on memory needs (without needing a large static split of the RAM) and thanks to zero copy transfers, removes the need for either copying data over a bus between physically separate RAM pools or copying between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data.
=== Hybrid graphics processing ===
Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI's HyperMemory and Nvidia's TurboCache.
Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache, to make up for the high latency of the system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory.
=== Stream processing and general purpose GPUs (GPGPU) ===
It is common to use a general purpose graphics processing unit (GPGPU) as a modified form of stream processor (or a vector processor), running compute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics processing unit" above) GPU designers, AMD and Nvidia, are pursuing this approach with an array of applications. Both Nvidia and AMD teamed with Stanford University to create a GPU-based client for the Folding@home distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.
GPGPUs can be used for many types of embarrassingly parallel tasks including ray tracing. They are generally suited to high-throughput computations that exhibit data-parallelism to exploit the wide vector width SIMD architecture of the GPU.
GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.
GPUs support API extensions to the C programming language such as OpenCL and OpenMP. Furthermore, each GPU vendor introduced its own API which only works with their cards: AMD APP SDK from AMD, and CUDA from Nvidia. These allow functions called compute kernels to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.
Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular. Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's SIMD architecture. However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there. Acceleration can then be obtained by either interpreting multiple programs simultaneously, simultaneously running multiple example problems, or combinations of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs.
=== External GPU (eGPU) ===
An external GPU is a graphics processor located outside of the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing video games, or for other graphically intensive tasks, such as editing video or 3D animation/rendering.
Therefore, it is desirable to attach a GPU to some external bus of a notebook. PCI Express is the only bus used for this purpose. The port may be, for example, an ExpressCard or mPCIe port (PCIe ×1, up to 5 or 2.5 Gbit/s respectively), a Thunderbolt 1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively), a USB4 port with Thunderbolt compatibility, or an OCuLink port. Those ports are only available on certain notebook systems. eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts.
== Energy efficiency ==
== Sales ==
In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of PC GPUs totaled around 75.5 million units, down 19% year-over-year.
== See also ==
=== Hardware ===
List of AMD graphics processing units
List of Nvidia graphics processing units
List of Intel graphics processing units
List of discrete and integrated graphics processing units
Intel GMA
Larrabee
Nvidia PureVideo – the bit-stream technology from Nvidia used in their graphics chips to accelerate video decoding on hardware GPU with DXVA.
SoC
UVD (Unified Video Decoder) – the video decoding bit-stream technology from ATI to support hardware (GPU) decode with DXVA
=== APIs ===
=== Applications ===
GPU cluster
Mathematica – includes built-in support for CUDA and OpenCL GPU execution
Molecular modeling on GPU
Deeplearning4j – open-source, distributed deep learning for Java
== References ==
== Sources ==
Peddie, Jon (1 January 2023). The History of the GPU – New Developments. Springer Nature. ISBN 978-3-03-114047-1. OCLC 1356877844.
== External links ==
In mathematics, a surjective function (also known as a surjection, or onto function) is a function f such that, for every element y of the function's codomain, there exists at least one element x in the function's domain such that f(x) = y. In other words, for a function f : X → Y, the codomain Y is the image of the function's domain X. It is not required that x be unique; the function f may map one or more elements of X to the same element of Y.
The term surjective and the related terms injective and bijective were introduced by Nicolas Bourbaki, a group of mainly French 20th-century mathematicians who, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French word sur means over or above, and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain.
Any function induces a surjection by restricting its codomain to the image of its domain. Every surjective function has a right inverse assuming the axiom of choice, and every function with a right inverse is necessarily a surjection. The composition of surjective functions is always surjective. Any function can be decomposed into a surjection and an injection.
== Definition ==
A surjective function is a function whose image is equal to its codomain. Equivalently, a function {\displaystyle f} with domain {\displaystyle X} and codomain {\displaystyle Y} is surjective if for every {\displaystyle y} in {\displaystyle Y} there exists at least one {\displaystyle x} in {\displaystyle X} with {\displaystyle f(x)=y}. Surjections are sometimes denoted by a two-headed rightwards arrow (U+21A0 ↠ RIGHTWARDS TWO HEADED ARROW), as in {\displaystyle f\colon X\twoheadrightarrow Y}.
Symbolically,
If {\displaystyle f\colon X\rightarrow Y}, then {\displaystyle f} is said to be surjective if
{\displaystyle \forall y\in Y,\,\exists x\in X,\;\;f(x)=y}.
== Examples ==
For any set X, the identity function idX on X is surjective.
The function f : Z → {0, 1} defined by f(n) = n mod 2 (that is, even integers are mapped to 0 and odd integers to 1) is surjective.
The function f : R → R defined by f(x) = 2x + 1 is surjective (and even bijective), because for every real number y, we have an x such that f(x) = y: such an appropriate x is (y − 1)/2.
The function f : R → R defined by f(x) = x3 − 3x is surjective, because the pre-image of any real number y is the solution set of the cubic polynomial equation x3 − 3x − y = 0, and every cubic polynomial with real coefficients has at least one real root. However, this function is not injective (and hence not bijective), since, for example, the pre-image of y = 2 is {x = −1, x = 2}. (In fact, the pre-image of this function for every y, −2 ≤ y ≤ 2 has more than one element.)
The function g : R → R defined by g(x) = x2 is not surjective, since there is no real number x such that x2 = −1. However, the function g : R → R≥0 defined by g(x) = x2 (with the restricted codomain) is surjective, since for every y in the nonnegative real codomain Y, there is at least one x in the real domain X such that x2 = y.
The natural logarithm function ln : (0, +∞) → R is surjective and even bijective (it maps the set of positive real numbers onto the set of all real numbers). Its inverse, the exponential function, if defined with the set of real numbers as the domain and the codomain, is not surjective (as its range is the set of positive real numbers).
The matrix exponential is not surjective when seen as a map from the space of all n×n matrices to itself. It is, however, usually defined as a map from the space of all n×n matrices to the general linear group of degree n (that is, the group of all n×n invertible matrices). Under this definition, the matrix exponential is surjective for complex matrices, although still not surjective for real matrices.
The projection from a cartesian product A × B to one of its factors is surjective, unless the other factor is empty.
In a 3D video game, vectors are projected onto a 2D flat screen by means of a surjective function.
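For finite sets, the definition can be checked directly by comparing the image of the domain with the codomain. A minimal sketch covering two of the examples above (the sampled domains stand in for the infinite sets):

```python
def is_surjective(f, domain, codomain):
    """A function on finite sets is surjective iff its image equals its codomain."""
    return {f(x) for x in domain} == set(codomain)

# f(n) = n mod 2 maps the integers onto {0, 1}: surjective
assert is_surjective(lambda n: n % 2, range(10), {0, 1})

# g(x) = x^2 misses negative values: not surjective onto a codomain containing -1
assert not is_surjective(lambda x: x * x, range(-3, 4), range(-1, 10))
```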
== Properties ==
A function is bijective if and only if it is both surjective and injective.
If (as is often done) a function is identified with its graph, then surjectivity is not a property of the function itself, but rather a property of the mapping, that is, the function together with its codomain. Unlike injectivity, surjectivity cannot be read off of the graph of the function alone.
=== Surjections as right invertible functions ===
The function g : Y → X is said to be a right inverse of the function f : X → Y if f(g(y)) = y for every y in Y (g can be undone by f). In other words, g is a right inverse of f if the composition f o g of g and f in that order is the identity function on the domain Y of g. The function g need not be a complete inverse of f because the composition in the other order, g o f, may not be the identity function on the domain X of f. In other words, f can undo or "reverse" g, but cannot necessarily be reversed by it.
Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to the axiom of choice.
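For finite sets the choice of a right inverse can be made explicitly, mirroring the axiom-of-choice step by simply picking one preimage per codomain element. A sketch with an illustrative f (the function and set names are not from the article):

```python
def right_inverse(f, X, Y):
    """Choose, for each y in Y, one preimage under f: a finite stand-in
    for the axiom-of-choice step. Fails if f is not surjective onto Y."""
    g = {}
    for x in X:
        g.setdefault(f(x), x)          # remember the first preimage seen
    assert set(g) == set(Y), "f is not surjective onto Y"
    return g

f = lambda n: n % 3
g = right_inverse(f, range(10), range(3))
assert all(f(g[y]) == y for y in range(3))   # f o g is the identity on Y
```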
If f : X → Y is surjective and B is a subset of Y, then f(f −1(B)) = B. Thus, B can be recovered from its preimage f −1(B).
For example, in the first illustration in the gallery, there is some function g such that g(C) = 4. There is also some function f such that f(4) = C. It doesn't matter that g is not unique (it would also work if g(C) equals 3); it only matters that f "reverses" g.
=== Surjections as epimorphisms ===
A function f : X → Y is surjective if and only if it is right-cancellative: given any functions g,h : Y → Z, whenever g o f = h o f, then g = h. This property is formulated in terms of functions and their composition and can be generalized to the more general notion of the morphisms of a category and their composition. Right-cancellative morphisms are called epimorphisms. Specifically, surjective functions are precisely the epimorphisms in the category of sets. The prefix epi is derived from the Greek preposition ἐπί meaning over, above, on.
Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inverse g of a morphism f is called a section of f. A morphism with a right inverse is called a split epimorphism.
=== Surjections as binary relations ===
Any function with domain X and codomain Y can be seen as a left-total and right-unique binary relation between X and Y by identifying it with its function graph. A surjective function with domain X and codomain Y is then a binary relation between X and Y that is right-unique and both left-total and right-total.
=== Cardinality of the domain of a surjection ===
The cardinality of the domain of a surjective function is greater than or equal to the cardinality of its codomain: If f : X → Y is a surjective function, then X has at least as many elements as Y, in the sense of cardinal numbers. (The proof appeals to the axiom of choice to show that a function
g : Y → X satisfying f(g(y)) = y for all y in Y exists. g is easily seen to be injective, thus the formal definition of |Y| ≤ |X| is satisfied.)
Specifically, if both X and Y are finite with the same number of elements, then f : X → Y is surjective if and only if f is injective.
Given two sets X and Y, the notation X ≤* Y is used to say that either X is empty or that there is a surjection from Y onto X. Using the axiom of choice one can show that X ≤* Y and Y ≤* X together imply that |Y| = |X|, a variant of the Schröder–Bernstein theorem.
=== Composition and decomposition ===
The composition of surjective functions is always surjective: If f and g are both surjective, and the codomain of g is equal to the domain of f, then f o g is surjective. Conversely, if f o g is surjective, then f is surjective (but g, the function applied first, need not be). These properties generalize from surjections in the category of sets to any epimorphisms in any category.
Any function can be decomposed into a surjection and an injection: For any function h : X → Z there exist a surjection f : X → Y and an injection g : Y → Z such that h = g o f. To see this, define Y to be the set of preimages h−1(z) where z is in h(X). These preimages are disjoint and partition X. Then f carries each x to the element of Y which contains it, and g carries each element of Y to the point in Z to which h sends its points. Then f is surjective since it is a projection map, and g is injective by definition.
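The decomposition can be carried out concretely on finite sets, taking Y to be the set of preimage classes of h exactly as described above (a sketch; the example function is illustrative):

```python
def decompose(h, X):
    """Factor h : X -> Z as h = g o f, with f surjective and g injective.
    Y is the set of preimages h^{-1}(z), represented as frozensets."""
    classes = {}                      # z -> list of x with h(x) == z
    for x in X:
        classes.setdefault(h(x), []).append(x)
    f = {x: frozenset(classes[h(x)]) for x in X}        # x -> its class in Y
    g = {frozenset(xs): z for z, xs in classes.items()}  # class -> its h-value
    return f, g

h = lambda x: x % 3
f, g = decompose(h, range(9))
assert all(g[f[x]] == h(x) for x in range(9))  # h = g o f
assert len(set(g.values())) == len(g)          # g is injective
```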
=== Induced surjection and induced bijection ===
Any function induces a surjection by restricting its codomain to its range. Any surjective function induces a bijection defined on a quotient of its domain by collapsing all arguments mapping to a given fixed image. More precisely, every surjection f : A → B can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class [x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP o P(~).
== The set of surjections ==
Given fixed finite sets A and B, one can form the set of surjections A ↠ B. The cardinality of this set is one of the twelve aspects of Rota's Twelvefold way, and is given by
{\textstyle |B|!{\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}}, where {\textstyle {\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}} denotes a Stirling number of the second kind.
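The count can be checked by brute force for small sets. A sketch using the standard recurrence for Stirling numbers of the second kind, S(n, k) = k·S(n−1, k) + S(n−1, k−1):

```python
import math
from itertools import product

def stirling2(n, k):
    """Stirling number of the second kind via the standard recurrence."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def count_surjections(a, b):
    """|B|! * S(|A|, |B|): number of surjections from an a-set onto a b-set."""
    return math.factorial(b) * stirling2(a, b)

# Brute-force check for |A| = 4, |B| = 2: a function is a 4-tuple of images.
brute = sum(1 for fn in product(range(2), repeat=4) if set(fn) == {0, 1})
assert count_surjections(4, 2) == brute == 14
```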
== Gallery ==
== See also ==
Bijection, injection and surjection
Cover (algebra)
Covering map
Enumeration
Fiber bundle
Index set
Section (category theory)
== References ==
== Further reading ==
Bourbaki, N. (2004) [1968]. Theory of Sets. Elements of Mathematics. Vol. 1. Springer. doi:10.1007/978-3-642-59309-3. ISBN 978-3-540-22525-6. LCCN 2004110815.
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or of failing to produce a result, either by signaling a failure or by failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.
In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees which may depend on the existence of an ideal true random number generator.
== Motivation ==
As a motivating example, consider the problem of finding an ‘a’ in an array of n elements.
Input: An array of n≥2 elements, in which half are ‘a’s and the other half are ‘b’s.
Output: Find an ‘a’ in the array.
We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
Las Vegas algorithm:
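A minimal sketch of the Las Vegas version, which probes random positions until an ‘a’ turns up (the function name is illustrative):

```python
import random

def find_a_las_vegas(arr):
    """Always returns the index of an 'a'; only the running time is random.
    With half the entries equal to 'a', the expected number of probes is 2."""
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i

arr = ['a', 'b'] * 8
assert arr[find_a_las_vegas(arr)] == 'a'
```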
This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is
{\displaystyle \lim _{n\to \infty }\sum _{i=1}^{n}{\frac {i}{2^{i}}}=2}
Since it is constant, the expected run time over many calls is {\displaystyle \Theta (1)}. (See Big Theta notation)
Monte Carlo algorithm:
If an ‘a’ is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an ‘a’ is:
{\displaystyle \Pr[\mathrm {find~an~'a'} ]=1-(1/2)^{k}}
This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to k. Taking k to be constant, the run time (expected and absolute) is {\displaystyle \Theta (1)}.
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)) such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
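The Monte-Carlo-to-Las-Vegas direction can be sketched generically: given a verifier, rerun the Monte Carlo routine until its answer checks out. The wrapper and example below are illustrative, not part of the article:

```python
import random

def to_las_vegas(monte_carlo, verify):
    """Repeat a Monte Carlo routine until a verifier accepts its answer.
    Correctness becomes certain; the running time becomes a random variable."""
    while True:
        answer = monte_carlo()
        if answer is not None and verify(answer):
            return answer

arr = ['a', 'b'] * 8
mc = lambda: random.randrange(len(arr))   # one random probe; often wrong
answer = to_las_vegas(mc, lambda i: arr[i] == 'a')
assert arr[answer] == 'a'
```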
== Computational complexity ==
Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.
The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
== Early history ==
=== Sorting ===
Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961. In the same year, Hoare published the quickselect algorithm, which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.
=== Number theory ===
In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers.
In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field. In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.
=== Data structures ===
One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM. Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists. Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing, although Andrey Ershov independently had the same idea in 1957. In 1962, Donald Knuth performed the first correct analysis of linear probing, although the memorandum containing his analysis was not published until much later. The first published analysis was due to Konheim and Weiss in 1966.
Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random. In 1979, Carter and Wegman introduced universal hash functions, which they showed could be used to implement chained hash tables with constant expected time per operation.
Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter. In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap. In the same year, William Pugh introduced another randomized search tree known as the skip list.
=== Implicit uses in combinatorics ===
Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method. Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs. He famously used a more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.
== Examples ==
=== Quicksort ===
Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require O(n²) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
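A minimal Python sketch of the randomized variant (written out-of-place for clarity, rather than with the usual in-place partitioning):

```python
import random

def randomized_quicksort(a):
    """Quicksort with pivots chosen uniformly at random: no fixed input
    can force the O(n^2) worst case, and the expected running time is
    O(n log n) for every input."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)  # the randomized pivot selection
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

An already sorted array, the classic bad case for first-element pivoting, is no worse than any other input here.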
=== Randomized incremental constructions in geometry ===
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.
=== Min cut ===
Input: A graph G(V,E)
Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.
Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u′ with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of the contraction of vertices A and B.
After contraction, the resulting graph may have parallel edges, but contains no self-loops.
Karger's basic algorithm:
begin
    i = 1
    repeat
        repeat
            take a random edge (u,v) ∈ E in G
            replace u and v with the contraction u'
        until only 2 nodes remain
        obtain the corresponding cut result Ci
        i = i + 1
    until i = m
    output the minimum cut among C1, C2, ..., Cm
end
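The pseudocode above can be realized directly. A Python sketch (the function name is our own; the multigraph is an edge list, and contractions are tracked with a union-find-style leader array):

```python
import random

def karger_min_cut(num_vertices, edges, num_trials):
    """Karger's contraction algorithm, repeated num_trials times; returns
    the size of the smallest cut found. `edges` is a list of (u, v) pairs
    over vertices 0..num_vertices-1; the graph is assumed connected.
    Parallel edges are allowed; self-loops are discarded as they appear."""
    best = len(edges)
    for _ in range(num_trials):
        leader = list(range(num_vertices))  # each vertex starts as its own group

        def find(x):
            while leader[x] != x:
                leader[x] = leader[leader[x]]  # path compression
                x = leader[x]
            return x

        remaining = num_vertices
        pool = list(edges)  # edges whose endpoints lie in different groups
        while remaining > 2:
            u, v = random.choice(pool)      # take a random edge (u, v)
            ru, rv = find(u), find(v)
            if ru != rv:
                leader[ru] = rv             # contract: merge the two groups
                remaining -= 1
            # Drop edges that became self-loops under the contraction.
            pool = [e for e in pool if find(e[0]) != find(e[1])]
        best = min(best, len(pool))  # edges crossing the 2 remaining groups
    return best
```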
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is O(n), where n denotes the number of vertices.
After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm. After execution, we get a cut of size 3.
==== Analysis of algorithm ====
The probability that the algorithm succeeds is 1 − the probability that all attempts fail. By independence, the probability that all attempts fail is
{\displaystyle \prod _{i=1}^{m}\Pr(C_{i}\neq C)=\prod _{i=1}^{m}(1-\Pr(C_{i}=C)).}
By lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0, 1, …, n − 3}. Gj has n − j vertices. We use the chain rule of conditional probabilities.
The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is
{\displaystyle 1-{\frac {k}{|E(G_{j})|}}.}
Note that Gj still has a min cut of size k, so by Lemma 2, it still has at least
{\displaystyle {\frac {(n-j)k}{2}}}
edges.
Thus,
{\displaystyle 1-{\frac {k}{|E(G_{j})|}}\geq 1-{\frac {2}{n-j}}={\frac {n-j-2}{n-j}}.}
So by the chain rule, the probability of finding the min cut C is
{\displaystyle \Pr[C_{i}=C]\geq \left({\frac {n-2}{n}}\right)\left({\frac {n-3}{n-1}}\right)\left({\frac {n-4}{n-2}}\right)\ldots \left({\frac {3}{5}}\right)\left({\frac {2}{4}}\right)\left({\frac {1}{3}}\right).}
Cancellation gives
{\displaystyle \Pr[C_{i}=C]\geq {\frac {2}{n(n-1)}}.}
Thus the probability that the algorithm succeeds is at least
{\displaystyle 1-\left(1-{\frac {2}{n(n-1)}}\right)^{m}.}
For m = n(n − 1)/2 · ln n, this is at least 1 − 1/n. The algorithm therefore finds the min cut with probability at least 1 − 1/n, in time O(mn) = O(n³ log n).
== Derandomization ==
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.
There are specific methods that can be employed to derandomize particular randomized algorithms:
the method of conditional probabilities, and its generalization, pessimistic estimators
discrepancy theory (which is used to derandomize geometric algorithms)
the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing
the use of expander graphs (or dispersers in general) to amplify a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and make the algorithm deterministic (e.g. randomized graph algorithms)
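To illustrate the first of these methods, consider the classic random-cut argument: placing each vertex on a uniformly random side of a cut means each edge is cut with probability 1/2, so the expected cut size is |E|/2. The method of conditional probabilities fixes the vertices one at a time, each time choosing the side that does not decrease the conditional expectation. A Python sketch (an illustrative application with names of our own, not taken from a specific source):

```python
def derandomized_max_cut(num_vertices, edges):
    """Derandomizes the random-cut argument by the method of conditional
    probabilities: vertices are fixed one at a time, each placed opposite
    the majority of its already-placed neighbours. Since the conditional
    expectation never drops below |E|/2, the final cut has size at least
    |E|/2. Returns (assignment, cut_size)."""
    side = {}
    for v in range(num_vertices):
        # Edges from v to already-placed vertices, split by side.
        to_left = sum(1 for a, b in edges
                      if (a == v and side.get(b) == "L")
                      or (b == v and side.get(a) == "L"))
        to_right = sum(1 for a, b in edges
                       if (a == v and side.get(b) == "R")
                       or (b == v and side.get(a) == "R"))
        # Placing v opposite the majority cuts at least half of these edges.
        side[v] = "R" if to_left >= to_right else "L"
    cut_size = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut_size
```

The algorithm is fully deterministic, yet inherits the |E|/2 guarantee of the randomized argument it replaces.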
== Where randomness helps ==
When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random-access machine requires 2^(k−1) lookups in the worst case to find the index of an a; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
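A minimal sketch of this example (function and variable names are our own): a deterministic left-to-right scan may have to skip all 2^(k−1) b's first, while uniform random probing succeeds after 2 lookups in expectation, regardless of where the a's sit:

```python
import random

def find_a_random(s):
    """Probe uniformly random indices until an 'a' is found. When half of
    the characters are 'a', each probe succeeds with probability 1/2, so
    the expected number of lookups is 2, independent of the input length
    or arrangement."""
    n = len(s)
    lookups = 0
    while True:
        i = random.randrange(n)
        lookups += 1
        if s[i] == "a":
            return i, lookups

# Worst case for a deterministic left-to-right scan: all b's come first.
s = "b" * 8 + "a" * 8
```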
The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (or Probably Approximately Correct Computation (PACC)). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization.
In communication complexity, the equality of two strings can be verified to some reliability using log n bits of communication with a randomized protocol. Any deterministic protocol requires Θ(n) bits if defending against a strong opponent.
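One standard protocol of this kind compares fingerprints modulo a random prime: equal strings always agree, while unequal strings agree for only a few primes, so the error probability is small. A hedged Python sketch (the parameter choices, such as drawing primes near n², are illustrative):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin randomized primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def fingerprint_equal(x_bits, y_bits):
    """Randomized equality check: interpret each bit string as an integer
    and compare the two values modulo a random prime of O(log n) bits.
    Equal strings always pass; unequal strings fail with high probability,
    since their difference has few prime factors in the sampled range."""
    n = max(len(x_bits), len(y_bits), 2)
    lo, hi = n ** 2, 2 * n ** 2  # primes around n^2 give error O(1/n)
    while True:
        p = random.randrange(lo, hi)
        if is_probable_prime(p):
            break
    return int(x_bits, 2) % p == int(y_bits, 2) % p
```

Only the prime p and one residue need to be communicated, which is O(log n) bits rather than the Θ(n) bits a deterministic protocol would need.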
The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
== See also ==
Approximate counting algorithm
Atlantic City algorithm
Bogosort
Count–min sketch
HyperLogLog
Karger's algorithm
Las Vegas algorithm
Monte Carlo algorithm
Principle of deferred decision
Probabilistic analysis of algorithms
Probabilistic roadmap
Randomized algorithms as zero-sum games
== Notes ==
== References ==
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 1990. ISBN 0-262-03293-7. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp. 91–122.
Dirk Draheim. "Semantics of the Probabilistic Typed Lambda Calculus (Markov Chain Semantics, Termination Behavior, and Denotational Semantics)." Springer, 2017.
Jon Kleinberg and Éva Tardos. Algorithm Design. Chapter 13: "Randomized algorithms".
Fallis, D. (2000). "The reliability of randomized algorithms". The British Journal for the Philosophy of Science. 51 (2): 255–271. doi:10.1093/bjps/51.2.255.
M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, New York (NY), 2005.
Rajeev Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York (NY), 1995.
Rajeev Motwani and P. Raghavan. A survey on Randomized Algorithms.
Christos Papadimitriou (1993), Computational Complexity (1st ed.), Addison Wesley, ISBN 978-0-201-53082-7 Chapter 11: Randomized computation, pp. 241–278.
Rabin, Michael O. (1980). "Probabilistic algorithm for testing primality". Journal of Number Theory. 12: 128–138. doi:10.1016/0022-314X(80)90084-0.
A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.
"Randomized Algorithms for Scientific Computing" (RASC), OSTI.GOV (July 10th, 2021). | Wikipedia/Randomized_algorithm |
In proof theory, a branch of mathematical logic, elementary function arithmetic (EFA), also called elementary arithmetic and exponential function arithmetic, is the system of arithmetic with the usual elementary properties of 0, 1, +, ×, and x^y, together with induction for formulas with bounded quantifiers.
EFA is a very weak logical system, whose proof-theoretic ordinal is ω³, but it still seems able to prove much of ordinary mathematics that can be stated in the language of first-order arithmetic.
== Definition ==
EFA is a system in first-order logic (with equality). Its language contains:
two constants 0, 1,
three binary operations +, ×, exp, with exp(x, y) usually written as x^y,
a binary relation symbol < (this is not really necessary, as it can be written in terms of the other operations and is sometimes omitted, but it is convenient for defining bounded quantifiers).
Bounded quantifiers are those of the form ∀(x < y) and ∃(x < y), which are abbreviations for ∀x (x < y → …) and ∃x (x < y ∧ …) in the usual way.
The axioms of EFA are:
The axioms of Robinson arithmetic for 0, 1, +, ×, <.
The axioms for exponentiation: x^0 = 1, x^(y+1) = x^y × x.
Induction for formulas all of whose quantifiers are bounded (but which may contain free variables).
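Spelled out, one instance of the induction scheme (a standard rendering; φ is any formula all of whose quantifiers are bounded, possibly with free parameters) reads:

```latex
% Induction for a bounded formula \varphi (free variables allowed):
\bigl(\varphi(0)\;\land\;\forall x\,(\varphi(x)\rightarrow\varphi(x+1))\bigr)
  \;\rightarrow\; \forall x\,\varphi(x)
```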
== Friedman's grand conjecture ==
Harvey Friedman's grand conjecture implies that many mathematical theorems, such as Fermat's Last Theorem, can be proved in very weak systems such as EFA.
The original statement of the conjecture from Friedman (1999) is:
"Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in EFA. EFA is the weak fragment of Peano Arithmetic based on the usual quantifier-free axioms for 0, 1, +, ×, exp, together with the scheme of induction for all formulas in the language all of whose quantifiers are bounded."
While it is easy to construct artificial arithmetical statements that are true but not provable in EFA, the point of Friedman's conjecture is that natural examples of such statements in mathematics seem to be rare. Some natural examples include consistency statements from logic, several statements related to Ramsey theory such as the Szemerédi regularity lemma, and the graph minor theorem.
== Related systems ==
Several related systems of arithmetic have similar properties to EFA:
One can omit the binary function symbol exp from the language, by taking Robinson arithmetic together with induction for all formulas with bounded quantifiers and an axiom stating roughly that exponentiation is a function defined everywhere. This is similar to EFA and has the same proof theoretic strength, but is more cumbersome to work with.
There are weak fragments of second-order arithmetic called RCA₀* and WKL₀* that are conservative over EFA for Π⁰₂ sentences (i.e., any Π⁰₂ sentence proven by RCA₀* or WKL₀* is already proven by EFA). In particular, they are conservative for consistency statements. These fragments are sometimes studied in reverse mathematics (Simpson 2009).
Elementary recursive arithmetic (ERA) is a subsystem of primitive recursive arithmetic (PRA) in which recursion is restricted to bounded sums and products. This also has the same Π⁰₂ sentences as EFA, in the sense that whenever EFA proves ∀x∃y P(x,y), with P quantifier-free, ERA proves the open formula P(x,T(x)), with T a term definable in ERA. Like PRA, ERA can be defined in an entirely logic-free manner, with just the rules of substitution and induction, and defining equations for all elementary recursive functions. Unlike PRA, however, the elementary recursive functions can be characterized by the closure under composition and projection of a finite number of basis functions, and thus only a finite number of defining equations are needed.
== See also ==
Elementary function – A kind of mathematical function
Grzegorczyk hierarchy – Functions in computability theory
Reverse mathematics – Branch of mathematical logic
Ordinal analysis – Mathematical technique used in proof theory
Tarski's high school algebra problem – Mathematical problem
== References ==
Avigad, Jeremy (2003), "Number theory and elementary arithmetic", Philosophia Mathematica, Series III, 11 (3): 257–284, doi:10.1093/philmat/11.3.257, ISSN 0031-8019, MR 2006194
Friedman, Harvey (1999), grand conjectures
Simpson, Stephen G. (2009), Subsystems of second order arithmetic, Perspectives in Logic (2nd ed.), Cambridge University Press, ISBN 978-0-521-88439-6, MR 1723993 | Wikipedia/Elementary_function_arithmetic |
Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory (MLTT)) is a type theory and an alternative foundation of mathematics.
Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory, and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.
== Design ==
Martin-Löf designed the type theory on the principles of mathematical constructivism. Constructivism requires any existence proof to contain a "witness". So, any proof of "there exists a prime greater than 1000" must identify a specific number that is both prime and greater than 1000. Intuitionistic type theory accomplished this design goal by internalizing the BHK interpretation. A useful consequence is that proofs become mathematical objects that can be examined, compared, and manipulated.
Intuitionistic type theory's type constructors were built to follow a one-to-one correspondence with logical connectives. For example, the logical connective called implication (A ⟹ B) corresponds to the type of a function (A → B). This correspondence is called the Curry–Howard isomorphism. Prior type theories had also followed this isomorphism, but Martin-Löf's was the first to extend it to predicate logic by introducing dependent types.
== Type theory ==
A type theory is a kind of mathematical ontology, or foundation, describing the fundamental objects that exist. In the standard foundation, set theory combined with mathematical logic, the fundamental object is the set, which is a container that contains elements. In type theory, the fundamental object is the term, each of which belongs to one and only one type.
Intuitionistic type theory has three finite types, which are then composed using five different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's. So, each feature of the type theory does double duty as a feature of both math and logic.
=== 0 type, 1 type and 2 type ===
There are three finite types: The 0 type contains no terms. The 1 type contains one canonical term. The 2 type contains two canonical terms.
Because the 0 type contains no terms, it is also called the empty type. It is used to represent anything that cannot exist. It is also written ⊥ and represents anything unprovable (that is, a proof of it cannot exist). As a result, negation is defined as a function to it: ¬A := A → ⊥.
Likewise, the 1 type contains one canonical term and represents existence. It also is called the unit type.
Finally, the 2 type contains two canonical terms. It represents a definite choice between two values. It is used for Boolean values but not propositions.
Propositions are instead represented by particular types. For instance, a true proposition can be represented by the 1 type, while a false proposition can be represented by the 0 type. But we cannot assert that these are the only propositions, i.e. the law of excluded middle does not hold for propositions in intuitionistic type theory.
=== Σ type constructor ===
Σ-types contain ordered pairs. As with typical ordered pair (or 2-tuple) types, a Σ-type can describe the Cartesian product,
A
×
B
{\displaystyle A\times B}
, of two other types,
A
{\displaystyle A}
and
B
{\displaystyle B}
. Logically, such an ordered pair would hold a proof of
A
{\displaystyle A}
and a proof of
B
{\displaystyle B}
, so one may see such a type written as
A
∧
B
{\displaystyle A\wedge B}
.
Σ-types are more powerful than typical ordered pair types because of dependent typing. In the ordered pair, the type of the second term can depend on the value of the first term. For example, the first term of the pair might be a natural number and the second term's type might be a sequence of reals of length equal to the first term. Such a type would be written:
{\displaystyle \sum _{n{\mathbin {:}}{\mathbb {N} }}\operatorname {Vec} ({\mathbb {R} },n)}
Using set-theory terminology, this is similar to an indexed disjoint union of sets. In the case of the usual Cartesian product, the type of the second term does not depend on the value of the first term. Thus the type describing the Cartesian product ℕ × ℝ is written:
{\displaystyle \sum _{n{\mathbin {:}}{\mathbb {N} }}{\mathbb {R} }}
It is important to note here that the value of the first term, n, is not depended on by the type of the second term, ℝ.
Σ-types can be used to build up longer dependently-typed tuples used in mathematics and the records or structs used in most programming languages. An example of a dependently-typed 3-tuple is two integers and a proof that the first integer is smaller than the second integer, described by the type:
{\displaystyle \sum _{m{\mathbin {:}}{\mathbb {Z} }}{\sum _{n{\mathbin {:}}{\mathbb {Z} }}((m<n)={\text{True}})}}
Dependent typing allows Σ-types to serve the role of the existential quantifier. The statement "there exists an n of type ℕ, such that P(n) is proven" becomes the type of ordered pairs where the first item is the value n of type ℕ and the second item is a proof of P(n). Notice that the type of the second item (proofs of P(n)) depends on the value in the first part of the ordered pair (n). Its type would be:
{\displaystyle \sum _{n{\mathbin {:}}{\mathbb {N} }}P(n)}
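This reading is direct in a modern proof assistant. A hedged Lean 4 sketch (the name is our own), using the subtype notation { n : Nat // P n } to play the role of a Σ-type pairing a witness with a proof:

```lean
-- "There exists an n : Nat with n > 1000" is proved by exhibiting a
-- concrete witness together with a proof about that witness.
def witnessed : { n : Nat // n > 1000 } := ⟨1001, by decide⟩

#check witnessed.val       -- the witness, 1001
#check witnessed.property  -- the proof that the witness satisfies n > 1000
```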
=== Π type constructor ===
Π-types contain functions. As with typical function types, they consist of an input type and an output type. They are more powerful than typical function types, however, in that the return type can depend on the input value. Functions in type theory are different from those in set theory. In set theory, you look up the argument's value in a set of ordered pairs. In type theory, the argument is substituted into a term and then computation ("reduction") is applied to the term.
As an example, the type of a function that, given a natural number n, returns a vector containing n real numbers is written:
{\displaystyle \prod _{n{\mathbin {:}}{\mathbb {N} }}\operatorname {Vec} ({\mathbb {R} },n)}
When the output type does not depend on the input value, the function type is often simply written with a →. Thus, ℕ → ℝ is the type of functions from natural numbers to real numbers. Such Π-types correspond to logical implication. The logical proposition A ⟹ B corresponds to the type A → B, containing functions that take proofs-of-A and return proofs-of-B. This type could be written more consistently as:
{\displaystyle \prod _{a{\mathbin {:}}A}B}
Π-types are also used in logic for universal quantification. The statement "for every n of type ℕ, P(n) is proven" becomes a function from n of type ℕ to proofs of P(n). Thus, given the value for n, the function generates a proof that P(·) holds for that value. The type would be:
{\displaystyle \prod _{n{\mathbin {:}}{\mathbb {N} }}P(n)}
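A hedged Lean 4 sketch of this reading of Π-types (the name is our own): the term below is a function turning each natural number n into a proof of a statement about n:

```lean
-- A Π-type as universal quantification: for every n : Nat, n ≤ n + 1.
def everyNatLeSucc : (n : Nat) → n ≤ n + 1 :=
  fun n => Nat.le_succ n
```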
=== = type constructor ===
=-types are created from two terms. Given two terms like 2 + 2 and 2 ⋅ 2, you can create a new type 2 + 2 = 2 ⋅ 2. The terms of that new type represent proofs that the pair reduce to the same canonical term. Thus, since both 2 + 2 and 2 ⋅ 2 compute to the canonical term 4, there will be a term of the type 2 + 2 = 2 ⋅ 2. In intuitionistic type theory, there is a single way to introduce =-types, and that is by reflexivity:
{\displaystyle \operatorname {refl} {\mathbin {:}}\prod _{a{\mathbin {:}}A}(a=a).}
It is possible to create =-types such as 1 = 2 where the terms do not reduce to the same canonical term, but you will be unable to create terms of that new type. In fact, if you were able to create a term of 1 = 2, you could create a term of ⊥. Putting that into a function would generate a function of type 1 = 2 → ⊥. Since … → ⊥ is how intuitionistic type theory defines negation, you would have ¬(1 = 2) or, finally, 1 ≠ 2.
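Both behaviors can be checked in a proof assistant. A hedged Lean 4 sketch:

```lean
-- Both sides compute to the canonical term 4, so reflexivity applies.
example : 2 + 2 = 2 * 2 := rfl

-- The type 1 = 2 exists but is uninhabited; its negation 1 ≠ 2,
-- i.e. (1 = 2) → ⊥, is provable.
example : 1 ≠ 2 := by decide
```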
Equality of proofs is an area of active research in proof theory and has led to the development of homotopy type theory and other type theories.
=== Inductive types ===
Inductive types allow the creation of complex, self-referential types. For example, a linked list of natural numbers is either an empty list or a pair of a natural number and another linked list. Inductive types can be used to define unbounded mathematical structures like trees, graphs, etc. In fact, the natural numbers type may be defined as an inductive type, either being 0 or the successor of another natural number.
Inductive types define new constants, such as zero 0 : ℕ and the successor function S : ℕ → ℕ. Since S does not have a definition and cannot be evaluated using substitution, terms like S0 and SSS0 become the canonical terms of the natural numbers.
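A hedged Lean 4 sketch of such an inductive definition (using our own type N rather than the built-in naturals):

```lean
-- zero and succ are new constants with no defining equations, so terms
-- like succ (succ zero) are canonical: they cannot be reduced further.
inductive N where
  | zero : N
  | succ : N → N

#check N.succ (N.succ N.zero)  -- the canonical term for 2
```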
Proofs on inductive types are made possible by induction. Each new inductive type comes with its own inductive rule. To prove a predicate P(·) for every natural number, you use the following rule:
{\displaystyle {\operatorname {{\mathbb {N} }-elim} }\,{\mathbin {:}}P(0)\,\to \left(\prod _{n{\mathbin {:}}{\mathbb {N} }}P(n)\to P(S(n))\right)\to \prod _{n{\mathbin {:}}{\mathbb {N} }}P(n)}
Inductive types in intuitionistic type theory are defined in terms of W-types, the type of well-founded trees. Later work in type theory generated coinductive types, induction-recursion, and induction-induction for working on types with more obscure kinds of self-referentiality. Higher inductive types allow equality to be defined between terms.
=== Universe types ===
The universe types allow proofs to be written about all the types created with the other type constructors. Every term in the universe type 𝒰₀ can be mapped to a type created with any combination of 0, 1, 2, Σ, Π, =, and the inductive type constructor. However, to avoid paradoxes, there is no term in 𝒰ₙ that maps to 𝒰ₙ for any n ∈ ℕ.
To write proofs about all "the small types" and 𝒰₀, you must use 𝒰₁, which does contain a term for 𝒰₀, but not for itself 𝒰₁. Similarly, for 𝒰₂. There is a predicative hierarchy of universes, so to quantify a proof over any fixed number k of universes, you can use 𝒰ₖ₊₁.
Universe types are a tricky feature of type theories. Martin-Löf's original type theory had to be changed to account for Girard's paradox. Later research covered topics such as "super universes", "Mahlo universes", and impredicative universes.
== Judgements ==
The formal definition of intuitionistic type theory is written using judgements. For example, in the statement "if A is a type and B is a type then ∑_{a:A} B is a type" there are judgements of "is a type", "and", and "if ... then ...". The expression ∑_{a:A} B is not a judgement; it is the type being defined.
This second level of the type theory can be confusing, particularly where it comes to equality. There is a judgement of term equality, which might say 4 = 2 + 2. It is a statement that two terms reduce to the same canonical term. There is also a judgement of type equality, say that A = B, which means every element of A is an element of the type B and vice versa. At the type level, there is a type 4 = 2 + 2, and it contains terms if there is a proof that 4 and 2 + 2 reduce to the same value. (Terms of this type are generated using the term-equality judgement.) Lastly, there is an English-language level of equality, because we use the word "four" and symbol "4" to refer to the canonical term SSSS0. Synonyms like these are called "definitionally equal" by Martin-Löf.
The description of judgements below is based on the discussion in Nordström, Petersson, and Smith.
The formal theory works with types and objects.
A type is declared by: A Type
An object exists and is in a type if: a : A
Objects can be equal, a = b, and types can be equal, A = B.
A type that depends on an object from another type is declared (x : A) B and removed by substitution B[x/a], replacing the variable x with the object a in B.
An object that depends on an object from another type can be expressed in two ways. If the object is "abstracted", then it is written [x]b and removed by substitution b[x/a], replacing the variable x with the object a in b.
The object-depending-on-object can also be declared as a constant as part of a recursive type. An example of a recursive type is:
0 : ℕ
S : ℕ → ℕ
Here, S is a constant object-depending-on-object. It is not associated with an abstraction. Constants like S can be removed by defining equality. Here the relationship with addition is defined using equality and using pattern matching to handle the recursive aspect of S:
add : (ℕ × ℕ) → ℕ
add(0, b) = b
add(S(a), b) = S(add(a, b))
S is manipulated as an opaque constant; it has no internal structure for substitution.
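This recursive definition can be sketched in a proof assistant. The following uses Lean 4 notation (an assumption; the text itself fixes no concrete language), with pattern matching playing exactly the role described:

```lean
-- Peano naturals: the constants 0 : ℕ and S : ℕ → ℕ from the text.
inductive N where
  | zero : N
  | succ : N → N

-- add defined by the two equations add(0, b) = b and
-- add(S(a), b) = S(add(a, b)); recursion is on the first argument.
def add : N → N → N
  | N.zero,   b => b
  | N.succ a, b => N.succ (add a b)
```

As in the text, `N.succ` has no definition of its own: it is an opaque constructor, and `add` is computed by matching on which constructor built its first argument.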
So, objects, types, and these relations are used to express formulae in the theory. Further styles of judgements are used to create new objects, types, and relations from existing ones.
By convention, there is a type that represents all other types. It is called U (or Set). Since U is a type, the members of it are objects. There is a dependent type El that maps each object to its corresponding type. In most texts El is never written. From the context of the statement, a reader can almost always tell whether A refers to a type, or whether it refers to the object in U that corresponds to the type.
This is the complete foundation of the theory. Everything else is derived.
To implement logic, each proposition is given its own type. The objects in those types represent the different possible ways to prove the proposition. If there is no proof for the proposition, then the type has no objects in it. Operators like "and" and "or" that work on propositions introduce new types and new objects. So A × B is a type that depends on the type A and the type B. The objects in that dependent type are defined to exist for every pair of objects in A and B. If either A or B has no proof and is therefore an empty type, then the new type representing A × B is also empty.
This can be done for other types (booleans, natural numbers, etc.) and their operators.
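A minimal sketch of this propositions-as-types reading, in Lean 4 (an assumed notation; the names `pairProof` and `left` are illustrative, not from the source):

```lean
-- A proof of "A and B" is literally a pair of proofs; if either side
-- is empty (unprovable), no such pair can be formed.
def pairProof {A B : Prop} (a : A) (b : B) : A ∧ B := ⟨a, b⟩

-- Taking the pair apart proves a conjunct.
def left {A B : Prop} (h : A ∧ B) : A := h.left
```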
== Categorical models of type theory ==
Using the language of category theory, R. A. G. Seely introduced the notion of a locally cartesian closed category (LCCC) as the basic model of type theory. This has been refined by Hofmann and Dybjer to Categories with Families or Categories with Attributes based on earlier work by Cartmell.
A category with families is a category C of contexts (in which the objects are contexts, and the context morphisms are substitutions), together with a functor T : C^op → Fam(Set).
Fam(Set) is the category of families of sets, in which objects are pairs (A, B) of an "index set" A and a function B : X → A, and morphisms are pairs of functions f : A → A′ and g : X → X′, such that B′ ∘ g = f ∘ B – in other words, f maps Ba to Bg(a).
The functor T assigns to a context G a set Ty(G) of types, and for each A : Ty(G), a set Tm(G, A) of terms. The axioms for a functor require that these play harmoniously with substitution. Substitution is usually written in the form Af or af, where A is a type in Ty(G) and a is a term in Tm(G, A), and f is a substitution from D to G. Here Af : Ty(D) and af : Tm(D, Af).
The category C must contain a terminal object (the empty context), and a final object for a form of product called comprehension, or context extension, in which the right element is a type in the context of the left element. If G is a context and A : Ty(G), then there should be an object (G, A) final among contexts D with mappings p : D → G, q : Tm(D, Ap).
A logical framework, such as Martin-Löf's, takes the form of closure conditions on the context-dependent sets of types and terms: that there should be a type called Set, and for each set a type, that the types should be closed under forms of dependent sum and product, and so forth.
A theory such as that of predicative set theory expresses closure conditions on the types of sets and their elements: that they should be closed under operations that reflect dependent sum and product, and under various forms of inductive definition.
== Extensional versus intensional ==
A fundamental distinction is extensional vs. intensional type theory. In extensional type theory, definitional (i.e., computational) equality is not distinguished from propositional equality, which requires proof. As a consequence, type checking becomes undecidable in extensional type theory, because programs in the theory might not terminate. For example, such a theory allows one to give a type to the Y-combinator; a detailed example of this can be found in Nordström, Petersson, and Smith's Programming in Martin-Löf's Type Theory. However, this does not prevent extensional type theory from being a basis for a practical tool; for example, Nuprl is based on extensional type theory.
In contrast, in intensional type theory type checking is decidable, but the representation of standard mathematical concepts is somewhat more cumbersome, since intensional reasoning requires using setoids or similar constructions. There are many common mathematical objects that are hard to work with or cannot be represented without setoids, for example, integers, rational numbers, and real numbers. Integers and rational numbers can be represented without setoids, but the representation is difficult to work with. Cauchy real numbers cannot be represented without setoids at all.
Homotopy type theory works on resolving this problem. It allows one to define higher inductive types, which not only define first-order constructors (values or points), but higher-order constructors, i.e. equalities between elements (paths), equalities between equalities (homotopies), ad infinitum.
== Implementations of type theory ==
Different forms of type theory have been implemented as the formal systems underlying a number of proof assistants. While many are based on Per Martin-Löf's ideas, many have added features, more axioms, or a different philosophical background. For instance, the Nuprl system is based on computational type theory and Coq is based on the calculus of (co)inductive constructions. Dependent types also feature in the design of programming languages such as ATS, Cayenne, Epigram, Agda, and Idris.
== Martin-Löf type theories ==
Per Martin-Löf constructed several type theories that were published at various times, some of them much later than when the preprints with their description became accessible to specialists (among others Jean-Yves Girard and Giovanni Sambin). The list below attempts to list all the theories that have been described in a printed form and to sketch the key features that distinguished them from each other. All of these theories had dependent products, dependent sums, disjoint unions, finite types and natural numbers. All the theories had the same reduction rules that did not include η-reduction either for dependent products or for dependent sums, except for MLTT79 where the η-reduction for dependent products is added.
MLTT71 was the first type theory created by Per Martin-Löf. It appeared in a preprint in 1971. It had one universe, but this universe had a name in itself, i.e., it was a type theory with, as it is called today, "Type in Type". Jean-Yves Girard has shown that this system was inconsistent, and the preprint was never published.
MLTT72 was presented in a 1972 preprint that has now been published. That theory had one universe V and no identity types (=-types). The universe was "predicative" in the sense that the dependent product of a family of objects from V over an object that was not in V such as, for example, V itself, was not assumed to be in V. The universe was à la Russell's Principia Mathematica, i.e., one would write directly "T∈V" and "t∈T" (Martin-Löf uses the sign "∈" instead of modern ":") without an added constructor such as "El".
MLTT73 was the first definition of a type theory that Per Martin-Löf published (it was presented at the Logic Colloquium '73 and published in 1975). There are identity types, which he describes as "propositions", but since no real distinction between propositions and the rest of the types is introduced the meaning of this is unclear. There is what later acquires the name of J-eliminator but yet without a name (see pp. 94–95). There is in this theory an infinite sequence of universes V0, ..., Vn, ... . The universes are predicative, à la Russell and non-cumulative. In fact, Corollary 3.10 on p. 115 says that if A∈Vm and B∈Vn are such that A and B are convertible then m = n.
MLTT79 was presented in 1979 and published in 1982. In this paper, Martin-Löf introduced the four basic types of judgement for the dependent type theory that has since become fundamental in the study of the meta-theory of such systems. He also introduced contexts as a separate concept in it (see p. 161). There are identity types with the J-eliminator (which already appeared in MLTT73 but did not have this name there) but also with the rule that makes the theory "extensional" (p. 169). There are W-types. There is an infinite sequence of predicative universes that are cumulative.
Bibliopolis: there is a discussion of a type theory in the Bibliopolis book from 1984, but it is somewhat open-ended and does not seem to represent a particular set of choices and so there is no specific type theory associated with it.
== See also ==
Intuitionistic logic
Typed lambda calculus
== Notes ==
== References ==
Martin-Löf, Per; Sambin, Giovanni (1984). Intuitionistic type theory (PDF). Napoli: Bibliopolis. ISBN 978-8870881059. OCLC 12731401.
== Further reading ==
Per Martin-Löf's Notes, as recorded by Giovanni Sambin (1980)
Nordström, Bengt; Petersson, Kent; Smith, Jan M. (1990). Programming in Martin-Löf's Type Theory. Oxford University Press. ISBN 9780198538141.
Thompson, Simon (1991). Type Theory and Functional Programming. Addison-Wesley. ISBN 0-201-41667-0.
Granström, Johan G. (2011). Treatise on Intuitionistic Type Theory. Springer. ISBN 978-94-007-1735-0.
== External links ==
EU Types Project: Tutorials – lecture notes and slides from the Types Summer School 2005
n-Categories - Sketch of a Definition – letter from John Baez and James Dolan to Ross Street, November 29, 1995
Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory (MLTT)) is a type theory and an alternative foundation of mathematics.
Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory, and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.
== Design ==
Martin-Löf designed the type theory on the principles of mathematical constructivism. Constructivism requires any existence proof to contain a "witness". So, any proof of "there exists a prime greater than 1000" must identify a specific number that is both prime and greater than 1000. Intuitionistic type theory accomplished this design goal by internalizing the BHK interpretation. A useful consequence is that proofs become mathematical objects that can be examined, compared, and manipulated.
Intuitionistic type theory's type constructors were built to follow a one-to-one correspondence with logical connectives. For example, the logical connective called implication (A ⟹ B) corresponds to the type of a function (A → B). This correspondence is called the Curry–Howard isomorphism. Prior type theories had also followed this isomorphism, but Martin-Löf's was the first to extend it to predicate logic by introducing dependent types.
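A minimal sketch of the correspondence in Lean 4 (an assumed notation; any proof assistant in this family would do): a term of a function type is at once a program and a proof of the corresponding implication.

```lean
-- The implication A ⟹ (B ⟹ A) is inhabited by the function that
-- takes a proof of A and returns a constant function producing it.
def k {A B : Prop} : A → (B → A) := fun a _ => a
```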
== Type theory ==
A type theory is a kind of mathematical ontology, or foundation, describing the fundamental objects that exist. In the standard foundation, set theory combined with mathematical logic, the fundamental object is the set, which is a container that contains elements. In type theory, the fundamental object is the term, each of which belongs to one and only one type.
Intuitionistic type theory has three finite types, which are then composed using five different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's. So, each feature of the type theory does double duty as a feature of both math and logic.
=== 0 type, 1 type and 2 type ===
There are three finite types: The 0 type contains no terms. The 1 type contains one canonical term. The 2 type contains two canonical terms.
Because the 0 type contains no terms, it is also called the empty type. It is used to represent anything that cannot exist. It is also written ⊥ and represents anything unprovable (that is, a proof of it cannot exist). As a result, negation is defined as a function to it: ¬A := A → ⊥.
Likewise, the 1 type contains one canonical term and represents existence. It also is called the unit type.
Finally, the 2 type contains two canonical terms. It represents a definite choice between two values. It is used for Boolean values but not propositions.
Propositions are instead represented by particular types. For instance, a true proposition can be represented by the 1 type, while a false proposition can be represented by the 0 type. But we cannot assert that these are the only propositions, i.e. the law of excluded middle does not hold for propositions in intuitionistic type theory.
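In Lean 4 (an assumed notation), the three finite types and the definition of negation above look as follows; `notAsFn` is an illustrative name, not from the source:

```lean
-- 0, 1, and 2 types: False/Empty, Unit, and Bool.
example : Unit := ()     -- the single canonical term of the 1 type
example : Bool := true   -- one of the two canonical terms of the 2 type

-- Negation is, definitionally, a function into the empty proposition.
def notAsFn (A : Prop) : ¬A ↔ (A → False) := Iff.rfl
```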
=== Σ type constructor ===
Σ-types contain ordered pairs. As with typical ordered pair (or 2-tuple) types, a Σ-type can describe the Cartesian product, A × B, of two other types, A and B. Logically, such an ordered pair would hold a proof of A and a proof of B, so one may see such a type written as A ∧ B.
Σ-types are more powerful than typical ordered pair types because of dependent typing. In the ordered pair, the type of the second term can depend on the value of the first term. For example, the first term of the pair might be a natural number and the second term's type might be a sequence of reals of length equal to the first term. Such a type would be written:
Σ (n : ℕ) Vec(ℝ, n)
Using set-theory terminology, this is similar to an indexed disjoint union of sets. In the case of the usual Cartesian product, the type of the second term does not depend on the value of the first term. Thus the type describing the Cartesian product ℕ × ℝ is written:
Σ (n : ℕ) ℝ
It is important to note here that the type of the second term, ℝ, does not depend on the value of the first term, n.
Σ-types can be used to build up longer dependently-typed tuples used in mathematics and the records or structs used in most programming languages. An example of a dependently-typed 3-tuple is two integers and a proof that the first integer is smaller than the second integer, described by the type:
Σ (m : ℤ) Σ (n : ℤ) ((m < n) = True)
Dependent typing allows Σ-types to serve the role of the existential quantifier. The statement "there exists an n of type ℕ such that P(n) is proven" becomes the type of ordered pairs where the first item is the value n of type ℕ and the second item is a proof of P(n). Notice that the type of the second item (proofs of P(n)) depends on the value in the first part of the ordered pair (n). Its type would be:
Σ (n : ℕ) P(n)
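Both uses of Σ can be sketched in Lean 4 (an assumed notation; `Vec`, `lenAndVec`, and `evenWitness` are illustrative names, with `Vec` modelled on lists rather than the article's reals):

```lean
-- A length-indexed vector type: lists of α whose length is provably n.
def Vec (α : Type) (n : Nat) : Type := { xs : List α // xs.length = n }

-- A dependent pair: the type of the second component mentions the first.
def lenAndVec : (n : Nat) × Vec Nat n := ⟨2, ⟨[10, 20], rfl⟩⟩

-- Σ as the existential quantifier: a witness plus a proof about it.
def evenWitness : { n : Nat // n % 2 = 0 } := ⟨4, rfl⟩
```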
=== Π type constructor ===
Π-types contain functions. As with typical function types, they consist of an input type and an output type. They are more powerful than typical function types, however, in that the return type can depend on the input value. Functions in type theory differ from those in set theory. In set theory, you look up the argument's value in a set of ordered pairs. In type theory, the argument is substituted into a term and then computation ("reduction") is applied to the term.
As an example, the type of a function that, given a natural number n, returns a vector containing n real numbers is written:
Π (n : ℕ) Vec(ℝ, n)
When the output type does not depend on the input value, the function type is often simply written with a →. Thus, ℕ → ℝ is the type of functions from natural numbers to real numbers. Such Π-types correspond to logical implication. The logical proposition A ⟹ B corresponds to the type A → B, containing functions that take proofs-of-A and return proofs-of-B. This type could be written more consistently as:
Π (a : A) B
Π-types are also used in logic for universal quantification. The statement "for every n of type ℕ, P(n) is proven" becomes a function from n of type ℕ to proofs of P(n). Thus, given the value for n, the function generates a proof that P(·) holds for that value. The type would be:
Π (n : ℕ) P(n)
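A Π-type as universal quantification, sketched in Lean 4 (an assumed notation; `allLe` is an illustrative name): applying the function to any particular number yields the proof for that number.

```lean
-- "For every n : ℕ, n ≤ n + 1" as a dependent function:
-- the output type mentions the input value n.
def allLe : (n : Nat) → n ≤ n + 1 := fun n => Nat.le_succ n

-- Applying it to 5 produces a proof of the instance 5 ≤ 5 + 1.
#check allLe 5
```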
=== = type constructor ===
=-types are created from two terms. Given two terms like 2 + 2 and 2 ⋅ 2, you can create a new type 2 + 2 = 2 ⋅ 2. The terms of that new type represent proofs that the pair reduce to the same canonical term. Thus, since both 2 + 2 and 2 ⋅ 2 compute to the canonical term 4, there will be a term of the type 2 + 2 = 2 ⋅ 2. In intuitionistic type theory, there is a single way to introduce =-types, and that is by reflexivity:
refl : Π (a : A) (a = a)
It is possible to create =-types such as 1 = 2 where the terms do not reduce to the same canonical term, but you will be unable to create terms of that new type. In fact, if you were able to create a term of 1 = 2, you could create a term of ⊥. Putting that into a function would generate a function of type 1 = 2 → ⊥. Since … → ⊥ is how intuitionistic type theory defines negation, you would have ¬(1 = 2) or, finally, 1 ≠ 2.
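Both situations can be sketched in Lean 4 (an assumed notation; `fourEq` and `oneNeTwo` are illustrative names):

```lean
-- Both sides compute to the canonical term 4, so reflexivity applies.
def fourEq : 2 + 2 = 2 * 2 := rfl

-- 1 = 2 is a well-formed type with no terms; any hypothetical term of it
-- would yield ⊥ (False), which is exactly the negation ¬(1 = 2).
def oneNeTwo : ¬(1 = 2) := by decide
```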
Equality of proofs is an area of active research in proof theory and has led to the development of homotopy type theory and other type theories.
=== Inductive types ===
Inductive types allow the creation of complex, self-referential types. For example, a linked list of natural numbers is either an empty list or a pair of a natural number and another linked list. Inductive types can be used to define unbounded mathematical structures like trees, graphs, etc. In fact, the natural numbers type may be defined as an inductive type, either being 0 or the successor of another natural number.
Inductive types define new constants, such as zero 0 : ℕ and the successor function S : ℕ → ℕ. Since S does not have a definition and cannot be evaluated using substitution, terms like S0 and SSS0 become the canonical terms of the natural numbers.
Proofs on inductive types are made possible by induction. Each new inductive type comes with its own inductive rule. To prove a predicate P(·) for every natural number, you use the following rule:
ℕ-elim : P(0) → (Π (n : ℕ) (P(n) → P(S(n)))) → Π (n : ℕ) P(n)
Inductive types in intuitionistic type theory are defined in terms of W-types, the type of well-founded trees. Later work in type theory generated coinductive types, induction-recursion, and induction-induction for working on types with more obscure kinds of self-referentiality. Higher inductive types allow equality to be defined between terms.
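The induction rule for ℕ can be packaged as a function, sketched here in Lean 4 (an assumed notation; `natElim` is an illustrative name wrapping Lean's built-in recursor `Nat.rec`):

```lean
-- ℕ-elim: from a base case P(0) and a step P(n) → P(S(n)),
-- obtain P(n) for every natural number n.
def natElim (P : Nat → Prop) (base : P 0)
    (step : (n : Nat) → P n → P (n + 1)) : (n : Nat) → P n :=
  fun n => Nat.rec (motive := P) base step n
```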
=== Universe types ===
The universe types allow proofs to be written about all the types created with the other type constructors. Every term in the universe type U₀ can be mapped to a type created with any combination of 0, 1, 2, Σ, Π, =, and the inductive type constructor. However, to avoid paradoxes, there is no term in Uₙ that maps to Uₙ for any n ∈ ℕ.
To write proofs about all "the small types" and U₀, you must use U₁, which does contain a term for U₀, but not for itself, U₁. Similarly for U₂. There is a predicative hierarchy of universes, so to quantify a proof over any fixed constant k universes, you can use U_{k+1}.
Universe types are a tricky feature of type theories. Martin-Löf's original type theory had to be changed to account for Girard's paradox. Later research covered topics such as "super universes", "Mahlo universes", and impredicative universes.
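Lean 4 (an assumed notation) implements exactly this kind of predicative hierarchy, which the following checks illustrate:

```lean
-- Each universe is a term of the next one, and none is a term of itself.
#check (Nat : Type 0)      -- a small type lives in the first universe
#check (Type 0 : Type 1)   -- U₀ is a term of U₁
#check (Type 1 : Type 2)   -- U₁ is a term of U₂
```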
== Judgements ==
The formal definition of intuitionistic type theory is written using judgements. For example, in the statement "if A is a type and B is a type then Σ (a : A) B is a type" there are judgements of "is a type", "and", and "if ... then ...". The expression Σ (a : A) B is not a judgement; it is the type being defined.
This second level of the type theory can be confusing, particularly where it comes to equality. There is a judgement of term equality, which might say 4 = 2 + 2. It is a statement that two terms reduce to the same canonical term. There is also a judgement of type equality, say that A = B, which means every element of A is an element of the type B and vice versa.
MLTT73 was the first definition of a type theory that Per Martin-Löf published (it was presented at the Logic Colloquium '73 and published in 1975). There are identity types, which he describes as "propositions", but since no real distinction between propositions and the rest of the types is introduced the meaning of this is unclear. There is what later acquires the name of J-eliminator but yet without a name (see pp. 94–95). There is in this theory an infinite sequence of universes V0, ..., Vn, ... . The universes are predicative, à la Russell and non-cumulative. In fact, Corollary 3.10 on p. 115 says that if A∈Vm and B∈Vn are such that A and B are convertible then m = n.
MLTT79 was presented in 1979 and published in 1982. In this paper, Martin-Löf introduced the four basic types of judgement for the dependent type theory that has since become fundamental in the study of the meta-theory of such systems. He also introduced contexts as a separate concept in it (see p. 161). There are identity types with the J-eliminator (which already appeared in MLTT73 but did not have this name there) but also with the rule that makes the theory "extensional" (p. 169). There are W-types. There is an infinite sequence of predicative universes that are cumulative.
Bibliopolis: there is a discussion of a type theory in the Bibliopolis book from 1984, but it is somewhat open-ended and does not seem to represent a particular set of choices and so there is no specific type theory associated with it.
== See also ==
Intuitionistic logic
Typed lambda calculus
== Notes ==
== References ==
Martin-Löf, Per; Sambin, Giovanni (1984). Intuitionistic type theory (PDF). Napoli: Bibliopolis. ISBN 978-8870881059. OCLC 12731401.
== Further reading ==
Per Martin-Löf's Notes, as recorded by Giovanni Sambin (1980)
Nordström, Bengt; Petersson, Kent; Smith, Jan M. (1990). Programming in Martin-Löf's Type Theory. Oxford University Press. ISBN 9780198538141.
Thompson, Simon (1991). Type Theory and Functional Programming. Addison-Wesley. ISBN 0-201-41667-0.
Granström, Johan G. (2011). Treatise on Intuitionistic Type Theory. Springer. ISBN 978-94-007-1735-0.
== External links ==
EU Types Project: Tutorials – lecture notes and slides from the Types Summer School 2005
n-Categories - Sketch of a Definition – letter from John Baez and James Dolan to Ross Street, November 29, 1995
A typed lambda calculus is a typed formalism that uses the lambda symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see kinds below). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus, but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here, typability usually captures desirable properties of the program (e.g., the program will not cause a memory access violation).
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of certain classes of categories. For example, the simply typed lambda calculus is the language of Cartesian closed categories (CCCs).
== Kinds of typed lambda calculi ==
Various typed lambda calculi have been studied. The simply typed lambda calculus has only one type constructor, the arrow →, and its only types are basic types and function types σ → τ. System T extends the simply typed lambda calculus with a type of natural numbers and higher-order primitive recursion; in this system all functions provably recursive in Peano arithmetic are definable. System F allows polymorphism by using universal quantification over all types; from a logical perspective it can describe all functions that are provably total in second-order logic. Lambda calculi with dependent types are the base of intuitionistic type theory, the calculus of constructions and the logical framework (LF), a pure lambda calculus with dependent types. Based on work by Berardi on pure type systems, Henk Barendregt proposed the Lambda cube to systematize the relations of pure typed lambda calculi (including simply typed lambda calculus, System F, LF and the calculus of constructions).
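The simply typed fragment is small enough that its typing rules fit in a few lines. A minimal sketch of a type checker in Python (the term encoding and function names are our own, purely for illustration):

```python
# Types: a base type is a string; a function type σ → τ is a tuple (σ, τ).
# Terms: ("var", name), ("lam", name, type, body), ("app", fun, arg).

def typecheck(term, ctx):
    """Return the type of `term` in context `ctx`, or raise TypeError."""
    kind = term[0]
    if kind == "var":
        return ctx[term[1]]
    if kind == "lam":                      # λx:σ. body  has type σ → τ
        _, name, sigma, body = term
        tau = typecheck(body, {**ctx, name: sigma})
        return (sigma, tau)
    if kind == "app":                      # apply σ → τ to a σ, giving τ
        fun_ty = typecheck(term[1], ctx)
        arg_ty = typecheck(term[2], ctx)
        if isinstance(fun_ty, tuple) and fun_ty[0] == arg_ty:
            return fun_ty[1]
        raise TypeError("ill-typed application")
    raise TypeError("unknown term")

# The identity on a base type b:  λx:b. x  :  b → b
identity = ("lam", "x", "b", ("var", "x"))
print(typecheck(identity, {}))                        # ('b', 'b')
print(typecheck(("app", identity, ("var", "y")), {"y": "b"}))  # b
```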
Some typed lambda calculi introduce a notion of subtyping, i.e. if A is a subtype of B, then all terms of type A also have type B. Typed lambda calculi with subtyping include the simply typed lambda calculus with conjunctive types and System F<:.
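The characteristic rule of such systems, contravariance in the argument and covariance in the result of function types, can be sketched as a toy check (the base-type table below is an assumption for illustration, not part of any particular calculus):

```python
# A <: B for base types is given by a declared table; for function
# types, σ1 → τ1 <: σ2 → τ2 iff σ2 <: σ1 (contravariant argument)
# and τ1 <: τ2 (covariant result).
BASE = {("int", "real")}  # assumed base-type subtyping: int <: real

def subtype(a, b):
    if a == b:
        return True
    if isinstance(a, tuple) and isinstance(b, tuple):
        return subtype(b[0], a[0]) and subtype(a[1], b[1])
    return (a, b) in BASE

# real → int  <:  int → real : accepts more inputs, promises less output.
print(subtype(("real", "int"), ("int", "real")))  # True
print(subtype(("int", "int"), ("real", "int")))   # False
```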
All the systems mentioned so far, with the exception of the untyped lambda calculus, are strongly normalizing: all computations terminate. Therefore, they cannot describe all Turing-computable functions. As another consequence they are consistent as a logic, i.e. there are uninhabited types. There exist, however, typed lambda calculi that are not strongly normalizing. For example, the dependently typed lambda calculus with a type of all types (Type : Type) is not normalizing due to Girard's paradox. This system is also the simplest pure type system, a formalism which generalizes the Lambda cube. Systems with explicit recursion combinators, such as Plotkin's "Programming language for Computable Functions" (PCF), are not normalizing, but they are not intended to be interpreted as a logic. Indeed, PCF is a prototypical, typed functional programming language, where types are used to ensure that programs are well-behaved but not necessarily that they are terminating.
== Applications to programming languages ==
In computer programming, the routines (functions, procedures, methods) of strongly typed programming languages closely correspond to typed lambda expressions.
== See also ==
Kappa calculus—an analogue of typed lambda calculus which excludes higher-order functions
== Notes ==
== Further reading ==
Barendregt, Henk (1992). "Lambda Calculi with Types". In Abramsky, S. (ed.). Background: Computational Structures. Handbook of Logic in Computer Science. Vol. 2. Oxford University Press. pp. 117–309. ISBN 9780198537618.
Brandl, Helmut (2022). Calculus of Constructions / Typed Lambda Calculus
In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, giving rise to a formal system that combines the language with deduction rules. An element φ ∈ T of a deductively closed theory T is then called a theorem of the theory. In many deductive systems there is usually a subset Σ ⊆ T that is called "the set of axioms" of the theory T, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
== General theories (as expressed in formal language) ==
When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate.
The construction of a theory begins by specifying a definite non-empty conceptual class ℰ, the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory, to distinguish them from other statements that may be derived from them.
A theory 𝒯 is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to 𝒯 are called the elementary theorems of 𝒯 and are said to be true. In this way, a theory can be seen as a way of designating a subset of ℰ that contains only statements that are true.
This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to 𝒯. Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory.
=== Subtheories and extensions ===
A theory 𝒮 is a subtheory of a theory 𝒯 if 𝒮 is a subset of 𝒯. If 𝒯 is a subset of 𝒮, then 𝒮 is called an extension or a supertheory of 𝒯.
=== Deductive theories ===
A theory is said to be a deductive theory if 𝒯 is an inductive class, which is to say that its content is based on some formal deductive system and that some of its elementary statements are taken as axioms. In a deductive theory, any sentence that is a logical consequence of one or more of the axioms is also a sentence of that theory. More formally, if ⊢ is a Tarski-style consequence relation, then 𝒯 is closed under ⊢ (and so each of its theorems is a logical consequence of its axioms) if and only if, for all sentences φ in the language of the theory 𝒯, if 𝒯 ⊢ φ, then φ ∈ 𝒯; or, equivalently, if 𝒯′ is a finite subset of 𝒯 (possibly the set of axioms of 𝒯 in the case of finitely axiomatizable theories) and 𝒯′ ⊢ φ, then 𝒯 ⊢ φ, and therefore φ ∈ 𝒯.
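For the special case of classical propositional logic, where by completeness ⊢ matches semantic consequence, closure under ⊢ can be checked by brute force over truth assignments. A toy sketch (the formula encoding is our own, for illustration only):

```python
from itertools import product

# Formulas over atoms "p", "q": atoms are strings; compound formulas are
# ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g).
def ev(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not ev(f[1], v)
    if op == "and":
        return ev(f[1], v) and ev(f[2], v)
    if op == "or":
        return ev(f[1], v) or ev(f[2], v)
    if op == "imp":
        return (not ev(f[1], v)) or ev(f[2], v)

def entails(axioms, phi, atoms=("p", "q")):
    """T ⊢ φ, decided semantically: φ holds in every model of the axioms."""
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(ev(a, v) for a in axioms) and not ev(phi, v):
            return False
    return True

axioms = ["p", ("imp", "p", "q")]
print(entails(axioms, "q"))            # True: modus ponens
print(entails(axioms, ("not", "p")))   # False
```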
=== Consistency and completeness ===
A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven (with respect to some deductive system, which is usually clear from context). In a deductive system (such as first-order logic) that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory.
A satisfiable theory is a theory that has a model. This means there is a structure M that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ.
A consistent theory is sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory. For first-order logic, the most important case, it follows from the completeness theorem that the two meanings coincide. In other logics, such as second-order logic, there are syntactically consistent theories that are not satisfiable, such as ω-inconsistent theories.
A complete consistent theory (or just a complete theory) is a consistent theory 𝒯 such that for every sentence φ in its language, either φ is provable from 𝒯 or 𝒯 ∪ {φ} is inconsistent. For theories closed under logical consequence, this means that for every sentence φ, either φ or its negation is contained in the theory. An incomplete theory is a consistent theory that is not complete.
(See also ω-consistent theory for a stronger notion of consistency.)
=== Interpretation of a theory ===
An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a full interpretation, otherwise it is called a partial interpretation.
=== Theories associated with a structure ===
Each structure has several associated theories. The complete theory of a structure A is the set of all first-order sentences over the signature of A that are satisfied by A. It is denoted by Th(A). More generally, the theory of K, a class of σ-structures, is the set of all first-order σ-sentences that are satisfied by all structures in K, and is denoted by Th(K). Clearly Th(A) = Th({A}). These notions can also be defined with respect to other logics.
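For a finite structure, membership of a sentence in Th(A) is decidable, since quantifiers range over the finite domain. A toy sketch in Python over a three-element linear order (the sentence encoding is our own):

```python
# A finite structure A: domain {0, 1, 2} with the strict order as its
# single binary relation R.
DOM = [0, 1, 2]
R = {(a, b) for a in DOM for b in DOM if a < b}

# Sentences: ("R", x, y), ("not", f), ("and", f, g),
# ("all", x, f), ("ex", x, f), with variables as strings.
def holds(f, env):
    op = f[0]
    if op == "R":
        return (env[f[1]], env[f[2]]) in R
    if op == "not":
        return not holds(f[1], env)
    if op == "and":
        return holds(f[1], env) and holds(f[2], env)
    if op == "all":
        return all(holds(f[2], {**env, f[1]: a}) for a in DOM)
    if op == "ex":
        return any(holds(f[2], {**env, f[1]: a}) for a in DOM)

# ∃x ∀y ¬R(y, x): "there is a least element" is true in this structure,
# so the sentence belongs to Th(A).
least = ("ex", "x", ("all", "y", ("not", ("R", "y", "x"))))
print(holds(least, {}))  # True
```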
For each σ-structure A, there are several associated theories in a larger signature σ' that extends σ by adding one new constant symbol for each element of the domain of A. (If the new constant symbols are identified with the elements of A that they represent, σ' can be taken to be σ ∪ A.) The cardinality of σ' is thus the larger of the cardinality of σ and the cardinality of A.
The diagram of A consists of all atomic or negated atomic σ'-sentences that are satisfied by A and is denoted by diagA. The positive diagram of A is the set of all atomic σ'-sentences that A satisfies. It is denoted by diag+A. The elementary diagram of A is the set eldiagA of all first-order σ'-sentences that are satisfied by A or, equivalently, the complete (first-order) theory of the natural expansion of A to the signature σ'.
== First-order theories ==
A first-order theory 𝒬𝒮 is a set of sentences in a first-order formal language 𝒬.
=== Derivation in a first-order theory ===
There are many formal derivation ("proof") systems for first-order logic. These include Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method and resolution.
=== Syntactic consequence in a first-order theory ===
A formula A is a syntactic consequence of a first-order theory 𝒬𝒮 if there is a derivation of A using only formulas in 𝒬𝒮 as non-logical axioms. Such a formula A is also called a theorem of 𝒬𝒮. The notation "𝒬𝒮 ⊢ A" indicates A is a theorem of 𝒬𝒮.
=== Interpretation of a first-order theory ===
An interpretation of a first-order theory provides a semantics for the formulas of the theory. An interpretation is said to satisfy a formula if the formula is true according to the interpretation. A model of a first-order theory 𝒬𝒮 is an interpretation in which every formula of 𝒬𝒮 is satisfied.
=== First-order theories with identity ===
A first-order theory 𝒬𝒮 is a first-order theory with identity if 𝒬𝒮 includes the identity relation symbol "=" and the reflexivity and substitution axiom schemes for this symbol.
=== Topics related to first-order theories ===
Compactness theorem
Consistent set
Deduction theorem
Lindenbaum's lemma
Löwenheim–Skolem theorem
== Examples ==
One way to specify a theory is to define a set of axioms in a particular language. The theory can be taken to include just those axioms, or their logical or provable consequences, as desired. Theories obtained this way include ZFC and Peano arithmetic.
A second way to specify a theory is to begin with a structure, and let the theory be the set of sentences that are satisfied by the structure. This is a method for producing complete theories through the semantic route, with examples including the set of true sentences under the structure (N, +, ×, 0, 1, =), where N is the set of natural numbers, and the set of true sentences under the structure (R, +, ×, 0, 1, =), where R is the set of real numbers. The first of these, called the theory of true arithmetic, cannot be written as the set of logical consequences of any enumerable set of axioms.
The theory of (R, +, ×, 0, 1, =) was shown by Tarski to be decidable; it is the theory of real closed fields (see Decidability of first-order theories of the real numbers for more).
== See also ==
Axiomatic system
Interpretability
List of first-order theories
Mathematical theory
== References ==
== Further reading ==
Hodges, Wilfrid (1997). A shorter model theory. Cambridge University Press. ISBN 0-521-58713-1.
In model theory, interpretation of a structure M in another structure N (typically of a different signature) is a technical notion that approximates the idea of representing M inside N. For example, every reduct or definitional expansion of a structure N has an interpretation in N.
Many model-theoretic properties are preserved under interpretability. For example, if the theory of N is stable and M is interpretable in N, then the theory of M is also stable.
Note that in other areas of mathematical logic, the term "interpretation" may refer to a structure, rather than being used in the sense defined here. These two notions of "interpretation" are related but nevertheless distinct. Similarly, "interpretability" may refer to a related but distinct notion about representation and provability of sentences between theories.
== Definition ==
An interpretation of a structure M in a structure N with parameters (or without parameters, respectively) is a pair (n, f) where n is a natural number and f is a surjective map from a subset of Nn onto M such that the f-preimage (more precisely the f^k-preimage) of every set X ⊆ Mk definable in M by a first-order formula without parameters is definable (in N) by a first-order formula with parameters (or without parameters, respectively).
Since the value of n for an interpretation (n, f) is often clear from context, the map f itself is also called an interpretation.
To verify that the preimage of every definable (without parameters) set in M is definable in N (with or without parameters), it is sufficient to check the preimages of the following definable sets:
the domain of M;
the diagonal of M2;
every relation in the signature of M;
the graph of every function in the signature of M.
In model theory the term definable often refers to definability with parameters; if this convention is used, definability without parameters is expressed by the term 0-definable. Similarly, an interpretation with parameters may be referred to as simply an interpretation, and an interpretation without parameters as a 0-interpretation.
== Bi-interpretability ==
If L, M and N are three structures, L is interpreted in M,
and M is interpreted in N, then one can naturally construct a composite interpretation of L in N.
If two structures M and N are interpreted in each other, then by combining the interpretations in two possible ways, one obtains an interpretation of each of the two structures in itself.
This observation permits one to define an equivalence relation among structures, reminiscent of the homotopy equivalence among topological spaces.
Two structures M and N are bi-interpretable if there exists an interpretation of M in N and an interpretation of N in M such that the composite interpretations of M in itself and of N in itself are definable in M and in N, respectively (the composite interpretations being viewed as operations on M and on N).
== Example ==
The partial map f from Z × Z onto Q that maps (x, y) to x/y if y ≠ 0 provides an interpretation of the field Q of rational numbers in the ring Z of integers (to be precise, the interpretation is (2, f)).
In fact, this particular interpretation is often used to define the rational numbers.
To see that it is an interpretation (without parameters), one needs to check the following preimages of definable sets in Q:
the preimage of Q is defined by the formula φ(x, y) given by ¬ (y = 0);
the preimage of the diagonal of Q is defined by the formula φ(x1, y1, x2, y2) given by x1 × y2 = x2 × y1;
the preimages of 0 and 1 are defined by the formulas φ(x, y) given by x = 0 and x = y;
the preimage of the graph of addition is defined by the formula φ(x1, y1, x2, y2, x3, y3) given by x1×y2×y3 + x2×y1×y3 = x3×y1×y2;
the preimage of the graph of multiplication is defined by the formula φ(x1, y1, x2, y2, x3, y3) given by x1×x2×y3 = x3×y1×y2.
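These preimage formulas can be spot-checked mechanically against actual rational arithmetic. A small sketch using Python's Fraction type, sampling a finite range of integer pairs:

```python
from fractions import Fraction
from itertools import product

# Sample pairs (x, y) ∈ Z × Z with y ≠ 0, the domain of the partial map f.
pairs = [(x, y) for x, y in product(range(-2, 3), repeat=2) if y != 0]
f = lambda x, y: Fraction(x, y)  # the interpreting map of (2, f)

for (x1, y1), (x2, y2), (x3, y3) in product(pairs, repeat=3):
    # the preimage formula for the graph of addition ...
    add_ok = x1*y2*y3 + x2*y1*y3 == x3*y1*y2
    # ... agrees with f(x1,y1) + f(x2,y2) = f(x3,y3) in Q
    assert add_ok == (f(x1, y1) + f(x2, y2) == f(x3, y3))
    # likewise for the graph of multiplication
    mul_ok = x1*x2*y3 == x3*y1*y2
    assert mul_ok == (f(x1, y1) * f(x2, y2) == f(x3, y3))

print("preimage formulas agree with rational arithmetic")
```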
== References ==
== Further reading ==
Ahlbrandt, Gisela; Ziegler, Martin (1986), "Quasi finitely axiomatizable totally categorical theories", Annals of Pure and Applied Logic, 30: 63–82, doi:10.1016/0168-0072(86)90037-0
Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6 (Section 4.3)
Poizat, Bruno (2000), A Course in Model Theory, Springer, ISBN 978-0-387-98655-5 (Section 9.4)
In model theory and related areas of mathematics, a type is an object that describes how a (real or possible) element or finite collection of elements in a mathematical structure might behave. More precisely, it is a set of first-order formulas in a language L with free variables x1, x2,..., xn that are true of a set of n-tuples of an L-structure ℳ. Depending on the context, types can be complete or partial and they may use a fixed set of constants, A, from the structure ℳ. The question of which types represent actual elements of ℳ leads to the ideas of saturated models and omitting types.
== Formal definition ==
Consider a structure ℳ for a language L. Let M be the universe of the structure. For every A ⊆ M, let L(A) be the language obtained from L by adding a constant ca for every a ∈ A. In other words, L(A) = L ∪ {ca : a ∈ A}.
A 1-type (of ℳ) over A is a set p(x) of formulas in L(A) with at most one free variable x (therefore 1-type) such that for every finite subset p0(x) ⊆ p(x) there is some b ∈ M, depending on p0(x), with ℳ ⊨ p0(b) (i.e. all formulas in p0(x) are true in ℳ when x is replaced by b).
Similarly an n-type (of ℳ) over A is defined to be a set p(x1,...,xn) = p(x) of formulas in L(A), each having its free variables occurring only among the given n free variables x1,...,xn, such that for every finite subset p0(x) ⊆ p(x) there are some elements b1,...,bn ∈ M with ℳ ⊨ p0(b1,…,bn).
A complete type of ℳ over A is one that is maximal with respect to inclusion. Equivalently, for every φ(x) ∈ L(A, x) either φ(x) ∈ p(x) or ¬φ(x) ∈ p(x). Any non-complete type is called a partial type.
So, the word type in general refers to any n-type, partial or complete, over any chosen set of parameters (possibly the empty set).
An n-type p(x) is said to be realized in ℳ if there is an element b ∈ Mn such that ℳ ⊨ p(b). The existence of such a realization is guaranteed for any type by the compactness theorem, although the realization might take place in some elementary extension of ℳ, rather than in ℳ itself.
If a complete type is realized by b in ℳ, then the type is typically denoted tp_n^ℳ(b/A) and referred to as the complete type of b over A.
A type p(x) is said to be isolated by φ, for φ ∈ p(x), if for all ψ(x) ∈ p(x) we have Th(ℳ) ⊨ φ(x) → ψ(x). Since finite subsets of a type are always realized in ℳ, there is always an element b ∈ Mn such that φ(b) is true in ℳ; i.e. ℳ ⊨ φ(b), thus b realizes the entire isolated type. So isolated types will be realized in every elementary substructure or extension. Because of this, isolated types can never be omitted (see below).
A model that realizes the maximum possible variety of types is called a saturated model, and the ultrapower construction provides one way of producing saturated models.
== Examples of types ==
Consider the language L with one binary relation symbol, which we denote as ∈. Let ℳ be the structure ⟨ω, ∈_ω⟩ for this language, which is the ordinal ω with its standard well-ordering. Let 𝒯 denote the first-order theory of ℳ.
Consider the set of L(ω)-formulas p(x) := {n ∈_ω x ∣ n ∈ ω}. First, we claim this is a type. Let p0(x) ⊆ p(x) be a finite subset of p(x). We need to find a b ∈ ω that satisfies all the formulas in p0. We can just take the successor of the largest ordinal mentioned in the set of formulas p0(x); then b will clearly contain all the ordinals mentioned in p0(x). Thus we have that p(x) is a type.
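The finite-satisfiability argument is concrete enough to execute. A toy sketch, reading each formula "n ∈ x" as "n < b" for natural numbers:

```python
# A finite subset p0 of p(x) = { "n ∈ x" : n ∈ ω } mentions only
# finitely many ordinals, so the successor of the largest one works.
def realizer(p0):
    """Return a b ∈ ω satisfying every formula "n ∈ x" in p0."""
    return max(p0) + 1

p0 = {0, 3, 7}
b = realizer(p0)
assert all(n < b for n in p0)  # b contains every ordinal mentioned in p0
print(b)  # 8
```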
Next, note that p(x) is not realized in ℳ. For, if it were, there would be some n ∈ ω that contains every element of ω.
If we wanted to realize the type, we might be tempted to consider the structure ⟨ω + 1, ∈_{ω+1}⟩, which is indeed an extension of ℳ that realizes the type. Unfortunately, this extension is not elementary; for example, it does not satisfy 𝒯. In particular, the sentence ∃x ∀y (y ∈ x ∨ y = x) is satisfied by this structure but not by ℳ.
So, we wish to realize the type in an elementary extension. We can do this by defining a new L-structure, which we will denote ℳ′. The domain of the structure will be ω ∪ ℤ′, where ℤ′ is the set of integers adorned in such a way that ℤ′ ∩ ω = ∅. Let < denote the usual order of ℤ′. We interpret the symbol ∈ in our new structure by ∈_{ℳ′} = ∈_ω ∪ < ∪ (ω × ℤ′). The idea is that we are adding a "ℤ-chain", or copy of the integers, above all the finite ordinals. Clearly any element of ℤ′ realizes the type p(x). Moreover, one can verify that this extension is elementary.
Another example: the complete type of the number 2 over the empty set, considered as a member of the natural numbers, would be the set of all first-order statements (in the language of Peano arithmetic), describing a variable x, that are true when x = 2. This set would include formulas such as x ≠ 1+1+1, x ≤ 1+1+1+1+1, and ∃y (y < x). This is an example of an isolated type, since, working over the theory of the naturals, the formula x = 1+1 implies all other formulas that are true about the number 2.
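This implication can be spot-checked over an initial segment of the naturals (a toy check with our own predicate encoding): every element satisfying the isolating formula x = 1 + 1 also satisfies each of the sampled formulas.

```python
# The displayed formulas about x, written as Python predicates on N:
f1 = lambda x: x != 1 + 1 + 1                 # x ≠ 1+1+1
f2 = lambda x: x <= 1 + 1 + 1 + 1 + 1         # x ≤ 1+1+1+1+1
f3 = lambda x: any(y < x for y in range(x))   # ∃y (y < x)

isolating = lambda x: x == 1 + 1

# Th(N) ⊨ (x = 1+1) → ψ(x), checked on an initial segment of N:
for psi in (f1, f2, f3):
    assert all(psi(x) for x in range(1000) if isolating(x))
print("x = 1 + 1 implies every sampled formula in the type of 2")
```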
As a further example, the statements ∀y (y² < 2 ⟹ y < x) and ∀y ((y > 0 ∧ y² > 2) ⟹ y > x) describing the square root of 2 are consistent with the axioms of ordered fields, and can be extended to a complete type. This type is not realized in the ordered field of rational numbers, but is realized in the ordered field of reals. Similarly, the infinite set of formulas (over the empty set) {x > 1, x > 1+1, x > 1+1+1, ...} is not realized in the ordered field of real numbers, but is realized in the ordered field of hyperreals. Similarly, we can specify a type {0 < x < 1/n ∣ n ∈ ℕ} that is realized by an infinitesimal hyperreal that violates the Archimedean property.
The reason it is useful to restrict the parameters to a certain subset of the model is that it helps to distinguish the types that can be satisfied from those that cannot. For example, using the entire set of real numbers as parameters, one could generate an uncountably infinite set of formulas like x ≠ 1, x ≠ π, ... that would explicitly rule out every possible real value for x, and therefore could never be realized within the real numbers.
== Stone spaces ==
It is useful to consider the set of complete n-types over A as a topological space. Consider the following equivalence relation on formulas in the free variables x1,..., xn with parameters in A:
ψ ≡ ϕ ⇔ M ⊨ ∀x1, ..., xn (ψ(x1, ..., xn) ↔ ϕ(x1, ..., xn)).
One can show that ψ ≡ ϕ if and only if they are contained in exactly the same complete types.
The set of formulas in free variables x1,...,xn over A up to this equivalence relation is a Boolean algebra (and is canonically isomorphic to the set of A-definable subsets of Mn). The complete n-types correspond to ultrafilters of this Boolean algebra. The set of complete n-types can be made into a topological space by taking the sets of types containing a given formula as a basis of open sets. This constructs the Stone space associated to the Boolean algebra, which is a compact, Hausdorff, and totally disconnected space.
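The correspondence between complete types and ultrafilters can be checked by brute force for a small finite Boolean algebra. The sketch below (all names are ours, chosen for illustration) enumerates the ultrafilters of the power-set algebra of a three-element set; as the theory predicts for the finite case, there is exactly one ultrafilter per atom.

```python
from itertools import chain, combinations

# The power-set Boolean algebra of S = {0, 1, 2}:
# join = union, meet = intersection, complement = set difference from S.
S = frozenset({0, 1, 2})
algebra = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

def is_ultrafilter(U):
    """Filter conditions plus: for every a, either a or its complement is in U."""
    U = set(U)
    if not U or frozenset() in U:          # proper and nonempty
        return False
    for a in U:
        if any(a <= b and b not in U for b in algebra):   # upward closed
            return False
        if any(a & b not in U for b in U):                # closed under meets
            return False
    return all(a in U or (S - a) in U for a in algebra)   # ultra condition

# Brute-force search over all 2^8 subsets of the 8-element algebra.
ultrafilters = []
for mask in range(1, 2 ** len(algebra)):
    cand = [algebra[i] for i in range(len(algebra)) if (mask >> i) & 1]
    if is_ultrafilter(cand):
        ultrafilters.append(frozenset(cand))

print(len(ultrafilters))  # 3: the principal filters {a : 0 ∈ a}, {a : 1 ∈ a}, {a : 2 ∈ a}
```

Each ultrafilter found is principal, generated by a singleton; in an infinite Boolean algebra non-principal ultrafilters also exist, but only via the ultrafilter lemma.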
Example. The complete theory of algebraically closed fields of characteristic 0 has quantifier elimination, which allows one to show that the possible complete 1-types (over the empty set) correspond to:
Roots of a given irreducible non-constant polynomial over the rationals with leading coefficient 1. For example, the type of square roots of 2. Each of these types is an isolated point of the Stone space.
Transcendental elements, which are not roots of any non-zero polynomial. This type is a point in the Stone space that is closed but not isolated.
In other words, the 1-types correspond exactly to the prime ideals of the polynomial ring Q[x] over the rationals Q: if r is an element of the model of type p, then the ideal corresponding to p is the set of polynomials with r as a root (which is only the zero polynomial if r is transcendental). More generally, the complete n-types correspond to the prime ideals of the polynomial ring Q[x1,...,xn], in other words to the points of the prime spectrum of this ring. (The Stone space topology can in fact be viewed as the Zariski topology of a Boolean ring induced in a natural way from the Boolean algebra. While the Zariski topology is not in general Hausdorff, it is in the case of Boolean rings.) For example, if q(x,y) is an irreducible polynomial in two variables, there is a 2-type whose realizations are (informally) pairs (x,y) of elements with q(x,y)=0.
== Omitting types theorem ==
Given a complete n-type p, one can ask if there is a model of the theory that omits p; in other words, a model in which no n-tuple realizes p.
If p is an isolated point in the Stone space, i.e. if {p} is an open set, it is easy to see that every model realizes p (at least if the theory is complete). The omitting types theorem says that conversely if p is not isolated then there is a countable model omitting p (provided that the language is countable).
Example: In the theory of algebraically closed fields of characteristic 0, there is a 1-type represented by elements that are transcendental over the prime field Q. This is a non-isolated point of the Stone space (in fact, the only non-isolated point). The field of algebraic numbers is a model omitting this type, and the algebraic closure of any transcendental extension of the rationals is a model realizing this type.
All the other types are "algebraic numbers" (more precisely, they are the sets of first-order statements satisfied by some given algebraic number), and all such types are realized in all algebraically closed fields of characteristic 0.
== References ==
Hodges, Wilfrid (1997). A shorter model theory. Cambridge University Press. ISBN 0-521-58713-1.
Chang, C.C.; Keisler, H. Jerome (1989). Model Theory (3rd ed.). Elsevier. ISBN 0-7204-0692-7.
Marker, David (2002). Model Theory: An Introduction. Graduate Texts in Mathematics. Vol. 217. Springer. ISBN 0-387-98760-6.
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.
== History ==
The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models.
== Definition ==
A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and"), ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:
a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c (associativity)
a ∨ b = b ∨ a and a ∧ b = b ∧ a (commutativity)
a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a (absorption)
a ∨ 0 = a and a ∧ 1 = a (identity)
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) (distributivity)
a ∨ ¬a = 1 and a ∧ ¬a = 0 (complements)
Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties).
A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.)
It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that
a = b ∧ a if and only if a ∨ b = b.
The relation ≤ defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤.
The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique.
The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.
== Examples ==
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, and is defined by the rules a ∧ b = 1 if a = b = 1 (and 0 otherwise), a ∨ b = 0 if a = b = 0 (and 1 otherwise), ¬0 = 1, and ¬1 = 0.
It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent.
The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input–output behavior. Furthermore, every possible input–output behavior can be modeled by a suitable Boolean expression.
The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force algorithm for small numbers of variables). This can for example be used to show that the following laws (Consensus theorems) are generally valid in all Boolean algebras:
(a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c)
(a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c)
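As a quick illustration of this brute-force principle, the two consensus laws can be verified over the two-element algebra, reading ∧, ∨, ¬ as the logical and, or, not (a sketch; the helper name check is ours):

```python
from itertools import product

def check(identity, nvars=3):
    """Test a two-valued identity on every assignment of its variables."""
    return all(identity(*bits) for bits in product((False, True), repeat=nvars))

# (a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c)
def consensus_join(a, b, c):
    return ((a or b) and (not a or c) and (b or c)) == ((a or b) and (not a or c))

# (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c)
def consensus_meet(a, b, c):
    return ((a and b) or (not a and c) or (b and c)) == ((a and b) or (not a and c))

print(check(consensus_join), check(consensus_meet))  # True True
```

Since both identities hold for all 2³ assignments, they hold in every Boolean algebra.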
The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the largest element 1 is the set S itself.
After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms: the four-element algebra with elements 0, a, b, 1, where ¬a = b, a ∧ b = 0, and a ∨ b = 1.
The set A of all subsets of S that are either finite or cofinite is a Boolean algebra and an algebra of sets called the finite–cofinite algebra. If S is infinite then the set of all cofinite subsets of S, which is called the Fréchet filter, is a free ultrafilter on A. However, the Fréchet filter is not an ultrafilter on the power set of S.
Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra.
Given any linearly ordered set L with a least element, the interval algebra is the smallest Boolean algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra.
For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice. This lattice is a Boolean algebra if and only if n is square-free. The bottom and the top elements of this Boolean algebra are the natural numbers 1 and n, respectively. The complement of a is given by n/a. The meet and the join of a and b are given by the greatest common divisor (gcd) and the least common multiple (lcm) of a and b, respectively. The ring addition a + b is given by lcm(a, b) / gcd(a, b). The picture shows an example for n = 30. As a counter-example, considering the non-square-free n = 60, the greatest common divisor of 30 and its complement 2 would be 2, while it should be the bottom element 1.
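The divisor example can be checked directly. The sketch below (helper names ours) confirms that complementation via a ↦ n/a works for the square-free n = 30 and fails for n = 60, exactly as described:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

def is_complemented(n):
    # In the divisor lattice, meet = gcd and join = lcm, with bottom 1 and top n.
    # n/a is a complement of a iff a ∧ (n/a) = 1 and a ∨ (n/a) = n.
    return all(gcd(a, n // a) == 1 and lcm(a, n // a) == n for a in divisors(n))

print(is_complemented(30))  # True: 30 = 2·3·5 is square-free
print(is_complemented(60))  # False: gcd(30, 60/30) = 2, not the bottom element 1
```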
Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the collection of all subsets of X that are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection).
If R is an arbitrary ring then its set of central idempotents, which is the set A = {e ∈ R : e² = e and ex = xe for all x ∈ R},
becomes a Boolean algebra when its operations are defined by e ∨ f := e + f − ef and e ∧ f := ef.
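For a concrete instance, take R = Z/6Z, a commutative ring, so every idempotent is central. The sketch below verifies closure and the complement laws; taking ¬e = 1 − e as the complement is our assumption (it is the standard Boolean-ring complement), not stated in the text above.

```python
# Idempotents of Z/6Z under e ∨ f = e + f − ef and e ∧ f = ef (mod 6).
n = 6
idem = [e for e in range(n) if (e * e) % n == e]   # the central idempotents

join = lambda e, f: (e + f - e * f) % n
meet = lambda e, f: (e * f) % n
comp = lambda e: (1 - e) % n   # assumed complement: ¬e = 1 − e

# Closure of the operations on the idempotents, and the complement laws.
assert all(join(e, f) in idem and meet(e, f) in idem for e in idem for f in idem)
assert all(join(e, comp(e)) == 1 and meet(e, comp(e)) == 0 for e in idem)
print(idem)  # [0, 1, 3, 4]
```

Here 0 and 1 are the bottom and top, and 3 and 4 are complements of each other.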
== Homomorphisms and isomorphisms ==
A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A:
f(a ∨ b) = f(a) ∨ f(b),
f(a ∧ b) = f(a) ∧ f(b),
f(0) = 0,
f(1) = 1.
It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices.
An isomorphism between two Boolean algebras A and B is a homomorphism f : A → B with an inverse homomorphism, that is, a homomorphism g : B → A such that the composition g ∘ f : A → A is the identity function on A, and the composition f ∘ g : B → B is the identity function on B. A homomorphism of Boolean algebras is an isomorphism if and only if it is bijective.
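A standard source of examples: for any function h : S → T, taking preimages gives a Boolean algebra homomorphism from the power set of T to the power set of S. The sketch below (names ours) verifies the four defining conditions by brute force for a small h:

```python
from itertools import chain, combinations

S, T = [1, 2, 3], ["a", "b"]
h = {1: "a", 2: "a", 3: "b"}   # an arbitrary map S -> T

def subsets(xs):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def preimage(B):
    """f(B) = h^{-1}(B), a candidate homomorphism P(T) -> P(S)."""
    return frozenset(s for s in S if h[s] in B)

# f(B1 ∨ B2) = f(B1) ∨ f(B2), f(B1 ∧ B2) = f(B1) ∧ f(B2), f(0) = 0, f(1) = 1.
assert all(preimage(B1 | B2) == preimage(B1) | preimage(B2)
           for B1 in subsets(T) for B2 in subsets(T))
assert all(preimage(B1 & B2) == preimage(B1) & preimage(B2)
           for B1 in subsets(T) for B2 in subsets(T))
assert preimage(frozenset()) == frozenset()
assert preimage(frozenset(T)) == frozenset(S)
print("preimage is a Boolean algebra homomorphism")
```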
== Boolean rings ==
Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) = (a ∨ b) ∧ ¬(a ∧ b) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b. The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings.
Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y.
Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent; in fact the categories are isomorphic.
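For the power-set algebra these two constructions can be checked concretely: with + as symmetric difference and · as intersection, the Boolean-ring identity a·a = a holds, and the join is recovered as x + y + x·y. A sketch with sample sets of our own choosing:

```python
# Boolean ring operations on subsets: + is symmetric difference (XOR), · is intersection.
A = frozenset({1, 2})
B = frozenset({2, 3})

ring_add = lambda x, y: (x - y) | (y - x)   # symmetric difference
ring_mul = lambda x, y: x & y               # intersection

# a·a = a: the defining identity of a Boolean ring.
assert ring_mul(A, A) == A
# Recovering the lattice operations from the ring operations:
assert ring_add(ring_add(A, B), ring_mul(A, B)) == (A | B)   # x ∨ y = x + y + x·y
assert ring_mul(A, B) == (A & B)                             # x ∧ y = x·y
print(sorted(ring_add(A, B)))  # [1, 3]
```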
Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring.
More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions.
Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving.
== Ideals and filters ==
An ideal of the Boolean algebra A is a nonempty subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a ∈ A we have that a ∧ −a = 0 ∈ I, and then if I is prime we have a ∈ I or −a ∈ I for every a ∈ A. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and −a ∉ I, then I ∪ {a} or I ∪ {−a} is contained in another proper ideal J. Hence, such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with ring theoretic ones of prime ideal and maximal ideal in the Boolean ring A.
The dual of an ideal is a filter. A filter of the Boolean algebra A is a nonempty subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement "every filter in a Boolean algebra can be extended to an ultrafilter" is called the ultrafilter lemma; it cannot be proven in Zermelo–Fraenkel set theory (ZF) alone, if ZF is consistent. Within ZF, the ultrafilter lemma is strictly weaker than the axiom of choice.
The ultrafilter lemma has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.
== Representations ==
It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two.
Stone's celebrated representation theorem for Boolean algebras states that every Boolean algebra A is isomorphic to the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space.
== Axiomatics ==
The first axiomatization of Boolean lattices/algebras in general was given by the English philosopher and mathematician Alfred North Whitehead in 1898.
It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0.
In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨, ¬, even proving the associativity laws (see box).
He also proved that these axioms are independent of each other.
In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary functional symbol n, to be read as 'complement', which satisfy the following laws:
(1) Commutativity: x + y = y + x.
(2) Associativity: (x + y) + z = x + (y + z).
(3) Huntington equation: n(n(x) + y) + n(n(x) + n(y)) = x.
Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit:
(4) Robbins equation: n(n(x + y) + n(x + n(y))) = x,
do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a Robbins algebra, the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996, William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the computer program EQP he designed. For a simplification of McCune's proof, see Dahn (1998).
Further work has been done for reducing the number of axioms; see Minimal axioms for Boolean algebra.
== Generalizations ==
Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice B is a generalized Boolean lattice, if it has a smallest element 0 and for any elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a \ b as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B, ∧, ∨, \, 0) is a generalized Boolean algebra, while (B, ∨, 0) is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals of Boolean lattices.
A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed linear subspaces for separable Hilbert spaces.
== See also ==
== Notes ==
== References ==
=== Works cited ===
Davey, B.A.; Priestley, H.A. (1990). Introduction to Lattices and Order. Cambridge Mathematical Textbooks. Cambridge University Press.
Cohn, Paul M. (2003), Basic Algebra: Groups, Rings, and Fields, Springer, pp. 51, 70–81, ISBN 9781852335878
Givant, Steven; Halmos, Paul (2009), Introduction to Boolean Algebras, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-40293-2.
Goodstein, R. L. (2012), "Chapter 2: The self-dual system of axioms", Boolean Algebra, Courier Dover Publications, pp. 21ff, ISBN 9780486154978
Hsiang, Jieh (1985). "Refutational Theorem Proving Using Term Rewriting Systems". Artificial Intelligence. 25 (3): 255–300. doi:10.1016/0004-3702(85)90074-8.
Huntington, Edward V. (1904). "Sets of Independent Postulates for the Algebra of Logic". Transactions of the American Mathematical Society. 5 (3): 288–309. doi:10.1090/s0002-9947-1904-1500675-4. JSTOR 1986459.
Padmanabhan, Ranganathan; Rudeanu, Sergiu (2008), Axioms for lattices and boolean algebras, World Scientific, ISBN 978-981-283-454-6.
Stone, Marshall H. (1936). "The Theory of Representations for Boolean Algebra". Transactions of the American Mathematical Society. 40: 37–111. doi:10.1090/s0002-9947-1936-1501865-8.
Whitehead, A.N. (1898). A Treatise on Universal Algebra. Cambridge University Press. ISBN 978-1-4297-0032-0.
=== General references ===
Brown, Stephen; Vranesic, Zvonko (2002), Fundamentals of Digital Logic with VHDL Design (2nd ed.), McGraw–Hill, ISBN 978-0-07-249938-4. See Section 2.5.
Boudet, A.; Jouannaud, J.P.; Schmidt-Schauß, M. (1989). "Unification in Boolean Rings and Abelian Groups". Journal of Symbolic Computation. 8 (5): 449–477. doi:10.1016/s0747-7171(89)80054-9.
Cori, Rene; Lascar, Daniel (2000), Mathematical Logic: A Course with Exercises, Oxford University Press, ISBN 978-0-19-850048-3. See Chapter 2.
Dahn, B. I. (1998), "Robbins Algebras are Boolean: A Revision of McCune's Computer-Generated Solution of the Robbins Problem", Journal of Algebra, 208 (2): 526–532, doi:10.1006/jabr.1998.7467.
Halmos, Paul (1963), Lectures on Boolean Algebras, Van Nostrand, ISBN 978-0-387-90094-0.
Halmos, Paul; Givant, Steven (1998), Logic as Algebra, Dolciani Mathematical Expositions, vol. 21, Mathematical Association of America, ISBN 978-0-88385-327-6.
Huntington, E. V. (1933a), "New sets of independent postulates for the algebra of logic" (PDF), Transactions of the American Mathematical Society, 35 (1), American Mathematical Society: 274–304, doi:10.2307/1989325, JSTOR 1989325.
Huntington, E. V. (1933b), "Boolean algebra: A correction", Transactions of the American Mathematical Society, 35 (2): 557–558, doi:10.2307/1989783, JSTOR 1989783
Mendelson, Elliott (1970), Boolean Algebra and Switching Circuits, Schaum's Outline Series in Mathematics, McGraw–Hill, ISBN 978-0-07-041460-0
Monk, J. Donald; Bonnet, R., eds. (1989), Handbook of Boolean Algebras, North-Holland, ISBN 978-0-444-87291-3. In 3 volumes. (Vol.1:ISBN 978-0-444-70261-6, Vol.2:ISBN 978-0-444-87152-7, Vol.3:ISBN 978-0-444-87153-4)
Sikorski, Roman (1966), Boolean Algebras, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Verlag.
Stoll, R. R. (1963), Set Theory and Logic, W. H. Freeman, ISBN 978-0-486-63829-4. Reprinted by Dover Publications, 1979.
== External links ==
"Boolean algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra", by J. Donald Monk.
McCune, W. (1997). "Robbins Algebras Are Boolean". Journal of Automated Reasoning. 19 (3): 263–276.
"Boolean Algebra" by Eric W. Weisstein, Wolfram Demonstrations Project, 2007.
Burris, Stanley N.; Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2.
Weisstein, Eric W. "Boolean Algebra". MathWorld.
Computable functions are the basic objects of study in computability theory. Informally, a function is computable if there is an algorithm that computes the value of the function for every value of its argument. Because of the lack of a precise definition of the concept of algorithm, every formal definition of computability must refer to a specific model of computation.
Many such models of computation have been proposed, the major ones being Turing machines, register machines, lambda calculus and general recursive functions. Although these four are of a very different nature, they define exactly the same class of computable functions; moreover, for every model of computation that has ever been proposed, the functions computable in that model are also computable in the above four models.
The Church–Turing thesis is the unprovable assertion that every notion of computability that can be imagined can compute only functions that are computable in the above sense.
Before the precise definition of computable functions, mathematicians often used the informal term effectively calculable. This term has since come to be identified with the computable functions. The effective computability of these functions does not imply that they can be efficiently computed (i.e. computed within a reasonable amount of time). In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input. The fields of feasible computability and computational complexity study functions that can be computed efficiently.
The Blum axioms can be used to define an abstract computational complexity theory on the set of computable functions. In computational complexity theory, the problem of computing the value of a function is known as a function problem, in contrast to decision problems, whose results are either "yes" or "no".
== Definition ==
Computability of a function is an informal notion. One way to describe it is to say that a function is computable if its value can be obtained by an effective procedure. With more rigor, a function f : ℕᵏ → ℕ is computable if and only if there is an effective procedure that, given any k-tuple x of natural numbers, will produce the value f(x). In agreement with this definition, the remainder of this article presumes that computable functions take finitely many natural numbers as arguments and produce a value which is a single natural number.
As counterparts to this informal description, there exist multiple formal, mathematical definitions. The class of computable functions can be defined in many equivalent models of computation, including
Turing machines
General recursive functions
Lambda calculus
Post machines (Post–Turing machines and tag machines).
Register machines
Although these models use different representations for the functions, their inputs, and their outputs, translations exist between any two models, and so every model describes essentially the same class of functions, giving rise to the opinion that formal computability is both natural and not too narrow. These functions are sometimes referred to as "recursive", to contrast with the informal term "computable", a distinction stemming from a 1934 discussion between Kleene and Gödel.
For example, one can formalize computable functions as μ-recursive functions, which are partial functions that take finite tuples of natural numbers and return a single natural number (just as above). They are the smallest class of partial functions that includes the constant, successor, and projection functions, and is closed under composition, primitive recursion, and the μ operator.
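The μ operator can be sketched informally on top of ordinary Python functions: mu(g) searches for the least n with g(n, x) = 0, and simply diverges when no such n exists. This is an illustration of the idea under our own naming, not a formal development of μ-recursion:

```python
def mu(g):
    """Return the function x -> least n such that g(n, x) == 0 (undefined if none)."""
    def f(*x):
        n = 0
        while g(n, *x) != 0:   # unbounded search: may never terminate
            n += 1
        return n
    return f

# Example: integer square root in μ-operator style,
# isqrt(x) = least n such that (n + 1)^2 > x.
isqrt = mu(lambda n, x: 0 if (n + 1) ** 2 > x else 1)
print(isqrt(10))  # 3
```

When the search never finds a zero, f is undefined at that argument, which is exactly how the μ operator produces partial functions.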
Equivalently, computable functions can be formalized as functions which can be calculated by an idealized computing agent such as a Turing machine or a register machine. Formally speaking, a partial function f : ℕᵏ → ℕ can be calculated if and only if there exists a computer program with the following properties:
If f(x) is defined, then the program will terminate on the input x with the value f(x) stored in the computer memory.
If f(x) is undefined, then the program never terminates on the input x.
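A minimal program with exactly these two properties can be written for the partial function f(x) = x/2, defined only on the even numbers (a sketch of ours, not a canonical example):

```python
def f(x):
    """Partial function: halts with x // 2 on even x, runs forever on odd x."""
    n = 0
    while True:
        if 2 * n == x:
            return n   # f(x) is defined: terminate with the value
        n += 1         # on odd x no n is ever found, so the loop never terminates

print(f(8))  # 4
```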
== Characteristics of computable functions ==
The basic characteristic of a computable function is that there must be a finite procedure (an algorithm) telling how to compute the function. The models of computation listed above give different interpretations of what a procedure is and how it is used, but these interpretations share many properties. The fact that these models give equivalent classes of computable functions stems from the fact that each model is capable of reading and mimicking a procedure for any of the other models, much as a compiler is able to read instructions in one computer language and emit instructions in another language.
Enderton [1977] gives the following characteristics of a procedure for computing a computable function; similar characterizations have been given by Turing [1936], Rogers [1967], and others.
"There must be exact instructions (i.e. a program), finite in length, for the procedure." Thus every computable function must have a finite program that completely describes how the function is to be computed. It is possible to compute the function by just following the instructions; no guessing or special insight is required.
"If the procedure is given a k-tuple x in the domain of f, then after a finite number of discrete steps the procedure must terminate and produce f(x)." Intuitively, the procedure proceeds step by step, with a specific rule to cover what to do at each step of the calculation. Only finitely many steps can be carried out before the value of the function is returned.
"If the procedure is given a k-tuple x which is not in the domain of f, then the procedure might go on forever, never halting. Or it might get stuck at some point (i.e., one of its instructions cannot be executed), but it must not pretend to produce a value for f at x." Thus if a value for f(x) is ever found, it must be the correct value. It is not necessary for the computing agent to distinguish correct outcomes from incorrect ones because the procedure is defined as correct if and only if it produces an outcome.
Enderton goes on to list several clarifications of these three requirements of the procedure for a computable function:
The procedure must theoretically work for arbitrarily large arguments. It is not assumed that the arguments are smaller than the number of atoms in the Earth, for example.
The procedure is required to halt after finitely many steps in order to produce an output, but it may take arbitrarily many steps before halting. No time limitation is assumed.
Although the procedure may use only a finite amount of storage space during a successful computation, there is no bound on the amount of space that is used. It is assumed that additional storage space can be given to the procedure whenever the procedure asks for it.
To summarise, based on this view a function is computable if, given an input from its domain, some finite procedure following exact instructions can return the corresponding output after finitely many steps.
The field of computational complexity studies functions with prescribed bounds on the time and/or space allowed in a successful computation.
== Computable sets and relations ==
A set A of natural numbers is called computable (synonyms: recursive, decidable) if there is a computable, total function f such that for any natural number n, f(n) = 1 if n is in A and f(n) = 0 if n is not in A.
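As a concrete illustration, a sketch in Python of such a total characteristic function, using the set of prime numbers as the example of a computable set (the function name is illustrative):

```python
def chi_primes(n):
    """Characteristic function of the set of primes: a total computable
    f with f(n) = 1 if n is in the set and f(n) = 0 otherwise."""
    if n < 2:
        return 0
    # Trial division up to the square root suffices to decide primality.
    return 0 if any(n % d == 0 for d in range(2, int(n ** 0.5) + 1)) else 1
```

Because `chi_primes` halts on every input and answers 0 or 1, the set it describes is computable in the sense just defined.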
A set of natural numbers is called computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function f such that for each number n, f(n) is defined if and only if n is in the set. Thus a set is computably enumerable if and only if it is the domain of some computable function. The word enumerable is used because the following are equivalent for a nonempty subset B of the natural numbers:
B is the domain of a computable function.
B is the range of a total computable function. If B is infinite then the function can be assumed to be injective.
If a set B is the range of a function f then the function can be viewed as an enumeration of B, because the list f(0), f(1), ... will include every element of B.
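The equivalence between "domain of a computable function" and "range of an enumeration" can be made concrete by dovetailing: run the procedure on every input with increasingly large step budgets, and list each input as soon as its run halts. A sketch in Python, where `step_run` is an illustrative stand-in for a clocked universal machine (an assumption of this example, not a standard API):

```python
def enumerate_domain(step_run, limit):
    """Dovetailing: for each step budget, try every input up to that
    budget; output n as soon as the procedure halts on n.
    step_run(n, steps) returns the result if the procedure halts on n
    within `steps` steps, and None otherwise."""
    seen = set()
    listed = []
    for budget in range(limit):
        for n in range(budget + 1):
            if n not in seen and step_run(n, budget) is not None:
                seen.add(n)
                listed.append(n)
    return listed

# Toy stand-in: the "procedure" halts on even n after n steps and
# diverges on odd n, so the enumerated domain is the even numbers.
toy = lambda n, steps: n if (n % 2 == 0 and steps >= n) else None
```

Even though the procedure diverges on odd inputs, dovetailing never gets stuck on them, so every element of the domain is eventually listed.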
Because each finitary relation on the natural numbers can be identified with a corresponding set of finite sequences of natural numbers, the notions of computable relation and computably enumerable relation can be defined from their analogues for sets.
== Formal languages ==
In computability theory in computer science, it is common to consider formal languages. An alphabet is an arbitrary set. A word on an alphabet is a finite sequence of symbols from the alphabet; the same symbol may be used more than once. For example, binary strings are exactly the words on the alphabet {0, 1}. A language is a subset of the collection of all words on a fixed alphabet. For example, the collection of all binary strings that contain exactly 3 ones is a language over the binary alphabet.
A key property of a formal language is the level of difficulty required to decide whether a given word is in the language. Some coding system must be developed to allow a computable function to take an arbitrary word in the language as input; this is usually considered routine. A language is called computable (synonyms: recursive, decidable) if there is a computable function f such that for each word w over the alphabet, f(w) = 1 if the word is in the language and f(w) = 0 if the word is not in the language. Thus a language is computable just in case there is a procedure that is able to correctly tell whether arbitrary words are in the language.
A language is computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function f such that f(w) is defined if and only if the word w is in the language. The term enumerable has the same etymology as in computably enumerable sets of natural numbers.
== Examples ==
The following functions are computable:
Each function with a finite domain; e.g., any finite sequence of natural numbers.
Each constant function f : N^k → N, f(n1, ..., nk) := n.
Addition f : N^2 → N, f(n1, n2) := n1 + n2
The greatest common divisor of two numbers
A Bézout coefficient of two numbers
The smallest prime factor of a number
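The last three examples above can each be computed by a short program; a Python sketch of the extended Euclidean algorithm (which yields both the greatest common divisor and a pair of Bézout coefficients) and of trial division for the smallest prime factor:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g,
    so (x, y) is a pair of Bézout coefficients."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # Back-substitute the coefficients through the recursion.
    return g, y, x - (a // b) * y

def smallest_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime
```

For example, `extended_gcd(12, 18)` returns a triple whose coefficients satisfy 12·x + 18·y = 6.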
If f and g are computable, then so are: f + g, f * g, f ∘ g (if f is unary), max(f, g), min(f, g), arg max{y ≤ f(x)}, and many more combinations.
The following examples illustrate that a function may be computable though it is not known which algorithm computes it.
The function f such that f(n) = 1 if there is a sequence of at least n consecutive fives in the decimal expansion of π, and f(n) = 0 otherwise, is computable. (The function f is either the constant 1 function, which is computable, or else there is a k such that f(n) = 1 if n < k and f(n) = 0 if n ≥ k. Every such function is computable. It is not known whether there are arbitrarily long runs of fives in the decimal expansion of π, so we don't know which of those functions is f. Nevertheless, we know that the function f must be computable.)
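Each candidate for the function f in the argument above is either the constant-1 function or a simple threshold function, and every such candidate is trivially computable; only the question of which candidate f is remains open. A sketch of the family of candidates in Python (the function name is illustrative):

```python
def threshold_function(k):
    """The k-th candidate for f: f(n) = 1 for n < k and 0 otherwise.
    Each such function is obviously computable; the open question
    about pi only concerns which candidate is the right one."""
    return lambda n: 1 if n < k else 0
```

For instance, `threshold_function(3)` maps 0, 1, 2 to 1 and everything larger to 0.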
Each finite segment of an uncomputable sequence of natural numbers (such as the Busy Beaver function Σ) is computable. E.g., for each natural number n, there exists an algorithm that computes the finite sequence Σ(0), Σ(1), Σ(2), ..., Σ(n) — in contrast to the fact that there is no algorithm that computes the entire Σ-sequence, i.e. Σ(n) for all n. Thus, "Print 0, 1, 4, 6, 13" is a trivial algorithm to compute Σ(0), Σ(1), Σ(2), Σ(3), Σ(4); similarly, for any given value of n, such a trivial algorithm exists (even though it may never be known or produced by anyone) to compute Σ(0), Σ(1), Σ(2), ..., Σ(n).
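In this spirit, a "trivial algorithm" for a finite segment is simply a hard-coded lookup table; a Python sketch using the five values quoted above:

```python
# Hard-coded finite segment of the Busy Beaver sequence; the values
# are the ones quoted in the text ("Print 0, 1, 4, 6, 13").
SEGMENT = [0, 1, 4, 6, 13]

def sigma_up_to_4(n):
    """Computes the value of the segment at n for 0 <= n <= 4
    by table lookup."""
    return SEGMENT[n]
```

No computation beyond the lookup is involved, which is exactly why such an algorithm exists for every finite segment even though no algorithm computes the whole sequence.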
== Church–Turing thesis ==
The Church–Turing thesis states that any function computable from a procedure possessing the three properties listed above is a computable function. Because these three properties are not formally stated, the Church–Turing thesis cannot be proved. The following facts are often taken as evidence for the thesis:
Many equivalent models of computation are known, and they all give the same definition of computable function (or a weaker version, in some instances).
No stronger model of computation which is generally considered to be effectively calculable has been proposed.
The Church–Turing thesis is sometimes used in proofs to justify that a particular function is computable by giving a concrete description of a procedure for the computation. This is permitted because it is believed that all such uses of the thesis can be removed by the tedious process of writing a formal procedure for the function in some model of computation.
== Provability ==
Given a function (or, similarly, a set), one may be interested not only in whether it is computable, but also in whether this can be proven in a particular proof system (usually first-order Peano arithmetic). A function that can be proven to be computable is called provably total.
The set of provably total functions is recursively enumerable: one can enumerate all the provably total functions by enumerating the proofs that establish their computability. This can be done by enumerating all the proofs of the proof system and ignoring irrelevant ones.
=== Relation to recursively defined functions ===
In a function defined by a recursive definition, each value is defined by a fixed first-order formula of other, previously defined values of the same function or other functions, which might be simply constants. A subset of these is the primitive recursive functions. Another example is the Ackermann function, which is recursively defined but not primitive recursive.
For definitions of this type to avoid circularity or infinite regress, it is necessary that recursive calls to the same function within a definition be to arguments that are smaller in some well-partial-order on the function's domain. For instance, for the Ackermann function
A, whenever the definition of A(x, y) refers to A(p, q), then (p, q) < (x, y) w.r.t. the lexicographic order on pairs of natural numbers. In this case, and in the case of the primitive recursive functions, well-ordering is obvious, but some "refers-to" relations are nontrivial to prove as being well-orderings. Any function defined recursively in a well-ordered way is computable: each value can be computed by expanding a tree of recursive calls to the function, and this expansion must terminate after a finite number of calls, because otherwise Kőnig's lemma would lead to an infinite descending sequence of calls, violating the assumption of well-ordering.
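The Ackermann function can be written down directly from its standard recursive definition; every recursive call below is on a lexicographically smaller argument pair, which is exactly what guarantees termination (a sketch in Python):

```python
def ackermann(x, y):
    """Ackermann function: total and computable, but not primitive
    recursive. Each recursive call uses an argument pair that is
    strictly smaller in the lexicographic order, so the tree of
    recursive calls is finite."""
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)          # (x-1, 1) < (x, 0)
    return ackermann(x - 1, ackermann(x, y - 1))  # both calls descend
```

Even for small arguments the values grow explosively, e.g. A(3, y) = 2^(y+3) − 3.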
=== Total functions that are not provably total ===
In a sound proof system, every provably total function is indeed total, but the converse is not true: in every first-order proof system that is strong enough and sound (including Peano arithmetic), one can prove (in another proof system) the existence of total functions that cannot be proven total in the proof system.
If the total computable functions are enumerated via the Turing machines that produce them, then the above statement can be shown, if the proof system is sound, by a diagonalization argument similar to the one used above, applied to the enumeration of provably total functions given earlier. One uses a Turing machine that enumerates the relevant proofs, and for every input n calls fn(n) (where fn is the n-th function by this enumeration) by invoking the Turing machine that computes it according to the n-th proof. Such a Turing machine is guaranteed to halt if the proof system is sound.
== Uncomputable functions and unsolvable problems ==
Every computable function has a finite procedure giving explicit, unambiguous instructions on how to compute it. Furthermore, this procedure has to be encoded in the finite alphabet used by the computational model, so there are only countably many computable functions. For example, functions may be encoded using a string of bits (the alphabet Σ = {0, 1}).
The real numbers are uncountable so most real numbers are not computable. See computable number. The set of finitary functions on the natural numbers is uncountable so most are not computable. Concrete examples of such functions are Busy beaver, Kolmogorov complexity, or any function that outputs the digits of a noncomputable number, such as Chaitin's constant.
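The counting argument can be paired with Cantor-style diagonalization: from any enumeration of total functions one can define a function that differs from every function in the enumeration, so no single enumeration can cover all total functions. A Python sketch (the particular enumeration used here is an illustrative stand-in, not an actual enumeration of the computable functions):

```python
def diagonal(enumeration):
    """Given an enumeration i -> f_i of total functions, return
    g with g(n) = f_n(n) + 1. Since g(n) != f_n(n), g differs
    from every f_n, so g is not in the enumeration."""
    return lambda n: enumeration(n)(n) + 1

# Illustrative enumeration: f_i(n) = i * n.
enum = lambda i: (lambda n: i * n)
g = diagonal(enum)
```

Here g(n) = n·n + 1 disagrees with f_n at input n for every n.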
Similarly, most subsets of the natural numbers are not computable. The halting problem was the first such set to be constructed. The Entscheidungsproblem, proposed by David Hilbert, asked whether there is an effective procedure to determine which mathematical statements (coded as natural numbers) are true. Turing and Church independently showed in the 1930s that this set of natural numbers is not computable. According to the Church–Turing thesis, there is no effective procedure (with an algorithm) which can perform these computations.
== Extensions of computability ==
=== Relative computability ===
The notion of computability of a function can be relativized to an arbitrary set of natural numbers A. A function f is defined to be computable in A (equivalently A-computable or computable relative to A) when it satisfies the definition of a computable function with modifications allowing access to A as an oracle. As with the concept of a computable function, relative computability can be given equivalent definitions in many different models of computation. This is commonly accomplished by supplementing the model of computation with an additional primitive operation which asks whether a given integer is a member of A. We can also talk about f being computable in g by identifying g with its graph.
=== Higher recursion theory ===
Hyperarithmetical theory studies those sets that can be computed from a computable ordinal number of iterates of the Turing jump of the empty set. This is equivalent to sets defined by both a universal and an existential formula in the language of second-order arithmetic and to some models of hypercomputation. Even more general recursion theories have been studied, such as E-recursion theory in which any set can be used as an argument to an E-recursive function.
=== Hyper-computation ===
Although the Church–Turing thesis states that the computable functions include all functions with algorithms, it is possible to consider broader classes of functions that relax the requirements that algorithms must possess. The field of hypercomputation studies models of computation that go beyond normal Turing computation.
== See also ==
Computable number
Effective method
Theory of computation
Recursion theory
Turing degree
Arithmetical hierarchy
Hypercomputation
Super-recursive algorithm
Semicomputable function
== References ==
Cutland, Nigel. Computability. Cambridge University Press, 1980.
Enderton, H.B. Elements of recursion theory. Handbook of Mathematical Logic (North-Holland 1977) pp. 527–566.
Rogers, H. Theory of recursive functions and effective computation (McGraw–Hill 1967).
Turing, A. (1937), On Computable Numbers, With an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, Volume 42 (1937), pp. 230–265. Reprinted in M. Davis (ed.), The Undecidable, Raven Press, Hewlett, NY, 1965.
In mathematical logic, an uninterpreted function or function symbol is one that has no other property than its name and n-ary form. Function symbols are used, together with constants and variables, to form terms.
The theory of uninterpreted functions is also sometimes called the free theory, because it is freely generated, and thus a free object, or the empty theory, being the theory having an empty set of sentences (in analogy to an initial algebra). Theories with a non-empty set of equations are known as equational theories. The satisfiability problem for free theories is solved by syntactic unification; algorithms for the latter are used by interpreters for various computer languages, such as Prolog. Syntactic unification is also used in algorithms for the satisfiability problem for certain other equational theories, see Unification (computer science).
== Example ==
As an example of uninterpreted functions for SMT-LIB, if this input is given to an SMT solver:
the SMT solver would return "This input is satisfiable". That happens because f is an uninterpreted function (i.e., all that is known about f is its signature), so it is possible that f(10) = 1. But by applying the input below:
the SMT solver would return "This input is unsatisfiable". That happens because f, being a function, can never return different values for the same input.
== Discussion ==
The decision problem for free theories is particularly important, because many other theories can be reduced to it.
Free theories can be solved by searching for common subexpressions to form the congruence closure. Solvers include satisfiability modulo theories solvers.
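A minimal sketch of this idea in Python: a naive fixed-point congruence closure over ground terms. The term representation (nested tuples for function applications, plain strings for constants) and the explicit list of disequations are assumptions of this illustration, not part of any standard solver API:

```python
from itertools import combinations

def satisfiable(equations, disequations):
    """Decide a conjunction of ground (dis)equations over uninterpreted
    functions by congruence closure. Terms look like ('f', '10');
    strings are constants. An illustrative sketch, not a production
    solver."""
    terms = set()

    def collect(t):
        terms.add(t)
        if isinstance(t, tuple):
            for arg in t[1:]:
                collect(arg)

    for s, t in equations + disequations:
        collect(s)
        collect(t)

    parent = {t: t for t in terms}

    def find(t):                      # union-find without path compression
        while parent[t] != t:
            t = parent[t]
        return t

    def union(s, t):
        parent[find(s)] = find(t)

    for s, t in equations:
        union(s, t)

    # Propagate congruence to a fixed point: merge f(a,...) with f(b,...)
    # whenever their argument lists are already pairwise merged.
    changed = True
    while changed:
        changed = False
        apps = [t for t in terms if isinstance(t, tuple)]
        for s, t in combinations(apps, 2):
            if (s[0] == t[0] and len(s) == len(t) and find(s) != find(t)
                    and all(find(a) == find(b) for a, b in zip(s[1:], t[1:]))):
                union(s, t)
                changed = True

    # Satisfiable iff no disequation is contradicted by the closure.
    return all(find(s) != find(t) for s, t in disequations)
```

Mirroring the SMT example above: asserting f(10) = 1 alone is satisfiable, while additionally asserting f(10) = 2 (with the numerals 1 and 2 kept distinct by an explicit disequation) forces 1 and 2 into the same class, yielding unsatisfiable.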
== See also ==
Algebraic data type
Initial algebra
Term algebra
Theory of pure equality
== Notes ==
== References ==
Visualization (or visualisation), also known as graphics visualization, is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, etc. Typical of a visualization application is the field of computer graphics. The invention of computer graphics (and 3D computer graphics) may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization.
== Overview ==
The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia (2nd century AD), a map of China (1137 AD), and Minard's map (1861) of Napoleon's invasion of Russia a century and a half ago. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles.
Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics. Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic, and special areas in the field, for example volume visualization.
Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time.
Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. The abstract visualizations show completely conceptual constructs in 2D or 3D. These generated shapes are completely arbitrary. The model-based visualizations either place overlays of data on real or digitally constructed images of reality or make a digital construction of a real object directly from the scientific data.
Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below. Some of these specialized programs have been released as open source software, very often having their origins in universities, within an academic environment where sharing software tools and giving access to the source code is common. There are also many proprietary software packages of scientific visualization tools.
Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and Spreadsheet for Images.
== Applications ==
=== Scientific visualization ===
As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning.
Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data. Scientific visualization focuses and emphasizes the representation of higher order data using primarily graphics and animation techniques. It is a very important part of visualization and perhaps the first one, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the more common.
=== Data and information visualization ===
Data visualization is a related subcategory of visualization dealing with statistical graphics and geospatial data (as in thematic cartography) that is abstracted in schematic form.
Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC and included Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are the dynamics of visual representation and the interactivity. Strong techniques enable the user to modify the visualization in real-time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question.
=== Educational visualization ===
Educational visualization is the use of a simulation to create an image of something so it can be taught about. This is very useful when teaching a topic that is difficult to see otherwise, for example, atomic structure, because atoms are far too small to be studied easily without expensive and difficult-to-use scientific equipment.
=== Knowledge visualization ===
The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer and non-computer-based visualization methods complementarily. Thus properly designed visualization is an important part of not only data analysis but knowledge transfer process, too. Knowledge transfer may be significantly improved using hybrid designs as it enhances information density but may decrease clarity as well. For example, visualization of a 3D scalar field may be implemented using iso-surfaces for field distribution and textures for the gradient of the field. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and estimates in different fields by using various complementary visualizations.
See also: picture dictionary, visual dictionary
=== Product visualization ===
Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawing and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD-drawings and models have several advantages over hand-made drawings such as the possibility of 3-D modeling, rapid prototyping, and simulation. 3D product visualization promises more interactive experiences for online shoppers, but also challenges retailers to overcome hurdles in the production of 3D content, as large-scale 3D content production can be extremely costly and time-consuming.
=== Visual communication ===
Visual communication is the communication of ideas through the visual display of information. Primarily associated with two dimensional images, it includes: alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability.
=== Visual analytics ===
Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface".
Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces.
Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security.
== Interactivity ==
Interactive visualization or interactive visualisation is a branch of graphic visualization in computer science that involves studying how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient.
For a visualization to be considered interactive it must satisfy two criteria:
Human input: control of some aspect of the visual representation of information, or of the information being represented, must be available to a human, and
Response time: changes made by the human must be incorporated into the visualization in a timely manner. In general, interactive visualization is considered a soft real-time task.
One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were present (where instead it was remote), sized appropriately (where instead it was on a much smaller or larger scale than humans can sense directly), or had shape (where instead it might be completely abstract).
Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (i.e., telephone), video (i.e., a video-conference), or text (i.e., IRC) messages.
=== Human control of visualization ===
The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide. People can:
Pick some part of an existing visual representation;
Locate a point of interest (which may not have an existing representation);
Stroke a path;
Choose an option from a list of options;
Valuate by inputting a number; and
Write by inputting text.
All of these actions require a physical device. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills.
These input actions can be used to control both the information being represented and the way that the information is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering.
More frequently, the representation of the information is changed rather than the information itself.
=== Rapid response to human input ===
Experiments have shown that a delay of more than 20 ms between when input is provided and when the visual representation is updated is noticeable to most people. Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology. Thus the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system. A framerate of 50 frames per second (frame/s) is considered good while 0.1 frame/s would be considered poor. The use of framerates to characterize interactivity is slightly misleading however, since framerate is a measure of bandwidth while humans are more sensitive to latency. Specifically, it is possible to achieve a good framerate of 50 frame/s but if the images generated refer to changes to the visualization that a person made more than 1 second ago, it will not feel interactive to a person.
The rapid response time required for interactive visualization is a difficult constraint to meet and there are several approaches that have been explored to provide people with rapid visual feedback based on their input. Some include
Parallel rendering – where more than one computer or video card is used simultaneously to render an image. Multiple frames can be rendered at the same time by different computers and the results transferred over the network for display on a single monitor. This requires each computer to hold a copy of all the information to be rendered and increases bandwidth, but also increases latency. Also, each computer can render a different region of a single frame and send the results over a network for display. This again requires each computer to hold all of the data and can lead to a load imbalance when one computer is responsible for rendering a region of the screen with more information than other computers. Finally, each computer can render an entire frame containing a subset of the information. The resulting images plus the associated depth buffer can then be sent across the network and merged with the images from other computers. The result is a single frame containing all the information to be rendered, even though no single computer's memory held all of the information. This is called parallel depth compositing and is used when large amounts of information must be rendered interactively.
Progressive rendering – where a framerate is guaranteed by rendering some subset of the information to be presented and providing incremental (progressive) improvements to the rendering once the visualization is no longer changing.
Level-of-detail (LOD) rendering – where simplified representations of information are rendered to achieve a desired framerate while a person is providing input and then the full representation is used to generate a still image once the person is through manipulating the visualization. One common variant of LOD rendering is subsampling. When the information being represented is stored in a topologically rectangular array (as is common with digital photos, MRI scans, and finite difference simulations), a lower resolution version can easily be generated by skipping n points for each 1 point rendered. Subsampling can also be used to accelerate rendering techniques such as volume visualization that require more than twice the computations for an image twice the size. By rendering a smaller image and then scaling the image to fill the requested screen space, much less time is required to render the same data.
Frameless rendering – where the visualization is no longer presented as a time series of images, but as a single image where different regions are updated over time.
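The subsampling variant of LOD rendering described above is simple to state: keep every n-th point of a topologically rectangular array in each dimension. A sketch in Python, with nested lists standing in for the data array (an assumption of this illustration):

```python
def subsample(grid, step):
    """Level-of-detail subsampling: keep every `step`-th sample in each
    dimension of a 2-D grid, producing a lower-resolution version that
    is cheaper to render while the user is interacting."""
    return [row[::step] for row in grid[::step]]
```

For a 4×4 grid and step 2 this yields a 2×2 grid, i.e. a quarter of the samples, roughly quartering the rendering work for techniques whose cost scales with the number of samples.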
== See also ==
Graphical perception
Spatial visualization ability
Visual language
== References ==
== Further reading ==
Battiti, Roberto; Mauro Brunato (2011). Reactive Business Intelligence. From Data to Models to Insight. Trento, Italy: Reactive Search Srl. ISBN 978-88-905795-0-9.
Bederson, Benjamin B., and Ben Shneiderman. The Craft of Information Visualization: Readings and Reflections, Morgan Kaufmann, 2003, ISBN 1-55860-915-6.
Cleveland, William S. (1993). Visualizing Data.
Cleveland, William S. (1994). The Elements of Graphing Data.
Charles D. Hansen, Chris Johnson. The Visualization Handbook, Academic Press (June 2004).
Kravetz, Stephen A. and David Womble. ed. Introduction to Bioinformatics. Totowa, N.J. Humana Press, 2003.
Mackinlay, Jock D. (1999). Readings in information visualization: using vision to think. Card, S. K., Ben Shneiderman (eds.). Morgan Kaufmann Publishers Inc. pp. 686. ISBN 1-55860-533-9.
Will Schroeder, Ken Martin, Bill Lorensen. The Visualization Toolkit, August 2004.
Spence, Robert Information Visualization: Design for Interaction (2nd Edition), Prentice Hall, 2007, ISBN 0-13-206550-9.
Edward R. Tufte (1992). The Visual Display of Quantitative Information
Edward R. Tufte (1990). Envisioning Information.
Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative.
Matthew Ward, Georges Grinstein, Daniel Keim. Interactive Data Visualization: Foundations, Techniques, and Applications. (May 2010).
Wilkinson, Leland. The Grammar of Graphics, Springer ISBN 0-387-24544-8
== External links ==
National Institute of Standards and Technology
Scientific Visualization Tutorials, Georgia Tech
Scientific Visualization Studio (NASA)
Visual-literacy.org, (e.g. Periodic Table of Visualization Methods)
Conferences
Many conferences occur where interactive visualization academic papers are presented and published.
Amer. Soc. of Information Science and Technology (ASIS&T SIGVIS) Special Interest Group in Visualization Information and Sound
ACM SIGCHI
ACM SIGGRAPH
ACM VRST
Eurographics
IEEE Visualization
ACM Transactions on Graphics
IEEE Transactions on Visualization and Computer Graphics | Wikipedia/Visualization_(graphics) |
The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. Sometimes, it is called first-order propositional logic to contrast it with System F, but it should not be confused with first-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives, as in the table below.
Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
Propositional logic is typically studied with a formal language, in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make propositional formulas. Because of this, the propositional variables are called atomic formulas of a formal propositional language. While the atomic propositions are typically represented by letters of the alphabet, there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic.
The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic, in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false. The principle of bivalence and the law of excluded middle are upheld. By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic.
== History ==
Although propositional logic had been hinted at by earlier philosophers, Chrysippus is often credited with developing a deductive system for propositional logic as his main achievement in the 3rd century BC, a system which was expanded by his Stoic successors. The logic was focused on propositions. This was different from the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic.
Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independent of Leibniz.
Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution.
Within works by Frege and Bertrand Russell, are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently). Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables".
== Sentences ==
Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences.
=== Declarative sentences ===
Propositional logic deals with statements, which are defined as declarative sentences having truth value. Examples of statements might include:
Wikipedia is a free online encyclopedia that anyone can edit.
London is the capital of England.
All Wikipedia editors speak at least three languages.
Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article." Such non-declarative sentences have no truth value, and are dealt with only in nonclassical logics, called erotetic and imperative logics.
=== Compounding sentences with connectives ===
In propositional logic, a statement can contain one or more other statements as parts. Compound sentences are formed from simpler sentences and express relationships among the constituent sentences. This is done by combining them with logical connectives: the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals, which are formed by using the corresponding connectives to connect propositions. In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional). Examples of such compound sentences might include:
Wikipedia is a free online encyclopedia that anyone can edit, and millions already have. (conjunction)
It is not true that all Wikipedia editors speak at least three languages. (negation)
Either London is the capital of England, or London is the capital of the United Kingdom, or both. (disjunction)
If sentences lack any logical connectives, they are called simple sentences, or atomic sentences; if they contain one or more logical connectives, they are called compound sentences, or molecular sentences.
Sentential connectives are a broader category that includes logical connectives. Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence, or that inflect a single sentence to create a new sentence. A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition. Philosophers disagree about what exactly a proposition is, as well as about which sentential connectives in natural languages should be counted as logical connectives. Sentential connectives are also called sentence-functors, and logical connectives are also called truth-functors.
== Arguments ==
An argument is defined as a pair of things, namely a set of sentences, called the premises, and a sentence, called the conclusion. The conclusion is claimed to follow from the premises, and the premises are claimed to support the conclusion.
=== Example argument ===
The following is an example of an argument within the scope of propositional logic:
Premise 1: If it's raining, then it's cloudy.
Premise 2: It's raining.
Conclusion: It's cloudy.
The logical form of this argument is known as modus ponens, which is a classically valid form. So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining § Formalization.
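For readers who prefer an executable check, the classical validity of a form such as modus ponens can be verified by brute force over all truth-value assignments. The following Python sketch is illustrative; the helper names are our own, not standard:

```python
from itertools import product

def valid(premises, conclusion, variables):
    """Brute-force validity test: an argument is valid iff no assignment
    of truth values makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found
    return True

# Modus ponens: from P -> Q and P, infer Q.
# The material conditional P -> Q is encoded as (not P) or Q.
premises = [lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]]
conclusion = lambda e: e["Q"]
modus_ponens_valid = valid(premises, conclusion, ["P", "Q"])  # True
```

By contrast, the fallacy of affirming the consequent (from P → Q and Q, infer P) fails this test, since the assignment P = False, Q = True is a counterexample.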
=== Validity and soundness ===
An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true. Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false.
Validity is contrasted with soundness. An argument is sound if, and only if, it is valid and all its premises are true. Otherwise, it is unsound.
Logic, in general, aims to precisely specify valid arguments. This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises, which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true – see § Semantics below.
== Formalization ==
Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the § Example argument. The formal language for a propositional calculus will be fully specified in § Language, and an overview of proof systems will be given in § Proof systems.
=== Propositional variables ===
Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives, it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables). With propositional variables, the § Example argument would then be symbolized as follows:
Premise 1: P → Q
Premise 2: P
Conclusion: Q
When P is interpreted as "It's raining" and Q as "it's cloudy" these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form.
When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P, Q and R) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.
=== Gentzen notation ===
If we assume that the validity of modus ponens has been accepted as an axiom, then the same § Example argument can also be depicted like this:
P → Q, P
────────
   Q
This method of displaying it is Gentzen's notation for natural deduction and sequent calculus. The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as
P → Q, P ⊢ Q.
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed. For more on inference rules, see the sections on proof systems below.
== Language ==
The language (commonly called ℒ) of a propositional calculus is defined in terms of:
a set of primitive symbols, called atomic formulas, atomic sentences, atoms, placeholders, prime formulas, proposition letters, sentence letters, or variables, and
a set of operator symbols, called connectives, logical connectives, logical operators, truth-functional connectives, truth-functors, or propositional connectives.
A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language ℒ, then, is defined either as being identical to its set of well-formed formulas, or as containing that set (together with, for instance, its set of connectives and variables).
Usually the syntax of ℒ is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax, while others use them without comment.
=== Syntax ===
Given a set of atomic propositional variables p₁, p₂, p₃, …, and a set of propositional connectives c₁¹, c₂¹, c₃¹, …, c₁², c₂², c₃², …, c₁³, c₂³, c₃³, …, a formula of propositional logic is defined recursively by these definitions:
Definition 1: Atomic propositional variables are formulas.
Definition 2: If cₙᵐ is a propositional connective, and ⟨A, B, C, …⟩ is a sequence of m, possibly but not necessarily atomic, possibly but not necessarily distinct, formulas, then the result of applying cₙᵐ to ⟨A, B, C, …⟩ is a formula.
Definition 3: Nothing else is a formula.
Writing the result of applying cₙᵐ to ⟨A, B, C, …⟩ in functional notation, as cₙᵐ(A, B, C, …), we have the following as examples of well-formed formulas:
p₅
c₃²(p₂, p₉)
c₃²(p₁, c₂¹(p₃))
c₁³(p₄, p₆, c₂²(p₁, p₂))
c₄²(c₁¹(p₇), c₃¹(p₈))
c₂³(c₁²(p₃, p₄), c₂¹(p₅), c₃²(p₆, p₇))
c₃¹(c₁³(p₂, p₃, c₂²(p₄, p₅)))
What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition. It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language ℒ are built up from the atoms as ultimate building blocks. Composite formulas (all formulas besides atoms) are called molecules, or molecular sentences. (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.)
The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax. In particular, it excludes infinitely long formulas from being well-formed. It is sometimes called the Closure Clause.
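The three definitions can be mirrored by a small recursive data representation. The following Python sketch is an illustrative encoding of our own choosing, not a standard one:

```python
# Definitions 1-3 mirrored as a recursive data representation.

def atom(name):
    """Definition 1: atomic propositional variables are formulas."""
    return ("atom", name)

def apply_connective(c, *args):
    """Definition 2: applying a connective to formulas yields a formula."""
    return ("app", c, args)

def is_formula(x):
    """Recursive well-formedness check; anything not built by the two
    constructors above is rejected (the closure clause, Definition 3)."""
    if not isinstance(x, tuple) or not x:
        return False
    if x[0] == "atom":
        return len(x) == 2
    if x[0] == "app":
        return all(is_formula(a) for a in x[2])
    return False

# One of the example formulas above: c32(p1, c21(p3))
f = apply_connective("c32", atom("p1"), apply_connective("c21", atom("p3")))
```

Because `is_formula` recurses exactly along the inductive definition, it accepts precisely the finite formulas that Definitions 1 and 2 generate, just as the closure clause requires.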
==== CF grammar in BNF ====
An alternative to the syntax definitions given above is to write a context-free (CF) grammar for the language ℒ in Backus-Naur form (BNF). This is more common in computer science than in philosophy. It can be done in many ways, of which a particularly brief one, for the common set of five connectives, is this single clause:
φ ::= a₁, a₂, … | ¬φ | φ & ψ | φ ∨ ψ | φ → ψ | φ ↔ ψ
This clause, due to its self-referential nature (since φ appears in some branches of the definition of φ), also acts as a recursive definition, and therefore specifies the entire language. To expand it to add modal operators, one need only add … | □φ | ◊φ to the end of the clause.
=== Constants and schemata ===
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, or schematic letters, however, range over all formulas. (Schematic letters are also called metavariables.) It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters by Greek letters, most often φ, ψ, and χ.
However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤, called "truth", which always evaluates to True, and the special symbol ⊥, called "falsity", which always evaluates to False. Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors", or equivalently, "nullary connectives".
== Semantics ==
To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted. In classical logic, all propositions evaluate to exactly one of two truth-values: True or False. For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True, while "Wikipedia is a paper encyclopedia" evaluates to False.
In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic. To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic".
=== Interpretation (case) and argument ===
For a given language ℒ, an interpretation, valuation, Boolean valuation, or case, is an assignment of semantic values to each formula of ℒ. For a formal language of classical logic, a case is defined as an assignment, to each formula of ℒ, of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0). An interpretation that follows the rules of classical logic is sometimes called a Boolean valuation. An interpretation of a formal language for classical logic is often expressed in terms of truth tables. Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is ℒ, and whose range is its set of semantic values 𝒱 = {T, F}, or 𝒱 = {1, 0}.
For n distinct propositional symbols there are 2ⁿ distinct possible interpretations. For any particular symbol a, for example, there are 2¹ = 2 possible interpretations: either a is assigned T, or a is assigned F. And for the pair a, b there are 2² = 4 possible interpretations: either both are assigned T, or both are assigned F, or a is assigned T and b is assigned F, or a is assigned F and b is assigned T. Since ℒ has ℵ₀, that is, denumerably many propositional symbols, there are 2^ℵ₀ = 𝔠 interpretations, and therefore uncountably many distinct possible interpretations of ℒ as a whole.
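For the finite case, the counting argument can be illustrated by enumerating the interpretations directly. A minimal Python sketch (the generator name is ours):

```python
from itertools import product

def interpretations(symbols):
    """Yield every assignment of T/F to the given propositional symbols;
    for n symbols there are 2**n of them."""
    for values in product(["T", "F"], repeat=len(symbols)):
        yield dict(zip(symbols, values))

cases = list(interpretations(["a", "b"]))  # 4 cases: TT, TF, FT, FF
```

Running the generator over three symbols yields 2³ = 8 assignments, and so on, doubling with each additional symbol.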
Where ℐ is an interpretation and φ and ψ represent formulas, the definition of an argument, given in § Arguments, may then be stated as a pair ⟨{φ₁, φ₂, φ₃, ..., φₙ}, ψ⟩, where {φ₁, φ₂, φ₃, ..., φₙ} is the set of premises and ψ is the conclusion. The definition of an argument's validity, i.e. its property that {φ₁, φ₂, φ₃, ..., φₙ} ⊨ ψ, can then be stated as its absence of a counterexample, where a counterexample is defined as a case ℐ in which the argument's premises {φ₁, φ₂, φ₃, ..., φₙ} are all true but the conclusion ψ is not true. As will be seen in § Semantic truth, validity, consequence, this is the same as to say that the conclusion is a semantic consequence of the premises.
=== Propositional connective semantics ===
An interpretation assigns semantic values to atomic formulas directly. Molecular formulas are assigned a function of the value of their constituent atoms, according to the connective used; the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those. This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives.
==== Semantics via truth tables ====
Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values, the semantic definition of the connectives is usually represented as a truth table for each of the connectives, as seen below:
This table covers each of the main five logical connectives: conjunction (here notated p ∧ q), disjunction (p ∨ q), implication (p → q), biconditional (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators. For more truth tables for more different kinds of connectives, see the article "Truth table".
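Since the five connectives are truth-functional, each can also be written as a small truth function. The following Python sketch encodes them, with Python's `True`/`False` standing in for the truth values T and F (an illustrative encoding, not standard notation):

```python
# Truth functions for the five main connectives.
NOT     = lambda p: not p
AND     = lambda p, q: p and q
OR      = lambda p, q: p or q
IMPLIES = lambda p, q: (not p) or q   # material conditional
IFF     = lambda p, q: p == q

# One row of the truth table: p = True, q = False.
row = IMPLIES(True, False)  # False: the one row where p -> q fails
```

Evaluating each function at all four combinations of arguments reproduces the corresponding column of the truth table.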
==== Semantics via assignment expressions ====
Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table. In this format, where ℐ(φ) is the interpretation of φ, the five connectives are defined as:
ℐ(¬P) = T if, and only if, ℐ(P) = F
ℐ(P ∧ Q) = T if, and only if, ℐ(P) = T and ℐ(Q) = T
ℐ(P ∨ Q) = T if, and only if, ℐ(P) = T or ℐ(Q) = T
ℐ(P → Q) = T if, and only if, it is true that, if ℐ(P) = T, then ℐ(Q) = T
ℐ(P ↔ Q) = T if, and only if, it is true that ℐ(P) = T if, and only if, ℐ(Q) = T
Instead of ℐ(φ), the interpretation of φ may be written out as |φ|, or, for definitions such as the above, ℐ(φ) = T may be written simply as the English sentence "φ is given the value T". Yet other authors may prefer to speak of a Tarskian model 𝔐 for the language, so that instead they'll use the notation 𝔐 ⊨ φ, which is equivalent to saying ℐ(φ) = T, where ℐ is the interpretation function for 𝔐.
==== Connective definition methods ====
Some of these connectives may be defined in terms of others: for instance, implication, p → q, may be defined in terms of disjunction and negation, as ¬p ∨ q; and disjunction may be defined in terms of negation and conjunction, as ¬(¬p ∧ ¬q). In fact, a truth-functionally complete system, in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did), or using only implication and negation (as Frege did), or using only conjunction and negation, or even using only a single connective for "not and" (the Sheffer stroke), as Jean Nicod did. A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property.
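These definability claims can be checked mechanically. The sketch below, with hypothetical function names of our own, builds the other connectives from NAND (the Sheffer stroke) alone and verifies one of the identities over all four rows of the truth table:

```python
def nand(p, q):
    """The Sheffer stroke: "not and"."""
    return not (p and q)

# Each classical connective expressed using only NAND.
def not_(p):        return nand(p, p)
def and_(p, q):     return nand(nand(p, q), nand(p, q))
def or_(p, q):      return nand(nand(p, p), nand(q, q))
def implies(p, q):  return nand(p, nand(q, q))

# Check the NAND-built implication against its definition as (not p) or q.
rows = [(p, q) for p in (True, False) for q in (True, False)]
implication_matches = all(implies(p, q) == ((not p) or q) for p, q in rows)
```

The same enumeration over `rows` confirms the other three definitions, mirroring the claim that NAND by itself is truth-functionally complete.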
Some authors, namely Howson and Cunningham, distinguish equivalence from the biconditional. (As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) Equivalence is symbolized with ⇔ and is a metalanguage symbol, while a biconditional is symbolized with ↔ and is a logical connective in the object language ℒ. Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation. Other authors often do not make this distinction, and may use the word "equivalence", and/or the symbol ⇔, to denote their object language's biconditional connective.
=== Semantic truth, validity, consequence ===
Given φ and ψ as formulas (or sentences) of a language ℒ, and ℐ as an interpretation (or case) of ℒ, then the following definitions apply:
Truth-in-a-case: A sentence φ of ℒ is true under an interpretation ℐ if ℐ assigns the truth value T to φ. If φ is true under ℐ, then ℐ is called a model of φ.
Falsity-in-a-case: φ is false under an interpretation ℐ if, and only if, ¬φ is true under ℐ. This is the "truth of negation" definition of falsity-in-a-case. Falsity-in-a-case may also be defined by the "complement" definition: φ is false under an interpretation ℐ if, and only if, φ is not true under ℐ. In classical logic, these definitions are equivalent, but in nonclassical logics, they are not.
Semantic consequence: A sentence ψ of ℒ is a semantic consequence (φ ⊨ ψ) of a sentence φ if there is no interpretation under which φ is true and ψ is not true.
Valid formula (tautology): A sentence φ of ℒ is logically valid (⊨ φ), or a tautology, if it is true under every interpretation, or true in every case.
Consistent sentence: A sentence of ℒ is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent. An inconsistent formula is also called self-contradictory, and said to be a self-contradiction, or simply a contradiction, although this latter name is sometimes reserved specifically for statements of the form (p ∧ ¬p).
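The definitions of validity (tautology) and consistency suggest a direct brute-force test over all interpretations of a formula's variables. An illustrative Python sketch (helper names are ours):

```python
from itertools import product

def assignments(variables):
    """Yield every truth-value assignment to the given variables."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def is_tautology(formula, variables):
    """Logically valid: true under every interpretation."""
    return all(formula(env) for env in assignments(variables))

def is_consistent(formula, variables):
    """Consistent: true under at least one interpretation."""
    return any(formula(env) for env in assignments(variables))

excluded_middle = lambda e: e["p"] or not e["p"]   # p or not p: a tautology
contradiction   = lambda e: e["p"] and not e["p"]  # p and not p: inconsistent
```

Every tautology is trivially consistent, while a contradiction such as `p and not p` fails even the weaker consistency test.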
For interpretations (cases) ℐ of ℒ, these definitions are sometimes given:
Complete case: A case ℐ is complete if, and only if, either φ is true-in-ℐ or ¬φ is true-in-ℐ, for any φ in ℒ.
Consistent case: A case ℐ is consistent if, and only if, there is no φ in ℒ such that both φ and ¬φ are true-in-ℐ.
For classical logic, which assumes that all cases are complete and consistent, the following theorems apply:
For any given interpretation, a given formula is either true or false under it.
No formula is both true and false under the same interpretation.
φ is true under ℐ if, and only if, ¬φ is false under ℐ; ¬φ is true under ℐ if, and only if, φ is not true under ℐ.
If φ and (φ → ψ) are both true under ℐ, then ψ is true under ℐ.
If ⊨ φ and ⊨ (φ → ψ), then ⊨ ψ.
(φ → ψ) is true under ℐ if, and only if, either φ is not true under ℐ, or ψ is true under ℐ.
φ ⊨ ψ if, and only if, (φ → ψ) is logically valid; that is, φ ⊨ ψ if, and only if, ⊨ (φ → ψ).
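Since each of these semantic notions quantifies over all interpretations, they can be checked mechanically for formulas over finitely many atoms by enumerating every truth-value assignment. The following sketch illustrates this; the representation of formulas as nested tuples and all function names are illustrative choices, not notation from the source:

```python
from itertools import product

def eval_formula(f, i):
    """Evaluate formula f under interpretation i (a dict atom -> bool).
    Formulas are atom strings or tuples: ('not', f), ('and', f, g),
    ('or', f, g), ('imp', f, g)."""
    if isinstance(f, str):
        return i[f]
    op = f[0]
    if op == 'not':
        return not eval_formula(f[1], i)
    if op == 'and':
        return eval_formula(f[1], i) and eval_formula(f[2], i)
    if op == 'or':
        return eval_formula(f[1], i) or eval_formula(f[2], i)
    if op == 'imp':
        return (not eval_formula(f[1], i)) or eval_formula(f[2], i)
    raise ValueError(op)

def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def interpretations(avars):
    avars = sorted(avars)
    for vals in product([True, False], repeat=len(avars)):
        yield dict(zip(avars, vals))

def is_valid(f):       # tautology: true under every interpretation
    return all(eval_formula(f, i) for i in interpretations(atoms(f)))

def is_consistent(f):  # true under at least one interpretation
    return any(eval_formula(f, i) for i in interpretations(atoms(f)))

def entails(f, g):     # semantic consequence, via the (f -> g) theorem above
    return is_valid(('imp', f, g))

# Examples matching the definitions above:
assert is_valid(('or', 'p', ('not', 'p')))           # excluded middle
assert not is_consistent(('and', 'p', ('not', 'p'))) # (p ∧ ¬p) is a contradiction
assert entails('p', ('or', 'p', 'q'))                # p ⊨ (p ∨ q)
```

Note that `entails` relies on the deduction-style equivalence stated above: φ ⊨ ψ exactly when ⊨ (φ → ψ).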
== Proof systems ==
Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems, according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence (φ ⊨ ψ), whereas syntactic proof systems rely on syntactic consequence (φ ⊢ ψ). Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system. This section gives a very brief overview of the kinds of proof systems, with anchors to the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one.
=== Semantic proof systems ===
Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊨ ψ, which indicates that if φ is true, then ψ must also be true in every possible interpretation.
==== Truth tables ====
A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario. By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory. See § Semantic proof via truth tables.
==== Semantic tableaux ====
A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition. It constructs a tree where each branch represents a possible interpretation of the propositions involved. If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology. See § Semantic proof via tableaux.
=== Syntactic proof systems ===
Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, φ ⊢ ψ, signifies that ψ can be derived from φ using the rules of the formal system.
==== Axiomatic systems ====
An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived. In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms. See § Syntactic proof via axioms.
==== Natural deduction ====
Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning. Each rule reflects a particular logical connective and shows how it can be introduced or eliminated. See § Syntactic proof via natural deduction.
==== Sequent calculus ====
The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas. Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic.
== Semantic proof via truth tables ==
Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula. If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation). Further, if (and only if) ¬φ is valid, then φ is inconsistent.
For instance, this table shows that "p → (q ∨ r → (r → ¬p))" is not valid:
The computation of the last column of the third line may be displayed as follows:
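The same check can be reproduced programmatically. The sketch below (an illustration, not the article's own table) evaluates p → ((q ∨ r) → (r → ¬p)) under all eight assignments and reports the falsifying lines, which witness that the formula is not valid:

```python
from itertools import product

def formula(p, q, r):
    # p → ((q ∨ r) → (r → ¬p)), with → read as material implication
    imp = lambda a, b: (not a) or b
    return imp(p, imp(q or r, imp(r, not p)))

# One row per interpretation, in the usual T-first ordering
rows = [(p, q, r, formula(p, q, r))
        for p, q, r in product([True, False], repeat=3)]
for p, q, r, v in rows:
    print(p, q, r, '->', v)

falsifying = [(p, q, r) for p, q, r, v in rows if not v]
print('not valid' if falsifying else 'valid', falsifying)
# The formula fails exactly on the assignments where p and r are both true.
```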
Further, using the theorem that φ ⊨ ψ if, and only if, (φ → ψ) is valid, we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: {φ₁, φ₂, φ₃, ..., φₙ} ⊨ ψ if, and only if, we can produce a truth table that comes out all true for the formula ((φ₁ ∧ φ₂ ∧ ... ∧ φₙ) → ψ) (that is, if ⊨ ((φ₁ ∧ φ₂ ∧ ... ∧ φₙ) → ψ)).
== Semantic proof via tableaux ==
Since truth tables have 2n lines for n variables, they can be tiresomely long for large values of n. Analytic tableaux are a more efficient, but nevertheless mechanical, semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false."
Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below. These rules use "signed formulas", where a signed formula is an expression TX or FX, where X is an (unsigned) formula of the language ℒ. (Informally, TX is read "X is true", and FX is read "X is false".) Their formal semantic definition is that "under any interpretation, a signed formula TX is called true if X is true, and false if X is false, whereas a signed formula FX is called false if X is true, and true if X is false."
1)  T∼X / FX        F∼X / TX
2)  T(X ∧ Y) / TX, TY        F(X ∧ Y) / FX | FY
3)  T(X ∨ Y) / TX | TY        F(X ∨ Y) / FX, FY
4)  T(X ⊃ Y) / FX | TY        F(X ⊃ Y) / TX, FY

(In each rule, the signed formula before the slash is the premise and the signed formulas after it are the conclusions.)
In this notation, rule 2 means that T(X ∧ Y) yields both TX and TY, whereas F(X ∧ Y) branches into FX and FY. The notation is to be understood analogously for rules 3 and 4. Often, in tableaux for classical logic, the signed formula notation is simplified so that Tφ is written simply as φ, and Fφ as ¬φ, which accounts for naming rule 1 the "Rule of Double Negation".
One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing a complete tableau. In some cases, a branch can come to contain both TX and FX for some X, which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.
To construct a tableau for an argument ⟨{φ₁, φ₂, φ₃, ..., φₙ}, ψ⟩, one first writes out the set of premise formulas, {φ₁, φ₂, φ₃, ..., φₙ}, with one formula on each line, signed with T (that is, Tφ for each φ in the set); and together with those formulas (the order is unimportant), one also writes out the conclusion, ψ, signed with F (that is, Fψ). One then produces a truth tree (analytic tableau) by using all those lines according to the rules. A closed tree will be proof that the argument was valid, in virtue of the fact that φ ⊨ ψ if, and only if, {φ, ∼ψ} is inconsistent (also written as φ, ∼ψ ⊨).
== List of classically valid argument forms ==
Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold. We use φ ⟚ ψ to denote equivalence of φ and ψ, that is, as an abbreviation for both φ ⊨ ψ and ψ ⊨ φ; as an aid to reading the symbols, a description of each formula is given. The description reads the symbol ⊨ (called the "double turnstile") as "therefore", which is a common reading of it, although many authors prefer to read it as "entails", or as "models".
== Syntactic proof via natural deduction ==
Natural deduction, since it is a method of syntactical proof, is specified by providing inference rules (also called rules of proof) for a language with the typical set of connectives {−, &, ∨, →, ↔}; no axioms are used other than these rules. The rules are covered below, and a proof example is given afterwards.
=== Notation styles ===
Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. The § Gentzen notation, which was covered earlier for a short argument, can actually be stacked to produce large tree-shaped natural deduction proofs—not to be confused with "truth trees", which is another name for analytic tableaux. There is also a style due to Stanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes, and there is a simplification of Jaśkowski's style due to Fredric Fitch (Fitch notation), where the boxes are simplified to simple horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that are under the supposition. Lastly, there is the only notation style which will actually be used in this article, which is due to Patrick Suppes, but was much popularized by E.J. Lemmon and Benson Mates. This method has the advantage that, graphically, it is the least intensive to produce and display.
A proof, then, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. See the § Natural deduction proof example.
=== Inference rules ===
Natural deduction inference rules, due ultimately to Gentzen, are given below. There are ten primitive rules of proof: the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule of reductio ad absurdum. Disjunctive syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT and DN are commonly given rules, although they are not primitive.
=== Natural deduction proof example ===
The proof below derives −P from P → Q and −Q using only MPP and RAA, which shows that MTT is not a primitive rule, since it can be derived from those two other rules.
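That MTT is at least semantically sound can also be confirmed by brute force: under every assignment making both P → Q and ¬Q true, ¬P is true as well. A quick sketch:

```python
from itertools import product

imp = lambda a, b: (not a) or b
for P, Q in product([True, False], repeat=2):
    if imp(P, Q) and (not Q):    # both premises true under this assignment
        assert not P              # ...so the conclusion ¬P holds
print("MTT is semantically valid")
```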
== Syntactic proof via axioms ==
It is possible to perform proofs axiomatically, which means that certain tautologies are taken as self-evident and various others are deduced from them using modus ponens as an inference rule, as well as a rule of substitution, which permits replacing any well-formed formula with any substitution-instance of it. Alternatively, one uses axiom schemas instead of axioms, and no rule of substitution is used.
This section gives the axioms of some historically notable axiomatic systems for propositional logic. For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the article Axiomatic system (logic).
=== Frege's Begriffsschrift ===
Although axiomatic proof has been used since the famous Ancient Greek textbook, Euclid's Elements of Geometry, in propositional logic it dates back to Gottlob Frege's 1879 Begriffsschrift. Frege's system used only implication and negation as connectives. It had six axioms:
Proposition 1: a → (b → a)
Proposition 2: (c → (b → a)) → ((c → b) → (c → a))
Proposition 8: (d → (b → a)) → (b → (d → a))
Proposition 28: (b → a) → (¬a → ¬b)
Proposition 31: ¬¬a → a
Proposition 41: a → ¬¬a
These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic.
=== Łukasiewicz's P2 ===
Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence CCNpNqCpq". Translated out of Łukasiewicz's Polish notation into modern notation, this means (¬p → ¬q) → (p → q). Hence, Łukasiewicz is credited with this system of three axioms:
p → (q → p)
(p → (q → r)) → ((p → q) → (p → r))
(¬p → ¬q) → (q → p)
Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2 and helped popularize it.
==== Schematic form of P2 ====
One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for any well-formed formulas), the axioms are given as:
φ → (ψ → φ)
(φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
(¬φ → ¬ψ) → (ψ → φ)
The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. It has also been attributed to Hilbert, and named ℋ in this context.
==== Proof example in P2 ====
As an example, a proof of A → A in P2 is given below. First, the axioms are given names:
(A1) (p → (q → p))
(A2) ((p → (q → r)) → ((p → q) → (p → r)))
(A3) ((¬p → ¬q) → (q → p))
And the proof is as follows:
1. A → ((B → A) → A) (instance of (A1))
2. (A → ((B → A) → A)) → ((A → (B → A)) → (A → A)) (instance of (A2))
3. (A → (B → A)) → (A → A) (from (1) and (2) by modus ponens)
4. A → (B → A) (instance of (A1))
5. A → A (from (4) and (3) by modus ponens)
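A proof like this can be verified mechanically: each modus ponens step requires that its major premise be exactly the implication from its minor premise to its conclusion. The sketch below (the tuple representation and helper names are illustrative assumptions) encodes the five lines above and checks both applications of modus ponens:

```python
# Formulas as nested tuples: atoms are strings, implications are ('imp', a, b).
def imp(a, b):
    return ('imp', a, b)

A, B = 'A', 'B'
proof = [
    imp(A, imp(imp(B, A), A)),                               # 1: instance of (A1)
    imp(imp(A, imp(imp(B, A), A)),
        imp(imp(A, imp(B, A)), imp(A, A))),                  # 2: instance of (A2)
    imp(imp(A, imp(B, A)), imp(A, A)),                       # 3: MP from 1, 2
    imp(A, imp(B, A)),                                       # 4: instance of (A1)
    imp(A, A),                                               # 5: MP from 4, 3
]

def mp_ok(major, minor, conclusion):
    """Modus ponens is applied correctly iff major == (minor -> conclusion)."""
    return major == ('imp', minor, conclusion)

assert mp_ok(proof[1], proof[0], proof[2])   # line 3 from lines 1 and 2
assert mp_ok(proof[2], proof[3], proof[4])   # line 5 from lines 4 and 3
print("proof of A → A checks out")
```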
== Solvers ==
One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula is decidable. Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
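The core of DPLL — unit propagation plus splitting on a literal over a formula in conjunctive normal form — fits in a few lines. The sketch below is a simplified illustration, not the 1962 algorithm in full detail; it uses DIMACS-style integer literals (−3 meaning ¬x₃), with each clause a frozenset:

```python
def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    clauses = [c for c in clauses if not c & assignment]        # drop satisfied clauses
    clauses = [c - {-l for l in assignment} for c in clauses]   # drop falsified literals
    if any(not c for c in clauses):
        return None                      # an empty clause means a conflict
    if not clauses:
        return assignment                # every clause satisfied
    # Unit propagation: a one-literal clause forces that literal.
    for c in clauses:
        if len(c) == 1:
            (l,) = c
            return dpll(clauses, assignment | {l})
    # Split on the first literal of the first clause.
    l = next(iter(clauses[0]))
    return (dpll(clauses, assignment | {l})
            or dpll(clauses, assignment | {-l}))

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ ¬x3 is satisfiable (x3 false forces x1 false, then x2 true)
model = dpll([frozenset({1, 2}), frozenset({-1, 3}), frozenset({-3})])
assert model is not None and 2 in model
# x1 ∧ ¬x1 is unsatisfiable
assert dpll([frozenset({1}), frozenset({-1})]) is None
```

Production solvers add conflict-driven clause learning, watched literals, and restarts on top of this skeleton, which is where the speed of modern tools such as Chaff came from.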
== See also ==
=== Higher logical levels ===
First-order logic
Second-order propositional logic
Second-order logic
Higher-order logic
=== Related topics ===
== Notes ==
== References ==
== Further reading ==
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY.
Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands.
Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978.
Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
Lambek, J. and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press, Cambridge, UK.
Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company.
=== Related works ===
Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2.
== External links ==
Klement, Kevin C. "Propositional Logic". In Fieser, James; Dowden, Bradley (eds.). Internet Encyclopedia of Philosophy. Retrieved 7 April 2025.
Franks, Curtis (2024). "Propositional Logic". In Zalta, Edward N.; Nodelman, Uri (eds.). Stanford Encyclopedia of Philosophy (Winter 2024 ed.). Metaphysics Research Lab, Stanford University. Retrieved 7 April 2025.
Formal Predicate Calculus, contains a systematic formal development with axiomatic proof
forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sentential logic.
Chapter 2 / Propositional Logic from Logic In Action
Propositional sequent calculus prover on Project Nayuki. (note: implication can be input in the form !X|Y, and a sequent can be a single formula prefixed with > and having no commas)
Propositional Logic - A Generative Grammar
A Propositional Calculator that helps to understand simple expressions
In formal logic and related branches of mathematics, a functional predicate, or function symbol, is a logical symbol that may be applied to an object term to produce another object term.
Functional predicates are also sometimes called mappings, but that term has additional meanings in mathematics.
In a model, a function symbol will be modelled by a function.
Specifically, the symbol F in a formal language is a functional symbol if, given any symbol X representing an object in the language, F(X) is again a symbol representing an object in that language.
In typed logic, F is a functional symbol with domain type T and codomain type U if, given any symbol X representing an object of type T, F(X) is a symbol representing an object of type U.
One can similarly define function symbols of more than one variable, analogous to functions of more than one variable; a function symbol in zero variables is simply a constant symbol.
Now consider a model of the formal language, with the types T and U modelled by sets [T] and [U] and each symbol X of type T modelled by an element [X] in [T].
Then F can be modelled by the set
[F] := { ([X], [F(X)]) : [X] ∈ [T] },
which is simply a function with domain [T] and codomain [U].
It is a requirement of a consistent model that [F(X)] = [F(Y)] whenever [X] = [Y].
== Introducing new function symbols ==
In a treatment of predicate logic that allows one to introduce new predicate symbols, one will also want to be able to introduce new function symbols. Given the function symbols F and G, one can introduce a new function symbol F ∘ G, the composition of F and G, satisfying (F ∘ G)(X) = F(G(X)), for all X.
Of course, the right side of this equation doesn't make sense in typed logic unless the domain type of F matches the codomain type of G, so this is required for the composition to be defined.
One also gets certain function symbols automatically.
In untyped logic, there is an identity predicate id that satisfies id(X) = X for all X.
In typed logic, given any type T, there is an identity predicate idT with domain and codomain type T; it satisfies idT(X) = X for all X of type T.
Similarly, if T is a subtype of U, then there is an inclusion predicate of domain type T and codomain type U that satisfies the same equation; there are additional function symbols associated with other ways of constructing new types out of old ones.
Additionally, one can define functional predicates after proving an appropriate theorem.
(If you're working in a formal system that doesn't allow you to introduce new symbols after proving theorems, then you will have to use relation symbols to get around this, as in the next section.)
Specifically, if you can prove that for every X (or every X of a certain type), there exists a unique Y satisfying some condition P, then you can introduce a function symbol F to indicate this.
Note that P will itself be a relational predicate involving both X and Y.
So if there is such a predicate P and a theorem:
For all X of type T, for some unique Y of type U, P(X,Y),
then you can introduce a function symbol F of domain type T and codomain type U that satisfies:
For all X of type T, for all Y of type U, P(X,Y) if and only if Y = F(X).
== Doing without functional predicates ==
Many treatments of predicate logic don't allow functional predicates, only relational predicates.
This is useful, for example, in the context of proving metalogical theorems (such as Gödel's incompleteness theorems), where one doesn't want to allow the introduction of new functional symbols (nor any other new symbols, for that matter).
But there is a method of replacing functional symbols with relational symbols wherever the former may occur; furthermore, this is algorithmic and thus suitable for applying most metalogical theorems to the result.
Specifically, if F has domain type T and codomain type U, then it can be replaced with a predicate P of type (T,U).
Intuitively, P(X,Y) means F(X) = Y.
Then whenever F(X) would appear in a statement, you can replace it with a new symbol Y of type U and include another statement P(X,Y).
To be able to make the same deductions, you need an additional proposition:
For all X of type T, for some unique Y of type U, P(X,Y).
(Of course, this is the same proposition that had to be proven as a theorem before introducing a new function symbol in the previous section.)
Because the elimination of functional predicates is both convenient for some purposes and possible, many treatments of formal logic do not deal explicitly with function symbols but instead use only relation symbols; another way to think of this is that a functional predicate is a special kind of predicate, specifically one that satisfies the proposition above.
This may seem to be a problem if you wish to specify a proposition schema that applies only to functional predicates F; how do you know ahead of time whether it satisfies that condition?
To get an equivalent formulation of the schema, first replace anything of the form F(X) with a new variable Y.
Then universally quantify over each Y immediately after the corresponding X is introduced (that is, after X is quantified over, or at the beginning of the statement if X is free), and guard the quantification with P(X,Y).
Finally, make the entire statement a material consequence of the uniqueness condition for a functional predicate above.
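Over a finite domain, the condition that separates a functional predicate from an arbitrary relation — for every X there is a unique Y with P(X, Y) — can be tested directly, and a functional relation can then be collapsed back into the function it encodes. A small illustrative sketch (all names are assumptions for the example):

```python
def is_functional(P, domain):
    """P is a set of (x, y) pairs; check that every x in the domain
    has exactly one y with (x, y) in P — the ∃! condition above."""
    return all(sum(1 for (a, b) in P if a == x) == 1 for x in domain)

def as_function(P, domain):
    """Recover F from a functional relation P, as a dict x -> y."""
    if not is_functional(P, domain):
        raise ValueError("relation is not functional on this domain")
    return {a: b for (a, b) in P if a in domain}

successor = {(0, 1), (1, 2), (2, 3)}
assert is_functional(successor, {0, 1, 2})
assert as_function(successor, {0, 1, 2})[1] == 2

not_functional = {(0, 1), (0, 2)}   # two Y's for X = 0, so no F exists
assert not is_functional(not_functional, {0})
```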
Let us take as an example the axiom schema of replacement in Zermelo–Fraenkel set theory.
(This example uses mathematical symbols.)
This schema states (in one form), for any functional predicate F in one variable:
∀A, ∃B, ∀C, C ∈ A → F(C) ∈ B.
First, we must replace F(C) with some other variable D:
∀A, ∃B, ∀C, C ∈ A → D ∈ B.
Of course, this statement isn't correct; D must be quantified over just after C:
∀A, ∃B, ∀C, ∀D, C ∈ A → D ∈ B.
We still must introduce P to guard this quantification:
∀A, ∃B, ∀C, ∀D, P(C, D) → (C ∈ A → D ∈ B).
This is almost correct, but it applies to too many predicates; what we actually want is:
(∀X, ∃!Y, P(X, Y)) → (∀A, ∃B, ∀C, ∀D, P(C, D) → (C ∈ A → D ∈ B)).
This version of the axiom schema of replacement is now suitable for use in a formal language that doesn't allow the introduction of new function symbols. Alternatively, one may interpret the original statement as a statement in such a formal language; it was merely an abbreviation for the statement produced at the end.
== See also ==
Function symbol (logic)
Logical connective
Logical constant
An integrated circuit (IC), also known as a microchip or simply chip, is a set of electronic circuits, consisting of various electronic components (such as transistors, resistors, and capacitors) and their interconnections. These components are etched onto a small, flat piece ("chip") of semiconductor material, usually silicon. Integrated circuits are used in a wide range of electronic devices, including computers, smartphones, and televisions, to perform various functions such as processing and storing information. They have greatly impacted the field of electronics by enabling device miniaturization and enhanced functionality.
Integrated circuits are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count.
The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers.
Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s.
ICs have three main advantages over circuits constructed out of discrete components: size, cost and performance. The size and cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated.
== Terminology ==
An integrated circuit is defined as: A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce. In strict usage, integrated circuit refers to the single-piece circuit construction originally known as a monolithic integrated circuit, which comprises a single piece of silicon. In general usage, circuits not meeting this strict definition are sometimes referred to as ICs, which are constructed using many different technologies, e.g. 3D IC, 2.5D IC, MCM, thin-film transistors, thick-film technologies, or hybrid integrated circuits. The choice of terminology frequently appears in discussions related to whether Moore's Law is obsolete.
== History ==
An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube, first made in 1926. Unlike ICs, it was designed for tax avoidance: in Germany, radio receivers were taxed according to the number of tube holders they contained, and the 3NF allowed a receiver to use a single tube holder. One million were manufactured, and they were "a first step in integration of radioelectronic devices". The device contained an amplifier, composed of three triodes, two capacitors and four resistors in a six-pin device. Radios with the Loewe 3NF were less expensive than other radios, showing one of the advantages of integration over using discrete components that would be seen decades later with ICs.
Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported.
Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He gave many symposia publicly to propagate his ideas and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.
The monolithic integrated circuit chip was enabled by the inventions of the planar process by Jean Hoerni and p–n junction isolation by Kurt Lehovec. Hoerni's invention was built on Carl Frosch and Lincoln Derick's work on surface protection and passivation by silicon dioxide masking and predeposition, as well as the work of Fuller, Ditzenberger, and others on the diffusion of impurities into silicon.
=== The first integrated circuits ===
A precursor idea to the IC was to create small ceramic substrates (so-called micromodules), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in physics for his part in the invention of the integrated circuit.
However, Kilby's invention was not a true monolithic integrated circuit chip since it had external gold-wire connections, which would have made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. More practical than Kilby's implementation, Noyce's chip was made of silicon, whereas Kilby's was made of germanium, and Noyce's was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni and included the critical on-chip aluminum interconnecting lines. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's.
NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965.
=== TTL integrated circuits ===
Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology during the 1970s to early 1980s.
Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL).
=== MOS integrated circuits ===
Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–silicon field-effect transistors). The MOSFET, invented at Bell Labs between 1955 and 1960, made it possible to build high-density integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps and could be easily isolated from each other. Its advantage for integrated circuits was pointed out by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959.
The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s.
Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the inventions of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip.
At first, MOS-based computers only made sense when high density was required, such as in aerospace applications and pocket calculators. Computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008 until the early 1980s.
Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated it would double every year, but revised the claim to every two years in 1975. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017, with a corresponding million-fold increase in transistors per unit area. As of 2016, typical chip areas range from a few square millimeters to around 600 mm², with up to 25 million transistors per mm².
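The figures above can be checked with simple arithmetic. A minimal sketch (illustrative only, not from any cited source): a linear feature shrink from roughly 10 micrometres to 10 nanometres gives a million-fold density gain, since transistors per unit area scale with the square of the linear shrink, and a two-year doubling period compounds similarly over decades.

```python
# Illustrative arithmetic behind Moore's law and feature-size scaling.

def doublings(years, period=2):
    """Number of transistor-count doublings over `years`, one per `period` years."""
    return years / period

# Linear feature shrink from ~10 um (early 1970s) to 10 nm (2017):
shrink = 10_000 / 10          # 10 um expressed in nm, divided by 10 nm
area_gain = shrink ** 2       # density scales with the square of the linear shrink

print(area_gain)                        # 1,000,000 -- the "million-fold" increase
print(2 ** doublings(2017 - 1971))      # ~8.4e6 -- count growth at 2-year doubling
```

The two numbers agree in order of magnitude, which is why feature shrink alone accounts for most of Moore's-law growth, with larger die sizes contributing the rest.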
The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems.
Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors.
Charge-coupled devices, and the closely related active-pixel sensors, are chips that are sensitive to light. They have largely replaced photographic film in scientific, medical, and consumer applications. Billions of these devices are now produced each year for applications such as cellphones, tablets, and digital cameras. Work in this sub-field of ICs was recognized with the Nobel Prize in Physics in 2009.
Very small mechanical devices driven by electricity can be integrated onto chips, a technology known as microelectromechanical systems (MEMS). These devices were developed in the late 1980s and are used in a variety of commercial and military applications. Examples include DLP projectors, inkjet printers, and accelerometers and MEMS gyroscopes used to deploy automobile airbags.
Since the early 2000s, the integration of optical functionality (optical computing) into silicon chips has been actively pursued in both academic research and in industry, resulting in the successful commercialization of silicon-based integrated optical transceivers combining optical devices (modulators, detectors, routing) with CMOS-based electronics. Photonic integrated circuits that use light, such as Lightelligence's PACE (Photonic Arithmetic Computing Engine), are also being developed, using the emerging field of physics known as photonics.
Integrated circuits are also being developed for sensor applications in medical implants or other bioelectronic devices. Special sealing techniques have to be applied in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials.
As of 2018, the vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as:
various approaches to stacking several layers of transistors to make a three-dimensional integrated circuit (3DIC), such as through-silicon via, "monolithic 3D", stacked wire bonding, and other methodologies.
transistors built from other materials: graphene transistors, molybdenite transistors, carbon nanotube field-effect transistor, gallium nitride transistor, transistor-like nanowire electronic devices, organic field-effect transistor, etc.
fabricating transistors over the entire surface of a small sphere of silicon.
modifications to the substrate, typically to make "flexible transistors" for a flexible display or other flexible electronics, possibly leading to a roll-away computer.
As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules/chiplets, three-dimensional integrated circuits, package on package, High Bandwidth Memory and through-silicon vias with die stacking to increase performance and reduce size, without having to reduce the size of the transistors. Such techniques are collectively known as advanced packaging. Advanced packaging is mainly divided into 2.5D and 3D packaging: 2.5D describes approaches such as multi-chip modules, while 3D describes approaches where dies are stacked in one way or another, such as package on package and high bandwidth memory. All these approaches involve two or more dies in a single package. Alternatively, approaches such as 3D NAND stack multiple layers on a single die. Techniques have also been demonstrated for improving cooling, including microfluidic cooling built into integrated circuits, Peltier thermoelectric coolers on solder bumps, and thermal solder bumps used exclusively for heat dissipation, as used in flip-chip packaging.
== Design ==
The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units.
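The economics described above can be sketched numerically. A minimal model (the dollar figures below are illustrative assumptions, not from the article): the effective per-chip cost is the marginal manufacturing cost plus the NRE divided by the production volume, so NRE dominates at low volumes and vanishes at high ones.

```python
# Illustrative NRE amortization model for IC production economics.

def unit_cost(nre_dollars, marginal_dollars, volume):
    """Effective cost per chip once NRE is spread over the production volume."""
    return marginal_dollars + nre_dollars / volume

# Assumed figures: $30M NRE, $5 marginal cost per chip.
print(unit_cost(30e6, 5.0, 1_000_000))    # 35.0 -- NRE dominates at 1M units
print(unit_cost(30e6, 5.0, 100_000_000))  # 5.3  -- NRE negligible at 100M units
```

This is why, as the paragraph notes, complex ICs only make economic sense at production volumes in the millions.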
Modern semiconductor chips have billions of components, and are far too complex to be designed by hand. Software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design, verify, and analyze entire semiconductor chips. Some of the latest EDA tools use artificial intelligence (AI) to help engineers save time and improve chip performance.
== Types ==
Integrated circuits can be broadly classified into analog, digital and mixed signal, consisting of analog and digital signaling on the same IC.
Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, use boolean algebra to process "one" and "zero" signals.
Among the most advanced integrated circuits are the microprocessors or "cores", used in personal computers, cell-phones, etc. Several cores may be integrated together in a single IC or chip. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits.
In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do various LSI-type functions such as logic gates, adders and registers. Programmability comes in various forms – devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs) which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz.
Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), process continuous signals, and perform analog functions such as amplification, active filtering, demodulation, and mixing.
ICs can combine analog and digital circuits on a chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies.
Modern electronic component distributors often further sub-categorize integrated circuits:
Digital ICs are categorized as logic ICs (such as microprocessors and microcontrollers), memory chips (such as MOS memory and floating-gate memory), interface ICs (level shifters, serializer/deserializer, etc.), power management ICs, and programmable devices.
Analog ICs are categorized as linear integrated circuits and RF circuits (radio frequency circuits).
Mixed-signal integrated circuits are categorized as data acquisition ICs (including A/D converters, D/A converters, digital potentiometers), clock/timing ICs, switched capacitor (SC) circuits, and RF CMOS circuits.
Three-dimensional integrated circuits (3D ICs) are categorized into through-silicon via (TSV) ICs and Cu-Cu connection ICs.
== Manufacturing ==
=== Fabrication ===
The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure.
Semiconductor ICs are fabricated in a planar process which includes three key process steps – photolithography, deposition (such as chemical vapor deposition), and etching. The main process steps are supplemented by doping and cleaning. More recent or high-performance ICs may use multi-gate FinFET or GAAFET transistors instead of planar ones, starting at the 22 nm node (Intel) or 16/14 nm nodes.
Mono-crystal silicon wafers are used in most applications (or for special applications, other semiconductors such as gallium arsenide are used). The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced to a semiconductor to modulate its electronic properties. Doping is the process of adding dopants to a semiconductor material.
Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (doped polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.
In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer (this is called "the self-aligned gate").
Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs.
Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance.
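The length-to-width relationship described above is the standard sheet-resistance calculation: the resistance equals the sheet resistivity (in ohms per square) multiplied by the number of "squares" L/W. A minimal sketch, where the 50 Ω/square value is an assumed example rather than a figure from the article:

```python
# Sheet-resistance model for an on-chip meandering resistor stripe.

def resistor_ohms(length_um, width_um, sheet_resistance_ohms_per_sq):
    """Resistance = sheet resistivity times the number of 'squares' (L/W)."""
    return sheet_resistance_ohms_per_sq * (length_um / width_um)

# Assumed example: a 100 um long, 2 um wide stripe at 50 ohm/square.
print(resistor_ohms(100, 2, 50))  # 2500.0 ohms
```

Because only the L/W ratio matters, layout designers count resistor values in "squares", and a longer meander simply adds more squares in series.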
More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices.
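The low power draw of CMOS follows from the classic dynamic-power estimate P = α·C·V²·f: power is consumed almost entirely while switching, with near-zero static current. A minimal sketch with illustrative assumed values (activity factor, capacitance, supply voltage, and frequency are not from the article):

```python
# Classic CMOS dynamic (switching) power estimate: P = alpha * C * V^2 * f.

def dynamic_power_watts(activity, capacitance_f, vdd_volts, freq_hz):
    """Switching power; static power is near zero, which is why CMOS
    consumes far less than always-conducting bipolar logic."""
    return activity * capacitance_f * vdd_volts ** 2 * freq_hz

# Assumed values: 10% activity factor, 1 nF total switched capacitance, 1 V, 1 GHz.
print(dynamic_power_watts(0.1, 1e-9, 1.0, 1e9))  # 0.1 W
```

The quadratic dependence on supply voltage is also why voltage scaling has been central to keeping chip power manageable as frequencies rose.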
A random-access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.
Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are thermosonically bonded to pads, usually found around the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas which provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.
As of 2022, a fabrication facility (commonly known as a semiconductor fab) can cost over US$12 billion to construct. The cost of a fabrication facility rises over time because of increased complexity of new products; this is known as Rock's law. Such a facility features:
Wafers up to 300 mm in diameter (wider than a common dinner plate).
As of 2022, 5 nm transistors.
Copper interconnects where copper wiring replaces aluminum for interconnects.
Low-κ dielectric insulators.
Silicon on insulator (SOI).
Strained silicon, in a process used by IBM known as strained silicon directly on insulator (SSDOI).
Multigate devices such as tri-gate transistors.
ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs and outsource all manufacturing to pure play foundries such as TSMC. These foundries may offer IC design services.
=== Packaging ===
The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic, which is commonly cresol-formaldehyde-novolac. In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.
In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high pin count devices, though PGA packages are still used for high-end microprocessors.
Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket but are much harder to replace in case of device failure.
Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. As of 2018, AMD uses PGA packages on mainstream desktop processors and BGA packages on mobile processors, while high-end desktop and server microprocessors use LGA packages.
Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, and through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures along this path have very different electrical properties from those encountered by signals travelling between parts of the same die. As a result, off-chip signals require special design techniques to ensure they are not corrupted, and consume much more electric power than signals confined to the die itself.
When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM), is created by combining multiple dies on a small substrate often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy.
Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date-code to identify when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics.
The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.
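The YYWW convention above is straightforward to decode. A minimal sketch (the helper name is ours; the century assumption is hardcoded, since a two-digit year such as "83" is ambiguous on its own and is taken here to mean 19xx, matching the article's example):

```python
# Decode a four-digit YYWW IC date code, e.g. "8341" -> week 41 of 1983.

def decode_date_code(code):
    """Split a YYWW code into (year, week); two-digit years assumed to be 19xx."""
    year, week = int(code[:2]), int(code[2:])
    return 1900 + year, week

print(decode_date_code("8341"))  # (1983, 41) -- approximately October 1983
```

Real markings vary by manufacturer, so in practice the code format should be confirmed against the specific vendor's datasheet.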
== Intellectual property ==
The possibility of copying an integrated circuit by photographing each of its layers and preparing photomasks for its production from the photographs prompted the introduction of legislation protecting layout designs. The US Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits.
A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The treaty is currently not in force, but was partially integrated into the TRIPS agreement.
There are several United States patents connected to the integrated circuit, which include patents by J.S. Kilby US3,138,743, US3,261,081, US3,434,015 and by R.F. Stewart US3,138,747.
National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act, 1988, c. 48, § 213, after it initially took the position that its copyright law fully protected chip topographies. See British Leyland Motor Corp. v. Armstrong Patents Co.
The US chip industry criticized the UK copyright approach as inadequate, and these criticisms contributed to further developments in chip rights legislation.
Australia passed the Circuit Layouts Act of 1989 as a sui generis form of chip protection. Korea passed the Act Concerning the Layout-Design of Semiconductor Integrated Circuits in 1992.
== Generations ==
In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA.
Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production.
=== Small-scale integration (SSI) ===
The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI.
SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass-production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production.
The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the 1970s decade. A typical application was FM inter-carrier sound processing in television receivers.
The first application MOS chips were small-scale integration (SSI) chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites.
=== Medium-scale integration (MSI) ===
The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI).
MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips.
In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors on a single chip. The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s.
=== Large-scale integration (LSI) ===
Further development, driven by the same MOSFET scaling technology and economic factors, led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors per chip.
The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith-tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, placed under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
=== Very-large-scale integration (VLSI) ===
"Very-large-scale integration" (VLSI) is a development that started with hundreds of thousands of transistors in the early 1980s. As of 2023, maximum transistor counts continue to grow beyond 5.3 trillion transistors per chip.
Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved, making it practical to finish designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. The complexity and density of modern VLSI devices made it no longer feasible to check the masks or do the original design by hand. Instead, engineers use EDA tools to perform most functional verification work.
In 1986, one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989, and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.
=== ULSI, WSI, SoC and 3D-IC ===
To reflect further growth in complexity, the term ULSI, standing for "ultra-large-scale integration", was proposed for chips of more than 1 million transistors.
Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine still outweighs the cost of using separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called Network-on-Chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures.
A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.
== Silicon labeling and graffiti ==
To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These artistic additions, often created with great attention to detail, showcase the designers' creativity and add a touch of personality to otherwise utilitarian components. These are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling.
== ICs and IC families ==
The 555 timer IC
The Operational amplifier
7400-series integrated circuits
4000-series integrated circuits, the CMOS counterpart to the 7400 series (see also: 74HC00 series)
Intel 4004, generally regarded as the first commercially available microprocessor, which led to the 8008, the famous 8080 CPU, the 8086, 8088 (used in the original IBM PC), and the fully-backward compatible (with the 8088/8086) 80286, 80386/i386, i486, etc.
The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers of the early 1980s
The Motorola 6800 series of computer-related chips, leading to the 68000 and 88000 series (the 68000 series was very successful and was used in the Apple Lisa and pre-PowerPC-based Macintosh, Commodore Amiga, Atari ST/TT/Falcon030, and NeXT families of computers, along with many models of workstations and servers from many manufacturers in the 80s, along with many other systems and devices)
The LM-series of analog integrated circuits
== See also ==
Central processing unit
Chip carrier
CHIPS and Science Act
Chipset
Czochralski method
Dark silicon
Ion implantation
Integrated injection logic
Integrated passive devices
Interconnect bottleneck
Heat generation in integrated circuits
High-temperature operating life
Microelectronics
Monolithic microwave integrated circuit
Multi-threshold CMOS
Silicon–germanium
Sound chip
SPICE
Thermal simulations for integrated circuits
Hybrot
== References ==
== Further reading ==
Veendrick, H.J.M. (2025). Nanometer CMOS ICs, from Basics to ASICs. Springer. ISBN 978-3-031-64248-7. OCLC 1463505655.
Baker, R.J. (2010). CMOS: Circuit Design, Layout, and Simulation (3rd ed.). Wiley-IEEE. ISBN 978-0-470-88132-3. OCLC 699889340.
Marsh, Stephen P. (2006). Practical MMIC design. Artech House. ISBN 978-1-59693-036-0. OCLC 1261968369.
Camenzind, Hans (2005). Designing Analog Chips (PDF). Virtual Bookworm. ISBN 978-1-58939-718-7. OCLC 926613209. Archived from the original (PDF) on 12 June 2017. Hans Camenzind invented the 555 timer
Hodges, David; Jackson, Horace; Saleh, Resve (2003). Analysis and Design of Digital Integrated Circuits. McGraw-Hill. ISBN 978-0-07-228365-5. OCLC 840380650.
Rabaey, J.M.; Chandrakasan, A.; Nikolic, B. (2003). Digital Integrated Circuits (2nd ed.). Pearson. ISBN 978-0-13-090996-1. OCLC 893541089.
Mead, Carver; Conway, Lynn (1991). Introduction to VLSI systems. Addison Wesley Publishing Company. ISBN 978-0-201-04358-7. OCLC 634332043.
== External links ==
Media related to Integrated circuits at Wikimedia Commons
The first monolithic integrated circuits
A large chart listing ICs by generic number including access to most of the datasheets for the parts.
The History of the Integrated Circuit
In mathematical logic, a theory can be extended with new constants or function names under certain conditions with assurance that the extension will introduce no contradiction. Extension by definitions is perhaps the best-known approach, but it requires unique existence of an object with the desired property. Addition of new names can also be done safely without uniqueness.
Suppose that a closed formula ∃x1 … ∃xm φ(x1, …, xm) is a theorem of a first-order theory T. Let T1 be a theory obtained from T by extending its language with new constants a1, …, am and adding a new axiom φ(a1, …, am). Then T1 is a conservative extension of T, which means that the theory T1 has the same set of theorems in the original language (i.e., without the constants ai) as the theory T.
Such a theory can also be conservatively extended by introducing a new functional symbol:
Suppose that a closed formula ∀x⃗ ∃y φ(y, x⃗) is a theorem of a first-order theory T, where we denote x⃗ := (x1, …, xn). Let T1 be a theory obtained from T by extending its language with a new functional symbol f (of arity n) and adding a new axiom ∀x⃗ φ(f(x⃗), x⃗). Then T1 is a conservative extension of T, i.e. the theories T and T1 prove the same theorems not involving the functional symbol f.
Shoenfield states the theorem for a new function name; constants are the same as functions of zero arguments. In formal systems that admit ordered tuples, extension by multiple constants as shown here can be accomplished by adding a single new constant tuple and taking the new constant names to be the components of that tuple.
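In a proof assistant the same move is available through a choice operator. The following is an illustrative sketch in Lean 4 (not from the source), where `Classical.choose` plays the role of the new constant and `Classical.choose_spec` of the new axiom:

```lean
variable {α : Type} (φ : α → Prop)

-- Given a theorem ∃ x, φ x, name a witness — the "new constant" a …
noncomputable def a (h : ∃ x, φ x) : α := Classical.choose h

-- … together with the "new axiom" φ(a), with no new logical strength.
theorem phi_a (h : ∃ x, φ x) : φ (a φ h) := Classical.choose_spec h
```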
== See also ==
Conservative extension
Extension by definition
== References ==
A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human. Dialogue systems employ one or more of text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.
The elements of a dialogue system are not precisely defined, as the idea is still under research; dialogue systems are, however, distinct from chatbots. The typical GUI wizard engages in a sort of dialogue, but it includes very few of the common dialogue system components, and its dialogue state is trivial.
== Background ==
Following dialogue systems based only on written text processing, which date from the early 1960s, the first spoken dialogue system was produced by a DARPA project in the US in 1977. After the end of this five-year project, several European projects produced the first dialogue systems able to speak multiple languages, including French, German and Italian. Those first systems were used in the telecom industry to provide various phone services in specific domains, e.g. automated agenda and train timetable services.
== Components ==
Which components are included in a dialogue system, and how those components divide up responsibilities, differs from system to system. Central to any dialogue system is the dialogue manager, the component that manages the state of the dialogue and the dialogue strategy. A typical activity cycle in a dialogue system contains the following phases:
The user speaks, and the input is converted to plain text by the system's input recogniser/decoder, which may include:
automatic speech recogniser (ASR)
gesture recogniser
handwriting recogniser
The text is analysed by a natural language understanding (NLU) unit, which may include:
Proper Name identification
part-of-speech tagging
Syntactic/semantic parser
The semantic information is analysed by the dialogue manager, which keeps the history and state of the dialogue and manages the general flow of the conversation.
Usually, the dialogue manager contacts one or more task managers, that have knowledge of the specific task domain.
The dialogue manager produces output using an output generator, which may include:
natural language generator
gesture generator
layout manager
Finally, the output is rendered using an output renderer, which may include:
text-to-speech engine (TTS)
talking head
robot or avatar
Dialogue systems that are based on a text-only interface (e.g. text-based chat) contain only stages 2–5.
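The activity cycle above can be sketched as a minimal pipeline. All class and function names below are hypothetical, chosen for illustration only; real systems plug in ASR engines, NLU parsers, task managers and TTS renderers at each stage:

```python
# Minimal sketch of a dialogue-system activity cycle (hypothetical names).

def recognise(user_input):          # stage 1: input recogniser/decoder
    return user_input.strip().lower()

def understand(text):               # stage 2: natural language understanding
    intent = "greet" if "hello" in text else "unknown"
    return {"intent": intent}

class DialogueManager:              # stage 3: keeps history and dialogue state
    def __init__(self):
        self.history = []

    def next_action(self, semantics):
        self.history.append(semantics)
        return "greet_back" if semantics["intent"] == "greet" else "clarify"

def generate(action):               # stage 4: output generator
    return {"greet_back": "Hello! How can I help?",
            "clarify": "Sorry, could you rephrase that?"}[action]

def render(response):               # stage 5: output renderer (plain text here)
    return response

dm = DialogueManager()
reply = render(generate(dm.next_action(understand(recognise("Hello there")))))
```

A text-only system, as noted above, would skip stage 1 and render stage 5 as plain text.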
== Types of systems ==
Dialogue systems fall into the following categories, which are listed here along a few dimensions. Many of the categories overlap and the distinctions may not be well established.
by modality
text-based
spoken dialogue system
graphical user interface
multi-modal
by device
telephone-based systems
PDA systems
in-car systems
robot systems
desktop/laptop systems
native
in-browser systems
in-virtual machine
in-virtual environment
robots
by style
command-based
menu-driven
natural language
speech graffiti
by initiative
system initiative
user initiative
mixed initiative
== Natural dialogue systems ==
"A Natural Dialogue System is a form of dialogue system that tries to improve usability and user satisfaction by imitating human behaviour" (Berg, 2014). It addresses the features of a human-to-human dialogue (e.g. sub dialogues and topic changes) and aims to integrate them into dialogue systems for human-machine interaction. Often, (spoken) dialogue systems require the user to adapt to the system because the system is only able to understand a very limited vocabulary, is not able to react to topic changes, and does not allow the user to influence the dialogue flow. Mixed-initiative is a way to enable the user to have an active part in the dialogue instead of only answering questions. However, the mere existence of mixed-initiative is not sufficient to be classified as a natural dialogue system. Other important aspects include:
Adaptivity of the system
Support of implicit confirmation
Usage of verification questions
Possibilities to correct information that has already been given
Over-informativeness (give more information than has been asked for)
Support negations
Understand references by analysing discourse and anaphora
Natural language generation to prevent monotonous and recurring prompts
Adaptive and situation-aware formulation
Social behaviour (greetings, the same level of formality as the user, politeness)
Quality of speech recognition and synthesis
Although most of these aspects are issues of many different research projects, there is a lack of tools that support the development of dialogue systems addressing these topics. Apart from VoiceXML that focuses on interactive voice response systems and is the basis for many spoken dialogue systems in industry (customer support applications) and AIML that is famous for the A.L.I.C.E. chatbot, none of these integrate linguistic features like dialogue acts or language generation. Therefore, NADIA (a research prototype) gives an idea of how to fill that gap and combines some of the aforementioned aspects like natural language generation, adaptive formulation, and sub dialogues.
== Performance ==
Some authors measure a dialogue system's performance as the percentage of sentences understood completely correctly, by comparing the semantic model the system produces for each sentence against a reference model (this measure is called Concept Sentence Accuracy or Sentence Understanding).
== Applications ==
Dialogue systems can support a broad range of applications in business enterprises, education, government, healthcare, and entertainment. For example:
Responding to customers' questions about products and services via a company's website or intranet portal
Customer service agent knowledge base: Allows agents to type in a customer's question and guide them with a response
Guided selling: Facilitating transactions by providing answers and guidance in the sales process, particularly for complex products being sold to novice customers
Help desk: Responding to internal employee questions, e.g., responding to HR questions
Website navigation: Guiding customers to relevant portions of complex websites—a Website concierge
Technical support: Responding to technical problems, such as diagnosing a problem with a product or device
Personalized service: Conversational agents can leverage internal and external databases to personalise interactions, such as answering questions about account balances, providing portfolio information, delivering frequent flier or membership information, for example
Training or education: They can provide problem-solving advice while the user learns
Simple dialogue systems are widely used to decrease the human workload in call centers. In this and other industrial telephony applications, the functionality provided by dialogue systems is known as interactive voice response or IVR.
Support scientist in data manipulation and analysis tasks, for example in genomics.
In some cases, conversational agents can interact with users using artificial characters. These agents are then referred to as embodied agents.
== Toolkits and architectures ==
A survey of current frameworks, languages and technologies for defining dialogue systems.
== See also ==
Call avoidance
== References ==
== Further reading ==
Will, Thomas (2007). Creating a Dynamic Speech Dialogue. VDM Verlag Dr. Müller. ISBN 978-3-8364-4990-8.
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time.
For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.
== Cost models ==
Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.
Two cost models are generally used:
the uniform cost model, also called unit-cost model (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved
The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
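The difference between the two models can be sketched for a single addition. The cost functions below are illustrative, not a standard API; the logarithmic model charges by operand bit length:

```python
def uniform_add_cost(a, b):
    # Uniform (unit-cost) model: one machine operation, whatever the operands.
    return 1

def logarithmic_add_cost(a, b):
    # Logarithmic model: cost proportional to the number of bits involved.
    return max(a.bit_length(), b.bit_length(), 1)
```

Under the uniform model, adding two 1000-bit cryptographic integers costs the same as adding 1 + 1; under the logarithmic model it costs about a thousand times more, which is why the latter is preferred for arbitrary-precision arithmetic.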
A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible.
== Run-time analysis ==
Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.
=== Shortcomings of empirical metrics ===
Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.
Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:
Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:
Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate.
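The contrast between the two growth rates can be sketched with worst-case comparison counts (a simplified model of the two searches, not a benchmark):

```python
import math

def linear_search_steps(n):
    # Worst case: every one of the n elements is inspected.
    return n

def binary_search_steps(n):
    # Worst case: about log2(n) + 1 comparisons, halving the range each time.
    return math.floor(math.log2(n)) + 1
```

Quadrupling the input quadruples the linear count but adds only two steps to the binary count, so the machine constant between Computer A and Computer B is eventually overwhelmed.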
=== Orders of growth ===
Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n2).
Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is O(n2), but the average-case run-time is O(n log n).
=== Empirical orders of growth ===
Assuming the run-time follows power rule, t ≈ kna, the coefficient a can be found by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)a so that a = log(t2/t1)/log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line)—but still could serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:
It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.
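The slope formula above translates directly into code; a sketch (hypothetical function name) with two synthetic measurement pairs:

```python
import math

def empirical_order(n1, t1, n2, t2):
    # Slope a of the run-time line on a log-log plot, assuming t ~ k * n^a.
    return math.log(t2 / t1) / math.log(n2 / n1)

# A linear algorithm: doubling the input doubles the run-time, so a ~ 1.
a_linear = empirical_order(1_000, 5.0, 2_000, 10.0)
# A quadratic algorithm: doubling the input quadruples the run-time, so a ~ 2.
a_quadratic = empirical_order(1_000, 5.0, 2_000, 20.0)
```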
=== Evaluating run-time complexity ===
The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:
1 get a positive integer n from input
2 if n > 10
3 print "This might take a while..."
4 for i = 1 to n
5 for j = 1 to i
6 print i * j
7 print "Done!"
A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is:
T1 + T2 + T3 + T7.
The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times, which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.
Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:
T6 + 2T6 + 3T6 + ⋯ + (n − 1)T6 + nT6
which can be factored as
[1 + 2 + 3 + ⋯ + (n − 1) + n]T6 = [(1/2)(n2 + n)]T6
The total time required to run the inner loop test can be evaluated similarly:
2T5 + 3T5 + 4T5 + ⋯ + (n − 1)T5 + nT5 + (n + 1)T5
= T5 + 2T5 + 3T5 + 4T5 + ⋯ + (n − 1)T5 + nT5 + (n + 1)T5 − T5
which can be factored as
T5[1 + 2 + 3 + ⋯ + (n − 1) + n + (n + 1)] − T5
= [(1/2)(n2 + n)]T5 + (n + 1)T5 − T5
= [(1/2)(n2 + n)]T5 + nT5
= [(1/2)(n2 + 3n)]T5
Therefore, the total run-time for this algorithm is:
f(n) = T1 + T2 + T3 + T7 + (n + 1)T4 + [(1/2)(n2 + n)]T6 + [(1/2)(n2 + 3n)]T5
which reduces to
f(n) = [(1/2)(n2 + n)]T6 + [(1/2)(n2 + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7
As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n2 is the highest-order term, so one can conclude that f(n) = O(n2). Formally this can be proven as follows:
Prove that
[(1/2)(n2 + n)]T6 + [(1/2)(n2 + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn2, for n ≥ n0
[(1/2)(n2 + n)]T6 + [(1/2)(n2 + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7
≤ (n2 + n)T6 + (n2 + 3n)T5 + (n + 1)T4 + T1 + T2 + T3 + T7 (for n ≥ 0)
Let k be a constant greater than or equal to [T1..T7]
T6(n2 + n) + T5(n2 + 3n) + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ k(n2 + n) + k(n2 + 3n) + kn + 5k
= 2kn2 + 5kn + 5k ≤ 2kn2 + 5kn2 + 5kn2 (for n ≥ 1) = 12kn2
Therefore
[(1/2)(n2 + n)]T6 + [(1/2)(n2 + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn2, n ≥ n0 for c = 12k, n0 = 1
A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's run-time breaks down as follows:
4 + Σi=1..n i ≤ 4 + Σi=1..n n = 4 + n2 ≤ 5n2 (for n ≥ 1) = O(n2).
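The dominant term in either accounting is the number of inner-loop body executions, which can be checked mechanically; a sketch (hypothetical step counter, counting only executions of step 6 of the pseudocode):

```python
def inner_loop_body_count(n):
    # Times step 6 (print i * j) executes in the nested-loop pseudocode:
    # for i in 1..n, the inner loop runs i times, giving 1 + 2 + ... + n.
    return sum(i for i in range(1, n + 1))
```

By the arithmetic-series formula this equals n(n + 1)/2, consistent with the quadratic bound derived above.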
=== Growth rate analysis of other resources ===
The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:
while file is still open:
let n = size of file
for every 100,000 kilobytes of increase in file size
double the amount of memory reserved
In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.
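A small simulation makes the doubling behavior concrete. In this illustrative sketch (the function name and initial reservation are our own assumptions), the reservation doubles once per full 100,000 kilobytes of file growth, so it grows exponentially in the file size:

```python
def reserved_memory(file_size_kb, initial_kb=100_000):
    """Double the reservation once per full 100,000 KB of file growth."""
    doublings = file_size_kb // 100_000
    return initial_kb * 2 ** doublings

# Each additional 100,000 KB of file size doubles the reservation.
for size in (0, 100_000, 200_000, 300_000):
    print(size, reserved_memory(size))
```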
== Relevance ==
Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.
== Constant factors ==
Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.
This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have
{\displaystyle K>k\log \log n}
so long as
{\displaystyle K/k>6}
and
{\displaystyle n<2^{2^{6}}=2^{64}}.
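The crossover can be checked numerically. With illustrative constants K = 7 and k = 1 (so K/k > 6), K exceeds k · log₂ log₂ n everywhere below 2^64:

```python
import math

K, k = 7.0, 1.0  # illustrative constants with K/k > 6
for exp in (4, 8, 16, 32, 63):
    n = 2 ** exp
    # log2(log2(n)) stays below 6 for n < 2**64
    assert K > k * math.log2(math.log2(n))

# log2(log2(n)) reaches 6 exactly at n = 2**64
assert math.log2(math.log2(2 ** 64)) == 6.0
```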
For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity
{\displaystyle n\log n}
), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity
{\displaystyle n^{2}}
) for small data, as the simpler algorithm is faster on small data.
== See also ==
Amortized analysis
Analysis of parallel algorithms
Asymptotic computational complexity
Information-based complexity
Master theorem (analysis of algorithms)
NP-complete
Numerical analysis
Polynomial time
Program optimization
Scalability
Smoothed analysis
Termination analysis — the subproblem of checking whether a program will terminate at all
== Notes ==
== References ==
Sedgewick, Robert; Flajolet, Philippe (2013). An Introduction to the Analysis of Algorithms (2nd ed.). Addison-Wesley. ISBN 978-0-321-90575-8.
Greene, Daniel A.; Knuth, Donald E. (1982). Mathematics for the Analysis of Algorithms (Second ed.). Birkhäuser. ISBN 3-7643-3102-X.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to Algorithms. Chapter 1: Foundations (Second ed.). Cambridge, MA: MIT Press and McGraw-Hill. pp. 3–122. ISBN 0-262-03293-7.
Sedgewick, Robert (1998). Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-31452-6.
Knuth, Donald. The Art of Computer Programming. Addison-Wesley.
Goldreich, Oded (2010). Computational Complexity: A Conceptual Perspective. Cambridge University Press. ISBN 978-0-521-88473-0.
== External links ==
Media related to Analysis of algorithms at Wikimedia Commons
In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants.
Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity).
== General traits ==
The CoC is a higher-order typed lambda calculus, initially developed by Thierry Coquand. It is well known for being at the top of Barendregt's lambda cube. It is possible within CoC to define functions from terms to terms, as well as terms to types, types to types, and types to terms.
The CoC is strongly normalizing, and hence consistent.
== Usage ==
The CoC has been developed alongside the Coq proof assistant. As features were added to the theory (or possible liabilities removed from it), they became available in Coq.
Variants of the CoC are used in other proof assistants, such as Matita and Lean.
== The basics of the calculus of constructions ==
The calculus of constructions can be considered an extension of the Curry–Howard isomorphism. The Curry–Howard isomorphism associates a term in the simply typed lambda calculus with each natural-deduction proof in intuitionistic propositional logic. The calculus of constructions extends this isomorphism to proofs in the full intuitionistic predicate calculus, which includes proofs of quantified statements (which we will also call "propositions").
=== Terms ===
A term in the calculus of constructions is constructed using the following rules:
{\displaystyle \mathbf {T} } is a term (also called type);
{\displaystyle \mathbf {P} } is a term (also called prop, the type of all propositions);
Variables ({\displaystyle x,y,\ldots }) are terms;
If {\displaystyle A} and {\displaystyle B} are terms, then so is {\displaystyle (AB)};
If {\displaystyle A} and {\displaystyle B} are terms and {\displaystyle x} is a variable, then {\displaystyle (\lambda x:A.B)} and {\displaystyle (\forall x:A.B)} are also terms.
In other words, the term syntax, in Backus–Naur form, is then:
{\displaystyle e::=\mathbf {T} \mid \mathbf {P} \mid x\mid e\,e\mid \lambda x{\mathbin {:}}e.e\mid \forall x{\mathbin {:}}e.e}
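The BNF above translates directly into a small abstract-syntax sketch, one constructor per production. This Python rendering is our own illustration (the class names are not standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class T: pass                      # the sort T (type of large types)

@dataclass(frozen=True)
class P: pass                      # the sort P (prop)

@dataclass(frozen=True)
class Var:
    name: str                      # variables x, y, ...

@dataclass(frozen=True)
class App:
    fn: object                     # application (A B)
    arg: object

@dataclass(frozen=True)
class Lam:
    var: str                       # abstraction (λx:A. B)
    ty: object
    body: object

@dataclass(frozen=True)
class Pi:
    var: str                       # dependent product (∀x:A. B)
    ty: object
    body: object

# Example term: the polymorphic identity  λA:P. λx:A. x
identity = Lam("A", P(), Lam("x", Var("A"), Var("x")))
```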
The calculus of constructions has five kinds of objects:
proofs, which are terms whose types are propositions;
propositions, which are also known as small types;
predicates, which are functions that return propositions;
large types, which are the types of predicates ({\displaystyle \mathbf {P} } is an example of a large type);
{\displaystyle \mathbf {T} } itself, which is the type of large types.
=== β-equivalence ===
As with the untyped lambda calculus, the calculus of constructions uses a basic notion of equivalence of terms, known as {\displaystyle \beta }-equivalence. This captures the meaning of {\displaystyle \lambda }-abstraction:
{\displaystyle (\lambda x:A.B)N=_{\beta }B(x:=N)}
{\displaystyle \beta }-equivalence is a congruence relation for the calculus of constructions, in the sense that if {\displaystyle A=_{\beta }B} and {\displaystyle M=_{\beta }N}, then {\displaystyle AM=_{\beta }BN}.
=== Judgments ===
The calculus of constructions allows proving typing judgments:
{\displaystyle x_{1}:A_{1},x_{2}:A_{2},\ldots \vdash t:B},
which can be read as the implication
If variables {\displaystyle x_{1},x_{2},\ldots } have, respectively, types {\displaystyle A_{1},A_{2},\ldots }, then term {\displaystyle t} has type {\displaystyle B}.
The valid judgments for the calculus of constructions are derivable from a set of inference rules. In the following, we use {\displaystyle \Gamma } to mean a sequence of type assignments {\displaystyle x_{1}:A_{1},x_{2}:A_{2},\ldots }; {\displaystyle A,B,C,D} to mean terms; and {\displaystyle K,L} to mean either {\displaystyle \mathbf {P} } or {\displaystyle \mathbf {T} }. We shall write {\displaystyle B[x:=N]} to mean the result of substituting the term {\displaystyle N} for the free variable {\displaystyle x} in the term {\displaystyle B}.
An inference rule is written in the form
{\displaystyle {\frac {\Gamma \vdash A:B}{\Gamma '\vdash C:D}}},
which means that if {\displaystyle \Gamma \vdash A:B} is a valid judgment, then so is {\displaystyle \Gamma '\vdash C:D}.
=== Inference rules for the calculus of constructions ===
1. {\displaystyle {{} \over \Gamma \vdash \mathbf {P} :\mathbf {T} }}
2. {\displaystyle {{\Gamma \vdash A:K} \over {\Gamma ,x:A,\Gamma '\vdash x:A}}}
3. {\displaystyle {\Gamma \vdash A:K\qquad \qquad \Gamma ,x:A\vdash B:L \over {\Gamma \vdash (\forall x:A.B):L}}}
4. {\displaystyle {\Gamma \vdash A:K\qquad \qquad \Gamma ,x:A\vdash N:B \over {\Gamma \vdash (\lambda x:A.N):(\forall x:A.B)}}}
5. {\displaystyle {\Gamma \vdash M:(\forall x:A.B)\qquad \qquad \Gamma \vdash N:A \over {\Gamma \vdash MN:B[x:=N]}}}
6. {\displaystyle {\Gamma \vdash M:A\qquad \qquad A=_{\beta }B\qquad \qquad \Gamma \vdash B:K \over {\Gamma \vdash M:B}}}
=== Defining logical operators ===
The calculus of constructions has very few basic operators: the only logical operator for forming propositions is {\displaystyle \forall }. However, this one operator is sufficient to define all the other logical operators:
{\displaystyle {\begin{array}{ccll}A\Rightarrow B&\equiv &\forall x:A.B&(x\notin B)\\A\wedge B&\equiv &\forall C:\mathbf {P} .(A\Rightarrow B\Rightarrow C)\Rightarrow C&\\A\vee B&\equiv &\forall C:\mathbf {P} .(A\Rightarrow C)\Rightarrow (B\Rightarrow C)\Rightarrow C&\\\neg A&\equiv &\forall C:\mathbf {P} .(A\Rightarrow C)&\\\exists x:A.B&\equiv &\forall C:\mathbf {P} .(\forall x:A.(B\Rightarrow C))\Rightarrow C&\end{array}}}
=== Defining data types ===
The basic data types used in computer science can be defined within the calculus of constructions:
Booleans: {\displaystyle \forall A:\mathbf {P} .A\Rightarrow A\Rightarrow A}
Naturals: {\displaystyle \forall A:\mathbf {P} .(A\Rightarrow A)\Rightarrow A\Rightarrow A}
Product {\displaystyle A\times B}: {\displaystyle A\wedge B}
Disjoint union {\displaystyle A+B}: {\displaystyle A\vee B}
Note that Booleans and Naturals are defined in the same way as in Church encoding. However, additional problems arise from propositional extensionality and proof irrelevance.
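Since Booleans and Naturals here follow Church encoding, their inhabitants can be illustrated with ordinary untyped functions. Erasing the type annotations of the CoC terms gives this Python sketch (the helper `to_int` is our own addition for inspection):

```python
# Church booleans: a boolean selects one of two alternatives
true  = lambda a: lambda b: a
false = lambda a: lambda b: b

# Church naturals: n applies a step function n times to a base value
zero = lambda s: lambda z: z
succ = lambda n: (lambda s: lambda z: s(n(s)(z)))

def to_int(n):
    """Convert a Church natural to a Python int for inspection."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
assert to_int(three) == 3
assert true("yes")("no") == "yes"
```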
== See also ==
Pure type system
Lambda cube
System F
Dependent type
Intuitionistic type theory
Homotopy type theory
== References ==
== Sources == | Wikipedia/Calculus_of_constructions |
Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems.
== Syntax ==
A record type is a set of fields. A field is a pair consisting of a label and a type. Within a record type, field labels are unique. The witness of a record type is a record. A record is a similar set of fields, but fields contain objects instead of types. The object in each field must be of the type declared in the corresponding field in the record type.
Basic type: {\displaystyle {\begin{bmatrix}{\text{x}}:Ind\end{bmatrix}}}
Object: {\displaystyle {\begin{bmatrix}{\text{x}}=a\end{bmatrix}}}
Ptype: {\displaystyle \left[{\begin{array}{lll}{\text{x}}&:&Ind\\{\text{c}}_{\text{boy}}&:&boy({\text{x}})\\{\text{y}}&:&Ind\\{\text{c}}_{\text{dog}}&:&dog({\text{y}})\\{\text{c}}_{\text{hug}}&:&hug(x,y)\end{array}}\right]}
Object: {\displaystyle \left[{\begin{array}{lll}{\text{x}}&=&a\\{\text{c}}_{\text{boy}}&=&p_{1}\\{\text{y}}&=&b\\{\text{c}}_{\text{dog}}&=&p_{2}\\{\text{c}}_{\text{hug}}&=&p_{3}\end{array}}\right]}
where {\displaystyle a} and {\displaystyle b} are individuals (type {\displaystyle Ind}), {\displaystyle p_{1}} is proof that {\displaystyle a} is a boy, etc.
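The witnessing relation between records and record types can be sketched operationally. In this illustrative Python checker (all names are our own; this is not an implementation of any particular TTR system), a record maps labels to objects, a record type maps labels to predicates, and a record witnesses a type when every declared field is present with an object satisfying its predicate:

```python
def is_witness(record, record_type):
    """A record witnesses a record type when each declared field
    is present and its object satisfies the declared type."""
    return all(label in record and check(record[label])
               for label, check in record_type.items())

# Illustrative basic type Ind, inhabited here by strings
ind = lambda obj: isinstance(obj, str)

record_type = {"x": ind}          # [ x : Ind ]
record = {"x": "a"}               # [ x = a ]
assert is_witness(record, record_type)
assert not is_witness({}, record_type)   # missing field: no witness
```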
== References ==
In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems of a first-order theory rather than conditional tautologies.
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
Hilbert style. Every line is an unconditional tautology (or theorem).
Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left.
Natural deduction. Every (conditional) line has exactly one asserted proposition on the right.
Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right.
In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced. This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
== Overview ==
In proof theory and mathematical logic, sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties. The first sequent calculi systems, LK and LJ, were introduced in 1934/1935 by Gerhard Gentzen as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main Theorem" (Hauptsatz) about LK and LJ was the cut-elimination theorem, a result with far-reaching meta-theoretic consequences, including consistency. Gentzen further demonstrated the power and flexibility of this technique a few years later, applying a cut-elimination argument to give a (transfinite) proof of the consistency of Peano arithmetic, in surprising response to Gödel's incompleteness theorems. Since this early work, sequent calculi, also called Gentzen systems, and the general concepts relating to them, have been widely applied in the fields of proof theory, mathematical logic, and automated deduction.
=== Hilbert-style deduction systems ===
One way to classify different styles of deduction systems is to look at the form of judgments in the system, i.e., which things may appear as the conclusion of a (sub)proof. The simplest judgment form is used in Hilbert-style deduction systems, where a judgment has the form
{\displaystyle B}
where {\displaystyle B} is any formula of first-order logic (or whatever logic the deduction system applies to, e.g., propositional calculus or a higher-order logic or a modal logic). The theorems are those formulas that appear as the concluding judgment in a valid proof. A Hilbert-style system needs no distinction between formulas and judgments; we make one here solely for comparison with the cases that follow.
The price paid for the simple syntax of a Hilbert-style system is that complete formal proofs tend to get extremely long. Concrete arguments about proofs in such a system almost always appeal to the deduction theorem. This leads to the idea of including the deduction theorem as a formal rule in the system, which happens in natural deduction.
=== Natural deduction systems ===
In natural deduction, judgments have the shape
{\displaystyle A_{1},A_{2},\ldots ,A_{n}\vdash B}
where the {\displaystyle A_{i}}'s and {\displaystyle B} are again formulas and {\displaystyle n\geq 0}. In other words, a judgment consists of a list (possibly empty) of formulas on the left-hand side of a turnstile symbol "{\displaystyle \vdash }", with a single formula on the right-hand side (though permutations of the {\displaystyle A_{i}}'s are often immaterial). The theorems are those formulae {\displaystyle B} such that {\displaystyle \vdash B} (with an empty left-hand side) is the conclusion of a valid proof.
(In some presentations of natural deduction, the {\displaystyle A_{i}}'s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.)
The standard semantics of a judgment in natural deduction is that it asserts that whenever {\displaystyle A_{1}}, {\displaystyle A_{2}}, etc., are all true, {\displaystyle B} will also be true. The judgments
{\displaystyle A_{1},\ldots ,A_{n}\vdash B}
and
{\displaystyle \vdash (A_{1}\land \cdots \land A_{n})\rightarrow B}
are equivalent in the strong sense that a proof of either one may be extended to a proof of the other.
=== Sequent calculus systems ===
Finally, sequent calculus generalizes the form of a natural deduction judgment to
{\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k},}
a syntactic object called a sequent. The formulas on the left-hand side of the turnstile are called the antecedent, and the formulas on the right-hand side are called the succedent or consequent; together they are called cedents or sequents. Again, {\displaystyle A_{i}} and {\displaystyle B_{i}} are formulas, and {\displaystyle n} and {\displaystyle k} are nonnegative integers; that is, the left-hand side or the right-hand side (or neither or both) may be empty. As in natural deduction, theorems are those {\displaystyle B} where {\displaystyle \vdash B} is the conclusion of a valid proof.
The standard semantics of a sequent is an assertion that whenever every {\displaystyle A_{i}} is true, at least one {\displaystyle B_{i}} will also be true. Thus the empty sequent, having both cedents empty, is false. One way to express this is that a comma to the left of the turnstile should be thought of as an "and", and a comma to the right of the turnstile should be thought of as an (inclusive) "or". The sequents
{\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k}}
and
{\displaystyle \vdash (A_{1}\land \cdots \land A_{n})\rightarrow (B_{1}\lor \cdots \lor B_{k})}
are equivalent in the strong sense that a proof of either sequent may be extended to a proof of the other sequent.
At first sight, this extension of the judgment form may appear to be a strange complication—it is not motivated by an obvious shortcoming of natural deduction, and it is initially confusing that the comma seems to mean entirely different things on the two sides of the turnstile. However, in a classical context the semantics of the sequent can also (by propositional tautology) be expressed either as
{\displaystyle \vdash \neg A_{1}\lor \neg A_{2}\lor \cdots \lor \neg A_{n}\lor B_{1}\lor B_{2}\lor \cdots \lor B_{k}}
(at least one of the As is false, or one of the Bs is true)
or as
{\displaystyle \vdash \neg (A_{1}\land A_{2}\land \cdots \land A_{n}\land \neg B_{1}\land \neg B_{2}\land \cdots \land \neg B_{k})}
(it cannot be the case that all of the As are true and all of the Bs are false).
In these formulations, the only difference between formulas on either side of the turnstile is that one side is negated. Thus, swapping left for right in a sequent corresponds to negating all of the constituent formulas. This means that a symmetry such as De Morgan's laws, which manifests itself as logical negation on the semantic level, translates directly into a left–right symmetry of sequents—and indeed, the inference rules in sequent calculus for dealing with conjunction (∧) are mirror images of those dealing with disjunction (∨).
Many logicians feel that this symmetric presentation offers a deeper insight in the structure of the logic than other styles of proof system, where the classical duality of negation is not as apparent in the rules.
=== Distinction between natural deduction and sequent calculus ===
Gentzen asserted a sharp distinction between his single-output natural deduction systems (NK and NJ) and his multiple-output sequent calculus systems (LK and LJ). He wrote that the intuitionistic natural deduction system NJ was somewhat ugly. He said that the special role of the excluded middle in the classical natural deduction system NK is removed in the classical sequent calculus system LK. He said that the sequent calculus LJ gave more symmetry than natural deduction NJ in the case of intuitionistic logic, as also in the case of classical logic (LK versus NK). Then he said that in addition to these reasons, the sequent calculus with multiple succedent formulas is intended particularly for his principal theorem ("Hauptsatz").
=== Origin of word "sequent" ===
The word "sequent" is taken from the word "Sequenz" in Gentzen's 1934 paper. Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."
== Proving logical formulas ==
=== Reduction trees ===
Sequent calculus can be seen as a tool for proving formulas in propositional logic, similar to the method of analytic tableaux. It gives a series of steps that allows one to reduce the problem of proving a logical formula to simpler and simpler formulas until one arrives at trivial ones.
Consider the following formula:
{\displaystyle ((p\rightarrow r)\lor (q\rightarrow r))\rightarrow ((p\land q)\rightarrow r)}
This is written in the following form, where the proposition that needs to be proven is to the right of the turnstile symbol {\displaystyle \vdash }:
{\displaystyle \vdash ((p\rightarrow r)\lor (q\rightarrow r))\rightarrow ((p\land q)\rightarrow r)}
Now, instead of proving this from the axioms, it is enough to assume the premise of the implication and then try to prove its conclusion. Hence one moves to the following sequent:
{\displaystyle (p\rightarrow r)\lor (q\rightarrow r)\vdash (p\land q)\rightarrow r}
Again the right hand side includes an implication, whose premise can further be assumed so that only its conclusion needs to be proven:
{\displaystyle (p\rightarrow r)\lor (q\rightarrow r),(p\land q)\vdash r}
Since the arguments in the left-hand side are assumed to be related by conjunction, this can be replaced by the following:
{\displaystyle (p\rightarrow r)\lor (q\rightarrow r),p,q\vdash r}
This is equivalent to proving the conclusion in both cases of the disjunction on the first argument on the left. Thus we may split the sequent to two, where we now have to prove each separately:
{\displaystyle p\rightarrow r,p,q\vdash r}
{\displaystyle q\rightarrow r,p,q\vdash r}
In the case of the first judgment, we rewrite {\displaystyle p\rightarrow r} as {\displaystyle \lnot p\lor r} and split the sequent again to get:
{\displaystyle \lnot p,p,q\vdash r}
{\displaystyle r,p,q\vdash r}
The second sequent is done; the first sequent can be further simplified into:
{\displaystyle p,q\vdash p,r}
This process can always be continued until there are only atomic formulas in each side.
The process can be graphically described by a rooted tree. The root of the tree is the formula we wish to prove; the leaves consist of atomic formulas only. The tree is known as a reduction tree.
The items to the left of the turnstile are understood to be connected by conjunction, and those to the right by disjunction. Therefore, when both consist only of atomic symbols, the sequent is accepted axiomatically (and always true) if and only if at least one of the symbols on the right also appears on the left.
Following are the rules by which one proceeds along the tree. Whenever one sequent is split into two, the tree vertex has two child vertices, and the tree is branched. Additionally, one may freely change the order of the arguments in each side; Γ and Δ stand for possible additional arguments.
The usual term for the horizontal line used in Gentzen-style layouts for natural deduction is inference line.
Starting with any formula in propositional logic, by a series of steps, the right side of the turnstile can be processed until it includes only atomic symbols. Then, the same is done for the left side. Since every logical operator appears in one of the rules above, and is removed by the rule, the process terminates when no logical operators remain: The formula has been decomposed.
Thus, the sequents in the leaves of the trees include only atomic symbols, which are either provable by the axiom or not, according to whether one of the symbols on the right also appears on the left.
It is easy to see that the steps in the tree preserve the semantic truth value of the formulas implied by them, with conjunction understood between the tree's different branches whenever there is a split. It is also obvious that an axiom is provable if and only if it is true for every assignment of truth values to the atomic symbols. Thus this system is sound and complete for classical propositional logic.
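The reduction-tree procedure is directly implementable for propositional logic. The sketch below is our own illustrative (and unoptimized) prover: it decomposes the first compound formula it finds, splitting the sequent where a rule branches, until only atoms remain, then applies the axiom check that some atom occurs on both sides of the turnstile.

```python
def provable(left, right):
    """Decide a classical propositional sequent left ⊢ right by reduction.
    Atoms are strings; compounds are tuples: ('not', A), ('and', A, B),
    ('or', A, B), ('imp', A, B)."""
    # Decompose the first compound formula on the right of the turnstile.
    for i, f in enumerate(right):
        if isinstance(f, tuple):
            rest = right[:i] + right[i + 1:]
            op = f[0]
            if op == 'not':   # Γ ⊢ ¬A, Δ  ⇝  Γ, A ⊢ Δ
                return provable(left + [f[1]], rest)
            if op == 'or':    # Γ ⊢ A∨B, Δ  ⇝  Γ ⊢ A, B, Δ
                return provable(left, rest + [f[1], f[2]])
            if op == 'and':   # Γ ⊢ A∧B, Δ  ⇝  both branches
                return (provable(left, rest + [f[1]]) and
                        provable(left, rest + [f[2]]))
            if op == 'imp':   # Γ ⊢ A→B, Δ  ⇝  Γ, A ⊢ B, Δ
                return provable(left + [f[1]], rest + [f[2]])
    # Decompose the first compound formula on the left of the turnstile.
    for i, f in enumerate(left):
        if isinstance(f, tuple):
            rest = left[:i] + left[i + 1:]
            op = f[0]
            if op == 'not':   # Γ, ¬A ⊢ Δ  ⇝  Γ ⊢ A, Δ
                return provable(rest, right + [f[1]])
            if op == 'and':   # Γ, A∧B ⊢ Δ  ⇝  Γ, A, B ⊢ Δ
                return provable(rest + [f[1], f[2]], right)
            if op == 'or':    # Γ, A∨B ⊢ Δ  ⇝  both branches
                return (provable(rest + [f[1]], right) and
                        provable(rest + [f[2]], right))
            if op == 'imp':   # Γ, A→B ⊢ Δ  ⇝  both branches
                return (provable(rest, right + [f[1]]) and
                        provable(rest + [f[2]], right))
    # Only atoms remain: axiomatic iff some atom appears on both sides.
    return any(a in right for a in left)

# The running example: ((p→r) ∨ (q→r)) → ((p∧q) → r)
example = ('imp', ('or', ('imp', 'p', 'r'), ('imp', 'q', 'r')),
                  ('imp', ('and', 'p', 'q'), 'r'))
assert provable([], [example])
```

Each step removes one connective, so the recursion terminates; branching mirrors the splits in the reduction tree.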
=== Relation to standard axiomatizations ===
Sequent calculus is related to other axiomatizations of classical propositional calculus, such as Frege's propositional calculus or Jan Łukasiewicz's axiomatization (itself a part of the standard Hilbert system): Every formula that can be proven in these has a reduction tree. This can be shown as follows: Every proof in propositional calculus uses only axioms and the inference rules. Each use of an axiom scheme yields a true logical formula, and can thus be proven in sequent calculus; examples for these are shown below. The only inference rule in the systems mentioned above is modus ponens, which is implemented by the cut rule.
== The system LK ==
This section introduces the rules of the sequent calculus LK (standing for Logistische Kalkül) as introduced by Gentzen in 1934. A (formal) proof in this calculus is a finite sequence of sequents, where each of the sequents is derivable from sequents appearing earlier in the sequence by using one of the rules below.
=== Inference rules ===
The following notation will be used:
{\displaystyle \vdash }, known as the turnstile, separates the assumptions on the left from the propositions on the right;
{\displaystyle A} and {\displaystyle B} denote formulas of first-order predicate logic (one may also restrict this to propositional logic);
{\displaystyle \Gamma ,\Delta ,\Sigma }, and {\displaystyle \Pi } are finite (possibly empty) sequences of formulas (in fact, the order of formulas does not matter; see § Structural rules), called contexts:
when on the left of the {\displaystyle \vdash }, the sequence of formulas is considered conjunctively (all assumed to hold at the same time),
while on the right of the {\displaystyle \vdash }, the sequence of formulas is considered disjunctively (at least one of the formulas must hold for any assignment of variables);
{\displaystyle t} denotes an arbitrary term;
{\displaystyle x} and {\displaystyle y} denote variables;
a variable is said to occur free within a formula if it is not bound by a quantifier {\displaystyle \forall } or {\displaystyle \exists };
{\displaystyle A[t/x]} denotes the formula that is obtained by substituting the term {\displaystyle t} for every free occurrence of the variable {\displaystyle x} in formula {\displaystyle A}, with the restriction that the term {\displaystyle t} must be free for the variable {\displaystyle x} in {\displaystyle A} (i.e., no occurrence of any variable in {\displaystyle t} becomes bound in {\displaystyle A[t/x]});
{\displaystyle WL}, {\displaystyle WR}, {\displaystyle CL}, {\displaystyle CR}, {\displaystyle PL}, {\displaystyle PR}: these six stand for the two versions of each of three structural rules, one for use on the left ('L') of a {\displaystyle \vdash } and the other on its right ('R'). The rules are abbreviated 'W' for Weakening (Left/Right), 'C' for Contraction, and 'P' for Permutation.
Note that, contrary to the rules for proceeding along the reduction tree presented above, the following rules are for moving in the opposite directions, from axioms to theorems. Thus they are exact mirror-images of the rules above, except that here symmetry is not implicitly assumed, and rules regarding quantification are added.
Restrictions: In the rules marked with (†), {\displaystyle ({\forall }R)} and {\displaystyle ({\exists }L)}, the variable {\displaystyle y} must not occur free anywhere in the respective lower sequents.
=== An intuitive explanation ===
The above rules can be divided into two major groups: logical and structural ones. Each of the logical rules introduces a new logical formula either on the left or on the right of the turnstile {\displaystyle \vdash }. In contrast, the structural rules operate on the structure of the sequents, ignoring the exact shape of the formulas. The two exceptions to this general scheme are the axiom of identity (I) and the rule of (Cut).
Although stated in a formal way, the above rules allow for a very intuitive reading in terms of classical logic. Consider, for example, the rule {\displaystyle ({\land }L_{1})}. It says that, whenever one can prove that {\displaystyle \Delta } can be concluded from some sequence of formulas that contain {\displaystyle A}, then one can also conclude {\displaystyle \Delta } from the (stronger) assumption that {\displaystyle A\land B} holds. Likewise, the rule {\displaystyle ({\neg }R)} states that, if {\displaystyle \Gamma } and {\displaystyle A} suffice to conclude {\displaystyle \Delta }, then from {\displaystyle \Gamma } alone one can either still conclude {\displaystyle \Delta } or {\displaystyle A} must be false, i.e. {\displaystyle {\neg }A} holds. All the rules can be interpreted in this way.
For an intuition about the quantifier rules, consider the rule (∀R). Of course concluding that ∀x A holds just from the fact that A[y/x] is true is not in general possible. If, however, the variable y is not mentioned elsewhere (i.e. it can still be chosen freely, without influencing the other formulas), then one may assume that A[y/x] holds for any value of y. The other rules should then be fairly straightforward.
Instead of viewing the rules as descriptions for legal derivations in predicate logic, one may also consider them as instructions for the construction of a proof for a given statement. In this case the rules can be read bottom-up; for example, (∧R) says that, to prove that A ∧ B follows from the assumptions Γ and Σ, it suffices to prove that A can be concluded from Γ and B can be concluded from Σ, respectively. Note that, given some antecedent, it is not clear how this is to be split into Γ and Σ. However, there are only finitely many possibilities to be checked since the antecedent is by assumption finite. This also illustrates how proof theory can be viewed as operating on proofs in a combinatorial fashion: given proofs for both A and B, one can construct a proof for A ∧ B.
When looking for some proof, most of the rules offer more or less direct recipes of how to do this. The rule of cut is different: it states that, when a formula A can be concluded and this formula may also serve as a premise for concluding other statements, then the formula A can be "cut out" and the respective derivations are joined. When constructing a proof bottom-up, this creates the problem of guessing A (since it does not appear at all below). The cut-elimination theorem is thus crucial to the applications of sequent calculus in automated deduction: it states that all uses of the cut rule can be eliminated from a proof, implying that any provable sequent can be given a cut-free proof.
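The bottom-up reading can be made concrete in a small sketch. The following Python program is our own illustration (not part of the original presentation): it performs backward, cut-free proof search for the propositional fragment, treating sequents as sets of formulas in the style of G3, so that permutation, contraction, and weakening are built in and every rule is invertible.

```python
# Formulas: strings are atoms; tuples are ('not', A), ('and', A, B),
# ('or', A, B), ('imp', A, B). A sequent is a pair of formula sets.

def provable(left, right):
    left, right = frozenset(left), frozenset(right)
    # Axiom (I): some atom occurs on both sides of the turnstile.
    if any(f in right for f in left if isinstance(f, str)):
        return True
    # Apply one logical rule backwards; with set-based sequents every
    # propositional rule is invertible, so any choice of formula works.
    for f in left:
        if isinstance(f, tuple):
            l, op = left - {f}, f[0]
            if op == 'not':                       # (¬L): move A right
                return provable(l, right | {f[1]})
            if op == 'and':                       # (∧L): keep both conjuncts
                return provable(l | {f[1], f[2]}, right)
            if op == 'or':                        # (∨L): two premises
                return (provable(l | {f[1]}, right)
                        and provable(l | {f[2]}, right))
            if op == 'imp':                       # (→L): two premises
                return (provable(l, right | {f[1]})
                        and provable(l | {f[2]}, right))
    for f in right:
        if isinstance(f, tuple):
            r, op = right - {f}, f[0]
            if op == 'not':                       # (¬R): move A left
                return provable(left | {f[1]}, r)
            if op == 'and':                       # (∧R): two premises
                return (provable(left, r | {f[1]})
                        and provable(left, r | {f[2]}))
            if op == 'or':                        # (∨R): keep both disjuncts
                return provable(left, r | {f[1], f[2]})
            if op == 'imp':                       # (→R)
                return provable(left | {f[1]}, r | {f[2]})
    return False  # only atoms remain and the axiom does not apply

# ⊢ A ∨ ¬A, the law of excluded middle:
print(provable([], [('or', 'A', ('not', 'A'))]))   # True
```

Since every backward step strictly shrinks the total formula size, the search always terminates; the absence of a cut rule is exactly what makes this blind search possible.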
The second rule that is somewhat special is the axiom of identity (I). The intuitive reading of this is obvious: every formula proves itself. Like the cut rule, the axiom of identity is somewhat redundant: the completeness of atomic initial sequents states that the rule can be restricted to atomic formulas without any loss of provability.
Observe that all rules have mirror companions, except the ones for implication. This reflects the fact that the usual language of first-order logic does not include the "is not implied by" connective ↚ that would be the De Morgan dual of implication. Adding such a connective with its natural rules would make the calculus completely left–right symmetric.
=== Example derivations ===
Here is the derivation of "⊢ A ∨ ¬A", known as the law of excluded middle (tertium non datur in Latin).
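One such derivation can be sketched as follows (a reconstruction, not a quotation: it uses (¬R), the two right disjunction rules, a permutation, and a contraction on the right; the exact placement of permutations depends on the formulation):

```latex
\cfrac{\cfrac{\cfrac{\cfrac{\cfrac{A \vdash A}
  {\vdash A, \lnot A}\;(\lnot R)}
  {\vdash A, A \lor \lnot A}\;(\lor R_2)}
  {\vdash A \lor \lnot A, A}\;(PR)}
  {\vdash A \lor \lnot A, A \lor \lnot A}\;(\lor R_1)}
  {\vdash A \lor \lnot A}\;(CR)
```

Note how the contraction (CR) at the end is what lets classical LK keep two copies of the goal formula in play, which is precisely what the single-succedent system LJ below forbids.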
Next is the proof of a simple fact involving quantifiers. Note that the converse is not true, and its falsity can be seen when attempting to derive it bottom-up, because an existing free variable cannot be used in substitution in the rules (∀R) and (∃L).
For something more interesting we shall prove ((A → (B ∨ C)) → (((B → ¬A) ∧ ¬C) → ¬A)). It is straightforward to find the derivation, which exemplifies the usefulness of LK in automated proving.
These derivations also emphasize the strictly formal structure of the sequent calculus. For example, the logical rules as defined above always act on a formula immediately adjacent to the turnstile, such that the permutation rules are necessary. Note, however, that this is in part an artifact of the presentation, in the original style of Gentzen. A common simplification involves the use of multisets of formulas in the interpretation of the sequent, rather than sequences, eliminating the need for an explicit permutation rule. This corresponds to shifting commutativity of assumptions and derivations outside the sequent calculus, whereas LK embeds it within the system itself.
=== Relation to analytic tableaux ===
For certain formulations (i.e. variants) of the sequent calculus, a proof in such a calculus is isomorphic to an upside-down, closed analytic tableau.
=== Structural rules ===
The structural rules deserve some additional discussion.
Weakening (W) allows the addition of arbitrary elements to a sequence. Intuitively, this is allowed in the antecedent because we can always restrict the scope of our proof (if all cars have wheels, then it's safe to say that all black cars have wheels); and in the succedent because we can always allow for alternative conclusions (if all cars have wheels, then it's safe to say that all cars have either wheels or wings).
Contraction (C) and Permutation (P) assure that neither the order (P) nor the multiplicity of occurrences (C) of elements of the sequences matters. Thus, one could instead of sequences also consider sets.
The extra effort of using sequences, however, is justified since part or all of the structural rules may be omitted. Doing so, one obtains the so-called substructural logics.
=== Properties of the system LK ===
This system of rules can be shown to be both sound and complete with respect to first-order logic, i.e. a statement A follows semantically from a set of premises Γ (written Γ ⊨ A) if and only if the sequent Γ ⊢ A can be derived by the above rules.
In the sequent calculus, the rule of cut is admissible. This result is also referred to as Gentzen's Hauptsatz ("Main Theorem").
== Variants ==
The above rules can be modified in various ways:
=== Minor structural alternatives ===
There is some freedom of choice regarding the technical details of how sequents and structural rules are formalized without changing what sequents the system derives.
First of all, as mentioned above, the sequents can be viewed as consisting of sets or multisets. In this case, the rules for permuting and (when using sets) contracting formulas are unnecessary.
The rule of weakening becomes admissible if the axiom (I) is changed so that any sequent of the form Γ, A ⊢ A, Δ can be derived. Any weakening that appears in a derivation can then be moved to the beginning of the proof. This may be a convenient change when constructing proofs bottom-up.
One may also change whether rules with more than one premise share the same context for each of those premises or split their contexts between them: for example, (∨L) may instead be formulated with split contexts, deriving Γ, Σ, A ∨ B ⊢ Δ, Π from the premises Γ, A ⊢ Δ and Σ, B ⊢ Π. Contraction and weakening make this version of the rule interderivable with the version above, although in their absence, as in linear logic, these rules define different connectives.
=== Absurdity ===
One can introduce ⊥, the absurdity constant representing false, with the axiom ⊥ ⊢ (with an empty succedent). Or if, as described above, weakening is to be an admissible rule, then with the axiom Γ, ⊥ ⊢ Δ. With ⊥, negation can be subsumed as a special case of implication, via the definition (¬A) ⟺ (A → ⊥).
=== Substructural logics ===
Alternatively, one may restrict or forbid the use of some of the structural rules. This yields a variety of substructural logic systems. They are generally weaker than LK (i.e., they have fewer theorems), and thus not complete with respect to the standard semantics of first-order logic. However, they have other interesting properties that have led to applications in theoretical computer science and artificial intelligence.
=== Intuitionistic sequent calculus: System LJ ===
Surprisingly, some small changes in the rules of LK suffice to turn it into a proof system for intuitionistic logic. To this end, one has to restrict to sequents with at most one formula on the right-hand side, and modify the rules to maintain this invariant. For example, (∨L) is reformulated as follows (where C is an arbitrary formula): from the premises Γ, A ⊢ C and Γ, B ⊢ C, one derives Γ, A ∨ B ⊢ C. The resulting system is called LJ. It is sound and complete with respect to intuitionistic logic and admits a similar cut-elimination proof. This can be used in proving disjunction and existence properties.
In fact, the only rules in LK that need to be restricted to single-formula consequents are (→R), (¬R) (which can be seen as a special case of (→R), as described above) and (∀R). When multi-formula consequents are interpreted as disjunctions, all of the other inference rules of LK are derivable in LJ, while the rules (→R) and (∀R) become the following: from Γ, A ⊢ B ∨ C derive Γ ⊢ (A → B) ∨ C, and (when y does not occur free in the bottom sequent) from Γ ⊢ A[y/x] ∨ C derive Γ ⊢ (∀x A) ∨ C. These rules are not intuitionistically valid.
== See also ==
Cirquent calculus
Nested sequent calculus
Resolution (logic)
Proof theory
== Notes ==
== References ==
Binder, David; Atkey, Robert; McBride, Conor (26 June 2024). "Grokking the Sequent Calculus (Functional Pearl)". arXiv:2406.14719 [cs.LO].
Buss, Samuel R. (1998). "An introduction to proof theory". In Samuel R. Buss (ed.). Handbook of proof theory. Elsevier. pp. 1–78. ISBN 0-444-89840-9.
Curien, Pierre-Louis; Munch-Maccagnoni, Guillaume (11 June 2010). "The duality of computation under focus". arXiv:1006.2283 [cs.LO].
Curry, Haskell Brooks (1977) [1963]. Foundations of mathematical logic. New York: Dover Publications Inc. ISBN 978-0-486-63462-3.
Gentzen, Gerhard Karl Erich (1934). "Untersuchungen über das logische Schließen. I". Mathematische Zeitschrift. 39 (2): 176–210. doi:10.1007/BF01201353. S2CID 121546341.
Gentzen, Gerhard Karl Erich (1935). "Untersuchungen über das logische Schließen. II". Mathematische Zeitschrift. 39 (3): 405–431. doi:10.1007/bf01201363. S2CID 186239837.
Girard, Jean-Yves; Paul Taylor; Yves Lafont (1990) [1989]. Proofs and Types. Cambridge University Press (Cambridge Tracts in Theoretical Computer Science, 7). ISBN 0-521-37181-3.
Hilbert, David; Bernays, Paul (1970) [1939]. Grundlagen der Mathematik II (Second ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-642-86897-9.
Indrzejczak, Andrzej (2021). "Gentzen's Sequent Calculus LK". Sequents and Trees | An Introduction to the Theory and Applications of Propositional Sequent Calculi. Studies in Universal Logic. Cham, Switzerland: Springer Nature Switzerland AG. doi:10.1007/978-3-030-57145-0_2. ISBN 978-3-030-57144-3.
Kleene, Stephen Cole (2009) [1952]. Introduction to metamathematics. Ishi Press International. ISBN 978-0-923891-57-2.
Kleene, Stephen Cole (2002) [1967]. Mathematical logic. Mineola, New York: Dover Publications. ISBN 978-0-486-42533-7.
Lemmon, Edward John (1965). Beginning logic. Thomas Nelson. ISBN 0-17-712040-1.
Kreitz, Christoph; Constable, Robert (17 February 2009). "Applied Logic, Univ. of Cornell: Lecture 9" (PDF). Cornell University. Retrieved 1 June 2025.
Mancosu, Paolo; Galvan, Sergio; Zach, Richard (2021). An Introduction to Proof Theory — Normalization, Cut-Elimination, and Consistency Proofs. Oxford University Press. p. 431. ISBN 978-0-19-289593-6.
Shankar, Natarajan; Owre, Sam; Rushby, John M.; Stringer-Calvert, David W. J. (2020). "PVS Prover Guide" (PDF). User guide. SRI International. Retrieved 1 June 2025.
Smullyan, Raymond Merrill (1995) [1968]. First-order logic. New York: Dover Publications. ISBN 978-0-486-68370-6.
Suppes, Patrick Colonel (1999) [1957]. Introduction to logic. Mineola, New York: Dover Publications. ISBN 978-0-486-40687-9.
Tait, William W. (2010). "Gentzen's original consistency proof and the Bar Theorem". In Kahle, Reinhard; Rathjen, Michael (eds.). Gentzen's Centenary: The Quest for Consistency. New York: Springer. pp. 213–228. doi:10.1007/978-3-319-10103-3_8. ISBN 978-3-319-10102-6.
Tiomkin, M. (1988). "Proving unprovability". Proceedings of the Third Annual Symposium on Logic in Computer Science, July 5–8, 1988. Computer Society Press. pp. 22–26. ISBN 0-8186-0853-6.
von Plato, Jan [in German] (2014). Elements of Logical Reasoning. Cambridge University Press. doi:10.1017/CBO9781139567862. ISBN 9781139567862.
== External links ==
Rathjen, Michael; Sieg, Wilfried (2024). "Proof Theory (Sequent Calculi)". In Zalta, Edward N.; Nodelman, Uri (eds.). Stanford Encyclopedia of Philosophy (Winter 2024 ed.).
"Sequent calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"A Brief Diversion: Sequent Calculus". Good Math, Bad Math. Retrieved 27 March 2025.
"Interactive Tutorial of the Sequent Calculus". Logitext (MIT). Retrieved 27 March 2025.
In computability theory, a primitive recursive function is, roughly speaking, a function that can be computed by a computer program whose loops are all "for" loops (that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strict subset of those general recursive functions that are also total functions.
The importance of primitive recursive functions lies in the fact that most computable functions that are studied in number theory (and more generally in mathematics) are primitive recursive. For example, addition and division, the factorial and exponential function, and the function which returns the nth prime are all primitive recursive. In fact, for showing that a computable function is primitive recursive, it suffices to show that its time complexity is bounded above by a primitive recursive function of the input size. It is hence not particularly easy to devise a computable function that is not primitive recursive; some examples are shown in section § Limitations below.
The set of primitive recursive functions is known as PR in computational complexity theory.
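The "for loops only" characterization above can be illustrated with a tiny example (our own, in Python): the factorial function is primitive recursive, and correspondingly it can be computed with a single loop whose iteration count is fixed before the loop is entered.

```python
# Factorial with a bounded ("for") loop only: the number of iterations,
# n, is known on entry, which is the informal hallmark of a primitive
# recursive computation. No while-loop or unbounded search is needed.
def factorial(n):
    result = 1
    for i in range(n):       # bound fixed before the loop starts
        result *= i + 1
    return result

print(factorial(5))   # 120
```

By contrast, a function requiring a loop whose exit condition depends on values computed inside the loop (an unbounded search) may be computable without being primitive recursive.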
== Definition ==
A primitive recursive function takes a fixed number of arguments, each a natural number (nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takes n arguments it is called n-ary.
The basic primitive recursive functions are given by these axioms:
Constant functions C_k^n: for each n ≥ 0 and k ≥ 0, the n-ary function that always returns k is primitive recursive.
Successor function S: the 1-ary function S(x) = x + 1 is primitive recursive.
Projection functions P_i^n: for each n ≥ 1 and 1 ≤ i ≤ n, the n-ary function that returns its i-th argument is primitive recursive.
More complex primitive recursive functions can be obtained by applying the operations given by these axioms:
Composition ∘: given a k-ary function f and k n-ary functions g_1, ..., g_k, the n-ary function (f ∘ (g_1, ..., g_k))(x_1, ..., x_n) = f(g_1(x_1, ..., x_n), ..., g_k(x_1, ..., x_n)) is primitive recursive.
Primitive recursion ρ: given an n-ary function g and an (n+2)-ary function h, the (n+1)-ary function f = ρ(g, h) defined by f(0, x_1, ..., x_n) = g(x_1, ..., x_n) and f(S(y), x_1, ..., x_n) = h(y, f(y, x_1, ..., x_n), x_1, ..., x_n) is primitive recursive.
The primitive recursive functions are the basic functions and those obtained from the basic functions by applying these operations a finite number of times.
== Examples ==
=== Addition ===
A definition of the 2-ary function Add, to compute the sum of its arguments, can be obtained using the primitive recursion operator ρ. To this end, the well-known equations

0 + y = y and
S(x) + y = S(x + y)

are "rephrased in primitive recursive function terminology": In the definition of ρ(g, h), the first equation suggests to choose g = P_1^1 to obtain Add(0, y) = g(y) = y; the second equation suggests to choose h = S ∘ P_2^3 to obtain Add(S(x), y) = h(x, Add(x, y), y) = (S ∘ P_2^3)(x, Add(x, y), y) = S(Add(x, y)). Therefore, the addition function can be defined as Add = ρ(P_1^1, S ∘ P_2^3). As a computation example,

Add(1, 7)
= ρ(P_1^1, S ∘ P_2^3) (S(0), 7)      by Def. Add, S
= (S ∘ P_2^3)(0, Add(0, 7), 7)       by case ρ(g,h)(S(...), ...)
= S(Add(0, 7))                       by Def. ∘, P_2^3
= S( ρ(P_1^1, S ∘ P_2^3) (0, 7) )    by Def. Add
= S(P_1^1(7))                        by case ρ(g,h)(0, ...)
= S(7)                               by Def. P_1^1
= 8                                  by Def. S.
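The construction can be mirrored in a short Python sketch (our own illustration; the helper names S, P, compose, and rho simply echo the operator notation used above):

```python
# Basic primitive recursive building blocks: successor S, projections
# P_i^n, composition, and the primitive recursion operator rho(g, h).

def S(x):                            # successor
    return x + 1

def P(i, n):                         # projection P_i^n (1-indexed)
    return lambda *args: args[i - 1]

def compose(f, *gs):                 # f ∘ (g_1, ..., g_k)
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):                       # primitive recursion ρ(g, h)
    def f(y, *xs):
        if y == 0:
            return g(*xs)            # ρ(g,h)(0, x...) = g(x...)
        return h(y - 1, f(y - 1, *xs), *xs)
    return f

# Add = ρ(P_1^1, S ∘ P_2^3), exactly as derived above
Add = rho(P(1, 1), compose(S, P(2, 3)))
print(Add(1, 7))   # 8
```

Running the example reproduces the step-by-step computation of Add(1, 7) shown above.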
=== Doubling ===
Given Add, the 1-ary function Add ∘ (P_1^1, P_1^1) doubles its argument: (Add ∘ (P_1^1, P_1^1))(x) = Add(x, x) = x + x.
=== Multiplication ===
In a similar way as addition, multiplication can be defined by Mul = ρ(C_0^1, Add ∘ (P_2^3, P_3^3)). This reproduces the well-known multiplication equations:

Mul(0, y)
= ρ(C_0^1, Add ∘ (P_2^3, P_3^3)) (0, y)    by Def. Mul
= C_0^1(y)                                 by case ρ(g,h)(0, ...)
= 0                                        by Def. C_0^1.

and

Mul(S(x), y)
= ρ(C_0^1, Add ∘ (P_2^3, P_3^3)) (S(x), y)    by Def. Mul
= (Add ∘ (P_2^3, P_3^3)) (x, Mul(x, y), y)    by case ρ(g,h)(S(...), ...)
= Add(Mul(x, y), y)                           by Def. ∘, P_2^3, P_3^3
= Mul(x, y) + y                               by property of Add.
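This definition can likewise be checked in a Python sketch (our own illustration; helpers are redefined here so the block stands alone, with C(k, n) playing the role of the constant functions C_k^n):

```python
def S(x):                            # successor
    return x + 1

def C(k, n):                         # constant function C_k^n
    return lambda *args: k

def P(i, n):                         # projection P_i^n (1-indexed)
    return lambda *args: args[i - 1]

def compose(f, *gs):                 # f ∘ (g_1, ..., g_k)
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):                       # primitive recursion ρ(g, h)
    def f(y, *xs):
        return g(*xs) if y == 0 else h(y - 1, f(y - 1, *xs), *xs)
    return f

Add = rho(P(1, 1), compose(S, P(2, 3)))
# Mul = ρ(C_0^1, Add ∘ (P_2^3, P_3^3)), as defined above
Mul = rho(C(0, 1), compose(Add, P(2, 3), P(3, 3)))
print(Mul(3, 4))   # 12
```

Each recursion step adds y to the accumulated product, matching the equation Mul(S(x), y) = Mul(x, y) + y derived above.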
=== Predecessor ===
The predecessor function acts as the "opposite" of the successor function and is recursively defined by the rules Pred(0) = 0 and Pred(S(n)) = n. A primitive recursive definition is Pred = ρ(C_0^0, P_1^2). As a computation example,

Pred(8)
= ρ(C_0^0, P_1^2) (S(7))    by Def. Pred, S
= P_1^2(7, Pred(7))         by case ρ(g,h)(S(...), ...)
= 7                         by Def. P_1^2.
=== Truncated subtraction ===
The limited subtraction function (also called "monus", and denoted "∸") is definable from the predecessor function. It satisfies the equations

y ∸ 0 = y and
y ∸ S(x) = Pred(y ∸ x).

Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction, RSub(y, x) = x ∸ y. Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similar to addition, as RSub = ρ(P_1^1, Pred ∘ P_2^3). To get rid of the reversed argument order, then define Sub = RSub ∘ (P_2^2, P_1^2). As a computation example,

Sub(8, 1)
= (RSub ∘ (P_2^2, P_1^2)) (8, 1)          by Def. Sub
= RSub(1, 8)                              by Def. ∘, P_2^2, P_1^2
= ρ(P_1^1, Pred ∘ P_2^3) (S(0), 8)        by Def. RSub, S
= (Pred ∘ P_2^3) (0, RSub(0, 8), 8)       by case ρ(g,h)(S(...), ...)
= Pred(RSub(0, 8))                        by Def. ∘, P_2^3
= Pred( ρ(P_1^1, Pred ∘ P_2^3) (0, 8) )   by Def. RSub
= Pred(P_1^1(8))                          by case ρ(g,h)(0, ...)
= Pred(8)                                 by Def. P_1^1
= 7                                       by property of Pred.
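The predecessor and truncated-subtraction constructions translate the same way (our own sketch; helpers redefined so the block is self-contained):

```python
def S(x):                            # successor
    return x + 1

def C(k, n):                         # constant function C_k^n
    return lambda *args: k

def P(i, n):                         # projection P_i^n (1-indexed)
    return lambda *args: args[i - 1]

def compose(f, *gs):                 # f ∘ (g_1, ..., g_k)
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):                       # primitive recursion ρ(g, h)
    def f(y, *xs):
        return g(*xs) if y == 0 else h(y - 1, f(y - 1, *xs), *xs)
    return f

Pred = rho(C(0, 0), P(1, 2))                   # Pred = ρ(C_0^0, P_1^2)
RSub = rho(P(1, 1), compose(Pred, P(2, 3)))    # RSub(y, x) = x ∸ y
Sub = compose(RSub, P(2, 2), P(1, 2))          # Sub(x, y) = x ∸ y
print(Sub(8, 1))   # 7
```

Note that Sub(3, 5) returns 0, as truncated subtraction never goes below zero.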
=== Converting predicates to numeric functions ===
In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers with truth values (that is, t for true and f for false), or that produce truth values as outputs. This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth value t with the number 1 and the truth value f with the number 0. Once this identification has been made, the characteristic function of a set A, which always returns 1 or 0, can be viewed as a predicate that tells whether a number is in the set A. Such an identification of predicates with numeric functions will be assumed for the remainder of this article.
=== Predicate "Is zero" ===
As an example for a primitive recursive predicate, the 1-ary function IsZero shall be defined such that IsZero(x) = 1 if x = 0, and IsZero(x) = 0, otherwise. This can be achieved by defining IsZero = ρ(C_1^0, C_0^2). Then, IsZero(0) = ρ(C_1^0, C_0^2)(0) = C_1^0(0) = 1 and e.g. IsZero(8) = ρ(C_1^0, C_0^2)(S(7)) = C_0^2(7, IsZero(7)) = 0.
=== Predicate "Less or equal" ===
Using the property x ≤ y ⟺ x ∸ y = 0, the 2-ary function Leq can be defined by Leq = IsZero ∘ Sub. Then Leq(x, y) = 1 if x ≤ y, and Leq(x, y) = 0, otherwise. As a computation example,

Leq(8, 3)
= IsZero(Sub(8, 3))    by Def. Leq
= IsZero(5)            by property of Sub
= 0                    by property of IsZero.
=== Predicate "Greater or equal" ===
Once a definition of Leq is obtained, the converse predicate can be defined as Geq = Leq ∘ (P_2^2, P_1^2). Then, Geq(x, y) = Leq(y, x) is true (more precisely: has value 1) if, and only if, x ≥ y.
=== If-then-else ===
The 3-ary if-then-else operator known from programming languages can be defined by If = ρ(P_2^2, P_3^4). Then, for arbitrary x,

If(S(x), y, z)
= ρ(P_2^2, P_3^4) (S(x), y, z)    by Def. If
= P_3^4(x, If(x, y, z), y, z)     by case ρ(S(...), ...)
= y                               by Def. P_3^4

and

If(0, y, z)
= ρ(P_2^2, P_3^4) (0, y, z)    by Def. If
= P_2^2(y, z)                  by case ρ(0, ...)
= z                            by Def. P_2^2.

That is, If(x, y, z) returns the then-part, y, if the if-part, x, is true, and the else-part, z, otherwise.
=== Junctors ===
Based on the If function, it is easy to define logical junctors. For example, defining And = If ∘ (P_1^2, P_2^2, C_0^2), one obtains And(x, y) = If(x, y, 0), that is, And(x, y) is true if, and only if, both x and y are true (logical conjunction of x and y). Similarly, Or = If ∘ (P_1^2, C_1^2, P_2^2) and Not = If ∘ (P_1^1, C_0^1, C_1^1) lead to appropriate definitions of disjunction and negation: Or(x, y) = If(x, 1, y) and Not(x) = If(x, 0, 1).
=== Equality predicate ===
Using the above functions Leq, Geq and And, the definition Eq = And ∘ (Leq, Geq) implements the equality predicate. In fact, Eq(x, y) = And(Leq(x, y), Geq(x, y)) is true if, and only if, x equals y. Similarly, the definition Lt = Not ∘ Geq implements the predicate "less-than", and Gt = Not ∘ Leq implements "greater-than".
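All of the predicates above can be assembled mechanically in the same Python sketch (our own illustration; helpers and the arithmetic functions are redefined so the block is self-contained):

```python
def S(x):
    return x + 1

def C(k, n):                         # constant C_k^n
    return lambda *args: k

def P(i, n):                         # projection P_i^n (1-indexed)
    return lambda *args: args[i - 1]

def compose(f, *gs):                 # f ∘ (g_1, ..., g_k)
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):                       # primitive recursion ρ(g, h)
    def f(y, *xs):
        return g(*xs) if y == 0 else h(y - 1, f(y - 1, *xs), *xs)
    return f

Pred = rho(C(0, 0), P(1, 2))                   # predecessor
RSub = rho(P(1, 1), compose(Pred, P(2, 3)))    # RSub(y, x) = x ∸ y
Sub = compose(RSub, P(2, 2), P(1, 2))          # truncated subtraction

IsZero = rho(C(1, 0), C(0, 2))                 # ρ(C_1^0, C_0^2)
Leq = compose(IsZero, Sub)                     # x ≤ y iff x ∸ y = 0
Geq = compose(Leq, P(2, 2), P(1, 2))           # Geq(x, y) = Leq(y, x)
If = rho(P(2, 2), P(3, 4))                     # if-then-else
And = compose(If, P(1, 2), P(2, 2), C(0, 2))   # And(x, y) = If(x, y, 0)
Not = compose(If, P(1, 1), C(0, 1), C(1, 1))   # Not(x) = If(x, 0, 1)
Eq = compose(And, Leq, Geq)                    # Eq = And ∘ (Leq, Geq)
print(Eq(4, 4), Eq(4, 5))   # 1 0
```

Every definition here is a direct transcription of the corresponding equation in the text; no Python conditionals are used except inside the recursion operator itself.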
=== Other operations on natural numbers ===
Exponentiation and primality testing are primitive recursive. Given primitive recursive functions e, f, g, and h, a function that returns the value of g when e ≤ f and the value of h otherwise is primitive recursive.
=== Operations on integers and rational numbers ===
By using Gödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers and rational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then the field operations are all primitive recursive.
=== Some common primitive recursive functions ===
The following examples and definitions are from Kleene (1952, pp. 222–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, in Boolos, Burgess & Jeffrey (2002, pp. 63–70), who add the logarithm lo(x, y) or lg(x, y), depending on the exact derivation.
In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =def a'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed as Gödel numbers.
Addition: a+b
Multiplication: a×b
Exponentiation: a^b
Factorial a! : 0! = 1, a'! = a!×a'
pred(a): (Predecessor or decrement): If a > 0 then a−1 else 0
Proper subtraction a ∸ b: If a ≥ b then a−b else 0
Minimum(a1, ... an)
Maximum(a1, ... an)
Absolute difference: | a−b | =def (a ∸ b) + (b ∸ a)
~sg(a): NOT[signum(a)]: If a=0 then 1 else 0
sg(a): signum(a): If a=0 then 0 else 1
a | b: (a divides b): If b=k×a for some k then 0 else 1
Remainder(a, b): the leftover if b does not divide a "evenly". Also called MOD(a, b)
a = b: sg | a − b | (Kleene's convention was to represent true by 0 and false by 1; presently, especially in computers, the most common convention is the reverse, namely to represent true by 1 and false by 0, which amounts to changing sg into ~sg here and in the next item)
a < b: sg( a' ∸ b )
Pr(a): a is a prime number: Pr(a) =def a > 1 & NOT(Exists c)_{1<c<a} [ c | a ]
p_i: the (i+1)-th prime number
(a)_i: exponent of p_i in a: the unique x such that p_i^x | a & NOT(p_i^{x'} | a)
lh(a): the "length" or number of non-vanishing exponents in a
lo(a, b): (logarithm of a to base b): If a, b > 1 then the greatest x such that b^x | a, else 0
In the following, the abbreviation x =def x_1, ..., x_n; subscripts may be applied if the meaning requires.
#A: A function φ definable explicitly from functions Ψ and constants q1, ... qn is primitive recursive in Ψ.
#B: The finite sum Σy<z ψ(x, y) and product Πy<zψ(x, y) are primitive recursive in ψ.
#C: A predicate P obtained by substituting functions χ1,..., χm for the respective variables of a predicate Q is primitive recursive in χ1,..., χm, Q.
#D: The following predicates are primitive recursive in Q and R:
NOT_Q(x) .
Q OR R: Q(x) V R(x),
Q AND R: Q(x) & R(x),
Q IMPLIES R: Q(x) → R(x)
Q is equivalent to R: Q(x) ≡ R(x)
#E: The following predicates are primitive recursive in the predicate R:
(Ey)_{y<z} R(x, y) where (Ey)_{y<z} denotes "there exists at least one y that is less than z such that"
(y)_{y<z} R(x, y) where (y)_{y<z} denotes "for all y less than z it is true that"
μy_{y<z} R(x, y). The operator μy_{y<z} R(x, y) is a bounded form of the so-called minimization- or mu-operator: defined as "the least value of y less than z such that R(x, y) is true; or z if there is no such value."
#F: Definition by cases: The function defined thus, where Q1, ..., Qm are mutually exclusive predicates (or "φ(x) shall have the value given by the first clause that applies"), is primitive recursive in φ1, ..., φm+1, Q1, ..., Qm:
φ(x) =
φ1(x) if Q1(x) is true,
. . . . . . . . . . . . . . . . . . .
φm(x) if Qm(x) is true
φm+1(x) otherwise
#G: If φ satisfies the equation:
φ(y, x2, ..., xn) = χ(y, COURSE-φ(y; x2, ..., xn), x2, ..., xn), then φ is primitive recursive in χ. The value COURSE-φ(y; x2, ..., xn) of the course-of-values function encodes the sequence of values φ(0, x2, ..., xn), ..., φ(y−1, x2, ..., xn) of the original function.
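The bounded operators of #E amount to finite searches, so they can be sketched directly; the helper names and calling convention here are our own (predicates are ordinary booleans, not Kleene's 0/1 encoding):

```python
def bounded_exists(R, z, *x):
    """(Ey)_{y<z} R(x, y): does some y < z satisfy R?"""
    return any(R(*x, y) for y in range(z))

def bounded_forall(R, z, *x):
    """(y)_{y<z} R(x, y): do all y < z satisfy R?"""
    return all(R(*x, y) for y in range(z))

def bounded_mu(R, z, *x):
    """μy_{y<z} R(x, y): the least y < z with R(x, y), or z if none."""
    for y in range(z):
        if R(*x, y):
            return y
    return z
```

For example, `bounded_mu(lambda a, y: y > 1 and a % y == 0, 10, 12)` finds the least nontrivial divisor of 12 below 10, namely 2.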
== Relationship to recursive functions ==
The broader class of partial recursive functions is defined by introducing an unbounded search operator. The use of this operator may result in a partial function, that is, a relation with at most one value for each argument, which however need not have a value for every argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input.
Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function.
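The Ackermann function mentioned above is short to define even though it grows far too fast to be primitive recursive; the following is a direct transcription of its usual defining equations:

```python
def ackermann(m, n):
    """A(0, n) = n + 1
    A(m, 0) = A(m - 1, 1)
    A(m, n) = A(m - 1, A(m, n - 1))
    Total recursive, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Already A(4, 2) has 19,729 decimal digits, so only tiny arguments are practical to evaluate this way.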
An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). This means that there is a single computable function f(m,n) that enumerates the primitive recursive functions, namely:
For every primitive recursive function g, there is an m such that g(n) = f(m,n) for all n, and
For every m, the function h(n) = f(m,n) is primitive recursive.
f can be explicitly constructed by iterating through all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not itself primitive recursive: had it been, so would be h(n) = f(n,n)+1. But if h equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), contradicting h(m) = f(m,m)+1.
However, the set of primitive recursive functions is not the largest recursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true.
== Limitations ==
Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function; this can be seen with a variant of Cantor's diagonal argument, which provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: the primitive recursive functions (of one argument, say) can be effectively enumerated as f0, f1, f2, ...; define g(n) = fn(n) + 1. Then g is total and computable, but it differs from every fn at the argument n, so g cannot be primitive recursive.
This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machine that always halts. Note however that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings.
Other examples of total recursive but not primitive recursive functions are known:
The function that takes m to Ackermann(m,m) is a unary total recursive function that is not primitive recursive.
The Paris–Harrington theorem involves a total recursive function that is not primitive recursive.
The Sudan function
The Goodstein function
== Variants ==
=== Constant functions ===
Instead of C_n^k, alternative definitions use just one 0-ary zero function C_0^0 as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
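Under this variant, the constant k is obtained by composing the successor function k times with the zero function; a minimal sketch, where the Python loop stands in for the k-fold composition:

```python
def zero():
    """C_0^0: the 0-ary zero function."""
    return 0

def succ(a):
    """The successor function S."""
    return a + 1

def constant(k):
    """Build the 0-ary constant function for k by applying succ
    k times to the result of zero()."""
    def c():
        v = zero()
        for _ in range(k):
            v = succ(v)
        return v
    return c
```

For instance, `constant(3)` is the composition succ(succ(succ(zero()))).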
=== Iterative functions ===
Robinson considered various restrictions of the recursion rule. One is the so-called iteration rule where the function h does not have access to the parameters xi (in this case, we may assume without loss of generality that the function g is just the identity, as the general case can be obtained by substitution):
f(0, x) = x,
f(S(y), x) = h(y, f(y, x)).
He proved that the class of all primitive recursive functions can still be obtained in this way.
=== Pure recursion ===
Another restriction considered by Robinson is pure recursion, where h does not have access to the induction variable y:
f(0, x1, ..., xk) = g(x1, ..., xk),
f(S(y), x1, ..., xk) = h(f(y, x1, ..., xk), x1, ..., xk).
Gladstone proved that this rule is enough to generate all primitive recursive functions; he later improved this result, showing that even the combination of these two restrictions, i.e., the pure iteration rule below, is enough:
f(0, x) = x,
f(S(y), x) = h(f(y, x)).
Further improvements are possible: Severin proved that even the pure iteration rule without parameters, namely
f(0) = 0,
f(S(y)) = h(f(y)),
suffices to generate all unary primitive recursive functions if we extend the set of initial functions with truncated subtraction x ∸ y. We get all primitive recursive functions if we additionally include + as an initial function.
=== Additional primitive recursive forms ===
Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions.
The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run.
=== Computer language definition ===
An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basic for loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such as while loops or IF-THEN plus GOTO, are admitted in a primitive recursive language.
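A LOOP-style computation can be mimicked in any language by using only loops whose trip counts are fixed before the loop body runs; for example, factorial via multiplication-as-repeated-addition (an illustrative sketch, not the LOOP syntax itself):

```python
def loop_factorial(n):
    """Factorial using only bounded loops, LOOP-style: the trip count
    of every loop is determined before the loop starts, and the loop
    body never modifies the loop variable or its bound."""
    result = 1
    for i in range(1, n + 1):   # bound n is fixed on entry
        acc = 0
        for _ in range(i):      # multiplication as repeated addition
            acc += result
        result = acc
    return result
```

Since every loop bound is known on entry, termination is guaranteed, exactly as with primitive recursive definitions.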
The LOOP language, introduced in a 1967 paper by Albert R. Meyer and Dennis M. Ritchie, is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language is Douglas Hofstadter's BlooP in Gödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the language general recursive and Turing-complete, as are all real-world computer programming languages.
The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, the halting problem is undecidable for general recursive functions.
== Finitism and consistency results ==
The primitive recursive functions are closely related to mathematical finitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired. Primitive recursive arithmetic (PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose.
PRA is much weaker than Peano arithmetic, which is not a finitistic system. Nevertheless, many results in number theory and in proof theory can be proved in PRA. For example, Gödel's incompleteness theorem can be formalized into PRA, giving the following theorem:
If T is a theory of arithmetic satisfying certain hypotheses, with Gödel sentence GT, then PRA proves the implication Con(T)→GT.
Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs.
In proof theory and set theory, there is an interest in finitistic consistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theory T implies the consistency of a theory S by producing a primitive recursive function that can transform any proof of an inconsistency from S into a proof of an inconsistency from T. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained by forcing can be recast as syntactic proofs that can be formalized in PRA.
== History ==
Recursive definitions had been used more or less formally in mathematics before, but the construction of primitive recursion is traced back to Richard Dedekind's theorem 126 of his Was sind und was sollen die Zahlen? (1888). This work was the first to give a proof that a certain recursive construction defines a unique function.
Primitive recursive arithmetic was first proposed by Thoralf Skolem in 1923.
The current terminology was coined by Rózsa Péter (1934) after Ackermann had proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions.
== See also ==
Grzegorczyk hierarchy
Recursion (computer science)
Primitive recursive functional
Double recursion
Primitive recursive set function
Primitive recursive ordinal function
Tail call
== Notes ==
== References ==
Brainerd, W.S.; Landweber, L.H. (1974), Theory of Computation, Wiley, ISBN 0471095850
Hartmanis, Juris (1989), "Overview of Computational Complexity Theory", Computational Complexity Theory, Proceedings of Symposia in Applied Mathematics, vol. 38, American Mathematical Society, pp. 1–17, ISBN 978-0-8218-0131-4, MR 1020807
Robert I. Soare, Recursively Enumerable Sets and Degrees, Springer-Verlag, 1987. ISBN 0-387-15299-7
Kleene, Stephen Cole (1952), Introduction to Metamathematics (7th [1974] reprint; 2nd ed.), North-Holland Publishing Company, ISBN 0444100881, OCLC 3757798. Chapter XI. General Recursive Functions §57
Boolos, George; Burgess, John; Jeffrey, Richard (2002), Computability and Logic (4th ed.), Cambridge University Press, pp. 70–71, ISBN 9780521007580
Soare, Robert I. (1996), "Computability and recursion", The Bulletin of Symbolic Logic, 2 (3): 284–321, doi:10.2307/420992, JSTOR 420992, MR 1416870
Severin, Daniel E. (2008), "Unary primitive recursive functions", The Journal of Symbolic Logic, 73 (4): 1122–1138, arXiv:cs/0603063, doi:10.2178/jsl/1230396909, JSTOR 275903221, MR 2467207
Robinson, Raphael M. (1947), "Primitive recursive functions", Bulletin of the American Mathematical Society, 53 (10): 925–942, doi:10.1090/S0002-9904-1947-08911-4, MR 0022536
Gladstone, M. D. (1967), "A reduction of the recursion scheme", The Journal of Symbolic Logic, 32 (4): 505–508, doi:10.2307/2270177, JSTOR 2270177, MR 0224460
Gladstone, M. D. (1971), "Simplifications of the recursion scheme", The Journal of Symbolic Logic, 36 (4): 653–665, doi:10.2307/2272468, JSTOR 2272468, MR 0305993
Cryptography, or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that provably cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media.
== Terminology ==
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story by Edgar Allan Poe.
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key. Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard). Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century.
In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English, while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis. English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology.
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.
== History ==
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others.
=== Classic cryptography ===
The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet; Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
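The Caesar substitution described above is simple to sketch; this illustrative Python shifts each Latin letter by a fixed amount and leaves other characters unchanged:

```python
def caesar(text, shift):
    """Shift each Latin letter by `shift` positions (mod 26);
    non-letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)
```

With a shift of 1 this reproduces the article's example: 'fly at once' becomes 'gmz bu podf'; applying the negated shift decrypts.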
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair. Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.
In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries.
David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.
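Frequency analysis begins with a simple letter count over the ciphertext; a minimal sketch (the function name and output format are our own choices):

```python
from collections import Counter

def letter_frequencies(ciphertext):
    """Relative letter frequencies of a text, the raw material of
    frequency analysis; case-insensitive, non-letters ignored."""
    letters = [c.lower() for c in ciphertext if c.isalpha()]
    total = len(letters)
    return {c: n / total for c, n in Counter(letters).most_common()}
```

In a long English plaintext 'e' dominates; in a monoalphabetic ciphertext the same skewed distribution reappears under substituted letters, which is exactly what the attacker exploits.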
Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic cipher that tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.
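The Vigenère scheme can be sketched directly: the repeating key word selects the shift applied to each successive letter (the function name and interface here are ours):

```python
def vigenere(text, key, decrypt=False):
    """Vigenère cipher: each letter is shifted by the corresponding
    key letter; the key repeats for the length of the message."""
    out, k = [], 0
    sign = -1 if decrypt else 1
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[k % len(key)].lower()) - ord('a')
            out.append(chr((ord(ch) - base + sign * shift) % 26 + base))
            k += 1
        else:
            out.append(ch)
    return ''.join(out)
```

Because the key repeats, identical plaintext fragments a key-length apart encrypt identically, which is the regularity Kasiski examination exploits.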
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II. The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.
=== Early computer-era cryptography ===
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States. In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm. In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA is secure, and some other systems, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible; however, the Rabin system is quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.
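As a concrete (and deliberately insecure) illustration of how RSA ties secrecy to the hardness of factoring n = pq, here is a textbook-RSA sketch with tiny primes; real deployments use enormous primes, randomized padding, and vetted libraries, none of which appear here:

```python
# Toy textbook RSA. The primes are tiny, so anyone can factor n
# and recover the private key; this is purely illustrative.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: inverse of e mod phi

def encrypt(m):
    """Encrypt an integer message m < n with the public key (n, e)."""
    return pow(m, e, n)

def decrypt(c):
    """Decrypt with the private exponent d."""
    return pow(c, d, n)
```

Anyone who factors n = 3233 back into 61 × 53 can recompute phi and d, which is precisely why RSA security rests on factoring being infeasible at real key sizes.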
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so required key lengths are similarly advancing. The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography; the announced imminence of small implementations of these machines may make the need for preemptive caution more than merely speculative.
== Modern cryptography ==
Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography. His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis", and as having turned cryptography from an "art to a science". As a result of his contributions and work, he has been described as the "founding father of modern cryptography".
Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics. Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
=== Symmetric-key cryptography ===
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL.
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as stream ciphers by generating blocks of keystream (in place of a pseudorandom number generator) and XORing each bit of the keystream with each bit of the plaintext.
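The keystream-XOR principle can be sketched as follows. Here SHA-256 over a key, nonce, and counter stands in for a real block cipher; the scheme is purely illustrative and not secure:

```python
import hashlib

def keystream_blocks(key: bytes, nonce: bytes, n_blocks: int):
    # Stand-in "block cipher": hash of key || nonce || counter.
    for counter in range(n_blocks):
        yield hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Generate enough 32-byte keystream blocks, then XOR with the data.
    blocks = keystream_blocks(key, nonce, -(-len(data) // 32))
    stream = b"".join(blocks)[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

ct = xor_stream(b"secret key", b"nonce123", b"attack at dawn")
pt = xor_stream(b"secret key", b"nonce123", ct)  # XOR is its own inverse
assert pt == b"attack at dawn"
```

Because XOR is self-inverse, the same function both encrypts and decrypts, which is exactly why a nonce must never be reused with the same key.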
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort.
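Python's standard library includes HMAC, a widely used MAC construction; the key and messages below are illustrative:

```python
import hmac
import hashlib

key = b"shared secret key"          # known only to sender and receiver
message = b"transfer 100 to alice"

# Sender computes a tag over the message using the secret key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)

# Tampering with the message changes the tag, so the forgery is detected.
forged = hmac.new(key, b"transfer 100 to mallory", hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, forged)
```

Without the key, an attacker who intercepts the message cannot produce a valid tag for a modified message, which is the property bare hash digests lack.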
=== Public-key cryptography ===
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
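The quadratic growth mentioned above follows directly from counting unordered pairs of parties, each of which needs its own key:

```python
def pairwise_keys(n: int) -> int:
    # Each of the n members shares one distinct key with each other member:
    # that is n choose 2 = n * (n - 1) / 2 keys in total.
    return n * (n - 1) // 2

assert pairwise_keys(2) == 1
assert pairwise_keys(10) == 45
assert pairwise_keys(1000) == 499500  # grows roughly as n**2 / 2
```

At a thousand members, nearly half a million keys must be generated, distributed, and kept secret, which is the key-management burden that motivated public-key cryptography.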
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key. A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.
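The key-exchange idea can be illustrated with textbook Diffie–Hellman at toy scale. The prime, generator, and secret exponents below are illustrative stand-ins; real deployments use groups of 2048 bits or more:

```python
# Public parameters: a prime modulus and a generator, known to everyone.
p, g = 23, 5

a = 6                          # Alice's secret exponent
b = 15                         # Bob's secret exponent

A = pow(g, a, p)               # Alice publishes A = g^a mod p
B = pow(g, b, p)               # Bob publishes B = g^b mod p

shared_alice = pow(B, a, p)    # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)      # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob == 2   # same secret, never transmitted
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is believed intractable at realistic sizes.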
The X.509 standard defines the most commonly used format for public key certificates.
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques.
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments. Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.
Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).
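The sign/verify split can be sketched with textbook RSA signing of a message hash, using the classic toy parameters n = 3233, e = 17, d = 2753. This is illustrative only; real schemes use keys of 2048 bits or more and randomized padding such as PSS:

```python
import hashlib

n, e, d = 3233, 17, 2753      # public modulus/exponent, private exponent

def sign(message: bytes) -> int:
    # Hash the message, reduce into the toy modulus, apply the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)        # only the private-key holder can do this

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the public key (n, e) can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"pay bob 5")
assert verify(b"pay bob 5", sig)
assert not verify(b"pay bob 5", sig + 1)  # a tampered signature fails
```

Signing the hash rather than the message itself also ties the signature to the exact content, which is why a signature cannot be "moved" to another document undetected.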
Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.
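The hybrid pattern described above can be sketched at toy scale. The XOR "cipher" and textbook RSA parameters below are illustrative stand-ins, not real primitives:

```python
import hashlib

n, e, d = 3233, 17, 2753                 # recipient's toy RSA key pair

def xor_cipher(key: int, data: bytes) -> bytes:
    # Stand-in fast symmetric cipher: XOR with a hash-derived keystream.
    stream = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(b ^ s for b, s in zip(data, stream))

# Sender: pick a fresh symmetric session key, encrypt the message with it,
# and encrypt the session key itself with the recipient's public key (n, e).
session_key = 1234
ciphertext = xor_cipher(session_key, b"hello, hybrid")
wrapped_key = pow(session_key, e, n)

# Recipient: unwrap the session key with the private exponent, then decrypt.
unwrapped = pow(wrapped_key, d, n)
assert unwrapped == session_key
assert xor_cipher(unwrapped, ciphertext) == b"hello, hybrid"
```

The expensive public-key operation is applied only to the short session key, while the bulk data goes through the cheap symmetric cipher.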
=== Cryptographic hash functions ===
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
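The fixed-length-output and avalanche properties are easy to observe with a real hash function such as SHA-256 from Python's standard library:

```python
import hashlib

# Fixed-length output regardless of input size: SHA-256 always emits 32 bytes.
assert len(hashlib.sha256(b"").digest()) == 32
assert len(hashlib.sha256(b"x" * 1_000_000).digest()) == 32

# A one-character change in the input produces an unrelated digest.
d1 = hashlib.sha256(b"hello world").hexdigest()
d2 = hashlib.sha256(b"hello worle").hexdigest()
assert d1 != d2
```

Collision and preimage resistance cannot be demonstrated by running code; they are the claims that no feasible computation finds such pairs or preimages.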
=== Cryptanalysis ===
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message. Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts. Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient. Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be made against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations. This is a considerable improvement over brute force attacks.
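The exponential dependence on key size can be demonstrated with a toy known-plaintext brute force. The 16-bit "cipher" below (a SHA-256-derived keystream) is an illustrative stand-in, not a real design:

```python
import hashlib

def encrypt(key: int, plaintext: bytes) -> bytes:
    # Toy cipher: XOR with a keystream derived from a 16-bit key.
    stream = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

secret_key = 0x3A7C
ciphertext = encrypt(secret_key, b"known plaintext")

# Known-plaintext brute force: try all 2**16 keys. Each extra key bit
# doubles the work, so at 2**56 (DES) the same loop already takes years
# on ordinary hardware, and at 2**128 (AES) it is utterly infeasible.
recovered = next(k for k in range(2**16)
                 if encrypt(k, b"known plaintext") == ciphertext)
assert recovered == secret_key
```

The search over 65,536 keys finishes in well under a second, which is precisely why 16-bit (and 56-bit) keys are no longer acceptable.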
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1980s.
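The asymmetry exploited by discrete-log systems (modular exponentiation is cheap, inverting it is a search) can be seen even at toy scale; the parameters below are illustrative:

```python
p, g = 1_000_003, 2          # toy parameters; real groups are vastly larger

x = 424_242                  # the secret exponent
y = pow(g, x, p)             # fast: O(log x) modular multiplications

def naive_dlog(y: int, g: int, p: int) -> int:
    # Recover some k with g^k = y mod p by stepping through powers of g.
    # Worst case ~p steps: exponential in the bit length of p.
    acc, k = 1, 0
    while acc != y:
        acc = (acc * g) % p
        k += 1
    return k

k = naive_dlog(y, g, p)
assert pow(g, k, p) == y     # the search does invert the exponentiation
```

Even at this toy size the forward direction takes microseconds while the naive inversion takes up to a million steps; at cryptographic sizes the gap becomes astronomical (better algorithms than this naive search exist, but all known classical ones remain super-polynomial).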
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are usually employed due to being more cost-effective and feasible to perform in a reasonable amount of time compared to pure cryptanalysis by a high margin.
=== Cryptographic primitives ===
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems, is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
=== Cryptosystems ===
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols.
Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems, (like zero-knowledge proofs) and systems for secret sharing.
=== Lightweight cryptography ===
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for a strictly constrained environment. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited for such environments. An IoT environment imposes strict constraints on power consumption, processing power, and security. Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to achieve the standard set by the National Institute of Standards and Technology.
== Applications ==
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys. However, some tools such as BitLocker and VeraCrypt do not generally use public-private key cryptography. For example, VeraCrypt uses a password hash to generate the single private key, although it can also be configured to operate with public-private key systems. The open-source encryption library OpenSSL provides free encryption software and tools. The most commonly used encryption cipher is AES, as it has hardware acceleration on all x86-based processors that have AES-NI. A close contender is ChaCha20-Poly1305, a stream cipher that is commonly used on mobile devices, which are ARM-based and do not feature the AES-NI instruction set extension.
=== Cybersecurity ===
Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS. "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram.
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker. Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.
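The store-a-hash-not-the-password pattern can be sketched with the standard library's PBKDF2; the iteration count and salt size below are illustrative choices:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    # A fresh random salt per user defeats precomputed (rainbow-table) attacks;
    # many iterations slow down offline guessing.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, stored)
assert not check_password("wrong password", salt, stored)
```

The system stores only (salt, digest); a login attempt is re-hashed and compared, so the plaintext password never needs to be kept anywhere.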
Encryption is sometimes used to encrypt one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.
=== Cryptocurrencies and cryptoeconomics ===
Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which finance cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash function, asymmetric (public key) encryption, Multi-Factor Authentication (MFA), End-to-End Encryption (E2EE), and Zero Knowledge Proofs (ZKP).
== Legal issues ==
=== Prohibitions ===
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
=== Export controls ===
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
=== NSA involvement ===
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).
=== Digital rights management ===
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech.
=== Forced disclosure of encryption keys ===
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of a drive which has been securely wiped).
== See also ==
Collision attack
Comparison of cryptography libraries
Cryptovirology – Study of the use of cryptography in malware
Crypto Wars – Attempts to limit access to strong cryptography
Encyclopedia of Cryptography and Security – Book by Technische Universiteit Eindhoven
Global surveillance – Mass surveillance across national borders
Indistinguishability obfuscation – Type of cryptographic software obfuscation
Information theory – Scientific study of digital information
Outline of cryptography
List of cryptographers – A list of historical cryptographers
List of multiple discoveries
List of unsolved problems in computer science – List of unsolved computational problems
Pre-shared key – Method to set encryption keys
Secure cryptoprocessor
Strong cryptography – Term applied to cryptographic systems that are highly resistant to cryptanalysis
Syllabical and Steganographical Table – Eighteenth-century work believed to be the first cryptography chart
World Wide Web Consortium's Web Cryptography API – World Wide Web Consortium cryptography standard
== References ==
== Further reading ==
== External links ==
The dictionary definition of cryptography at Wiktionary
Media related to Cryptography at Wikimedia Commons
Cryptography on In Our Time at the BBC
Crypto Glossary and Dictionary of Technical Cryptography Archived 4 July 2022 at the Wayback Machine
A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in the form of lecture notes.
For more on the use of cryptographic elements in fiction, see: Dooley, John F. (23 August 2012). "Cryptology in Fiction". Archived from the original on 29 July 2020. Retrieved 20 February 2015.
The George Fabyan Collection at the Library of Congress has early editions of works of seventeenth-century English literature, publications relating to cryptography. | Wikipedia/Cryptography |
Form factor is a hardware design aspect that defines and prescribes the size, shape, and other physical specifications of components, particularly in electronics. A form factor may represent a broad class of similarly sized components, or it may prescribe a specific standard. It may also define an entire system, as in a computer form factor.
== Evolution and standardization ==
As electronic hardware has become smaller following Moore's law and related patterns, ever-smaller form factors have become feasible. Specific technological advances, such as PCI Express, have had a significant design impact, though form factors have historically evolved slower than individual components. Standardization of form factors is vital for hardware compatibility between different manufacturers.
== Trade-offs ==
Smaller form factors may offer more efficient use of limited space, greater flexibility in the placement of components within a larger assembly, reduced use of material, and greater ease of transportation and use. However, smaller form factors typically incur greater costs in the design, manufacturing, and maintenance phases of the engineering lifecycle, and do not allow the same expansion options as larger form factors. In particular, the design of smaller form-factor computers and network equipment must entail careful consideration of cooling. End-user maintenance and repair of small form-factor electronic devices such as mobile phones is often not possible, and may be discouraged by warranty voiding clauses; such devices require professional servicing—or simply replacement—when they fail.
== Examples ==
Computer form factors comprise a number of specific industry standards for motherboards, specifying dimensions, power supplies, placement of mounting holes and ports, and other parameters. Other types of form factors for computers include:
Small form factor (SFF), a more loosely defined set of standards that may refer to both motherboards and computer cases. SFF devices include mini-towers and home theater PCs.
Pizza box form factor, a wide, flat case form factor used for computers and network switches; often sized for installation in a 19-inch rack.
All-in-one PC
"Lunchbox" portable computer
=== Components ===
Hard disk drive form factors, the physical dimensions of a computer hard drive
Hard disk enclosure form factor, the physical dimensions of a computer hard drive enclosure
Motherboard form factor, the physical dimensions of a computer motherboard
Memory module form factors
=== Mobile form factors ===
Laptop or notebook, a form of portable computer with a clamshell design.
Subnotebook, ultra-mobile PC, netbook, and tablet computer, various form factors for devices that are smaller and often cheaper than a typical notebook.
Mobile phone, including a wide range of sizes and layouts. Broad categories of form factors include bars, flip phones, and sliders, with many subtypes and variations. These also include phablets (small tablets) and industrial handheld devices.
Stick PC, a single-board computer in a small elongated casing resembling a stick
== See also ==
Computer hardware
Electronic packaging
Packaging engineering
List of computer size categories
List of integrated circuit package dimensions
== Notes ==
== References == | Wikipedia/Form_factor_(design) |
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.
For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort organizes the list in time proportional to the number of elements squared (O(n²); see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in linearithmic time (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice.
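The trade-off can be observed directly. A minimal sketch timing an in-place bubble sort against Python's built-in timsort (`sorted`) on the same data; absolute times vary by machine, but the gap grows quickly with the list length:

```python
import random
import time

def bubble_sort(a):
    """O(n^2) time, O(1) extra space: sorts the list in place."""
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.randrange(10_000) for _ in range(2_000)]

t0 = time.perf_counter()
bubble_result = bubble_sort(data.copy())
t_bubble = time.perf_counter() - t0

t0 = time.perf_counter()
timsort_result = sorted(data)      # O(n log n) time, O(n) extra space
t_timsort = time.perf_counter() - t0

assert bubble_result == timsort_result
print(f"bubble: {t_bubble:.4f}s  timsort: {t_timsort:.4f}s")
```

Both produce the same sorted list; they differ only in how much time and extra memory they consume along the way.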
== Background ==
The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine:
"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"
Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory.
Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration:
"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"
== Overview ==
An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago.
Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer.
There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order.
In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues.
=== Theoretical analysis ===
In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where f(n) = O(g(n)) roughly means the time requirement for an algorithm is proportional to g(n), omitting lower-order terms that contribute less than g(n) to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of length encountered in most data-intensive programs.
Some examples of Big O notation applied to algorithms' asymptotic time complexity include:
=== Measuring performance ===
For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed.
Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.
Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user-specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example.
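A minimal "do it yourself" benchmark can be built with Python's standard timeit module. The two candidate implementations below are made-up examples chosen only to give the harness something to compare; which one wins can vary by interpreter version, so no speed claim is baked in:

```python
import timeit

def concat_loop(n):
    """Build a string of n characters by repeated concatenation."""
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n):
    """Build the same string with str.join."""
    return "".join("x" for _ in range(n))

# repeat() runs each benchmark several times; taking the minimum of the
# repeats reduces noise from other processes sharing the machine.
for fn in (concat_loop, concat_join):
    best = min(timeit.repeat(lambda: fn(10_000), number=100, repeat=5))
    print(f"{fn.__name__}: {best:.4f}s per 100 calls")
```

The same harness works for comparing any two interchangeable implementations, which is the essence of a DIY benchmark.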
=== Implementation concerns ===
Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded, or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler. See the articles on just-in-time compilation and interpreted languages.
There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls.
Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, or they could be easily reconfigured. As parallel and distributed computing grow in importance in the late 2010s, more investments are being made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI.
Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have extensive knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations.
== Measures of resource usage ==
Measures are normally expressed as a function of the size of the input n.
The two most common measures are:
Time: how long does the algorithm take to complete?
Space: how much working memory (typically RAM) is needed by the algorithm? This has two aspects: the amount of memory needed by the code (auxiliary space usage), and the amount of memory needed for the data on which the code operates (intrinsic space usage).
For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest are:
Direct power consumption: power needed directly to operate the computer.
Indirect power consumption: power needed for cooling, lighting, etc.
As of 2018, power consumption is growing as an important metric for computational tasks of all types and at all scales ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing.
Less common measures of computational efficiency may also be relevant in some cases:
Transmission size: bandwidth could be a limiting factor. Data compression can be used to reduce the amount of data to be transmitted. Displaying a picture or image (e.g. Google logo) can result in transmitting tens of thousands of bytes (48K in this case) compared with transmitting six bytes for the text "Google". This is important for I/O bound computing tasks.
External space: space needed on a disk or other external memory device; this could be for temporary storage while the algorithm is being carried out, or it could be long-term storage needed to be carried forward for future reference.
Response time (latency): this is particularly relevant in a real-time application when the computer system must respond quickly to some external event.
Total cost of ownership: particularly if a computer is dedicated to one particular algorithm.
=== Time ===
==== Theory ====
Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze.
==== Practice ====
A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests.
Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment.
This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions.
=== Space ===
This section is concerned with use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As for time analysis above, analyze the algorithm, typically using space complexity analysis to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation.
There are up to four aspects of memory usage to consider:
The amount of memory needed to hold the code for the algorithm.
The amount of memory needed for the input data.
The amount of memory needed for any output data.
Some algorithms, such as sorting, often rearrange the input data and do not need any additional space for output data. This property is referred to as "in-place" operation.
The amount of memory needed as working space during the calculation.
This includes local variables and any stack space needed by routines called during a calculation; this stack space can be significant for algorithms which use recursive techniques.
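Working-space usage during a calculation can be observed with Python's standard tracemalloc module. The recursive and iterative functions below are made-up examples for illustration: the recursive version allocates a frame per call, so its peak traced memory grows with the input, while the iterative version's working space stays essentially constant.

```python
import sys
import tracemalloc

def sum_recursive(n):
    """Needs O(n) stack frames of working space."""
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    """Needs only O(1) working space."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

sys.setrecursionlimit(10_000)

tracemalloc.start()
sum_recursive(5_000)
_, peak_recursive = tracemalloc.get_traced_memory()
tracemalloc.reset_peak()
sum_iterative(5_000)
_, peak_iterative = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(peak_recursive, peak_iterative)  # the recursive peak is far larger
```

Both compute the same sum; only their working-space profiles differ, which is exactly the distinction the list above draws.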
Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it is typical for personal computers to have between 4 and 32 GB of RAM, over 300 million times as much memory.
==== Caching and memory hierarchy ====
Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant:
Processor registers, are the fastest memory with the least amount of space. Most direct computation on modern computers occurs with source and destination operands in registers before being updated to the cache, main memory and virtual memory if needed. On a processor core, there are typically on the order of hundreds of bytes or fewer of register availability, although a register file may contain more physical registers than architectural registers defined in the instruction set architecture.
Cache memory is the second fastest, and second smallest, available in the memory hierarchy. Caches are present in processors such as CPUs or GPUs, where they are typically implemented in static RAM, though they can also be found in peripherals such as disk drives. Processor caches often have their own multi-level hierarchy; lower levels are larger, slower and typically shared between processor cores in multi-core processors. In order to process operands in cache memory, a processing unit must fetch the data from the cache, perform the operation in registers and write the data back to the cache. This operates at speeds comparable to (about 2–10 times slower than) the CPU or GPU's arithmetic logic unit or floating-point unit if in the L1 cache. It is about 10 times slower if there is an L1 cache miss and it must be retrieved from and written to the L2 cache, and a further 10 times slower if there is an L2 cache miss and it must be retrieved from an L3 cache, if present.
Main physical memory is most often implemented in dynamic RAM (DRAM). The main memory is much larger (typically gigabytes compared to ≈8 megabytes) than an L3 CPU cache, with read and write latencies typically 10-100 times slower. As of 2018, RAM is increasingly implemented on-chip of processors, as CPU or GPU memory.
Paged memory, often used for virtual memory management, is memory stored in secondary storage such as a hard disk, and is an extension to the memory hierarchy which allows use of a potentially larger storage space, at the cost of much higher latency, typically around 1000 times slower than a cache miss for a value in RAM. While originally motivated to create the impression of higher amounts of memory being available than were truly available, virtual memory is more important in contemporary usage for its time-space tradeoff and enabling the usage of virtual machines. Cache misses from main memory are called page faults, and incur huge performance penalties on programs.
An algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another.
In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
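Locality of reference can be demonstrated with a sketch that sums a matrix in two traversal orders; the example is illustrative. Note that in a compiled language such as C the row-major/column-major gap is dramatic, whereas in CPython interpreter overhead hides much of the cache effect, so the timings below are printed without any claim about their ratio:

```python
import time

N = 1_500
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    """Visit elements in the order each row is laid out (good spatial locality)."""
    total = 0
    for row in m:
        for v in row:
            total += v
    return total

def sum_col_major(m):
    """Jump to a different row on every access (poor spatial locality)."""
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

for fn in (sum_row_major, sum_col_major):
    t0 = time.perf_counter()
    result = fn(matrix)
    print(fn.__name__, result, f"{time.perf_counter() - t0:.3f}s")
```

Both traversals touch exactly the same data and compute the same total; only the access pattern, and therefore the cache behavior, differs.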
== See also ==
Analysis of algorithms—how to determine the resources needed by an algorithm
Benchmark—a method for measuring comparative execution times in defined cases
Best, worst and average case—considerations for estimating execution times in three scenarios
Compiler optimization—compiler-derived optimization
Computational complexity theory
Computer performance—computer hardware metrics
Empirical algorithmics—the practice of using empirical methods to study the behavior of algorithms
Program optimization
Performance analysis—methods of measuring actual performance of an algorithm at run-time
== References == | Wikipedia/Algorithmic_efficiency |
General set theory (GST) is George Boolos's (1998) name for a fragment of the axiomatic set theory Z. GST is sufficient for all mathematics not requiring infinite sets, and is the weakest known set theory whose theorems include the Peano axioms.
== Ontology ==
The ontology of GST is identical to that of ZFC, and hence is thoroughly canonical. GST features a single primitive ontological notion, that of set, and a single ontological assumption, namely that all individuals in the universe of discourse (hence all mathematical objects) are sets. There is a single primitive binary relation, set membership; that set a is a member of set b is written a ∈ b (usually read "a is an element of b").
== Axioms ==
The symbolic axioms below are from Boolos (1998: 196), and govern how sets behave and interact.
As with Z, the background logic for GST is first order logic with identity. Indeed, GST is the fragment of Z obtained by omitting the axioms Union, Power Set, Elementary Sets (essentially Pairing) and Infinity and then taking a theorem of Z, Adjunction, as an axiom.
The natural language versions of the axioms are intended to aid the intuition.
1) Axiom of Extensionality: The sets x and y are the same set if they have the same members.
∀x∀y[∀z[z ∈ x ↔ z ∈ y] → x = y].
The converse of this axiom follows from the substitution property of equality.
2) Axiom Schema of Specification (or Separation or Restricted Comprehension): If z is a set and φ is any property which may be satisfied by all, some, or no elements of z, then there exists a subset y of z containing just those elements x in z which satisfy the property φ. The restriction to z is necessary to avoid Russell's paradox and its variants. More formally, let φ(x) be any formula in the language of GST in which x may occur freely and y does not. Then all instances of the following schema are axioms:
∀z∃y∀x[x ∈ y ↔ (x ∈ z ∧ φ(x))].
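For finite sets, the Specification schema corresponds to an ordinary comprehension. A sketch modeling hereditarily finite sets as Python frozensets (an illustration of the axiom's content, not part of the formal theory):

```python
def separation(z, phi):
    """Return {x in z : phi(x)} — the subset guaranteed by Specification."""
    return frozenset(x for x in z if phi(x))

z = frozenset({1, 2, 3, 4, 5})
evens = separation(z, lambda x: x % 2 == 0)
print(evens)  # frozenset({2, 4})

# Separation only ever yields subsets of z, which is what blocks
# Russell-style unrestricted comprehension.
assert evens <= z
```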
3) Axiom of Adjunction: If x and y are sets, then there exists a set w, the adjunction of x and y, whose members are just y and the members of x.
∀x∀y∃w∀z[z ∈ w ↔ (z ∈ x ∨ z = y)].
Adjunction refers to an elementary operation on two sets, and has no bearing on the use of that term elsewhere in mathematics, including in category theory.
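Modeling hereditarily finite sets as Python frozensets (frozensets are hashable, so they can be members of other sets), the adjunction of x and y is simply x ∪ {y}. A sketch for illustration:

```python
def adjoin(x, y):
    """Adjunction: the set whose members are y together with the members of x."""
    return x | frozenset({y})

empty = frozenset()
a = adjoin(empty, empty)   # {∅}
b = adjoin(a, a)           # {∅, {∅}}
print(b)

assert empty in b and a in b and len(b) == 2
```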
ST is GST with the axiom schema of specification replaced by the axiom of empty set:
∃x∀y[y ∉ x].
== Discussion ==
=== Metamathematics ===
Note that Specification is an axiom schema. The theory given by these axioms is not finitely axiomatizable. Montague (1961) showed that ZFC is not finitely axiomatizable, and his argument carries over to GST. Hence any axiomatization of GST must include at least one axiom schema.
With its simple axioms, GST is also immune to the three great antinomies of naïve set theory: Russell's, Burali-Forti's, and Cantor's.
GST is interpretable in relation algebra because no part of any GST axiom lies in the scope of more than three quantifiers. This is the necessary and sufficient condition given in Tarski and Givant (1987).
=== Peano arithmetic ===
Setting φ(x) in Separation to x ≠ x, and assuming that the domain is nonempty, assures the existence of the empty set. Adjunction implies that if x is a set, then so is S(x) = x ∪ {x}. Given Adjunction, the usual construction of the successor ordinals from the empty set can proceed, one in which the natural numbers are defined as ∅, S(∅), S(S(∅)), …. See Peano's axioms.
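The successor construction can be sketched with frozensets, using adjunction with y = x to form S(x) = x ∪ {x}; a useful sanity check is that the von Neumann numeral for n has exactly n members:

```python
def successor(x):
    """S(x) = x ∪ {x}, built from Adjunction with y = x."""
    return x | frozenset({x})

zero = frozenset()          # ∅
naturals = [zero]
for _ in range(4):
    naturals.append(successor(naturals[-1]))

for n, num in enumerate(naturals):
    assert len(num) == n            # the numeral for n has n members
    if n > 0:
        assert naturals[n - 1] in num  # n-1 ∈ n
print("0..4 as von Neumann ordinals check out")
```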
GST is mutually interpretable with Peano arithmetic (thus it has the same proof-theoretic strength as PA).
The most remarkable fact about ST (and hence GST) is that these tiny fragments of set theory give rise to such rich metamathematics. While ST is a small fragment of the well-known canonical set theories ZFC and NBG, ST interprets Robinson arithmetic (Q), so that ST inherits the nontrivial metamathematics of Q. For example, ST is essentially undecidable because Q is, and every consistent theory whose theorems include the ST axioms is also essentially undecidable. This includes GST and every axiomatic set theory worth thinking about, assuming these are consistent. In fact, the undecidability of ST implies the undecidability of first-order logic with a single binary predicate letter.
Q is also incomplete in the sense of Gödel's incompleteness theorem. Any axiomatizable theory, such as ST and GST, whose theorems include the Q axioms is likewise incomplete. Moreover, the consistency of GST cannot be proved within GST itself, unless GST is in fact inconsistent.
=== Infinite sets ===
Given any model M of ZFC, the collection of hereditarily finite sets in M will satisfy the GST axioms. Therefore, GST cannot prove the existence of even a countable infinite set, that is, of a set whose cardinality is ℵ₀. Even if GST did afford a countably infinite set, GST could not prove the existence of a set whose cardinality is ℵ₁, because GST lacks the axiom of power set. Hence GST cannot ground analysis and geometry, and is too weak to serve as a foundation for mathematics.
== History ==
Boolos was interested in GST only as a fragment of Z that is just powerful enough to interpret Peano arithmetic. He never lingered over GST, only mentioning it briefly in several papers discussing the systems of Frege's Grundlagen and Grundgesetze, and how they could be modified to eliminate Russell's paradox. The system Aξ'[δ0] in Tarski and Givant (1987: 223) is essentially GST with an axiom schema of induction replacing Specification, and with the existence of an empty set explicitly assumed.
GST is called STZ in Burgess (2005), p. 223. Burgess's theory ST is GST with Empty Set replacing the axiom schema of specification. That the letters "ST" also appear in "GST" is a coincidence.
== Footnotes ==
== References ==
George Boolos (1999) Logic, Logic, and Logic. Harvard Univ. Press.
Burgess, John, 2005. Fixing Frege. Princeton Univ. Press.
Collins, George E., and Daniel, J. D. (1970). "On the interpretability of arithmetic in set theory". Notre Dame Journal of Formal Logic, 11 (4): 477–483.
Richard Montague (1961) "Semantical closure and non-finite axiomatizability" in Infinistic Methods. Warsaw: 45-69.
Alfred Tarski, Andrzej Mostowski, and Raphael Robinson (1953) Undecidable Theories. North Holland.
Tarski, A., and Givant, Steven (1987) A Formalization of Set Theory without Variables. Providence RI: AMS Colloquium Publications, v. 41.
== External links ==
Stanford Encyclopedia of Philosophy: Set Theory—by Thomas Jech. | Wikipedia/General_set_theory |
Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." While interaction design has an interest in form (similar to other design fields), its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field, as opposed to a science or engineering field.
Interaction design borrows from a wide range of fields like psychology, human-computer interaction, information architecture, and user research to create designs that are tailored to the needs and preferences of users. This involves understanding the context in which the product will be used, identifying user goals and behaviors, and developing design solutions that are responsive to user needs and expectations.
While disciplines such as software engineering have a heavy focus on designing for technical stakeholders, interaction design is focused on meeting the needs and optimizing the experience of users, within relevant technical or business constraints.
== History ==
The term interaction design was coined by Bill Moggridge and Bill Verplank in the mid-1980s, but it took 10 years before the concept started to take hold. To Verplank, it was an adaptation of the computer science term user interface design for the industrial design profession. To Moggridge, it was an improvement over soft-face, which he had coined in 1984 to refer to the application of industrial design to products containing software.
The earliest programs in design for interactive technologies were the Visible Language Workshop, started by Muriel Cooper at MIT in 1975, and the Interactive Telecommunications Program founded at NYU in 1979 by Martin Elton and later headed by Red Burns.
The first academic program officially named "Interaction Design" was established at Carnegie Mellon University in 1994, as a Master of Design in Interaction Design. At the outset, the program focused mainly on screen interfaces, before shifting to a greater emphasis on the "big picture" aspects of interaction—people, organizations, culture, service and system.
In 1990, Gillian Crampton Smith founded the Computer-Related Design MA at the Royal College of Art (RCA) in London, which in 2005 was renamed Design Interactions, headed by Anthony Dunne. In 2001, Crampton Smith helped found the Interaction Design Institute Ivrea (IDII), a specialized institute in Olivetti's hometown in Northern Italy, dedicated solely to interaction design. In 2007, after IDII closed due to a lack of funding, some of the people originally involved with IDII set up the Copenhagen Institute of Interaction Design (CIID) in Denmark. After Ivrea, Crampton Smith and Philip Tabor added the Interaction Design (IxD) track to the Visual and Multimedia Communication programme at the University of Venice, Italy.
In 1998, the Swedish Foundation for Strategic Research founded The Interactive Institute—a Swedish research institute in the field of interaction design.
== Methodologies ==
=== Goal-oriented design ===
Goal-oriented design (or Goal-Directed design) "is concerned with satisfying the needs and desires of the users of a product or service."
Alan Cooper argues in The Inmates Are Running the Asylum that we need a new approach to solving interactive software-based problems. The problems with designing computer interfaces are fundamentally different from those that do not include software (e.g., hammers). Cooper introduces the concept of cognitive friction, which is when the interface of a design is complex and difficult to use, and behaves inconsistently and unexpectedly, possessing different modes.
Alternatively, interfaces can be designed to serve the needs of the service/product provider. User needs may be poorly served by this approach.
=== Usability ===
Usability answers the question "can someone use this interface?". Jakob Nielsen describes usability as the quality attribute that describes how usable the interface is. Shneiderman proposes principles for designing more usable interfaces called "Eight Golden Rules of Interface Design"—which are well-known heuristics for creating usable systems.
=== Personas ===
Personas are archetypes that describe the various goals and observed behaviour patterns among users.
A persona encapsulates critical behavioural data in a way that both designers and stakeholders can understand, remember, and relate to. Personas use storytelling to engage users' social and emotional aspects, which helps designers to either visualize the best product behaviour or see why the recommended design is successful.
=== Cognitive dimensions ===
The cognitive dimensions framework provides a vocabulary to evaluate and modify design solutions. Cognitive dimensions offer a lightweight approach to analysis of a design quality, rather than an in-depth, detailed description. They provide a common vocabulary for discussing notation, user interface or programming language design.
Dimensions provide high-level descriptions of the interface and how the user interacts with it: examples include consistency, error-proneness, hard mental operations, viscosity and premature commitment. These concepts aid the creation of new designs from existing ones through design maneuvers that alter the design within a particular dimension.
=== Affective interaction design ===
Designers must be aware of elements that influence user emotional responses. For instance, products must convey positive emotions while avoiding negative ones. Other important aspects include motivational, learning, creative, social and persuasive influences. One method that can help convey such aspects is, for example, the use of dynamic icons, animations and sound to help communicate, creating a sense of interactivity. Interface aspects such as fonts, color palettes and graphical layouts can influence acceptance. Studies have shown that affective aspects can affect perceptions of usability.
Emotion and pleasure theories exist to explain interface responses. These include Don Norman's emotional design model, Patrick Jordan's pleasure model and McCarthy and Wright's Technology as Experience framework.
== Five dimensions ==
The concept of dimensions of interaction design was introduced in Moggridge's book Designing Interactions. Crampton Smith wrote that interaction design draws on four existing design languages: 1D, 2D, 3D, and 4D. Kevin Silver later proposed a fifth dimension, behaviour.
=== Words ===
This dimension defines interactions: words are the element that users interact with.
=== Visual representations ===
Visual representations are the elements of an interface that the user perceives; these may include but are not limited to "typography, diagrams, icons, and other graphics".
=== Physical objects or space ===
This dimension defines the objects or space "with which or within which users interact".
=== Time ===
The time during which the user interacts with the interface. An example of this includes "content that changes over time such as sound, video or animation".
=== Behavior ===
Behavior defines how users respond to the interface; different users may react to the same interface in different ways.
== Interaction Design Association ==
The Interaction Design Association (IxDA) was created in 2003 to serve the community. The organization has over 80,000 members and more than 173 local groups. IxDA hosts Interaction, the annual interaction design conference, and hosted the Interaction Awards, which ended in August 2024.
== Related disciplines ==
Industrial design
The core principles of industrial design overlap with those of interaction design. Industrial designers use their knowledge of physical form, color, aesthetics, human perception and desire, and usability to create a fit of an object with the person using it.
Human factors and ergonomics
Certain basic principles of ergonomics provide grounding for interaction design. These include anthropometry, biomechanics, kinesiology, physiology and psychology as they relate to human behavior in the built environment.
Cognitive psychology
Certain basic principles of cognitive psychology provide grounding for interaction design. These include mental models, mapping, interface metaphors, and affordances. Many of these are laid out in Donald Norman's influential book The Design of Everyday Things.
Human–computer interaction
Academic research in human–computer interaction (HCI) includes methods for describing and testing the usability of interacting with an interface, such as cognitive dimensions and the cognitive walkthrough.
Design research
Interaction designers are typically informed through iterative cycles of user research. User research is used to identify the needs, motivations and behaviors of end users. They design with an emphasis on user goals and experience, and evaluate designs in terms of usability and affective influence.
Architecture
As interaction designers increasingly deal with ubiquitous computing, urban informatics and urban computing, the architects' ability to make, place, and create context becomes a point of contact between the disciplines.
User interface design
Like user interface design and experience design, interaction design is often associated with the design of system interfaces in a variety of media but concentrates on the aspects of the interface that define and present its behavior over time, with a focus on developing the system to respond to the user's experience and not the other way around.
== See also ==
Activity-centered design
Attentive user interface
Hardware interface design
Human interface guidelines (user friendly computer application designs)
Information architecture
Instructional design
Interaction Design Foundation
Fitts's law
User experience design
Interface design
== References ==
== Further reading ==
Bolter, Jay D.; Gromala, Diane (2008). Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-02545-4.
Buchenau, Marion; Suri, Jane Fulton. Experience Prototyping. DIS 2000. ISBN 1-58113-219-0.
Buxton, Bill (2005). Sketching the User Experience. New Riders Press. ISBN 0-321-34475-8.
Cooper, Alan (1999). The Inmates are Running the Asylum. Sams. ISBN 0672316498.
Cooper, Alan; Reimann, Robert; Cronin, David; Noessel, Christopher (2014). About Face (4th ed.). Wiley. ISBN 9781118766576.
Dawes, Brendan (2007). Analog In, Digital Out. Berkeley, California: New Riders Press.
Goodwin, Kim (2009). Designing for the Digital Age: How to Create Human-Centered Products and Services. Wiley. ISBN 978-0-470-22910-1.
Houde, Stephanie; Hill, Charles (1997). "What Do Prototypes Prototype?". In Helander, M; Landauer, T; Prabhu, P (eds.). Handbook of Human–Computer Interaction (2nd ed.). Elsevier Science.
Jones, Matt; Marsden, Gary (2006). Mobile Interaction Design. John Wiley & Sons. ISBN 0-470-09089-8.
Kolko, Jon (2009). Thoughts on Interaction Design. Morgan Kaufmann. ISBN 978-0-12-378624-1.
Laurel, Brenda; Lunenfeld, Peter (2003). Design Research: Methods and Perspectives. MIT Press. ISBN 0-262-12263-4.
Tinauli, Musstanser; Pillan, Margherita (2008). "Interaction Design and Experiential Factors: A Novel Case Study on Digital Pen and Paper". Mobility '08: Proceedings of the International Conference on Mobile Technology, Applications, and Systems. New York: ACM. doi:10.1145/1506270.1506400. ISBN 978-1-60558-089-0.
Norman, Donald (1988). The Design of Everyday Things. New York: Basic Books. ISBN 978-0-465-06710-7.
Raskin, Jef (2000). The Humane Interface. ACM Press. ISBN 0-201-37937-6.
Saffer, Dan (2006). Designing for Interaction. New Riders Press. ISBN 0-321-43206-1. | Wikipedia/Interaction_design |
An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse and Lazarus contain the necessary compiler, interpreter or both; others, such as SharpDevelop and NetBeans, do not.
The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.
== Overview ==
Integrated development environments are designed to maximize programmer productivity by providing tight-knit components with similar user interfaces. IDEs present a single program in which all development is done. This program typically provides many features for authoring, modifying, compiling, deploying and debugging software. This contrasts with software development using unrelated tools, such as vi, GDB, GNU Compiler Collection, or make.
One aim of the IDE is to reduce the configuration necessary to piece together multiple development utilities. Instead, it provides the same set of capabilities as one cohesive unit. Reducing setup time can increase developer productivity, especially in cases where learning to use the IDE is faster than manually integrating and learning all of the individual tools. Tighter integration of all development tasks has the potential to improve overall productivity beyond just helping with setup tasks. For example, code can be continuously parsed while it is being edited, providing instant feedback when syntax errors are introduced, thus allowing developers to debug code much faster and more easily with an IDE.
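The continuous parsing described above can be sketched with Python's standard `ast` module. This is an illustrative toy, not any particular IDE's implementation: a hypothetical editor would call something like `check_syntax` after each edit and underline the reported location.

```python
import ast

def check_syntax(source: str):
    """Parse the buffer and report the first syntax error, if any.

    Returns None when the source parses cleanly, otherwise a
    (line, column, message) tuple an editor could use to mark
    the offending span.
    """
    try:
        ast.parse(source)
        return None
    except SyntaxError as err:
        return (err.lineno, err.offset, err.msg)

# A well-formed buffer produces no diagnostics...
assert check_syntax("x = 1\ny = x + 2\n") is None
# ...while a broken one is flagged with its location.
line, col, msg = check_syntax("def f(:\n    pass\n")
assert line == 1
```

Real IDEs use incremental, error-tolerant parsers so that feedback survives half-typed code, but the feedback loop is the same.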
Some IDEs are dedicated to a specific programming language, allowing a feature set that most closely matches the programming paradigms of the language. However, there are many multiple-language IDEs.
While most modern IDEs are graphical, text-based IDEs such as Turbo Pascal were in popular use before the availability of windowing systems like Microsoft Windows and the X Window System (X11). They commonly use function keys or hotkeys to execute frequently used commands or macros.
== History ==
IDEs initially became possible when developing via a console or terminal. Early systems could not support one, since programs were submitted to a compiler or assembler via punched cards, paper tape, etc. Dartmouth BASIC was the first language to be created with an IDE (and was also the first to be designed for use while sitting in front of a console or terminal). Its IDE (part of the Dartmouth Time-Sharing System) was command-based, and therefore did not look much like the menu-driven, graphical IDEs popular after the advent of the graphical user interface. However it integrated editing, file management, compilation, debugging and execution in a manner consistent with a modern IDE.
Maestro I, a product from Softlab Munich, was the world's first integrated development environment for software. Maestro I was installed for 22,000 programmers worldwide. Until 1989, 6,000 installations existed in the Federal Republic of Germany. Maestro was arguably the world leader in this field during the 1970s and 1980s. Today one of the last Maestro I systems can be found in the Museum of Information Technology at Arlington, Texas.
One of the first IDEs with a plug-in concept was Softbench. In 1995 Computerwoche commented that the use of an IDE was not well received by developers since it would fence in their creativity.
As of August 2023, the most commonly searched for IDEs on Google Search were Visual Studio, Visual Studio Code, and Eclipse.
== Topics ==
=== Syntax highlighting ===
The IDE editor usually provides syntax highlighting: it can show structures, language keywords, and syntax errors in visually distinct colors and font effects.
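As a rough illustration of token classification (the first step of highlighting), Python's standard `tokenize` and `keyword` modules are enough to sort tokens into the categories an editor would color:

```python
import io
import keyword
import tokenize

def highlight(source: str):
    """Classify each token so an editor can color it.

    Returns a list of (text, category) pairs; the categories here are
    just 'keyword', 'name', 'number', 'string', or 'other'.
    """
    spans = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            kind = "keyword" if keyword.iskeyword(tok.string) else "name"
        elif tok.type == tokenize.NUMBER:
            kind = "number"
        elif tok.type == tokenize.STRING:
            kind = "string"
        else:
            kind = "other"
        if tok.string:
            spans.append((tok.string, kind))
    return spans

spans = highlight("if x == 42: print('hi')\n")
assert ("if", "keyword") in spans
assert ("42", "number") in spans
assert ("'hi'", "string") in spans
```

Production highlighters add nesting-aware grammars so that, for example, a keyword inside a string is not colored as a keyword; a pure tokenizer already gets that case right.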
=== Code completion ===
Code completion is an important IDE feature, intended to speed up programming. Modern IDEs even have intelligent code completion.
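A minimal sketch of how a completer might work, assuming identifiers have already been scanned out of the buffer (the `buffer_symbols` set here is hypothetical). Real engines add scope, type, and usage-frequency information, which is what makes completion "intelligent":

```python
import keyword

def complete(prefix: str, symbols):
    """Return candidate completions for the word under the cursor.

    Candidates are drawn from language keywords plus identifiers
    already present in the buffer, the way a simple completer does.
    """
    pool = set(keyword.kwlist) | set(symbols)
    return sorted(s for s in pool if s.startswith(prefix) and s != prefix)

# Identifiers a hypothetical buffer scan produced:
buffer_symbols = {"total", "totals_by_day", "to_json"}
assert complete("tot", buffer_symbols) == ["total", "totals_by_day"]
assert "while" in complete("wh", buffer_symbols)
```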
==== Intelligent code completion ====
=== Refactoring ===
Advanced IDEs provide support for automated refactoring.
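A toy version of the "rename symbol" refactoring can be built on Python's `ast.NodeTransformer`. Unlike a real IDE refactoring it is not scope-aware, so it renames every matching identifier in the file; it only shows the tree-rewriting idea:

```python
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename every occurrence of one identifier: a minimal,
    non-scope-aware 'rename symbol' refactoring."""

    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name):
        if node.id == self.old:
            node.id = self.new
        return node

def rename(source: str, old: str, new: str) -> str:
    tree = ast.parse(source)
    tree = RenameVariable(old, new).visit(tree)
    return ast.unparse(tree)  # requires Python 3.9+

out = rename("x = 1\nprint(x + x)", "x", "count")
assert out == "count = 1\nprint(count + count)"
```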
=== Version control ===
An IDE is expected to provide integrated version control, in order to interact with source repositories.
=== Debugging ===
IDEs are also used for debugging, using an integrated debugger, with support for setting breakpoints in the editor, visual rendering of steps, etc.
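The hook debuggers are built on can be sketched with `sys.settrace`, the same mechanism Python's `pdb` and `bdb` use. This toy merely records executed lines and checks whether a chosen "breakpoint" line was reached:

```python
import sys

def trace_lines(func, breakpoint_line):
    """Run func under a trace function, recording each executed line
    and noting whether the 'breakpoint' line was hit."""
    hits = []

    def tracer(frame, event, arg):
        if event == "line":
            hits.append(frame.f_lineno)
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return breakpoint_line in hits, hits

def sample():
    a = 1
    b = a + 1
    return b

# Break on the first body line of sample():
hit, lines = trace_lines(sample, sample.__code__.co_firstlineno + 1)
assert hit
```

A real debugger would pause inside `tracer` and hand control to the user instead of just logging.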
=== Code search ===
IDEs may provide support for code search. Code search has two different meanings. First, it means searching for class and function declarations, usages, variable and field read/write, etc. IDEs can use different kinds of user interface for code search, for example form-based widgets and natural-language based interfaces.
Second, it means searching for a concrete implementation of some specified functionality.
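The first sense, declaration search, can be sketched by indexing the syntax tree rather than the raw text. The snippet below is a simplification of what an IDE's symbol index does: it maps each function and class name to its declaration line.

```python
import ast

def find_definitions(source: str):
    """Index function and class declarations by name and line number,
    the structural lookup behind 'go to definition'."""
    index = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            index[node.name] = node.lineno
    return index

src = """\
class Parser:
    def parse(self):
        pass

def main():
    pass
"""
defs = find_definitions(src)
assert defs["Parser"] == 1
assert defs["parse"] == 2
assert defs["main"] == 5
```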
=== Visual programming ===
Visual programming is a usage scenario in which an IDE is generally required. Visual Basic allows users to create new applications by moving programming building blocks or code nodes to create flowcharts or structure diagrams that are then compiled or interpreted. These flowcharts are often based on the Unified Modeling Language.
This interface has been popularized with the Lego Mindstorms system and is being actively pursued by a number of companies wishing to capitalize on the power of custom browsers like those found at Mozilla. KTechlab supports flowcode and is a popular open-source IDE and simulator for developing software for microcontrollers. Visual programming is also responsible for the power of distributed programming (cf. LabVIEW and EICASLAB software). An early visual programming system, Max, was modeled after an analog synthesizer design and has been used to develop real-time music performance software since the 1980s. Another early example was Prograph, a dataflow-based system originally developed for the Macintosh. The graphical programming environment "Grape" is used to program qfix robot kits.
This approach is also used in specialist software such as Openlab, where the end-users want the flexibility of a full programming language, without the traditional learning curve associated with one.
=== Language support ===
Some IDEs support multiple languages, such as GNU Emacs, IntelliJ IDEA, Eclipse, MyEclipse, NetBeans, MonoDevelop, JDoodle or PlayCode.
Support for alternative languages is often provided by plugins, allowing them to be installed on the same IDE at the same time. For example, Flycheck is a modern on-the-fly syntax checking extension for GNU Emacs 24 with support for 39 languages. Another example is JDoodle, an online cloud-based IDE that supports 88 languages.[1] Eclipse and NetBeans have plugins for C/C++, Ada, GNAT (for example AdaGIDE), Perl, Python, Ruby, and PHP, which are selected between automatically based on file extension, environment or project settings.
=== Implementation ===
IDEs can be implemented in various languages, for example:
GNU Emacs using Emacs Lisp and C;
IntelliJ IDEA, Eclipse and NetBeans, using Java;
MonoDevelop and Rider using C#.
=== Attitudes across different computing platforms ===
Unix programmers can combine command-line POSIX tools into a complete development environment, capable of developing large programs such as the Linux kernel and its environment. In this sense, the entire Unix system functions as an IDE. The free software GNU toolchain (including GNU Compiler Collection (GCC), GNU Debugger (GDB), and GNU make) is available on many platforms, including Windows. The pervasive Unix philosophy of "everything is a text stream" enables developers who favor command-line oriented tools to use editors with support for many of the standard Unix and GNU build tools, building an IDE with programs like Emacs or Vim. Data Display Debugger is intended to be an advanced graphical front-end for many text-based debugger standard tools. Some programmers prefer managing makefiles and their derivatives to the similar code building tools included in a full IDE. For example, most contributors to the PostgreSQL database use make and GDB directly to develop new features. Even when building PostgreSQL for Microsoft Windows using Visual C++, Perl scripts are used as a replacement for make rather than relying on any IDE features. Some Linux IDEs such as Geany attempt to provide a graphical front end to traditional build operations.
On the various Microsoft Windows platforms, command-line tools for development are seldom used. Accordingly, there are many commercial and non-commercial products. However, each has a different design commonly creating incompatibilities. Most major compiler vendors for Windows still provide free copies of their command-line tools, including Microsoft (Visual C++, Platform SDK, .NET Framework SDK, nmake utility).
IDEs have always been popular on the Apple Macintosh's classic Mac OS and macOS, dating back to Macintosh Programmer's Workshop, Turbo Pascal, THINK Pascal and THINK C environments of the mid-1980s. Currently macOS programmers can choose between native IDEs like Xcode and open-source tools such as Eclipse and Netbeans. ActiveState Komodo is a proprietary multilanguage IDE supported on macOS.
== Online ==
An online integrated development environment, also known as a web IDE or cloud IDE, is a browser-based IDE that allows for software development or web development. An online IDE can be accessed from a web browser, allowing for a portable work environment. An online IDE does not usually contain all of the same features as a traditional or desktop IDE, although all of the basic IDE features, such as syntax highlighting, are typically present.
A Mobile-Based Integrated Development Environment (IDE) is a software application that provides a comprehensive suite of tools for software development on mobile platforms. Unlike traditional desktop IDEs, mobile-based IDEs are designed to run on smartphones and tablets, allowing developers to write, debug, and deploy code directly from their mobile devices.
== See also ==
== References == | Wikipedia/Integrated_development_environment |
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas:
Theoretical foundations and analysis
Use of computer technology to aid logicians
Use of concepts from logic for computer applications
== Theoretical foundations and analysis ==
Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure, and Kurt Gödel asserted that he found Turing's analysis "perfect". In addition, some other major areas of theoretical overlap between logic and computer science are:
Gödel's incompleteness theorem proves that any logical system powerful enough to characterize arithmetic will contain statements that can neither be proved nor disproved within that system. This has direct application to theoretical issues relating to the feasibility of proving the completeness and correctness of software.
The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals of an artificial intelligence agent and the state of its environment.
The Curry–Howard correspondence is a relation between logical systems and programming languages. This theory established a precise correspondence between proofs and programs. In particular it showed that terms in the simply typed lambda calculus correspond to proofs of intuitionistic propositional logic.
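The correspondence can be loosely illustrated in any typed language; here Python type hints stand in for the simply typed lambda calculus. The function `compose` inhabits the type of hypothetical syllogism, and `apply` plays the role of modus ponens (this is an illustration of the idea, not a formal proof system):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under the Curry-Howard reading, a total function of type A -> B is a
# proof of the implication A -> B. `compose` inhabits the type
# (A -> B) -> (B -> C) -> (A -> C): implication is transitive.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# Modus ponens: from a proof of A -> B and a proof of A, obtain B.
def apply(f: Callable[[A], B], a: A) -> B:
    return f(a)

step = compose(len, lambda n: n > 3)  # str -> bool, via str -> int -> bool
assert apply(step, "logic") is True
```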
Category theory represents a view of mathematics that emphasizes the relations between structures. It is intimately tied to many aspects of computer science: type systems for programming languages, the theory of transition systems, models of programming languages and the theory of programming language semantics.
Logic programming is a programming, database and knowledge representation paradigm that is based on formal logic. A logic program is a set of sentences about some problem domain. Computation is performed by applying logical reasoning to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog.
== Computers to assist logicians ==
One of the first applications to use the term artificial intelligence was the Logic Theorist system developed by Allen Newell, Cliff Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of statements in logic and deduce the conclusions (additional statements) that must be true by the laws of logic. For example, if given the statements "All humans are mortal" and "Socrates is human" a valid conclusion is "Socrates is mortal". Of course this is a trivial example. In actual logical systems the statements can be numerous and complex. It was realized early on that this kind of analysis could be significantly aided by the use of computers. Logic Theorist validated the theoretical work of Bertrand Russell and Alfred North Whitehead in their influential work on mathematical logic called Principia Mathematica. In addition, subsequent systems have been utilized by logicians to validate and discover new mathematical theorems and proofs.
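The Socrates example can be mechanized with a tiny forward-chaining loop. This is a toy illustration of repeated rule application, not Logic Theorist's actual search procedure, and the rule is instantiated for Socrates to keep the sketch propositional:

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: whenever a rule's premise is a
    known fact, add its conclusion, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# "All humans are mortal", instantiated as human(Socrates) => mortal(Socrates):
rules = [("human(Socrates)", "mortal(Socrates)")]
derived = forward_chain({"human(Socrates)"}, rules)
assert "mortal(Socrates)" in derived
```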
== Logic applications for computers ==
There has always been a strong influence from mathematical logic on the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inferences could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated. First-order logic is a general and powerful method for describing and analyzing information. The reason FOL itself is simply not used as a computer language is that it is actually too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever solve. For this reason, every form of knowledge representation is in some sense a trade-off between expressivity and computability. A widely held belief maintains that the more expressive a language is (that is, the closer it is to FOL), the more likely it is to be slow and prone to infinite loops. However, in recent work by Heng Zhang et al., this belief has been rigorously challenged. Their findings establish that all universal knowledge representation formalisms are recursively isomorphic. Furthermore, their proof demonstrates that FOL can be translated into a pure procedural knowledge representation formalism defined by Turing machines with computationally feasible overhead, specifically within deterministic polynomial time or even at lower complexity.
For example, IF–THEN rules used in expert systems approximate to a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators the starting point is simply what logicians refer to as modus ponens. As a result, rule-based systems can support high-performance computation, especially if they take advantage of optimization algorithms and compilation.
On the other hand, logic programming, which combines the Horn clause subset of first-order logic with a non-monotonic form of negation, has both high expressive power and efficient implementations. In particular, the logic programming language Prolog is a Turing complete programming language. Datalog extends the relational database model with recursive relations, while answer set programming is a form of logic programming oriented towards difficult (primarily NP-hard) search problems.
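The recursive relations Datalog adds can be illustrated by the classic ancestor program, evaluated bottom-up to a fixed point. This is the textbook "naive evaluation" strategy sketched in Python; real engines use semi-naive evaluation and indexing:

```python
def ancestor_closure(parent_pairs):
    """Naive bottom-up evaluation of the Datalog program
        ancestor(X,Y) :- parent(X,Y).
        ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
    iterated until no new facts are derived (a fixed point)."""
    ancestor = set(parent_pairs)
    while True:
        new = {(x, y)
               for (x, z) in parent_pairs
               for (z2, y) in ancestor
               if z == z2} - ancestor
        if not new:
            return ancestor
        ancestor |= new

# Hypothetical family facts:
parent = {("abe", "homer"), ("homer", "bart")}
anc = ancestor_closure(parent)
assert ("abe", "bart") in anc  # derived by the recursive rule
assert len(anc) == 3
```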
Another major area of research for logical theory is software engineering. Research projects such as the Knowledge Based Software Assistant and Programmer's Apprentice programs have applied logical theory to validate the correctness of software specifications. They have also used logical tools to transform the specifications into efficient code on diverse platforms and to prove the equivalence between the implementation and the specification. This formal transformation-driven approach is often far more effortful than traditional software development. However, in specific domains with appropriate formalisms and reusable templates the approach has proven viable for commercial products. The appropriate domains are usually those such as weapons systems, security systems, and real-time financial systems where failure of the system has excessively high human or financial cost. An example of such a domain is Very Large Scale Integrated (VLSI) design—the process for designing the chips used for the CPUs and other critical components of digital devices. An error in a chip can be catastrophic. Unlike software, chips can't be patched or updated. As a result, there is commercial justification for using formal methods to prove that the implementation corresponds to the specification.
Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE can be directly mapped to set theory and first-order logic. This allows specialized theorem provers called classifiers to analyze the various declarations between sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example define new sets based on existing information and change the definition of existing sets based on new data. The level of flexibility is ideal for handling the ever changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language to allow a logical semantic level on top of the existing Internet. This layer is called the Semantic Web.
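The set-theoretic reading can be sketched by treating a concept as the set of constraints it imposes; subsumption is then simply set inclusion. The concepts below are hypothetical, and real classifiers for languages such as KL-ONE handle far richer definitions (roles, number restrictions, and so on):

```python
def subsumes(concept_a, concept_b):
    """A subsumes B when every constraint of A is also a constraint of
    B (B is at least as specific), mirroring the set-theoretic reading
    of frame languages."""
    return concept_a <= concept_b

# Hypothetical concepts as sets of property constraints:
person = {"animate", "rational"}
student = {"animate", "rational", "enrolled"}
robot = {"rational", "mechanical"}

# The classifier places student under person in the hierarchy...
assert subsumes(person, student)
# ...and detects that robot is not a kind of person.
assert not subsumes(person, robot)
```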
Temporal logic is used for reasoning in concurrent systems.
== See also ==
Automated reasoning
Computational logic
Logic programming
== References ==
== Further reading ==
Ben-Ari, Mordechai (2012). Mathematical Logic for Computer Science (3rd ed.). Springer-Verlag. ISBN 978-1447141280.
Harrison, John (2009). Handbook of Practical Logic and Automated Reasoning (1st ed.). Cambridge University Press. ISBN 978-0521899574.
Huth, Michael; Ryan, Mark (2004). Logic in Computer Science: Modelling and Reasoning about Systems (2nd ed.). Cambridge University Press. ISBN 978-0521543101.
Burris, Stanley N. (1997). Logic for Mathematics and Computer Science (1st ed.). Prentice Hall. ISBN 978-0132859745.
== External links ==
Article on Logic and Artificial Intelligence at the Stanford Encyclopedia of Philosophy.
IEEE Symposium on Logic in Computer Science (LICS)
Alwen Tiu, Introduction to logic video recording of a lecture at ANU Logic Summer School '09 (aimed mostly at computer scientists) | Wikipedia/Logic_in_computer_science |
In the foundations of mathematics, Morse–Kelley set theory (MK), Kelley–Morse set theory (KM), Morse–Tarski set theory (MT), Quine–Morse set theory (QM) or the system of Quine and Morse is a first-order axiomatic set theory that is closely related to von Neumann–Bernays–Gödel set theory (NBG). While von Neumann–Bernays–Gödel set theory restricts the bound variables in the schematic formula appearing in the axiom schema of Class Comprehension to range over sets alone, Morse–Kelley set theory allows these bound variables to range over proper classes as well as sets, as first suggested by Quine in 1940 for his system ML.
Morse–Kelley set theory is named after mathematicians John L. Kelley and Anthony Morse and was first set out by Wang (1949) and later in an appendix to Kelley's textbook General Topology (1955), a graduate level introduction to topology. Kelley said the system in his book was a variant of the systems due to Thoralf Skolem and Morse. Morse's own version appeared later in his book A Theory of Sets (1965).
While von Neumann–Bernays–Gödel set theory is a conservative extension of Zermelo–Fraenkel set theory (ZFC, the canonical set theory) in the sense that a statement in the language of ZFC is provable in NBG if and only if it is provable in ZFC, Morse–Kelley set theory is a proper extension of ZFC. Unlike von Neumann–Bernays–Gödel set theory, where the axiom schema of Class Comprehension can be replaced with finitely many of its instances, Morse–Kelley set theory cannot be finitely axiomatized.
== MK axioms and ontology ==
NBG and MK share a common ontology. The universe of discourse consists of classes. Classes that are members of other classes are called sets. A class that is not a set is a proper class. The primitive atomic sentences involve membership or equality.
With the exception of Class Comprehension, the following axioms are the same as those for NBG, inessential details aside. The symbolic versions of the axioms employ the following notational devices:
The upper case letters other than M, appearing in Extensionality, Class Comprehension, and Foundation, denote variables ranging over classes. A lower case letter denotes a variable that cannot be a proper class, because it appears to the left of an ∈. As MK is a one-sorted theory, this notational convention is only mnemonic.
The monadic predicate Mx, whose intended reading is "the class x is a set", abbreviates ∃W(x ∈ W).
The empty set ∅ is defined by ∀x(x ∉ ∅).
The class V, the universal class having all possible sets as members, is defined by ∀x(Mx → x ∈ V).
V is also the von Neumann universe.
Extensionality: Classes having the same members are the same class.
∀X ∀Y (∀z (z ∈ X ↔ z ∈ Y) → X = Y).
A set and a class having the same extension are identical. Hence MK is not a two-sorted theory, appearances to the contrary notwithstanding.
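The one-sorted setup can be sketched in a proof assistant. The following Lean 4 fragment is an illustrative rendering only, not MK itself; the names `Class`, `mem`, and `IsSet` are ours, not a standard library's:

```lean
-- Sketch of MK's one-sorted language in Lean 4: every variable ranges over
-- classes, and sethood (the predicate M of the text) is defined, not primitive.
axiom Class : Type
axiom mem : Class → Class → Prop  -- membership, written x ∈ W in the text

-- "Mx": the class x is a set, i.e. a member of some class.
def IsSet (x : Class) : Prop := ∃ W, mem x W

-- Extensionality: classes with the same members are the same class.
axiom extensionality : ∀ X Y : Class, (∀ z, mem z X ↔ mem z Y) → X = Y
```

Because the theory is one-sorted, nothing in the sketch distinguishes set variables from class variables; `IsSet` does all the work.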
Foundation: Each nonempty class A is disjoint from at least one of its members.
∀A [A ≠ ∅ → ∃b (b ∈ A ∧ ∀c (c ∈ b → c ∉ A))].
Class Comprehension: Let φ(x) be any formula in the language of MK in which x is a free variable and Y is not free. φ(x) may contain parameters that are either sets or proper classes. More consequentially, the quantified variables in φ(x) may range over all classes and not just over all sets; this is the only way MK differs from NBG. Then there exists a class
Y = {x ∣ φ(x)}
whose members are exactly those sets x such that φ(x) comes out true. Formally, if Y is not free in φ:
∀W₁ … Wₙ ∃Y ∀x [x ∈ Y ↔ (φ(x, W₁, …, Wₙ) ∧ Mx)].
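As a sketch of how impredicative comprehension might be stated formally, here is a hypothetical Lean 4 axiom schema (the primitives are repeated so the fragment is self-contained; all names are illustrative). Letting φ range over all of `Class → Prop` mirrors MK's impredicativity, since φ may itself quantify over classes:

```lean
-- Class Comprehension as an axiom-schema sketch: for any predicate φ on
-- classes, there is a class whose members are exactly the sets satisfying φ.
axiom Class : Type
axiom mem : Class → Class → Prop
def IsSet (x : Class) : Prop := ∃ W, mem x W

axiom comprehension (φ : Class → Prop) :
    ∃ Y : Class, ∀ x, mem x Y ↔ (φ x ∧ IsSet x)
```

In NBG, by contrast, the schema would be restricted to formulas φ whose quantifiers range over sets only.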
Pairing: For any sets x and y, there exists a set
z = {x, y}
whose members are exactly x and y.
∀x ∀y [(Mx ∧ My) → ∃z (Mz ∧ ∀s [s ∈ z ↔ (s = x ∨ s = y)])].
Pairing licenses the unordered pair, in terms of which the ordered pair ⟨x, y⟩ may be defined in the usual way, as {{x}, {x, y}}. With ordered pairs in hand, Class Comprehension enables defining relations and functions on sets as sets of ordered pairs, making possible the next axiom:
Limitation of Size: C is a proper class if and only if V can be mapped one-to-one into C.
∀C [¬MC ↔ ∃F (∀x [Mx → ∃s (s ∈ C ∧ ⟨x, s⟩ ∈ F)] ∧ ∀x ∀y ∀s [(⟨x, s⟩ ∈ F ∧ ⟨y, s⟩ ∈ F) → x = y])].
The formal version of this axiom resembles the axiom schema of replacement, and embodies the class function F. The next section explains how Limitation of Size is stronger than the usual forms of the axiom of choice.
Power set: Let p be a class whose members are all possible subsets of the set a. Then p is a set.
∀a ∀p [(Ma ∧ ∀x [x ∈ p ↔ ∀y (y ∈ x → y ∈ a)]) → Mp].
Union: Let s = ⋃a be the sum class of the set a, namely the union of all members of a. Then s is a set.
∀a ∀s [(Ma ∧ ∀x [x ∈ s ↔ ∃y (x ∈ y ∧ y ∈ a)]) → Ms].
Infinity: There exists an inductive set y, meaning that (i) the empty set is a member of y; (ii) if x is a member of y, then so is x ∪ {x}.
∃y [My ∧ ∅ ∈ y ∧ ∀z (z ∈ y → ∃x [x ∈ y ∧ ∀w (w ∈ x ↔ [w = z ∨ w ∈ z])])].
Note that p and s in Power Set and Union are universally, not existentially, quantified, as Class Comprehension suffices to establish the existence of p and s. Power Set and Union only serve to establish that p and s cannot be proper classes.
The above axioms are shared with other set theories as follows:
ZFC and NBG: Pairing, Power Set, Union, Infinity;
NBG (and ZFC, if quantified variables were restricted to sets): Extensionality, Foundation;
NBG: Limitation of Size.
== Discussion ==
Monk (1980) and Rubin (1967) are set theory texts built around MK; Rubin's ontology includes urelements. These authors and Mendelson (1997: 287) submit that MK does what is expected of a set theory while being less cumbersome than ZFC and NBG.
MK is strictly stronger than ZFC and its conservative extension NBG, the other well-known set theory with proper classes. In fact, NBG—and hence ZFC—can be proved consistent in MK. That means that if MK's axioms hold, one can define a True predicate and show that all the ZFC and NBG axioms are true—hence every other statement formulated in ZFC or NBG is true, because truth is preserved by logic. MK's strength stems from its axiom schema of Class Comprehension being impredicative, meaning that φ(x) may contain quantified variables ranging over classes. The quantified variables in NBG's axiom schema of Class Comprehension are restricted to sets; hence Class Comprehension in NBG must be predicative. (Separation with respect to sets is still impredicative in NBG, because the quantifiers in φ(x) may range over all sets.) The NBG axiom schema of Class Comprehension can be replaced with finitely many of its instances; this is not possible in MK. MK is consistent relative to ZFC augmented by an axiom asserting the existence of strongly inaccessible cardinals.
The only advantage of the axiom of Limitation of Size is that it implies the axiom of global choice. Limitation of Size does not appear in Rubin (1967), Monk (1980), or Mendelson (1997). Instead, these authors invoke a usual form of the local axiom of choice, and an "axiom of replacement" asserting that if the domain of a class function is a set, its range is also a set. Replacement can prove everything that Limitation of Size proves, except some form of the axiom of choice.
Limitation of Size plus I being a set (hence the universe is nonempty) renders provable the sethood of the empty set; hence no need for an axiom of empty set. Such an axiom could be added, of course, and minor perturbations of the above axioms would necessitate this addition. The set I is not identified with the limit ordinal ω, as I could be a set larger than ω. In this case, the existence of ω would follow from either form of Limitation of Size.
The class of von Neumann ordinals can be well-ordered. It cannot be a set (under pain of paradox); hence that class is a proper class, and all proper classes have the same size as V. Hence V too can be well-ordered.
MK can be confused with second-order ZFC, ZFC with second-order logic (representing second-order objects in set rather than predicate language) as its background logic. The language of second-order ZFC is similar to that of MK (although a set and a class having the same extension can no longer be identified), and their syntactical resources for practical proof are almost identical (and are identical if MK includes the strong form of Limitation of Size). But the semantics of second-order ZFC are quite different from those of MK. For example, if MK is consistent then it has a countable first-order model, while second-order ZFC has no countable models.
=== Model theory ===
ZFC, NBG, and MK each have models describable in terms of V, the von Neumann universe of sets in ZFC. Let the inaccessible cardinal κ be a member of V. Also let Def(X) denote the Δ0 definable subsets of X (see constructible universe). Then:
Vκ is a model of ZFC;
Def(Vκ) is a model of Mendelson's version of NBG, which excludes global choice, replacing limitation of size by replacement and ordinary choice;
Vκ+1, the power set of Vκ, is a model of MK.
=== History ===
MK was first set out in Wang (1949) and popularized in an appendix to J. L. Kelley's (1955) General Topology, using the axioms given in the next section. The system of Anthony Morse's (1965) A Theory of Sets is equivalent to Kelley's, but formulated in an idiosyncratic formal language rather than, as is done here, in standard first-order logic. The first set theory to include impredicative class comprehension was Quine's ML, which built on New Foundations rather than on ZFC. Impredicative class comprehension was also proposed in Mostowski (1951) and Lewis (1991).
== The axioms in Kelley's General Topology ==
The axioms and definitions in this section are, but for a few inessential details, taken from the Appendix to Kelley (1955). The explanatory remarks below are not his. The Appendix states 181 theorems and definitions, and warrants careful reading as an abbreviated exposition of axiomatic set theory by a working mathematician of the first rank. Kelley introduced his axioms gradually, as needed to develop the topics listed after each instance of Develop below.
Notations appearing below and now well-known are not defined. Peculiarities of Kelley's notation include:
He did not distinguish variables ranging over classes from those ranging over sets;
domain f and range f denote the domain and range of the function f; this peculiarity has been carefully respected below;
His primitive logical language includes class abstracts of the form {x : A(x)}, "the class of all sets x satisfying A(x)."
Definition: x is a set (and hence not a proper class) if, for some y, x ∈ y.
I. Extent: For each x and each y, x = y if and only if for each z, z ∈ x when and only when z ∈ y.
Identical to Extensionality above. I would be identical to the axiom of extensionality in ZFC, except that the scope of I includes proper classes as well as sets.
II. Classification (schema): An axiom results if in
For each β, β ∈ {α : A} if and only if β is a set and B,
'α' and 'β' are replaced by variables, 'A' by a formula Æ, and 'B' by the formula obtained from Æ by replacing each occurrence of the variable that replaced α by the variable that replaced β, provided that the variable that replaced β does not appear bound in A.
Develop: Boolean algebra of sets. Existence of the null class and of the universal class V.
III. Subsets: If x is a set, there exists a set y such that for each z, if z ⊆ x, then z ∈ y.
The import of III is that of Power Set above. Sketch of the proof of Power Set from III: for any class z that is a subclass of the set x, the class z is a member of the set y whose existence III asserts. Hence z is a set.
Develop: V is not a set. Existence of singletons. Separation provable.
IV. Union: If x and y are both sets, then x ∪ y is a set.
The import of IV is that of Pairing above. Sketch of the proof of Pairing from IV: the singleton {x} of a set x is a set because it is a subclass of the power set of x (by two applications of III). Then IV implies that {x, y} is a set if x and y are sets.
Develop: Unordered and ordered pairs, relations, functions, domain, range, function composition.
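The ordered pair developed at this stage is the usual Kuratowski pair ⟨x, y⟩ = {{x}, {x, y}}. A naive Lean 4 sketch, modeling classes simply as predicates (illustrative only; `SetOf`, `single`, `pair`, and `kpair` are our names, not Kelley's notation):

```lean
-- Classes over a type modeled naively as predicates.
def SetOf (α : Type) := α → Prop

def single {α : Type} (x : α) : SetOf α := fun z => z = x        -- {x}
def pair {α : Type} (x y : α) : SetOf α := fun z => z = x ∨ z = y -- {x, y}

-- Kuratowski's ordered pair ⟨x, y⟩ = {{x}, {x, y}}.
def kpair {α : Type} (x y : α) : SetOf (SetOf α) :=
  fun s => s = single x ∨ s = pair x y
```

The characteristic property, that kpair x y = kpair u v forces x = u and y = v, is what makes relations and functions definable as classes of pairs.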
V. Substitution: If f is a [class] function and domain f is a set, then range f is a set.
The import of V is that of the axiom schema of replacement in NBG and ZFC.
VI. Amalgamation: If x is a set, then ⋃x is a set.
The import of VI is that of Union above. IV and VI may be combined into one axiom.
Develop: Cartesian product, injection, surjection, bijection, order theory.
VII. Regularity: If x ≠ ∅, there is a member y of x such that x ∩ y = ∅.
The import of VII is that of Foundation above.
Develop: Ordinal numbers, transfinite induction.
VIII. Infinity: There exists a set y such that ∅ ∈ y and x ∪ {x} ∈ y whenever x ∈ y.
This axiom, or an equivalent, is included in ZFC and NBG. VIII asserts the unconditional existence of two sets: the infinite inductive set y, and the null set ∅, which is a set simply because it is a member of y. Up to this point, everything that has been proved to exist is a class, and Kelley's discussion of sets was entirely hypothetical.
Develop: Natural numbers, N is a set, Peano axioms, integers, rational numbers, real numbers.
Definition: c is a choice function if c is a function and c(x) ∈ x for each member x of domain c.
IX. Choice: There exists a choice function c whose domain is V − {∅}.
IX is very similar to the axiom of global choice derivable from Limitation of Size above.
Develop: Equivalents of the axiom of choice. As is the case with ZFC, the development of the cardinal numbers requires some form of choice.
If the scope of all quantified variables in the above axioms is restricted to sets, all axioms except III and the schema IV are ZFC axioms. IV is provable in ZFC. Hence the Kelley treatment of MK makes very clear that all that distinguishes MK from ZFC are variables ranging over proper classes as well as sets, and the Classification schema.
== Notes ==
== References ==
Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153.
Lemmon, E. J. (1986) Introduction to Axiomatic Set Theory. Routledge & Kegan Paul.
David K. Lewis (1991) Parts of Classes. Oxford: Basil Blackwell.
Mendelson, Elliott (1987). Introduction to Mathematical Logic. Chapman & Hall. ISBN 0-534-06624-0. The definitive treatment of the closely related set theory NBG, followed by a page on MK. Harder than Monk or Rubin.
Monk, J. Donald (1980) Introduction to Set Theory. Krieger. Easier and less thorough than Rubin.
Morse, A. P., (1965) A Theory of Sets. Academic Press.
Mostowski, Andrzej (1950), "Some impredicative definitions in the axiomatic set theory" (PDF), Fundamenta Mathematicae, 37: 111–124, doi:10.4064/fm-37-1-111-124.
Rubin, Jean E. (1967) Set Theory for the Mathematician. San Francisco: Holden Day. More thorough than Monk; the ontology includes urelements.
Wang, Hao (1949), "On Zermelo's and von Neumann's axioms for set theory", Proc. Natl. Acad. Sci. U.S.A., 35 (3): 150–155, Bibcode:1949PNAS...35..150W, doi:10.1073/pnas.35.3.150, JSTOR 88430, MR 0029850, PMC 1062986, PMID 16588874.
== External links ==
Download General Topology (1955) by John L. Kelley in various formats. The appendix contains Kelley's axiomatic development of MK.
From Foundations of Mathematics (FOM) discussion group:
Allen Hazen on set theory with classes.
Joseph Shoenfield's doubts about MK. | Wikipedia/Morse–Kelley_set_theory |
In mathematical logic and computer science, homotopy type theory (HoTT) refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.
This includes, among other lines of work, the construction of homotopical and higher-categorical models for such type theories; the use of type theory as a logic (or internal language) for abstract homotopy theory and higher category theory; the development of mathematics within a type-theoretic foundation (including both previously existing mathematics and new mathematics that homotopical types make possible); and the formalization of each of these in computer proof assistants.
There is a large overlap between the work referred to as homotopy type theory, and that called the univalent foundations project. Although neither is precisely delineated, and the terms are sometimes used interchangeably, the choice of usage also sometimes corresponds to differences in viewpoint and emphasis. As such, this article may not represent the views of all researchers in the fields equally. This kind of variability is unavoidable when a field is in rapid flux.
== History ==
=== Groupoid model ===
At one time, the idea that types in intensional type theory with their identity types could be regarded as groupoids was mathematical folklore. It was first made precise semantically in the 1994 paper of Martin Hofmann and Thomas Streicher called "The groupoid model refutes uniqueness of identity proofs", in which they showed that intensional type theory had a model in the category of groupoids. This was the first truly "homotopical" model of type theory, albeit only "1-dimensional" (the traditional models in the category of sets being homotopically 0-dimensional).
Their follow-up paper foreshadowed several later developments in homotopy type theory. For instance, they noted that the groupoid model satisfies a rule they called "universe extensionality", which is none other than the restriction to 1-types of the univalence axiom that Vladimir Voevodsky proposed ten years later. (The axiom for 1-types is notably simpler to formulate, however, since a coherent notion of "equivalence" is not required.) They also defined "categories with isomorphism as equality" and conjectured that in a model using higher-dimensional groupoids, for such categories one would have "equivalence is equality"; this was later proven by Benedikt Ahrens, Krzysztof Kapulkin, and Michael Shulman.
=== Early history: model categories and higher groupoids ===
The first higher-dimensional models of intensional type theory were constructed by Steve Awodey and his student Michael Warren in 2005 using Quillen model categories. These results were first presented in public at the conference FMCS 2006, at which Warren gave a talk titled "Homotopy models of intensional type theory", which also served as his thesis prospectus (the dissertation committee members present were Awodey, Nicola Gambino, and Alex Simpson). A summary is contained in Warren's thesis prospectus abstract.
At a subsequent workshop about identity types at Uppsala University in 2006 there were two talks about the relation between intensional type theory and factorization systems: one by Richard Garner, "Factorisation systems for type theory", and one by Michael Warren, "Model categories and intensional identity types". Related ideas were discussed in the talks by Steve Awodey, "Type theory of higher-dimensional categories", and Thomas Streicher, "Identity types vs. weak omega-groupoids: some ideas, some problems". At the same conference Benno van den Berg gave a talk titled "Types as weak omega-categories" where he outlined the ideas that later became the subject of a joint paper with Richard Garner.
All early constructions of higher dimensional models had to deal with the problem of coherence typical of models of dependent type theory, and various solutions were developed. One such was given in 2009 by Voevodsky, another in 2010 by van den Berg and Garner. A general solution, building on Voevodsky's construction, was eventually given by Lumsdaine and Warren in 2014.
At the PSSL86 in 2007 Awodey gave a talk titled "Homotopy type theory" (this was the first public usage of that term, which was coined by Awodey). Awodey and Warren summarized their results in the paper "Homotopy theoretic models of identity types", which was posted on the ArXiv preprint server in 2007 and published in 2009; a more detailed version appeared in Warren's thesis "Homotopy theoretic aspects of constructive type theory" in 2008.
At about the same time, Vladimir Voevodsky was independently investigating type theory in the context of the search for a language for the practical formalization of mathematics. In September 2006 he posted to the Types mailing list "A very short note on homotopy lambda calculus", which sketched the outlines of a type theory with dependent products, sums and universes and of a model of this type theory in Kan simplicial sets. It began by saying "The homotopy λ-calculus is a hypothetical (at the moment) type system" and ended with "At the moment much of what I said above is at the level of conjectures. Even the definition of the model of TS in the homotopy category is non-trivial", referring to the complex coherence issues that were not resolved until 2009. This note included a syntactic definition of "equality types" that were claimed to be interpreted in the model by path-spaces, but did not consider Per Martin-Löf's rules for identity types. It also stratified the universes by homotopy dimension in addition to size, an idea that later was mostly discarded.
On the syntactic side, Benno van den Berg conjectured in 2006 that the tower of identity types of a type in intensional type theory should have the structure of an ω-category, and indeed an ω-groupoid, in the "globular, algebraic" sense of Michael Batanin. This was later proven independently by van den Berg and Garner in the paper "Types are weak omega-groupoids" (published 2008), and by Peter Lumsdaine in the paper "Weak ω-Categories from Intensional Type Theory" (published 2009) and as part of his 2010 Ph.D. thesis "Higher Categories from Type Theories".
=== The univalence axiom, synthetic homotopy theory, and higher inductive types ===
The concept of a univalent fibration was introduced by Voevodsky in early 2006.
However, because of the insistence of all presentations of the Martin-Löf type theory on the property that the identity types, in the empty context, may contain only reflexivity, Voevodsky did not recognize until 2009 that these identity types can be used in combination with the univalent universes. In particular, the idea that univalence can be introduced simply by adding an axiom to the existing Martin-Löf type theory appeared only in 2009.
Also in 2009, Voevodsky worked out more of the details of a model of type theory in Kan complexes, and observed that the existence of a universal Kan fibration could be used to resolve the coherence problems for categorical models of type theory. He also proved, using an idea of A. K. Bousfield, that this universal fibration was univalent: the associated fibration of pairwise homotopy equivalences between the fibers is equivalent to the paths-space fibration of the base.
To formulate univalence as an axiom Voevodsky found a way to define "equivalences" syntactically that had the important property that the type representing the statement "f is an equivalence" was (under the assumption of function extensionality) (-1)-truncated (i.e. contractible if inhabited). This enabled him to give a syntactic statement of univalence, generalizing Hofmann and Streicher's "universe extensionality" to higher dimensions. He was also able to use these definitions of equivalences and contractibility to start developing significant amounts of "synthetic homotopy theory" in the proof assistant Rocq (previously known as Coq); this formed the basis of the library later called "Foundations" and eventually "UniMath".
Unification of the various threads began in February 2010 with an informal meeting at Carnegie Mellon University, where Voevodsky presented his model in Kan complexes, and his version of Rocq, to a group including Awodey, Warren, Lumsdaine, Robert Harper, Dan Licata, Michael Shulman, and others. This meeting produced the outlines of a proof (by Warren, Lumsdaine, Licata, and Shulman) that every homotopy equivalence is an equivalence (in Voevodsky's good coherent sense), based on the idea from category theory of improving equivalences to adjoint equivalences. Soon afterwards, Voevodsky proved that the univalence axiom implies function extensionality.
The next pivotal event was a mini-workshop at the Mathematical Research Institute of Oberwolfach in March 2011 organized by Steve Awodey, Richard Garner, Per Martin-Löf, and Vladimir Voevodsky, titled "The homotopy interpretation of constructive type theory". As part of a Coq tutorial for this workshop, Andrej Bauer wrote a small Coq library based on Voevodsky's ideas (but not actually using any of his code); this eventually became the kernel of the first version of the "HoTT" Coq library (the first commit of the latter by Michael Shulman notes "Development based on Andrej Bauer's files, with many ideas taken from Vladimir Voevodsky's files"). One of the most important things to come out of the Oberwolfach meeting was the basic idea of higher inductive types, due to Lumsdaine, Shulman, Bauer, and Warren. The participants also formulated a list of important open questions, such as whether the univalence axiom satisfies canonicity (still open, although some special cases have been resolved positively), whether the univalence axiom has nonstandard models (since answered positively by Shulman), and how to define (semi)simplicial types (still open in MLTT, although it can be done in Voevodsky's Homotopy Type System (HTS), a type theory with two equality types).
Soon after the Oberwolfach workshop, the Homotopy Type Theory website and blog was established, and the subject began to be popularized under that name. An idea of some of the important progress during this period can be obtained from the blog history.
== Univalent foundations ==
The phrase "univalent foundations" is agreed by all to be closely related to homotopy type theory, but not everyone uses it in the same way. It was originally used by Vladimir Voevodsky to refer to his vision of a foundational system for mathematics in which the basic objects are homotopy types, based on a type theory satisfying § the univalence axiom, and formalized in a computer proof assistant.
As Voevodsky's work became integrated with the community of other researchers working on homotopy type theory, "univalent foundations" was sometimes used interchangeably with "homotopy type theory", and other times to refer only to its use as a foundational system (excluding, for example, the study of model-categorical semantics or computational metatheory). For instance, the subject of the IAS special year was officially given as "univalent foundations", although a lot of the work done there focused on semantics and metatheory in addition to foundations. The book produced by participants in the IAS program was titled "Homotopy type theory: Univalent foundations of mathematics", although this could refer to either usage, since the book only discusses HoTT as a mathematical foundation.
== Special Year on Univalent Foundations of Mathematics ==
In 2012–13 researchers at the Institute for Advanced Study held "A Special Year on Univalent Foundations of Mathematics". The special year brought together researchers in topology, computer science, category theory, and mathematical logic. The program was organized by Steve Awodey, Thierry Coquand and Vladimir Voevodsky.
During the program Peter Aczel, who was one of the participants, initiated a working group which investigated how to do type theory informally but rigorously, in a style that is analogous to ordinary mathematicians doing set theory. After initial experiments it became clear that this was not only possible but highly beneficial, and that a book (the so-called HoTT Book) could and should be written. Many other participants of the project then joined the effort with technical support, writing, proofreading, and offering ideas. Unusually for a mathematics text, it was developed collaboratively and in the open on GitHub, is released under a Creative Commons license that allows people to fork their own version of the book, and is both purchasable in print and downloadable free of charge.
More generally, the special year was a catalyst for the development of the entire subject; the HoTT Book was only one, albeit the most visible, result.
Official participants in the special year
ACM Computing Reviews listed the book as a notable 2013 publication in the category "mathematics of computing".
== Key concepts ==
=== "Propositions as types" ===
HoTT uses a modified version of the "propositions as types" interpretation of type theory, according to which types can also represent propositions and terms can then represent proofs. In HoTT, however, unlike in standard "propositions as types", a special role is played by 'mere propositions' which, roughly speaking, are those types having at most one term, up to propositional equality. These are more like conventional logical propositions than are general types, in that they are proof-irrelevant.
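In proof-assistant terms, a mere proposition can be sketched as a type whose inhabitants are all equal. A Lean 4 illustration (the name `IsMereProp` is ours; the HoTT Book calls this isProp):

```lean
-- A "mere proposition": a type with at most one inhabitant up to equality,
-- hence proof-irrelevant in the sense described above.
def IsMereProp (A : Type) : Prop := ∀ a b : A, a = b

-- Unit has exactly one element, so it is a mere proposition.
example : IsMereProp Unit := fun _ _ => rfl
```

A general type such as the natural numbers fails this test, since distinct inhabitants are distinct proofs.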
=== Equality ===
The fundamental concept of homotopy type theory is the path. In HoTT, the type a = b is the type of all paths from the point a to the point b. (Therefore, a proof that a point a equals a point b is the same thing as a path from the point a to the point b.) For any point a, there exists a path of type a = a, corresponding to the reflexive property of equality. A path of type a = b can be inverted, forming a path of type b = a, corresponding to the symmetric property of equality. Two paths of type a = b and b = c, respectively, can be concatenated, forming a path of type a = c; this corresponds to the transitive property of equality.
Most importantly, given a path p : a = b and a proof of some property P(a), the proof can be "transported" along the path p to yield a proof of the property P(b). (Equivalently stated, an object of type P(a) can be turned into an object of type P(b).) This corresponds to the substitution property of equality. Here, an important difference between HoTT and classical mathematics comes in. In classical mathematics, once the equality of two values a and b has been established, a and b may be used interchangeably thereafter, with no regard to any distinction between them. In homotopy type theory, however, there may be multiple different paths a = b, and transporting an object along two different paths will yield two different results. Therefore, in homotopy type theory, when applying the substitution property, it is necessary to state which path is being used.
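These path operations can be sketched with Lean 4's built-in identity type. This is an interface-level illustration only: Lean's `Eq` in `Prop` is proof-irrelevant, so it does not exhibit HoTT's higher structure, and the names `inv`, `concat`, and `transport` are ours:

```lean
-- Inversion: a path a = b yields a path b = a (symmetry).
def inv {A : Type} {a b : A} (p : a = b) : b = a := p.symm

-- Concatenation: paths a = b and b = c compose to a path a = c (transitivity).
def concat {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c := p.trans q

-- Transport: carry a proof of P a along a path p : a = b to a proof of P b.
def transport {A : Type} (P : A → Prop) {a b : A} (p : a = b) (h : P a) : P b :=
  p ▸ h
```

In a genuinely homotopical setting, `transport` along two different paths from a to b may produce different results, which is exactly the distinction drawn above.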
In general, a "proposition" can have multiple different proofs. (For example, the type of all natural numbers, when considered as a proposition, has every natural number as a proof.) Even if a proposition has only one proof a, the space of paths a = a may be non-trivial in some way. A "mere proposition" is any type which either is empty or contains only one point with a trivial path space.
Note that one writes a = b for Id_A(a, b), thereby leaving the type A of a and b implicit. Do not confuse this with id_A : A → A, which denotes the identity function on A.
=== Type equivalence ===
Two functions f, g : A → B are homotopic if they are identified pointwise (§2.4.1):
f ∼ g :≡ ∏_{x : A} f(x) = g(x)
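The pointwise definition above can be written directly as a dependent function type. A sketch in Lean 4 (Prop-valued here, whereas in HoTT the homotopy type carries more structure):

```lean
-- Pointwise identification of functions, stated as a Π-type.
def Homotopy {A B : Type} (f g : A → B) : Prop :=
  ∀ x : A, f x = g x

infix:50 " ∼ " => Homotopy

-- Example: two definitionally different functions are homotopic.
example : (fun n : Nat => n) ∼ (fun n : Nat => 0 + n) :=
  fun x => (Nat.zero_add x).symm
```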
Equivalences between two types A and B belonging to some universe U are defined as functions f : A → B together with a proof that f has a section and a retraction with respect to homotopies (§2.4.10, §2.4.11):
A ≃ B :≡ ∑_{f : A → B} isequiv(f), where
isequiv(f) :≡ (∑_{g : B → A} (f ∘ g) ∼ id_B) × (∑_{h : B → A} (h ∘ f) ∼ id_A)
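The shape of isequiv(f) can be mirrored as a structure in Lean 4. This is a sketch only: Lean's propositional equality stands in for homotopies, and full HoTT additionally arranges for isequiv(f) to be a mere proposition.

```lean
-- A function with both a section and a retraction, mirroring isequiv(f).
structure IsEquiv {A B : Type} (f : A → B) where
  g    : B → A
  sect : ∀ b, f (g b) = b   -- f ∘ g ∼ id_B  (g is a section of f)
  h    : B → A
  retr : ∀ a, h (f a) = a   -- h ∘ f ∼ id_A  (h is a retraction of f)

-- Boolean negation is its own inverse, hence an equivalence.
example : IsEquiv Bool.not :=
  { g := Bool.not, sect := fun b => by cases b <;> rfl,
    h := Bool.not, retr := fun a => by cases a <;> rfl }
```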
Together with the univalence axiom below, this yields a non-circular notion of "∞-isomorphism" expanded to identity:
A ≃ B :≡ there are f : A ⇆ B : g with g ∘ f ≃ id_A and f ∘ g ≃ id_B
=== The univalence axiom ===
Having defined functions that are equivalences as above, one can show that there is a canonical way to turn paths into equivalences. In other words, there is a function of the type
(A = B) → (A ≃ B),
which expresses that types A and B that are equal are, in particular, also equivalent.
The univalence axiom states that this function is itself an equivalence. Therefore, we have
(A = B) ≃ (A ≃ B)
"In other words, identity is equivalent to equivalence. In particular, one may say that 'equivalent types are identical'."
Martín Hötzel Escardó has shown that the property of univalence is independent of Martin-Löf type theory (MLTT). This is because type equivalence is compatible with all constructions of the type theory (§2.6–2.15).
== Applications ==
=== Theorem proving ===
Advocates claim that HoTT allows mathematical proofs to be translated into a computer programming language for computer proof assistants much more easily than before, and that this increases the potential for computers to check difficult proofs. However, these claims are not universally accepted, and many research efforts and proof assistants do not make use of HoTT.
HoTT adopts the univalence axiom, which relates the equality of logical-mathematical propositions to homotopy theory. An equation such as a = b is a mathematical proposition in which two different symbols have the same value. In homotopy type theory, this is taken to mean that the two shapes which represent the values of the symbols are topologically equivalent.
These equivalence relationships, ETH Zürich Institute for Theoretical Studies director Giovanni Felder argues, can be better formulated in homotopy theory because it is more comprehensive: Homotopy theory explains not only why "a equals b" but also how to derive this. In set theory, this information would have to be defined additionally, which, advocates argue, makes the translation of mathematical propositions into programming languages more difficult.
=== Computer programming ===
As of 2015, intense research work was underway to model and formally analyse the computational behavior of the univalence axiom in homotopy type theory.
Cubical type theory is one attempt to give computational content to homotopy type theory.
However, it is believed that certain objects, such as semi-simplicial types, cannot be constructed without reference to some notion of exact equality. Therefore, various two-level type theories have been developed which partition their types into fibrant types, which respect paths, and non-fibrant types, which do not. Cartesian cubical computational type theory is the first two-level type theory which gives a full computational interpretation to homotopy type theory.
== See also ==
Calculus of constructions
Curry–Howard correspondence
Intuitionistic type theory
Homotopy hypothesis
Univalent foundations
== Notes ==
== References ==
== Bibliography ==
== Further reading ==
David Corfield (2020), Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, Oxford University Press.
Egbert Rijke (2022), Introduction to Homotopy Type Theory, arXiv:2212.11082 . Introductory textbook.
== External links ==
Homotopy Type Theory
Homotopy type theory at the nLab
Homotopy type theory wiki
Vladimir Voevodsky's webpage on the Univalent Foundations
Homotopy Type Theory and the Univalent Foundations of Mathematics by Steve Awodey
"Constructive Type Theory and Homotopy" – Video lecture by Steve Awodey at the Institute for Advanced Study
=== Libraries of formalized mathematics ===
Network security is an umbrella term describing the security controls, policies, processes, and practices adopted to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies, and individuals. Networks can be private, such as within a company, or open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing operations being done. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.
== Network security concept ==
Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan).
Once authenticated, a firewall enforces access policies, such as which services may be accessed by network users. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content, such as computer worms or Trojans, being transmitted over the network. Anti-virus software or an intrusion prevention system (IPS) helps detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor network traffic (as tools such as Wireshark do), and traffic may be logged for audit purposes and for later high-level analysis. Newer systems combining unsupervised machine learning with full network traffic analysis can detect active network attackers, whether malicious insiders or targeted external attackers that have compromised a user machine or account.
Communication between two hosts using a network may be encrypted to maintain security and privacy.
Honeypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Honeypots are placed at a point in the network where they appear vulnerable and undefended, but they are actually isolated and monitored. Techniques used by attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot. A honeypot can also direct an attacker's attention away from legitimate servers: it encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a honeypot, a honeynet is a network set up with intentional vulnerabilities. Its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. A honeynet typically contains one or more honeypots.
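A toy version of the idea — a listener on an otherwise unused port that records every connection attempt — can be sketched in a few lines of Python. This is hypothetical illustration code: real honeypots emulate full services and harden the host far beyond this.

```python
import socket
from datetime import datetime, timezone

class Honeypot:
    """Minimal TCP honeypot sketch: a listener on an otherwise unused port
    that records every connection attempt. No legitimate client should ever
    connect, so each log entry is a candidate intrusion signal."""

    def __init__(self, host: str = "127.0.0.1", port: int = 0):
        self.log = []                        # (timestamp, source ip, source port)
        self.srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.srv.bind((host, port))          # port=0: let the OS pick a free port
        self.srv.listen()
        self.port = self.srv.getsockname()[1]

    def serve(self, max_conns: int = 1) -> None:
        """Accept a bounded number of connections, logging each one."""
        for _ in range(max_conns):
            conn, (ip, sport) = self.srv.accept()
            self.log.append((datetime.now(timezone.utc), ip, sport))
            conn.close()                     # drop the peer immediately
        self.srv.close()
```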
Previous research on network security was mostly about using tools to secure transactions and information flow, and how well users knew about and used these tools. More recently, however, the discussion has expanded to consider information security in the broader context of the digital economy and society: it is not just about individual users and tools, but about the larger culture of information security in the digital world.
== Security management ==
Security management for networks differs for all kinds of situations. A home or small office may only require basic security, while large businesses may require high-maintenance and advanced software and hardware to prevent malicious attacks from hacking and spamming. To minimize susceptibility to malicious attacks from external threats to the network, corporations often employ tools which carry out network security verifications.
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.
=== Types of attack ===
Networks are subject to attacks from malicious sources. Attacks fall into two categories: "passive", when a network intruder intercepts data traveling through the network, and "active", in which an intruder initiates commands to disrupt the network's normal operation or conducts reconnaissance and lateral movement to find and gain access to assets available via the network.
Types of attacks include:
Passive:
Wiretapping – third-party monitoring of electronic communications
Passive port scanning – probing a host for open ports
Idle scan – a stealthy port scan routed through an idle "zombie" host
Encryption – process of converting plaintext to ciphertext
Traffic analysis – process of intercepting and examining messages
Active:
Network viruses (router viruses)
Eavesdropping – act of secretly listening to the private conversation of others
Data modification
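Port scanning, listed above, is conceptually simple: attempt a TCP connection to each port and see which ones answer. A minimal sketch in Python, for use only against hosts you are authorized to test:

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Very small TCP connect() scanner: report ports that accept a
    connection. Real scanners (e.g. SYN scanners) avoid completing the
    handshake; this sketch uses ordinary full connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports
```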
== See also ==
== References ==
== Further reading ==
Case Study: Network Clarity Archived 2016-05-27 at the Wayback Machine, SC Magazine 2014
Cisco. (2011). What is network security?. Retrieved from cisco.com Archived 2016-04-14 at the Wayback Machine
Security of the Internet (The Froehlich/Kent Encyclopedia of Telecommunications vol. 15. Marcel Dekker, New York, 1997, pp. 231–255.)
Introduction to Network Security Archived 2014-12-02 at the Wayback Machine, Matt Curtin, 1997.
Security Monitoring with Cisco Security MARS, Gary Halleen/Greg Kellogg, Cisco Press, Jul. 6, 2007. ISBN 1587052709
Self-Defending Networks: The Next Generation of Network Security, Duane DeCapite, Cisco Press, Sep. 8, 2006. ISBN 1587052539
Security Threat Mitigation and Response: Understanding CS-MARS, Dale Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006. ISBN 1587052601
Securing Your Business with Cisco ASA and PIX Firewalls, Greg Abelar, Cisco Press, May 27, 2005. ISBN 1587052148
Deploying Zone-Based Firewalls, Ivan Pepelnjak, Cisco Press, Oct. 5, 2006. ISBN 1587053101
Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman | Radia Perlman | Mike Speciner, Prentice-Hall, 2002. ISBN 9780137155880
Network Infrastructure Security, Angus Wong and Alan Yeung, Springer, 2009. ISBN 978-1-4419-0165-1
In mathematics and logic, Ackermann set theory (AST, also known as A*/V) is an axiomatic set theory proposed by Wilhelm Ackermann in 1956.
AST differs from Zermelo–Fraenkel set theory (ZF) in that it allows proper classes, that is, objects that are not sets, including a class of all sets.
It replaces several of the standard ZF axioms for constructing new sets with a principle known as Ackermann's schema. Intuitively, the schema allows a new set to be constructed if it can be defined by a formula which does not refer to the class of all sets.
In its use of classes, AST differs from other alternative set theories such as Morse–Kelley set theory and Von Neumann–Bernays–Gödel set theory in that a class may be an element of another class.
William N. Reinhardt established in 1970 that AST is effectively equivalent in strength to ZF, putting the two on equal footing. In particular, AST is consistent if and only if ZF is consistent.
== Preliminaries ==
AST is formulated in first-order logic. The language L_{∈,V} of AST contains one binary relation ∈ denoting set membership and one constant V denoting the class of all sets. Ackermann used a predicate M instead of V; this is equivalent, as each of M and V can be defined in terms of the other.
We will refer to elements of V as sets, and to general objects as classes. A class that is not a set is called a proper class.
== Axioms ==
The following formulation is due to Reinhardt.
The five axioms include two axiom schemas.
Ackermann's original formulation included only the first four of these, omitting the axiom of regularity.
=== 1. Axiom of extensionality ===
If two classes have the same elements, then they are equal:
∀x (x ∈ A ↔ x ∈ B) → A = B.
This axiom is identical to the axiom of extensionality found in many other set theories, including ZF.
=== 2. Heredity ===
Any element or subset of a set is a set:
(x ∈ y ∨ x ⊆ y) ∧ y ∈ V → x ∈ V.
=== 3. Comprehension schema ===
For any property, we can form the class of sets satisfying that property. Formally, for any formula ϕ in which X is not free:
∃X ∀x (x ∈ X ↔ x ∈ V ∧ ϕ).
That is, the only restriction is that comprehension is restricted to objects in V. But the resulting object is not necessarily a set.
=== 4. Ackermann's schema ===
For any formula ϕ with free variables a₁, …, aₙ, x and no occurrences of V:
a₁, …, aₙ ∈ V ∧ ∀x (ϕ → x ∈ V) → ∃X ∈ V ∀x (x ∈ X ↔ ϕ).
Ackermann's schema is a form of set comprehension that is unique to AST. It allows constructing a new set (not just a class) as long as we can define it by a property that does not refer to the symbol V. This is the principle that replaces ZF axioms such as pairing, union, and power set.
=== 5. Regularity ===
Any non-empty set contains an element disjoint from itself:
∀x ∈ V (x = ∅ ∨ ∃y (y ∈ x ∧ y ∩ x = ∅)).
Here, y ∩ x = ∅ is shorthand for ¬∃z (z ∈ x ∧ z ∈ y). This axiom is identical to the axiom of regularity in ZF.
This axiom is conservative in the sense that without it, we can simply use comprehension (axiom schema 3) to restrict our attention to the subclass of sets that are regular.
== Alternative formulations ==
Ackermann's original axioms did not include regularity, and used a predicate symbol M instead of the constant symbol V. We follow Lévy and Reinhardt in replacing instances of Mx with x ∈ V. This is equivalent because M can be given a definition as x ∈ V, and conversely, the class V can be obtained in Ackermann's original formulation by applying comprehension to the predicate ϕ = True.
In axiomatic set theory, Ralf Schindler replaces Ackermann's schema (axiom schema 4) with the following reflection principle: for any formula ϕ with free variables a₁, …, aₙ,
a₁, …, aₙ ∈ V → (ϕ ↔ ϕ^V).
Here, ϕ^V denotes the relativization of ϕ to V, which replaces all quantifiers in ϕ of the form ∀x and ∃x by ∀x ∈ V and ∃x ∈ V, respectively.
== Relation to Zermelo–Fraenkel set theory ==
Let L_{∈} be the language of formulas that do not mention V.
In 1959, Azriel Lévy proved that if ϕ is a formula of L_{∈} and AST proves ϕ^V, then ZF proves ϕ.
In 1970, William N. Reinhardt proved that if ϕ is a formula of L_{∈} and ZF proves ϕ, then AST proves ϕ^V.
Therefore, AST and ZF are mutually interpretable in conservative extensions of each other. Thus they are equiconsistent.
A remarkable feature of AST is that, unlike NBG and its variants, a proper class can be an element of another proper class.
== Extensions ==
An extension of AST for category theory called ARC was developed by F.A. Muller. Muller stated that ARC "founds Cantorian set-theory as well as category-theory and therefore can pass as a founding theory of the whole of mathematics".
== See also ==
Foundations of mathematics
List of alternative set theories
Zermelo set theory
== Notes ==
== References ==
In mathematical logic, and particularly in its subfield model theory, a saturated model M is one that realizes as many complete types as may be "reasonably expected" given its size. For example, an ultrapower model of the hyperreals is ℵ₁-saturated, meaning that every descending nested sequence of internal sets has a nonempty intersection.
== Definition ==
Let κ be a finite or infinite cardinal number and M a model in some first-order language. Then M is called κ-saturated if for all subsets A ⊆ M of cardinality less than κ, the model M realizes all complete types over A. The model M is called saturated if it is |M|-saturated, where |M| denotes the cardinality of M. That is, it realizes all complete types over sets of parameters of size less than |M|. According to some authors, a model M is called countably saturated if it is ℵ₁-saturated; that is, it realizes all complete types over countable sets of parameters. According to others, it is countably saturated if it is countable and saturated.
== Motivation ==
The seemingly more intuitive notion—that all complete types of the language are realized—turns out to be too weak (and is appropriately named weak saturation, which is the same as 1-saturation). The difference lies in the fact that many structures contain elements that are not definable (for example, any transcendental element of R is, by definition of the word, not definable in the language of fields). However, they still form a part of the structure, so we need types to describe relationships with them. Thus we allow sets of parameters from the structure in our definition of types. This argument allows us to discuss specific features of the model that we may otherwise miss—for example, a bound on a specific increasing sequence cn can be expressed as realizing the type {x ≥ cn : n ∈ ω}, which uses countably many parameters. If the sequence is not definable, this fact about the structure cannot be described using the base language, so a weakly saturated structure may not bound the sequence, while an ℵ1-saturated structure will.
The reason we only require parameter sets that are strictly smaller than the model is trivial: without this restriction, no infinite model is saturated. Consider a model M, and the type {x ≠ m : m ∈ M}. Each finite subset of this type is realized in the (infinite) model M, so by compactness it is consistent with M, but is trivially not realized. Any definition that is universally unsatisfied is useless; hence the restriction.
== Examples ==
Saturated models exist for certain theories and cardinalities:
(Q, <)—the set of rational numbers with their usual ordering—is saturated. Intuitively, this is because any type consistent with the theory is implied by the order type; that is, the order the variables come in tells you everything there is to know about their role in the structure.
(R, <)—the set of real numbers with their usual ordering—is not saturated. For example, take the type (in one variable x) that contains the formula x > −1/n for every natural number n, as well as the formula x < 0. This type uses ω different parameters from R. Every finite subset of the type is realized in R by some real x, so by compactness the type is consistent with the structure, but it is not realized, as that would imply an upper bound of the sequence −1/n that is less than 0 (its least upper bound). Thus (R, <) is not ω₁-saturated, and not saturated. However, it is ω-saturated, for essentially the same reason as Q—every finite type is given by the order type, which, if consistent, is always realized because of the density of the order.
A dense totally ordered set without endpoints is an ηα set if and only if it is ℵα-saturated.
The countable random graph, with the only non-logical symbol being the edge existence relation, is also saturated, because any complete type is isolated (implied) by the finite subgraph consisting of the variables and parameters used to define the type.
Both the theory of Q and the theory of the countable random graph can be shown to be ω-categorical through the back-and-forth method. This can be generalized as follows: the unique model of cardinality κ of a countable κ-categorical theory is saturated.
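The back-and-forth method mentioned above is concretely executable for countable dense linear orders without endpoints: alternately take the next unmatched point from each side and match it to a point in the same relative position on the other side. A sketch in Python (the enumerations here are finite, dense-enough stand-ins for genuinely countable dense orders):

```python
from fractions import Fraction

def back_and_forth(enum_a, enum_b, steps):
    """Build a finite partial order-isomorphism between two dense linear
    orders without endpoints, given as enumerations (Cantor's argument)."""
    pairs = []  # matched (a, b) pairs

    def partner(x, side, pool):
        # Bounds on the partner: images of matched points below/above x.
        other = 1 - side
        lo = max((p[other] for p in pairs if p[side] < x), default=None)
        hi = min((p[other] for p in pairs if p[side] > x), default=None)
        used = {p[other] for p in pairs}
        # Pick the first enumerated element strictly between the bounds.
        for y in pool:
            if y not in used and (lo is None or y > lo) and (hi is None or y < hi):
                return y
        raise ValueError("enumeration exhausted; orders must be dense, endless")

    for i in range(steps):
        side = i % 2               # 0 = "forth" (extend domain), 1 = "back"
        pool_self = enum_a if side == 0 else enum_b
        pool_other = enum_b if side == 0 else enum_a
        used_self = {p[side] for p in pairs}
        x = next(v for v in pool_self if v not in used_self)
        y = partner(x, side, pool_other)
        pairs.append((x, y) if side == 0 else (y, x))
    return sorted(pairs)
```

Iterating forever over enumerations of two countable dense orders without endpoints, this process matches every element of both sides, yielding the isomorphism of Cantor's theorem.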
However, the statement that every model has a saturated elementary extension is not provable in ZFC. In fact, this statement is equivalent to the existence of a proper class of cardinals κ such that κ<κ = κ. The latter identity is equivalent to κ = λ+ = 2λ for some λ, or κ is strongly inaccessible.
== Relationship to prime models ==
The notion of saturated model is dual to the notion of prime model in the following way: let T be a countable theory in a first-order language (that is, a set of mutually consistent sentences in that language) and let P be a prime model of T. Then P admits an elementary embedding into any other model of T. The equivalent notion for saturated models is that any "reasonably small" model of T is elementarily embedded in a saturated model, where "reasonably small" means cardinality no larger than that of the model in which it is to be embedded. Any saturated model is also homogeneous. However, while for countable theories there is a unique prime model, saturated models are necessarily specific to a particular cardinality. Given certain set-theoretic assumptions, saturated models (albeit of very large cardinality) exist for arbitrary theories. For λ-stable theories, saturated models of cardinality λ exist.
== Notes ==
== References ==
Chang, C. C.; Keisler, H. J. Model theory. Third edition. Studies in Logic and the Foundations of Mathematics, 73. North-Holland Publishing Co., Amsterdam, 1990. xvi+650 pp. ISBN 0-444-88054-2
R. Goldblatt (1998). Lectures on the hyperreals. An introduction to nonstandard analysis. Springer.
Marker, David (2002). Model Theory: An Introduction. New York: Springer-Verlag. ISBN 0-387-98760-6
Poizat, Bruno; (translation: Klein, Moses) (2000), A Course in Model Theory, New York: Springer-Verlag. ISBN 0-387-98655-3
Sacks, Gerald E. (1972), Saturated model theory, W. A. Benjamin, Inc., Reading, Mass., MR 0398817
In mathematical logic, a non-standard model of arithmetic is a model of first-order Peano arithmetic that contains non-standard numbers. The term standard model of arithmetic refers to the standard natural numbers 0, 1, 2, …. The elements of any model of Peano arithmetic are linearly ordered and possess an initial segment isomorphic to the standard natural numbers. A non-standard model is one that has additional elements outside this initial segment. The construction of such models is due to Thoralf Skolem (1934).
Non-standard models of arithmetic exist only for the first-order formulation of the Peano axioms; for the original second-order formulation, there is, up to isomorphism, only one model: the natural numbers themselves.
== Existence ==
There are several methods that can be used to prove the existence of non-standard models of arithmetic.
=== From the compactness theorem ===
The existence of non-standard models of arithmetic can be demonstrated by an application of the compactness theorem. To do this, a set of axioms P* is defined in a language including the language of Peano arithmetic together with a new constant symbol x. The axioms consist of the axioms of Peano arithmetic P together with another infinite set of axioms: for each numeral n, the axiom x > n is included. Any finite subset of these axioms is satisfied by a model that is the standard model of arithmetic plus the constant x interpreted as some number larger than any numeral mentioned in the finite subset of P*. Thus by the compactness theorem there is a model satisfying all the axioms P*. Since any model of P* is a model of P (since a model of a set of axioms is obviously also a model of any subset of that set of axioms), we have that our extended model is also a model of the Peano axioms. The element of this model corresponding to x cannot be a standard number, because as indicated it is larger than any standard number.
Using more complex methods, it is possible to build non-standard models that possess more complicated properties. For example, there are models of Peano arithmetic in which Goodstein's theorem fails. It can be proved in Zermelo–Fraenkel set theory that Goodstein's theorem holds in the standard model, so a model where Goodstein's theorem fails must be non-standard.
=== From the incompleteness theorems ===
Gödel's incompleteness theorems also imply the existence of non-standard models of arithmetic.
The incompleteness theorems show that a particular sentence G, the Gödel sentence of Peano arithmetic, is neither provable nor disprovable in Peano arithmetic. By the completeness theorem, this means that G is false in some model of Peano arithmetic. However, G is true in the standard model of arithmetic, and therefore any model in which G is false must be a non-standard model. Thus satisfying ~G is a sufficient condition for a model to be nonstandard. It is not a necessary condition, however; for any Gödel sentence G and any infinite cardinality there is a model of arithmetic with G true and of that cardinality.
==== Arithmetic unsoundness for models with ~G true ====
Assuming that arithmetic is consistent, arithmetic with ~G is also consistent. However, since ~G states that arithmetic is inconsistent, the result will not be ω-consistent (because ~G is false and this violates ω-consistency).
=== From an ultraproduct ===
Another method for constructing a non-standard model of arithmetic is via an ultraproduct. A typical construction uses the set of all sequences of natural numbers, ℕ^ℕ. Choose an ultrafilter on ℕ, then identify two sequences whenever they have equal values on positions that form a member of the ultrafilter (this requires that they agree on infinitely many terms, but the condition is stronger than this, as ultrafilters resemble axiom-of-choice-like maximal extensions of the Fréchet filter). The resulting semiring is a non-standard model of arithmetic. It can be identified with the hypernatural numbers.
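The raw material of this construction can be manipulated directly: elements are sequences of naturals, with arithmetic defined pointwise. A Python sketch; note that a genuine non-principal ultrafilter cannot be exhibited explicitly (its existence uses the axiom of choice), so the code can only check the weaker Fréchet-style condition "agree on a tail":

```python
def seq_add(f, g):
    """Pointwise addition of sequences (functions from N to N)."""
    return lambda n: f(n) + g(n)

def seq_mul(f, g):
    """Pointwise multiplication of sequences."""
    return lambda n: f(n) * g(n)

def agree_eventually(f, g, horizon=1000):
    """Proxy for ultrafilter identification: check agreement on a sample
    of a cofinite-looking set. A real ultrafilter decides every subset of
    N, which no finite computation can do."""
    return all(f(n) == g(n) for n in range(horizon, 2 * horizon))

omega = lambda n: n   # the sequence (0, 1, 2, ...): a non-standard element
one = lambda n: 1     # the constant sequence representing the standard 1
```

For instance, omega and omega + 1 differ at every position, so they represent distinct elements of the ultraproduct, both larger than every standard natural number.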
== Structure of countable non-standard models ==
The ultraproduct models are uncountable. One way to see this is to construct an injection of the infinite product of N into the ultraproduct. However, by the Löwenheim–Skolem theorem there must exist countable non-standard models of arithmetic. One way to define such a model is to use Henkin semantics.
Any countable non-standard model of arithmetic has order type ω + (ω* + ω) ⋅ η, where ω is the order type of the standard natural numbers, ω* is the dual order (an infinite decreasing sequence) and η is the order type of the rational numbers. In other words, a countable non-standard model begins with an infinite increasing sequence (the standard elements of the model). This is followed by a collection of "blocks", each of order type ω* + ω, the order type of the integers. These blocks are in turn densely ordered with the order type of the rationals. The result follows fairly easily because it is easy to see that the blocks of non-standard numbers have to be dense and linearly ordered without endpoints, and the order type of the rationals is the only countable dense linear order without endpoints (see Cantor's isomorphism theorem).
So, the order type of the countable non-standard models is known. However, the arithmetical operations are much more complicated.
It is easy to see that the arithmetical structure differs from ω + (ω* + ω) ⋅ η. For instance if a nonstandard (non-finite) element u is in the model, then so is m ⋅ u for any m in the initial segment N, yet u2 is larger than m ⋅ u for any standard finite m.
Also one can define "square roots" such as the least v such that v2 > 2 ⋅ u. These cannot be within a standard finite number of any rational multiple of u. By analogous methods to non-standard analysis one can also use PA to define close approximations to irrational multiples of a non-standard number u such as the least v with v > π ⋅ u (these can be defined in PA using non-standard finite rational approximations of π even though π itself cannot be). Once more, v − (m/n) ⋅ (u/n) has to be larger than any standard finite number for any standard finite m, n.
This shows that the arithmetical structure of a countable non-standard model is more complex than the structure of the rationals. There is more to it than that though: Tennenbaum's theorem shows that for any countable non-standard model of Peano arithmetic there is no way to code the elements of the model as (standard) natural numbers such that either the addition or multiplication operation of the model is computable on the codes. This result was first obtained by Stanley Tennenbaum in 1959.
== See also ==
Non-Euclidean geometry — about non-standard models in geometry
In model theory, a branch of mathematical logic, the spectrum of a theory is given by the number of isomorphism classes of models in various cardinalities. More precisely, for any complete theory T in a language we write I(T, κ) for the number of models of T (up to isomorphism) of cardinality κ. The spectrum problem is to describe the possible behaviors of I(T, κ) as a function of κ. It has been almost completely solved for the case of a countable theory T.
== Early results ==
In this section T is a countable complete theory and κ is a cardinal.
The Löwenheim–Skolem theorem shows that if I(T,κ) is nonzero for one infinite cardinal then it is nonzero for all of them.
Morley's categoricity theorem was the first main step in solving the spectrum problem: it states that if I(T,κ) is 1 for some uncountable κ then it is 1 for all uncountable κ.
Robert Vaught showed that I(T,ℵ0) cannot be 2. It is easy to find examples where it is any given non-negative integer other than 2. Morley proved that if I(T,ℵ0) is infinite then it must be ℵ0, ℵ1, or 2^ℵ0. It is not known whether it can be ℵ1 if the continuum hypothesis is false: this is called the Vaught conjecture and is the main remaining open problem (as of 2005) in the theory of the spectrum.
Morley's problem was a conjecture (now a theorem) first proposed by Michael D. Morley: that I(T,κ) is nondecreasing in κ for uncountable κ. This was proved by Saharon Shelah, who established a deep dichotomy theorem in order to do so.
Shelah also gave an almost complete solution to the spectrum problem. For a given complete theory T, either I(T,κ) = 2^κ for all uncountable cardinals κ, or
{\displaystyle \textstyle I(T,\aleph _{\xi })<\beth _{\omega _{1}}(|\xi |+\aleph _{0})}
for all ordinals ξ (see Aleph number and Beth number for an explanation of the notation), which is usually much smaller than the bound in the first case. Roughly speaking, this means that either there are the maximum possible number of models in all uncountable cardinalities, or there are only "few" models in all uncountable cardinalities. Shelah also gave a description of the possible spectra in the case when there are few models.
== List of possible spectra of a countable theory ==
By extending Shelah's work, Bradd Hart, Ehud Hrushovski and Michael C. Laskowski gave the following complete solution to the spectrum problem for countable theories in uncountable cardinalities.
If T is a countable complete theory, then the number I(T, ℵα) of isomorphism classes of models is given for ordinals α > 0 by the minimum of 2^ℵα and one of the following maps:
2^ℵα. Examples: there are many examples, in particular any unclassifiable or deep theory, such as the theory of the Rado graph.
{\displaystyle \beth _{d+1}(|\alpha +\omega |)} for some countably infinite ordinal d. (For finite d see case 8.) Examples: the theory with equivalence relations Eβ for all β with β+1<d, such that every Eγ class is a union of infinitely many Eβ classes, and each E0 class is infinite.
{\displaystyle \beth _{d-1}(|\alpha +\omega |^{2^{\aleph _{0}}})} for some finite positive ordinal d. Example (for d = 1): the theory of countably many independent unary predicates.
{\displaystyle \beth _{d-1}(|\alpha +\omega |^{\aleph _{0}}+\beth _{2})} for some finite positive ordinal d.
{\displaystyle \beth _{d-1}(|\alpha +\omega |+\beth _{2})} for some finite positive ordinal d.
{\displaystyle \beth _{d-1}(|\alpha +\omega |^{\aleph _{0}})} for some finite positive ordinal d. Example (for d = 1): the theory of countably many disjoint unary predicates.
{\displaystyle \beth _{d-1}(|\alpha +\omega |+\beth _{1})} for some finite ordinal d ≥ 2.
{\displaystyle \beth _{d-1}(|\alpha +\omega |)} for some finite positive ordinal d.
{\displaystyle \beth _{d-2}(|\alpha +\omega |^{|\alpha +1|})} for some finite ordinal d ≥ 2. Examples: similar to case 2.
{\displaystyle \beth _{2}}. Example: the theory of the integers viewed as an abelian group.
{\displaystyle |(\alpha +1)^{n}/G|-|\alpha ^{n}/G|} for finite α, and |α| for infinite α, where G is some subgroup of the symmetric group on n ≥ 2 elements. Here, we identify α^n with the set of sequences of length n of elements of a set of size α. G acts on α^n by permuting the sequence elements, and |α^n/G| denotes the number of orbits of this action. Examples: the theory of the set ω×n acted on by the wreath product of G with all permutations of ω.
1. Examples: theories that are categorical in uncountable cardinals, such as the theory of algebraically closed fields in a given characteristic.
0. Examples: theories with a finite model, and the inconsistent theory.
Moreover, all possibilities above occur as the spectrum of some countable complete theory.
The number d in the list above is the depth of the theory.
If T is a theory, we define a new theory 2^T to be the theory with an equivalence relation such that there are infinitely many equivalence classes, each of which is a model of T. We also define theories {\displaystyle \beth _{n}(T)} by {\displaystyle \beth _{0}(T)=T} and {\displaystyle \beth _{n+1}(T)=2^{\beth _{n}(T)}}. Then {\displaystyle I(\beth _{n}(T),\lambda )=\min(\beth _{n}(I(T,\lambda )),2^{\lambda })}. This can be used to construct examples of theories with spectra in the list above for non-minimal values of d from examples for the minimal value of d.
== See also ==
Spectrum of a sentence
== References ==
C. C. Chang, H. J. Keisler, Model Theory. ISBN 0-7204-0692-7
Saharon Shelah, "Classification theory and the number of nonisomorphic models", Studies in Logic and the Foundations of Mathematics, vol. 92, IX, 1.19, p.49 (North Holland, 1990).
Hart, Bradd; Hrushovski, Ehud; Laskowski, Michael C. (2000). "The Uncountable Spectra of Countable Theories". The Annals of Mathematics. 152 (1): 207–257. arXiv:math/0007199. Bibcode:2000math......7199H. doi:10.2307/2661382. JSTOR 2661382.
Bradd Hart, Michael C. Laskowski, "A survey of the uncountable spectra of countable theories", Algebraic Model Theory, edited by Hart, Lachlan, Valeriote (Springer, 1997). ISBN 0-7923-4666-1
In computer science, control flow (or flow of control) is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language.
Within an imperative programming language, a control flow statement is a statement that results in a choice being made as to which of two or more paths to follow. For non-strict functional languages, functions and language constructs exist to achieve the same result, but they are usually not termed control flow statements.
A set of statements is in turn generally structured as a block, which in addition to grouping, also defines a lexical scope.
Interrupts and signals are low-level mechanisms that can alter the flow of control in a way similar to a subroutine, but usually occur as a response to some external stimulus or event (that can occur asynchronously), rather than execution of an in-line control flow statement.
At the level of machine language or assembly language, control flow instructions usually work by altering the program counter. For some central processing units (CPUs), the only control flow instructions available are conditional or unconditional branch instructions, also termed jumps.
== Categories ==
The kinds of control flow statements supported by different languages vary, but can be categorized by their effect:
Continuation at a different statement (unconditional branch or jump)
Executing a set of statements only if some condition is met (choice - i.e., conditional branch)
Executing a set of statements zero or more times, until some condition is met (i.e., loop - the same as conditional branch)
Executing a set of distant statements, after which the flow of control usually returns (subroutines, coroutines, and continuations)
Stopping the program, preventing any further execution (unconditional halt)
== Primitives ==
=== Labels ===
A label is an explicit name or number assigned to a fixed position within the source code, and which may be referenced by control flow statements appearing elsewhere in the source code. A label marks a position within source code and has no other effect.
Line numbers are an alternative to a named label used in some languages (such as BASIC). They are whole numbers placed at the start of each line of text in the source code. Languages which use these often impose the constraint that the line numbers must increase in value in each following line, but may not require that they be consecutive. For example, in BASIC:
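A minimal illustrative sketch (the line numbers, variable, and values are invented for this example):

```basic
10 LET X = 3
20 PRINT X
30 GOTO 60
40 PRINT "never reached"
60 END
```

The line numbers increase (10, 20, 30, 40, 60) but need not be consecutive, and a GOTO may target any of them.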
In other languages such as C and Ada, a label is an identifier, usually appearing at the start of a line and immediately followed by a colon. For example, in C:
The language ALGOL 60 allowed both whole numbers and identifiers as labels (both linked by colons to the following statement), but few if any other ALGOL variants allowed whole numbers. Early Fortran compilers only allowed whole numbers as labels. Beginning with Fortran-90, alphanumeric labels have also been allowed.
=== Goto ===
The goto statement (a combination of the English words go and to, and pronounced accordingly) is the most basic form of unconditional transfer of control.
Although the keyword may either be in upper or lower case depending on the language, it is usually written as:
goto label
The effect of a goto statement is to cause the next statement to be executed to be the statement appearing at (or immediately after) the indicated label.
Goto statements have been considered harmful by many computer scientists, notably Dijkstra.
=== Subroutines ===
The terminology for subroutines varies; they may alternatively be known as routines, procedures, functions (especially if they return results) or methods (especially if they belong to classes or type classes).
In the 1950s, computer memories were very small by current standards so subroutines were used mainly to reduce program size. A piece of code was written once and then used many times from various other places in a program.
Today, subroutines are more often used to help make a program more structured, e.g., by isolating some algorithm or hiding some data access method. If many programmers are working on one program, subroutines are one kind of modularity that can help divide the work.
=== Sequence ===
In structured programming, the ordered sequencing of successive commands is considered one of the basic control structures, which is used as a building block for programs alongside iteration, recursion and choice.
== Minimal structured control flow ==
In May 1966, Böhm and Jacopini published an article in Communications of the ACM which showed that any program with gotos could be transformed into a goto-free form involving only choice (IF THEN ELSE) and loops (WHILE condition DO xxx), possibly with duplicated code and/or the addition of Boolean variables (true/false flags). Later authors showed that choice can be replaced by loops (and yet more Boolean variables).
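The transformation can be sketched in Python; the goto-style pseudocode in the comments and the function name are invented for illustration:

```python
# Unstructured original (pseudocode):
#   start: if x <= 1 goto done
#          x = x // 2
#          goto start
#   done:
#
# Goto-free equivalent using only a loop, a choice, and a Boolean flag,
# in the spirit of the Bohm-Jacopini construction:
def halve_until_small(x):
    done = False          # Boolean flag standing in for the 'done' label
    while not done:       # WHILE replaces the backward goto
        if x <= 1:        # IF THEN ELSE replaces the forward goto
            done = True
        else:
            x = x // 2
    return x

print(halve_until_small(40))  # 40 -> 20 -> 10 -> 5 -> 2 -> 1
```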
That such minimalism is possible does not mean that it is necessarily desirable; computers theoretically need only one machine instruction (subtract one number from another and branch if the result is negative), but practical computers have dozens or even hundreds of machine instructions.
Other research showed that control structures with one entry and one exit were much easier to understand than any other form, mainly because they could be used anywhere as a statement without disrupting the control flow. In other words, they were composable. (Later developments, such as non-strict programming languages – and more recently, composable software transactions – have continued this strategy, making components of programs even more freely composable.)
Some academics took a purist approach to the Böhm–Jacopini result and argued that even instructions like break and return from the middle of loops are bad practice as they are not needed in the Böhm–Jacopini proof, and thus they advocated that all loops should have a single exit point. This purist approach is embodied in the language Pascal (designed in 1968–1969), which up to the mid-1990s was the preferred tool for teaching introductory programming in academia. The direct application of the Böhm–Jacopini theorem may result in additional local variables being introduced in the structured chart, and may also result in some code duplication. Pascal is affected by both of these problems and according to empirical studies cited by Eric S. Roberts, student programmers had difficulty formulating correct solutions in Pascal for several simple problems, including writing a function for searching an element in an array. A 1980 study by Henry Shapiro cited by Roberts found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop.
== Control structures in practice ==
Most programming languages with control structures have an initial keyword which indicates the type of control structure involved. Languages then divide as to whether or not control structures have a final keyword.
No final keyword: ALGOL 60, C, C++, Go, Haskell, Java, Pascal, Perl, PHP, PL/I, Python, PowerShell. Such languages need some way of grouping statements together:
ALGOL 60 and Pascal: begin ... end
C, C++, Go, Java, Perl, PHP, and PowerShell: curly brackets { ... }
PL/I: DO ... END
Python: uses indent level (see Off-side rule)
Haskell: either indent level or curly brackets can be used, and they can be freely mixed
Lua: uses do ... end
Final keyword: Ada, APL, ALGOL 68, Modula-2, Fortran 77, Mythryl, Visual Basic. The forms of the final keyword vary:
Ada: final keyword is end + space + initial keyword e.g., if ... end if, loop ... end loop
APL: final keyword is :End optionally + initial keyword, e.g., :If ... :End or :If ... :EndIf, :Select ... :End or :Select ... :EndSelect; however, if adding an end condition, the end keyword becomes :Until
ALGOL 68, Mythryl: initial keyword spelled backwards e.g., if ... fi, case ... esac
Fortran 77: final keyword is END + initial keyword e.g., IF ... ENDIF, DO ... ENDDO
Modula-2: same final keyword END for everything
Visual Basic: every control structure has its own keyword. If ... End If; For ... Next; Do ... Loop; While ... Wend
== Choice ==
=== If-then-(else) statements ===
Conditional expressions and conditional constructs are features of a programming language that perform different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false.
IF..GOTO. A form found in unstructured languages, mimicking a typical machine code instruction, would jump to (GOTO) a label or line number when the condition was met.
IF..THEN..(ENDIF). Rather than being restricted to a jump, any simple statement or nested block can follow the THEN keyword. This is a structured form.
IF..THEN..ELSE..(ENDIF). As above, but with a second action to be performed if the condition is false. This is one of the most common forms, with many variations. Some require a terminal ENDIF, others do not. C and related languages do not require a terminal keyword, or a 'then', but do require parentheses around the condition.
Conditional statements can be and often are nested inside other conditional statements. Some languages allow ELSE and IF to be combined into ELSEIF, avoiding the need to have a series of ENDIF or other final statements at the end of a compound statement.
Less common variations include:
Some languages, such as early Fortran, have a three-way or arithmetic if, testing whether a numeric value is negative, zero, or positive.
Some languages have a functional form of an if statement, for instance Lisp's cond.
Some languages have an operator form of an if statement, such as C's ternary operator.
Perl supplements a C-style if with when and unless.
Smalltalk uses ifTrue and ifFalse messages to implement conditionals, rather than any fundamental language construct.
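These forms can be sketched in Python, which chains ELSE and IF as elif and also offers an expression ("operator") form; the function names and values are illustrative:

```python
def classify(n):
    # IF..THEN..ELSEIF..ELSE: one chained statement, no pile-up of ENDIFs
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def parity(n):
    # Expression form, analogous to C's ternary operator c ? a : b
    return "even" if n % 2 == 0 else "odd"

print(classify(-3), parity(4))  # negative even
```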
=== Case and switch statements ===
Switch statements (or case statements, or multiway branches) compare a given value with specified constants and take action according to the first constant to match. There is usually a provision for a default action ("else", "otherwise") to be taken if no match succeeds. Switch statements can allow compiler optimizations, such as lookup tables. In dynamic languages, the cases may not be limited to constant expressions, and might extend to pattern matching; in a shell script, for example, *) implements the default case as a glob matching any string. Case logic can also be implemented in functional form, as in SQL's decode statement.
== Loops ==
A loop is a sequence of statements which is specified once but which may be carried out several times in succession. The code "inside" the loop (the body of the loop, shown below as xxx) is obeyed a specified number of times, or once for each of a collection of items, or until some condition is met, or indefinitely. When one of those items is itself also a loop, it is called a "nested loop".
In functional programming languages, such as Haskell and Scheme, both recursive and iterative processes are expressed with tail recursive procedures instead of looping constructs that are syntactic.
=== Count-controlled loops ===
Most programming languages have constructions for repeating a loop a certain number of times.
In most cases counting can go downwards instead of upwards and step sizes other than 1 can be used.
In these examples, if N < 1 then the body of the loop may execute once (with I having value 1) or not at all, depending on the programming language.
In many programming languages, only integers can be reliably used in a count-controlled loop. Floating-point numbers are represented imprecisely due to hardware constraints, so a loop such as
for X := 0.1 step 0.1 to 1.0 do
might be repeated 9 or 10 times, depending on rounding errors and/or the hardware and/or the compiler version. Furthermore, if the increment of X occurs by repeated addition, accumulated rounding errors may mean that the value of X in each iteration can differ quite significantly from the expected sequence 0.1, 0.2, 0.3, ..., 1.0.
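The effect can be demonstrated in Python, whose floats are IEEE-754 doubles; the step and bound mirror the example above:

```python
# Repeatedly add 0.1, intending ten iterations ending exactly at 1.0.
x = 0.0
steps = 0
while x < 1.0:
    x += 0.1          # 0.1 is not exactly representable in binary
    steps += 1

print(steps)          # 11, not the intended 10: the tenth partial sum
                      # is 0.9999999999999999, still below 1.0
print(x)              # roughly 1.1 after the extra iteration
```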
=== Condition-controlled loops ===
Most programming languages have constructions for repeating a loop until some condition changes. Some variations test the condition at the start of the loop; others test it at the end. If the test is at the start, the body may be skipped completely; if it is at the end, the body is always executed at least once.
A control break is a value change detection method used within ordinary loops to trigger processing for groups of values. Values are monitored within the loop and a change diverts program flow to the handling of the group event associated with them.
DO UNTIL (End-of-File)
   IF new-zipcode <> current-zipcode
      display_tally(current-zipcode, zipcount)
      current-zipcode = new-zipcode
      zipcount = 0
   ENDIF
   zipcount++
LOOP
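The same control-break pattern can be sketched in Python over an illustrative sorted stream of ZIP codes:

```python
# Tally runs of equal values in a sorted stream (a "control break"):
# a value change diverts flow to handling of the finished group.
def tally_runs(sorted_codes):
    tallies = []
    current = None
    count = 0
    for code in sorted_codes:
        if code != current:                       # value change detected
            if current is not None:
                tallies.append((current, count))  # handle the finished group
            current = code
            count = 0
        count += 1
    if current is not None:
        tallies.append((current, count))          # final group
    return tallies

print(tally_runs([10001, 10001, 10002, 10003, 10003, 10003]))
# [(10001, 2), (10002, 1), (10003, 3)]
```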
=== Collection-controlled loops ===
Several programming languages (e.g., Ada, D, C++11, Smalltalk, PHP, Perl, Object Pascal, Java, C#, MATLAB, Visual Basic, Ruby, Python, JavaScript, Fortran 95 and later) have special constructs which allow implicit looping through all elements of an array, or all members of a set or collection.
Smalltalk: someCollection do: [:eachElement |xxx].
Object Pascal: for Item in Collection do begin xxx end;
D: foreach (item; myCollection) { xxx }
Perl: foreach someArray { xxx }
PHP: foreach ($someArray as $k => $v) { xxx }
Java: Collection<String> coll; for (String s : coll) {}
C#: foreach (string s in myStringCollection) { xxx }
PowerShell: someCollection | ForEach-Object { $_ }
Fortran: forall ( index = first:last:step... )
Scala has for-expressions, which generalise collection-controlled loops, and also support other uses, such as asynchronous programming. Haskell has do-expressions and comprehensions, which together provide similar function to for-expressions in Scala.
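In Python, for instance, the for statement iterates directly over any iterable (the list here is illustrative):

```python
colors = ["red", "green", "blue"]

# Implicit loop over all elements: no index or counter is managed.
lengths = []
for color in colors:
    lengths.append(len(color))

print(lengths)  # [3, 5, 4]
```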
=== General iteration ===
General iteration constructs such as C's for statement and Common Lisp's do form can be used to express any of the above sorts of loops, and others, such as looping over some number of collections in parallel. Where a more specific looping construct can be used, it is usually preferred over the general iteration construct, since it often makes the purpose of the expression clearer.
=== Infinite loops ===
Infinite loops are used to assure a program segment loops forever or until an exceptional condition arises, such as an error. For instance, an event-driven program (such as a server) should loop forever, handling events as they occur, only stopping when the process is terminated by an operator.
Infinite loops can be implemented using other control flow constructs. Most commonly, in unstructured programming this is jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, as while (true) .... Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop), Fortran (DO ... END DO), Go (for { ... }), and Ruby (loop do ... end).
Often, an infinite loop is unintentionally created by a programming error in a condition-controlled loop, wherein the loop condition uses variables that never change within the loop.
=== Continuation with next iteration ===
Sometimes within the body of a loop there is a desire to skip the remainder of the loop body and continue with the next iteration of the loop. Some languages provide a statement such as continue (most languages), skip, cycle (Fortran), or next (Perl and Ruby), which will do this. The effect is to prematurely terminate the innermost loop body and then resume as normal with the next iteration. If the iteration is the last one in the loop, the effect is to terminate the entire loop early.
=== Redo current iteration ===
Some languages, like Perl and Ruby, have a redo statement that restarts the current iteration from the start.
=== Restart loop ===
Ruby has a retry statement that restarts the entire loop from the initial iteration.
=== Early exit from loops ===
When using a count-controlled loop to search through a table, it might be desirable to stop searching as soon as the required item is found. Some programming languages provide a statement such as break (most languages), Exit (Visual Basic), or last (Perl), whose effect is to terminate the current loop immediately and transfer control to the statement immediately after that loop. Another term for early-exit loops is loop-and-a-half.
The following example is done in Ada which supports both early exit from loops and loops with test in the middle. Both features are very similar and comparing both code snippets will show the difference: early exit must be combined with an if statement while a condition in the middle is a self-contained construct.
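The two Ada forms can be sketched as follows (the names Data, Target, Found, Get, Process, and Item are illustrative):

```ada
--  Early exit: the test must be combined with an if statement.
Found := False;
for I in Data'Range loop
   if Data (I) = Target then
      Found := True;
      exit;                    --  leave the loop immediately
   end if;
end loop;

--  Test in the middle: "exit when" is a self-contained construct.
loop
   Get (Item);
   exit when Item = 0;         --  condition sits between the two half-bodies
   Process (Item);
end loop;
```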
Python supports conditional execution of code depending on whether a loop was exited early (with a break statement) or not by using an else-clause with the loop. For example,
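the idiom can be sketched as follows (the list, target, and function name are illustrative):

```python
def search(data, target):
    for x in data:
        if x == target:
            result = "found"
            break          # early exit: the else clause below is skipped
    else:
        # runs only when the loop finished without hitting break
        result = "not found"
    return result

print(search([3, 7, 11], 7))    # found
print(search([3, 7, 11], 12))   # not found
```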
The else clause in the above example is linked to the for statement, and not the inner if statement. Both Python's for and while loops support such an else clause, which is executed only if early exit of the loop has not occurred.
Some languages support breaking out of nested loops; in theory circles, these are called multi-level breaks. One common use example is searching a multi-dimensional table. This can be done either via multilevel breaks (break out of N levels), as in bash and PHP, or via labeled breaks (break out and continue at given label), as in Go, Java and Perl. Alternatives to multilevel breaks include single breaks, together with a state variable which is tested to break out another level; exceptions, which are caught at the level being broken out to; placing the nested loops in a function and using return to effect termination of the entire nested loop; or using a label and a goto statement. C does not include a multilevel break, and the usual alternative is to use a goto to implement a labeled break. Python does not have a multilevel break or continue – this was proposed in PEP 3136, and rejected on the basis that the added complexity was not worth the rare legitimate use.
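The function-plus-return alternative can be sketched in Python (the table layout and names are illustrative):

```python
def find_in_table(table, target):
    # 'return' terminates both loops at once, standing in for a
    # labeled or multi-level break.
    for r, row in enumerate(table):
        for c, cell in enumerate(row):
            if cell == target:
                return (r, c)
    return None

table = [[1, 2], [3, 4], [5, 6]]
print(find_in_table(table, 4))   # (1, 1)
print(find_in_table(table, 9))   # None
```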
The notion of multi-level breaks is of some interest in theoretical computer science, because it gives rise to what is today called the Kosaraju hierarchy. In 1973 S. Rao Kosaraju refined the structured program theorem by proving that it is possible to avoid adding additional variables in structured programming, as long as arbitrary-depth, multi-level breaks from loops are allowed. Furthermore, Kosaraju proved that a strict hierarchy of programs exists: for every integer n, there exists a program containing a multi-level break of depth n that cannot be rewritten as a program with multi-level breaks of depth less than n without introducing added variables.
One can also return out of a subroutine executing the looped statements, breaking out of both the nested loop and the subroutine. There are other proposed control structures for multiple breaks, but these are generally implemented as exceptions instead.
In his 2004 textbook, David Watt uses Tennent's notion of sequencer to explain the similarity between multi-level breaks and return statements. Watt notes that a class of sequencers known as escape sequencers, defined as "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. As commonly implemented, however, return sequencers may also carry a (return) value, whereas the break sequencer as implemented in contemporary languages usually cannot.
=== Loop variants and invariants ===
Loop variants and loop invariants are used to express correctness of loops.
In practical terms, a loop variant is an integer expression which has an initial non-negative value. The variant's value must decrease during each loop iteration but must never become negative during the correct execution of the loop. Loop variants are used to guarantee that loops will terminate.
A loop invariant is an assertion which must be true before the first loop iteration and remain true after each iteration. This implies that when a loop terminates correctly, both the exit condition and the loop invariant are satisfied. Loop invariants are used to monitor specific properties of a loop during successive iterations.
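Both notions can be sketched with ordinary assert statements in Python; the division-by-repeated-subtraction loop is an illustrative choice:

```python
def divide(a, b):
    """Return (a // b, a % b) by repeated subtraction, checking the
    loop invariant and the loop variant explicitly."""
    assert a >= 0 and b > 0
    q, r = 0, a
    while r >= b:
        assert a == q * b + r             # invariant: holds before each iteration
        previous = r                      # variant: r is non-negative and decreases
        q, r = q + 1, r - b
        assert 0 <= r < previous          # variant decreased, so the loop terminates
    assert a == q * b + r and 0 <= r < b  # invariant plus exit condition
    return q, r

print(divide(17, 5))  # (3, 2)
```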
Some programming languages, such as Eiffel contain native support for loop variants and invariants. In other cases, support is an add-on, such as the Java Modeling Language's specification for loop statements in Java.
=== Loop sublanguage ===
Some Lisp dialects provide an extensive sublanguage for describing Loops. An early example can be found in Conversional Lisp of Interlisp. Common Lisp provides a Loop macro which implements such a sublanguage.
=== Loop system cross-reference table ===
while (true) does not count as an infinite loop for this purpose, because it is not a dedicated language structure.
C's for (init; test; increment) loop is a general loop construct, not specifically a counting one, although it is often used for that.
Deep breaks may be accomplished in APL, C, C++ and C# through the use of labels and gotos.
Iteration over objects was added in PHP 5.
A counting loop can be simulated by iterating over an incrementing list or generator, for instance, Python's range().
Deep breaks may be accomplished through the use of exception handling.
There is no special construct, since the while function can be used for this.
There is no special construct, but users can define general loop functions.
The C++11 standard introduced the range-based for. In the STL, there is a std::for_each template function which can iterate on STL containers and call a unary function for each element. The functionality also can be constructed as a macro on these containers.
Count-controlled looping is effected by iteration across an integer interval; early exit by including an additional condition for exit.
Eiffel supports a reserved word retry; however, it is used in exception handling, not loop control.
Requires the Java Modeling Language (JML) behavioral interface specification language.
Requires loop variants to be integers; transfinite variants are not supported.
D supports infinite collections, and the ability to iterate over those collections. This does not require any special construct.
Deep breaks can be achieved using GO TO and procedures.
Common Lisp predates the concept of a generic collection type.
== Structured non-local control flow ==
Many programming languages, especially those favoring more dynamic styles of programming, offer constructs for non-local control flow. These cause the flow of execution to jump out of a given context and resume at some predeclared point. Conditions, exceptions and continuations are three common sorts of non-local control constructs; more exotic ones also exist, such as generators, coroutines and the async keyword.
=== Conditions ===
The earliest Fortran compilers had statements for testing exceptional conditions. These included the IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK statements. In the interest of machine independence, they were not included in FORTRAN IV and the Fortran 66 Standard. However since Fortran 2003 it is possible to test for numerical issues via calls to functions in the IEEE_EXCEPTIONS module.
PL/I has some 22 standard conditions (e.g., ZERODIVIDE, SUBSCRIPTRANGE, ENDFILE) which can be raised and which can be intercepted by: ON condition action; programmers can also define and use their own named conditions.
Like the unstructured if, only one statement can be specified so in many cases a GOTO is needed to decide where flow of control should resume.
Unfortunately, some implementations had a substantial overhead in both space and time (especially SUBSCRIPTRANGE), so many programmers tried to avoid using conditions.
Common Syntax examples:
ON condition GOTO label
=== Exceptions ===
Modern languages have a specialized structured construct for exception handling which does not rely on the use of GOTO or (multi-level) breaks or returns. For example, in C++ one can write:
Any number and variety of catch clauses can be used above. If there is no catch matching a particular throw, control percolates back through subroutine calls and/or nested blocks until a matching catch is found or until the end of the main program is reached, at which point the program is forcibly stopped with a suitable error message.
Via C++'s influence, catch is the keyword reserved for declaring a pattern-matching exception handler in other languages popular today, like Java or C#. Some other languages like Ada use the keyword exception to introduce an exception handler and then may even employ a different keyword (when in Ada) for the pattern matching. A few languages like AppleScript incorporate placeholders in the exception handler syntax to automatically extract several pieces of information when the exception occurs. This approach is exemplified below by the on error construct from AppleScript:
David Watt's 2004 textbook also analyzes exception handling in the framework of sequencers (introduced in this article in the section on early exits from loops). Watt notes that an abnormal situation, generally exemplified with arithmetic overflows or input/output failures like file not found, is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program, and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" Watt notes that in contrast to status-flag testing, exceptions have the opposite default behavior, causing the program to terminate unless the program deals with the exception explicitly in some way, possibly by adding explicit code to ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers are less suitable as a dedicated exception sequencer with the semantics discussed above.
In Object Pascal, D, Java, C#, and Python a finally clause can be added to the try construct. No matter how control leaves the try, the code inside the finally clause is guaranteed to execute. This is useful when writing code that must relinquish an expensive resource (such as an opened file or a database connection) when finished processing:
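A minimal Python sketch of the pattern; the use_resource function and its simulated resource are hypothetical:

```python
released = []

def use_resource(fail):
    # Acquire a (simulated) resource; the finally clause guarantees it is
    # released no matter how the try block exits: normal return or exception.
    resource = "db-connection"   # hypothetical resource name
    try:
        if fail:
            raise RuntimeError("processing failed")
        return "ok"
    finally:
        released.append(resource)  # always runs
```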
Since this pattern is fairly common, C# has a special syntax:
Upon leaving the using-block, the compiler guarantees that the stm object is released, effectively binding the variable to the file stream while abstracting from the side effects of initializing and releasing the file. Python's with statement and Ruby's block argument to File.open are used to similar effect.
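A sketch of the equivalent discipline in Python, using a hand-written context manager (ManagedResource is an illustrative name) to show where the guaranteed initialization and release happen:

```python
class ManagedResource:
    # A tiny context manager: __enter__/__exit__ play the role of the
    # initialization and release that C#'s using-block automates.
    def __init__(self):
        self.open = False

    def __enter__(self):
        self.open = True
        return self

    def __exit__(self, exc_type, exc, tb):
        self.open = False   # release runs even if the body raised
        return False        # do not suppress exceptions

res = ManagedResource()
with res:
    assert res.open      # resource is live inside the with-block
assert not res.open      # and released on exit
```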
All the languages mentioned above define standard exceptions and the circumstances under which they are thrown. Users can throw exceptions of their own; C++ allows users to throw and catch almost any type, including basic types like int, whereas other languages like Java are less permissive.
=== Continuations ===
=== Async ===
C# 5.0 introduced the async keyword for supporting asynchronous I/O in a "direct style".
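Python later adopted the same direct-style async/await construct; a minimal sketch using its standard asyncio module, where the fetch coroutine merely simulates I/O:

```python
import asyncio

async def fetch(name):
    # Simulated asynchronous I/O: control is yielded back to the event
    # loop at each await, yet the code reads like ordinary sequential code.
    await asyncio.sleep(0)   # stand-in for a real I/O wait
    return f"data from {name}"

async def main():
    # The two "requests" run concurrently on a single thread.
    a, b = await asyncio.gather(fetch("serverA"), fetch("serverB"))
    return [a, b]

result = asyncio.run(main())
```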
=== Generators ===
Generators, also known as semicoroutines, allow control to be yielded to a consumer method temporarily, typically via a yield statement. Like the async keyword, this supports programming in a "direct style".
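A minimal Python generator as a sketch: each yield suspends the function and hands one value to the consumer.

```python
def countdown(n):
    # Each yield suspends the generator and hands a value to the consumer;
    # execution resumes here on the consumer's next request.
    while n > 0:
        yield n
        n -= 1

values = list(countdown(3))   # the list() machinery drives the generator
```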
=== Coroutines ===
Coroutines are functions that can yield control to each other, a form of cooperative multitasking without threads.
Coroutines can be implemented as a library if the programming language provides either continuations or generators, so in practice the distinction between coroutines and generators is a technical detail.
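A sketch of two generator-based coroutines passing control back and forth in Python (the names are illustrative):

```python
def producer(consumer):
    # Control ping-pongs between producer and consumer without threads:
    # each send() transfers control into the consumer until its next yield.
    for item in ["a", "b", "c"]:
        consumer.send(item)
    consumer.close()

def consumer_coro(log):
    while True:
        item = yield        # suspend until the producer sends a value
        log.append(item)

log = []
c = consumer_coro(log)
next(c)                     # prime the coroutine to its first yield
producer(c)
```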
=== Non-local control flow cross reference ===
== Proposed control structures ==
In a spoof Datamation article in 1973, R. Lawrence Clark suggested that the GOTO statement could be replaced by the COMEFROM statement, and provided some entertaining examples. COMEFROM was implemented in one esoteric programming language named INTERCAL.
Donald Knuth's 1974 article "Structured Programming with go to Statements" identifies two situations that were not covered by the control structures listed above, and gives examples of control structures that could handle them. Despite their utility, these constructs have not yet found their way into mainstream programming languages.
=== Loop with test in the middle ===
The following was proposed by Dahl in 1972:
loop
    xxx1
while test;
    xxx2
repeat;

For example, a character-copying loop:

loop
    read(char);
while not atEndOfFile;
    write(char);
repeat;
If xxx1 is omitted, we get a loop with the test at the top (a traditional while loop). If xxx2 is omitted, we get a loop with the test at the bottom, equivalent to a do while loop in many languages. If while is omitted, we get an infinite loop. The construction here can be thought of as a do loop with the while check in the middle. Hence this single construction can replace several constructions in most programming languages.
Languages lacking this construct generally emulate it using an equivalent infinite-loop-with-break idiom:
while (true) {
    xxx1
    if (not test)
        break
    xxx2
}
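A runnable Python instance of the same idiom, with fetching the next item as xxx1 and using it as xxx2 (the function and sentinel are illustrative):

```python
def copy_until_sentinel(items, sentinel):
    # Infinite-loop-with-break emulation of Dahl's loop-while-repeat:
    # the exit test sits in the middle of the loop body.
    out = []
    it = iter(items)
    while True:
        item = next(it, sentinel)   # xxx1: fetch the next item
        if item == sentinel:        # the "while" test, in the middle
            break
        out.append(item)            # xxx2: use the item
    return out
```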
A possible variant is to allow more than one while test; within the loop, but the use of exitwhen (see next section) appears to cover this case better.
In Ada, the above loop construct (loop-while-repeat) can be represented using a standard infinite loop (loop - end loop) that has an exit when clause in the middle (not to be confused with the exitwhen statement in the following section).
Naming a loop (like Read_Data in this example) is optional but permits leaving the outer loop of several nested loops.
=== Multiple early exit/exit from nested loops ===
This construct was proposed by Zahn in 1974. A modified version is presented here.
exitwhen EventA or EventB or EventC;
    xxx
exits
    EventA: actionA
    EventB: actionB
    EventC: actionC
endexit;
exitwhen is used to specify the events which may occur within xxx; their occurrence is indicated by using the name of the event as a statement. When some event does occur, the relevant action is carried out, and then control passes just after endexit. This construction provides a very clear separation between determining that some situation applies and the action to be taken for that situation.
exitwhen is conceptually similar to exception handling, and exceptions or similar constructs are used for this purpose in many languages.
The following simple example involves searching a two-dimensional table for a particular item.
exitwhen found or missing;
    for I := 1 to N do
        for J := 1 to M do
            if table[I,J] = target then found;
    missing;
exits
    found: print ("item is in table");
    missing: print ("item is not in table");
endexit;
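Since exceptions can play the role of exitwhen events, the same table search can be sketched in Python, with an illustrative Found exception standing in for the found event:

```python
class Found(Exception):
    """Stands in for Zahn's 'found' event."""
    pass

def search(table, target):
    # Raising Found escapes both nested loops at once; falling off the
    # end of the loops corresponds to the 'missing' event.
    try:
        for row in table:
            for item in row:
                if item == target:
                    raise Found
    except Found:
        return "item is in table"
    return "item is not in table"
```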
== Security ==
One way to attack a piece of software is to redirect the flow of execution of a program. A variety of control-flow integrity techniques, including stack canaries, buffer overflow protection, shadow stacks, and vtable pointer verification, are used to defend against these attacks.
== See also ==
== Notes ==
== References ==
== Further reading ==
Hoare, C. A. R. "Partition: Algorithm 63," "Quicksort: Algorithm 64," and "Find: Algorithm 65." Comm. ACM 4, 321–322, 1961.
== External links ==
Media related to Control flow at Wikimedia Commons
Go To Statement Considered Harmful
A Linguistic Contribution of GOTO-less Programming
"Structured Programming with Go To Statements" (PDF). Archived from the original (PDF) on 2009-08-24. (2.88 MB)
"IBM 704 Manual" (PDF). (31.4 MB) | Wikipedia/Control_flow |