In mathematics, secondary calculus is a proposed expansion of classical differential calculus on manifolds to the "space" of solutions of a (nonlinear) partial differential equation. It is a sophisticated theory that works at the level of jet spaces and employs algebraic methods.
== Secondary calculus ==
Secondary calculus acts on the space of solutions of a system of partial differential equations (usually nonlinear equations). When the number of independent variables is zero (i.e. the equations are all algebraic) secondary calculus reduces to classical differential calculus.
All objects in secondary calculus are cohomology classes of differential complexes growing on diffieties. The latter are, in the framework of secondary calculus, the analog of smooth manifolds.
== Cohomological physics ==
Cohomological physics was born with Gauss's theorem, describing the electric charge contained inside a given surface in terms of the flux of the electric field through the surface itself. Flux is the integral of a differential form and, consequently, a de Rham cohomology class. It is not by chance that formulas of this kind, such as the well known Stokes formula, though being a natural part of classical differential calculus, have entered in modern mathematics from physics.
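The surface-independence behind Gauss's theorem can be checked symbolically. The sketch below (an illustration not taken from the text: it assumes a unit point charge in Gaussian units, so the expected flux is $4\pi$) integrates the flux of $E = r/|r|^3$ over a sphere of arbitrary radius $R$ and finds that $R$ drops out, which is the de Rham-cohomological content of the statement.

```python
import sympy as sp

theta, phi, R = sp.symbols('theta phi R', positive=True)
# Field of a unit point charge at the origin (Gaussian units): E = r/|r|^3
r = sp.Matrix([R*sp.sin(theta)*sp.cos(phi),
               R*sp.sin(theta)*sp.sin(phi),
               R*sp.cos(theta)])
E = r / (r.dot(r))**sp.Rational(3, 2)
# Outward area element of the sphere of radius R: dS = (r_theta x r_phi) dtheta dphi
dS = r.diff(theta).cross(r.diff(phi))
integrand = sp.trigsimp(E.dot(dS))            # reduces to sin(theta)
flux = sp.integrate(integrand, (phi, 0, 2*sp.pi), (theta, 0, sp.pi))
print(flux)  # 4*pi, independent of R, as Gauss's theorem predicts
```

Any other closed surface enclosing the charge gives the same value, since the flux depends only on the cohomology class of the field's flux form.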
== Classical analogues ==
All the constructions in classical differential calculus have an analog in secondary calculus. For instance, higher symmetries of a system of partial differential equations are the analog of vector fields on differentiable manifolds. The Euler operator, which associates to each variational problem the corresponding Euler–Lagrange equation, is the analog of the classical differential, which associates to a function on a variety its differential. The Euler operator is a secondary differential operator of first order, even though its expression in local coordinates makes it look like an operator of infinite order. More generally, the analogs of differential forms in secondary calculus are the elements of the first term of the so-called C-spectral sequence, and so on.
The simplest diffieties are infinite prolongations of partial differential equations, which are subvarieties of infinite jet spaces. The latter are infinite-dimensional varieties that cannot be studied by means of standard functional analysis. On the contrary, the most natural language in which to study these objects is differential calculus over commutative algebras. Therefore, the latter must be regarded as a fundamental tool of secondary calculus. On the other hand, differential calculus over commutative algebras makes it possible to develop algebraic geometry as if it were differential geometry.
== Theoretical physics ==
Recent developments in particle physics, based on quantum field theory and its generalizations, have led to an understanding of the deep cohomological nature of the quantities describing both classical and quantum fields. The turning point was the discovery of the famous BRST transformation. For instance, it was understood that observables in field theory are classes in horizontal de Rham cohomology that are invariant under the corresponding gauge group, and so on. This current in modern theoretical physics is called cohomological physics.
It is relevant that secondary calculus and cohomological physics, which developed for twenty years independently from each other, arrived at the same results. Their confluence took place at the international conference Secondary Calculus and Cohomological Physics (Moscow, August 24–30, 1997).
== Prospects ==
A large number of modern mathematical theories converge harmoniously in the framework of secondary calculus: for instance, commutative algebra and algebraic geometry, homological algebra and differential topology, Lie group and Lie algebra theory, differential geometry, etc.
== See also ==
Differential calculus over commutative algebras – part of commutative algebra
Spectrum of a ring – Set of a ring's prime ideals
== References ==
I. S. Krasil'shchik, Calculus over Commutative Algebras: a concise user's guide, Acta Appl. Math. 49 (1997) 235–248; DIPS-01/98
I. S. Krasil'shchik, A. M. Verbovetsky, Homological Methods in Equations of Mathematical Physics, Open Ed. and Sciences, Opava (Czech Rep.), 1998; DIPS-07/98.
I. S. Krasil'shchik, A. M. Vinogradov (eds.), Symmetries and conservation laws for differential equations of mathematical physics, Translations of Math. Monographs 182, Amer. Math. Soc., 1999.
J. Nestruev, Smooth Manifolds and Observables, Graduate Texts in Mathematics 220, Springer, 2002, doi:10.1007/978-3-030-45650-4.
A. M. Vinogradov, The C-spectral sequence, Lagrangian formalism, and conservation laws I. The linear theory, J. Math. Anal. Appl. 100 (1984) 1–40; Diffiety Inst. Library.
A. M. Vinogradov, The C-spectral sequence, Lagrangian formalism, and conservation laws II. The nonlinear theory, J. Math. Anal. Appl. 100 (1984) 41–129; Diffiety Inst. Library.
A. M. Vinogradov, From symmetries of partial differential equations towards secondary (`quantized') calculus, J. Geom. Phys. 14 (1994) 146–194; Diffiety Inst. Library.
A. M. Vinogradov, Introduction to Secondary Calculus, Proc. Conf. Secondary Calculus and Cohomology Physics (M. Henneaux, I. S. Krasil'shchik, and A. M. Vinogradov, eds.), Contemporary Mathematics, Amer. Math. Soc., Providence, Rhode Island, 1998; DIPS-05/98.
A. M. Vinogradov, Cohomological Analysis of Partial Differential Equations and Secondary Calculus, Translations of Math. Monographs 204, Amer. Math. Soc., 2001.
== External links ==
The Diffiety Institute
Diffiety School
Affine differential geometry is a type of differential geometry which studies invariants of volume-preserving affine transformations. The name affine differential geometry follows from Klein's Erlangen program. The basic difference between affine and Riemannian differential geometry is that affine differential geometry studies manifolds equipped with a volume form rather than a metric.
== Preliminaries ==
Here we consider the simplest case, i.e. manifolds of codimension one. Let $M\subset\mathbb{R}^{n+1}$ be an $n$-dimensional manifold, and let $\xi$ be a vector field on $\mathbb{R}^{n+1}$ transverse to $M$ such that $T_p\mathbb{R}^{n+1}=T_pM\oplus\mathrm{Span}(\xi)$ for all $p\in M$, where $\oplus$ denotes the direct sum and $\mathrm{Span}$ the linear span.
For a smooth manifold, say $N$, let $\Psi(N)$ denote the module of smooth vector fields over $N$. Let $D:\Psi(\mathbb{R}^{n+1})\times\Psi(\mathbb{R}^{n+1})\to\Psi(\mathbb{R}^{n+1})$ be the standard covariant derivative on $\mathbb{R}^{n+1}$, where $D(X,Y)=D_XY$.
We can decompose $D_XY$ into a component tangent to $M$ and a transverse component, parallel to $\xi$. This gives the equation of Gauss: $D_XY=\nabla_XY+h(X,Y)\xi$, where $\nabla:\Psi(M)\times\Psi(M)\to\Psi(M)$ is the induced connexion on $M$ and $h:\Psi(M)\times\Psi(M)\to\mathbb{R}$ is a bilinear form. Notice that $\nabla$ and $h$ depend upon the choice of transverse vector field $\xi$. We consider only those hypersurfaces for which $h$ is non-degenerate. This is a property of the hypersurface $M$ and does not depend upon the choice of transverse vector field $\xi$. If $h$ is non-degenerate then we say that $M$ is non-degenerate. In the case of curves in the plane, the non-degenerate curves are those without inflexions. In the case of surfaces in 3-space, the non-degenerate surfaces are those without parabolic points.
We may also consider the derivative of $\xi$ in some tangent direction, say $X$. This quantity, $D_X\xi$, can be decomposed into a component tangent to $M$ and a transverse component, parallel to $\xi$. This gives the Weingarten equation: $D_X\xi=-SX+\tau(X)\xi$. The type-$(1,1)$ tensor $S:\Psi(M)\to\Psi(M)$ is called the affine shape operator, and the differential one-form $\tau:\Psi(M)\to\mathbb{R}$ is called the transverse connexion form. Again, both $S$ and $\tau$ depend upon the choice of transverse vector field $\xi$.
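The Gauss decomposition can be carried out concretely. The sketch below (an illustration, not from the text: the curve $y=x^2$ and the transverse field $\xi=(0,1)$ are arbitrary choices) decomposes the second derivative of the curve into a tangential part and a $\xi$-part, recovering the connexion coefficient and the bilinear form $h$.

```python
import sympy as sp

t, a, h = sp.symbols('t a h')
gamma = sp.Matrix([t, t**2])     # the plane curve y = x^2 (no inflexions)
T = gamma.diff(t)                # tangent field along gamma
xi = sp.Matrix([0, 1])           # a chosen transverse field (not the Euclidean normal)

# Gauss equation along the curve: D_T T = a*T + h(T,T)*xi
sol = sp.solve(list(T.diff(t) - a*T - h*xi), [a, h], dict=True)[0]
print(sol)  # {a: 0, h: 2}: h is nowhere zero, so the curve is non-degenerate
```

Choosing a different transverse field $\xi$ would change both coefficients, which is exactly the dependence on $\xi$ noted in the text.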
== The first induced volume form ==
Let $\Omega:\Psi(\mathbb{R}^{n+1})^{n+1}\to\mathbb{R}$ be a volume form defined on $\mathbb{R}^{n+1}$. We can induce a volume form on $M$, $\omega:\Psi(M)^n\to\mathbb{R}$, given by $\omega(X_1,\ldots,X_n):=\Omega(X_1,\ldots,X_n,\xi)$. This is a natural definition: in Euclidean differential geometry, where $\xi$ is the Euclidean unit normal, the standard Euclidean volume spanned by $X_1,\ldots,X_n$ is always equal to $\omega(X_1,\ldots,X_n)$. Notice that $\omega$ depends on the choice of transverse vector field $\xi$.
== The second induced volume form ==
For tangent vectors $X_1,\ldots,X_n$ let $H:=(h_{i,j})$ be the $n\times n$ matrix given by $h_{i,j}:=h(X_i,X_j)$. We define a second volume form on $M$, $\nu:\Psi(M)^n\to\mathbb{R}$, given by $\nu(X_1,\ldots,X_n):=|\det(H)|^{1/2}$. Again, this is a natural definition to make. If $M=\mathbb{R}^n$ and $h$ is the Euclidean scalar product, then $\nu(X_1,\ldots,X_n)$ is always the standard Euclidean volume spanned by the vectors $X_1,\ldots,X_n$.
Since $h$ depends on the choice of transverse vector field $\xi$, it follows that $\nu$ does too.
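Both induced volume forms can be evaluated numerically. The sketch below (an illustration with arbitrarily chosen data, not from the text) computes $\omega$ on a tangent plane of the paraboloid $z=x^2+y^2$, showing its dependence on the scale of $\xi$, and computes $\nu$ from a Gram determinant in the Euclidean model case.

```python
import numpy as np

# The standard volume form on R^3: Omega(v1, v2, v3) = det[v1 v2 v3]
def Omega(v1, v2, v3):
    return np.linalg.det(np.column_stack([v1, v2, v3]))

# Tangent basis of the paraboloid z = x^2 + y^2 at the point (1, 0, 1)
X1 = np.array([1.0, 0.0, 2.0])
X2 = np.array([0.0, 1.0, 0.0])

# First induced volume form: omega(X1, X2) = Omega(X1, X2, xi)
xi = np.array([0.0, 0.0, 1.0])        # one transverse choice
omega = Omega(X1, X2, xi)
omega_rescaled = Omega(X1, X2, 2*xi)  # rescaling xi rescales omega
print(omega, omega_rescaled)          # 1.0 2.0

# Second induced volume form for M = R^2 with h = Euclidean scalar product
Y1, Y2 = np.array([1.0, 0.0]), np.array([3.0, 4.0])
H = np.array([[Y1 @ Y1, Y1 @ Y2], [Y2 @ Y1, Y2 @ Y2]])
nu = abs(np.linalg.det(H))**0.5
print(nu)  # 4.0 = area of the parallelogram spanned by Y1, Y2
```

The two conditions of the next section single out the $\xi$ for which these two notions of volume agree.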
== Two natural conditions ==
We impose two natural conditions. The first is that the induced connexion $\nabla$ and the induced volume form $\omega$ be compatible, i.e. $\nabla\omega\equiv 0$. This means that $\nabla_X\omega=0$ for all $X\in\Psi(M)$. In other words, if we parallel transport the vectors $X_1,\ldots,X_n$ along some curve in $M$, with respect to the connexion $\nabla$, then the volume spanned by $X_1,\ldots,X_n$, with respect to the volume form $\omega$, does not change. A direct calculation shows that $\nabla_X\omega=\tau(X)\omega$, and so $\nabla_X\omega=0$ for all $X\in\Psi(M)$ if, and only if, $\tau\equiv 0$, i.e. $D_X\xi\in\Psi(M)$ for all $X\in\Psi(M)$. This means that the derivative of $\xi$, in a tangent direction $X$, with respect to $D$ always yields a, possibly zero, tangent vector to $M$. The second condition is that the two volume forms $\omega$ and $\nu$ coincide, i.e. $\omega\equiv\nu$.
== The conclusion ==
It can be shown that there is, up to sign, a unique choice of transverse vector field $\xi$ for which the two conditions $\nabla\omega\equiv 0$ and $\omega\equiv\nu$ are both satisfied. These two special transverse vector fields are called affine normal vector fields, or sometimes Blaschke normal fields. From the dependence of its definition on volume forms, we see that the affine normal vector field is invariant under volume-preserving affine transformations. These transformations are given by $\mathrm{SL}(n+1,\mathbb{R})\ltimes\mathbb{R}^{n+1}$, where $\mathrm{SL}(n+1,\mathbb{R})$ denotes the special linear group of $(n+1)\times(n+1)$ matrices with real entries and determinant 1, and $\ltimes$ denotes the semi-direct product. $\mathrm{SL}(n+1,\mathbb{R})\ltimes\mathbb{R}^{n+1}$ forms a Lie group.
== The affine normal line ==
The affine normal line at a point p ∈ M is the line passing through p and parallel to ξ.
=== Plane curves ===
The affine normal vector field for a curve in the plane has a nice geometrical interpretation. Let I ⊂ R be an open interval and let γ : I → R2 be a smooth parametrisation of a plane curve. We assume that γ(I) is a non-degenerate curve (in the sense of Nomizu and Sasaki), i.e. is without inflexion points. Consider a point p = γ(t0) on the plane curve. Since γ(I) is without inflexion points it follows that γ(t0) is not an inflexion point and so the curve will be locally convex, i.e. all of the points γ(t) with t0 − ε < t < t0 + ε, for sufficiently small ε, will lie on the same side of the tangent line to γ(I) at γ(t0).
Consider the tangent line to γ(I) at γ(t0), and consider nearby parallel lines on the side of the tangent line containing the piece of curve P := {γ(t) ∈ R2 : t0 − ε < t < t0 + ε}. Parallel lines sufficiently close to the tangent line intersect P in exactly two points. On each parallel line we mark the midpoint of the line segment joining these two intersection points. Each parallel line thus yields a midpoint, and the locus of midpoints traces out a curve starting at p. The limiting tangent line to the locus of midpoints as we approach p is exactly the affine normal line, i.e. the line containing the affine normal vector to γ(I) at γ(t0). Notice that this is an affine invariant construction, since parallelism and midpoints are invariant under affine transformations.
Consider the parabola given by the parametrisation γ(t) = (t + 2t², t²). This has the implicit equation x² + 4y² − 4xy − y = 0. The tangent line at γ(0) has the equation y = 0, and so the parallel lines are given by y = k for sufficiently small k ≥ 0. The line y = k intersects the curve at x = 2k ± √k. The locus of midpoints is given by {(2k, k) : k ≥ 0}. These form a line segment, and so the limiting tangent line to this line segment as we tend to γ(0) is just the line containing this line segment, i.e. the line x = 2y. Thus the affine normal line to the curve at γ(0) has the equation x = 2y. In fact, direct calculation shows that the affine normal vector at γ(0), namely ξ(0), is given by ξ(0) = 2^{1/3}·(2, 1). In the figure the red curve is the curve γ, the black lines are the tangent line and some nearby parallel lines, the black dots are the midpoints on the displayed lines, and the blue line is the locus of midpoints.
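The midpoint computation above can be verified symbolically; here is a minimal sketch of that check.

```python
import sympy as sp

k = sp.symbols('k', positive=True)
x, y = sp.symbols('x y')

# The parabola gamma(t) = (t + 2t^2, t^2) has implicit equation x^2 + 4y^2 - 4xy - y = 0
curve = x**2 + 4*y**2 - 4*x*y - y

# Intersect with the line y = k, parallel to the tangent line y = 0 at gamma(0)
roots = sp.solve(curve.subs(y, k), x)
midpoint_x = sp.simplify(sum(roots) / 2)
print(midpoint_x)  # 2*k: the midpoint locus is {(2k, k)}, i.e. the line x = 2y
```

The two roots are $2k\pm\sqrt{k}$, so their midpoint is $(2k,k)$, confirming that the affine normal line is $x=2y$.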
=== Surfaces in 3-space ===
An analogous construction finds the affine normal line at elliptic points of smooth surfaces in 3-space. This time one takes planes parallel to the tangent plane. These, for planes sufficiently close to the tangent plane, intersect the surface in convex plane curves. Each convex plane curve has a centre of mass. The locus of centres of mass traces out a curve in 3-space. The limiting tangent line to this locus as one tends to the original surface point is the affine normal line, i.e. the line containing the affine normal vector.
== See also ==
Affine geometry of curves
Affine focal set
Affine sphere
== References ==
In mathematics the differential calculus over commutative algebras is a part of commutative algebra based on the observation that most concepts known from classical differential calculus can be formulated in purely algebraic terms. Instances of this are:
The whole topological information of a smooth manifold $M$ is encoded in the algebraic properties of its $\mathbb{R}$-algebra of smooth functions $A=C^{\infty}(M)$, as in the Banach–Stone theorem.
Vector bundles over $M$ correspond to projective finitely generated modules over $A$, via the functor $\Gamma$ which associates to a vector bundle its module of sections.
Vector fields on $M$ are naturally identified with derivations of the algebra $A$.
More generally, a linear differential operator of order $k$, sending sections of a vector bundle $E\to M$ to sections of another bundle $F\to M$, is seen to be an $\mathbb{R}$-linear map $\Delta:\Gamma(E)\to\Gamma(F)$ between the associated modules, such that for any $k+1$ elements $f_0,\ldots,f_k\in A$:

$$\left[f_k\left[f_{k-1}\left[\cdots\left[f_0,\Delta\right]\cdots\right]\right]\right]=0,$$

where the bracket $[f,\Delta]:\Gamma(E)\to\Gamma(F)$ is defined as the commutator

$$[f,\Delta](s)=\Delta(f\cdot s)-f\cdot\Delta(s).$$
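This algebraic characterisation of order can be checked directly. The sketch below (an illustration, not from the text: the operator $\Delta=d^2/dx^2$ and the test functions are arbitrary choices) shows that each bracket with a function lowers the order by one, so three nested brackets annihilate a second-order operator.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

# Delta = d^2/dx^2, a second-order operator on A = C^oo(R)
Delta = lambda s: sp.diff(s, x, 2)

def bracket(f, D):
    """[f, D](s) = D(f*s) - f*D(s): again an operator, of order one less."""
    return lambda s: D(f * s) - f * D(s)

f0, f1, f2 = x, x**2, sp.sin(x)            # arbitrary elements of A
once = bracket(f0, Delta)                  # order drops to 1
thrice = bracket(f2, bracket(f1, once))    # three brackets kill an order-2 operator
print(sp.simplify(once(u)), sp.simplify(thrice(u)))  # 2*Derivative(u(x), x)  0
```

A single bracket with $\Delta$ yields a first-order operator, and the triple bracket vanishes identically, as the defining condition with $k=2$ requires.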
Denoting the set of $k$-th order linear differential operators from an $A$-module $P$ to an $A$-module $Q$ by $\mathrm{Diff}_k(P,Q)$, we obtain a bi-functor with values in the category of $A$-modules. Other natural concepts of calculus, such as jet spaces and differential forms, are then obtained as representing objects of the functors $\mathrm{Diff}_k$ and related functors.
Seen from this point of view, calculus may in fact be understood as the theory of these functors and their representing objects.
Replacing the real numbers $\mathbb{R}$ with any commutative ring, and the algebra $C^{\infty}(M)$ with any commutative algebra, all of the above remains meaningful, hence differential calculus can be developed for arbitrary commutative algebras. Many of these concepts are widely used in algebraic geometry, differential geometry and secondary calculus. Moreover, the theory generalizes naturally to the setting of graded commutative algebra, allowing for a natural foundation of calculus on supermanifolds, graded manifolds and associated concepts like the Berezin integral.
== See also ==
Secondary calculus and cohomological physics – Modern discipline
Differential algebra – Algebraic study of differential equations
Spectrum of a ring – Set of a ring's prime ideals
== References ==
J. Nestruev, Smooth Manifolds and Observables, Graduate Texts in Mathematics 220, Springer, 2002.
Nestruev, Jet (10 September 2020). Smooth Manifolds and Observables. Graduate Texts in Mathematics. Vol. 220. Cham, Switzerland: Springer Nature. ISBN 978-3-030-45649-8. OCLC 1195920718.
I. S. Krasil'shchik, "Lectures on Linear Differential Operators over Commutative Algebras". Eprint DIPS-01/99.
I. S. Krasil'shchik, A. M. Vinogradov (eds) "Algebraic Aspects of Differential Calculus", Acta Appl. Math. 49 (1997), Eprints: DIPS-01/96, DIPS-02/96, DIPS-03/96, DIPS-04/96, DIPS-05/96, DIPS-06/96, DIPS-07/96, DIPS-08/96.
I. S. Krasil'shchik, A. M. Verbovetsky, "Homological Methods in Equations of Mathematical Physics", Open Ed. and Sciences, Opava (Czech Rep.), 1998; Eprint arXiv:math/9808130v2.
G. Sardanashvily, Lectures on Differential Geometry of Modules and Rings, Lambert Academic Publishing, 2012; Eprint arXiv:0910.1515 [math-ph] 137 pages.
A. M. Vinogradov, "The Logic Algebra for the Theory of Linear Differential Operators", Dokl. Akad. Nauk SSSR, 295(5) (1972) 1025-1028; English transl. in Soviet Math. Dokl. 13(4) (1972), 1058-1062.
Vinogradov, A. M. (2001). Cohomological Analysis of Partial Differential Equations and Secondary Calculus. American Mathematical Soc. ISBN 9780821897997.
A. M. Vinogradov, "Some new homological systems associated with differential calculus over commutative algebras" (Russian), Uspechi Mat. Nauk, 1979, 34 (6), 145–150; English transl. in Russian Math. Surveys, 34(6) (1979), 250–255.
In mathematics, a weak Lie algebra bundle $\xi=(\xi,p,X,\theta)$ is a vector bundle $\xi$ over a base space $X$ together with a morphism $\theta:\xi\otimes\xi\to\xi$ which induces a Lie algebra structure on each fibre $\xi_x$.
A Lie algebra bundle $\xi=(\xi,p,X)$ is a vector bundle in which each fibre is a Lie algebra and, for every $x$ in $X$, there is an open set $U$ containing $x$, a Lie algebra $L$ and a homeomorphism $\phi:U\times L\to p^{-1}(U)$ such that $\phi_x:\{x\}\times L\to p^{-1}(\{x\})$ is a Lie algebra isomorphism.
Any Lie algebra bundle is a weak Lie algebra bundle, but the converse need not be true in general.
As an example of a weak Lie algebra bundle that is not a strong Lie algebra bundle, consider the total space $\mathfrak{so}(3)\times\mathbb{R}$ over the real line $\mathbb{R}$. Let $[\cdot,\cdot]$ denote the Lie bracket of $\mathfrak{so}(3)$ and deform it by the real parameter as
$$[X,Y]_x=x\cdot[X,Y]$$
for $X,Y\in\mathfrak{so}(3)$ and $x\in\mathbb{R}$.
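One can verify numerically that the deformed bracket makes every fibre a Lie algebra, so this really is a weak Lie algebra bundle. The sketch below (an illustration, not from the source: the matrix basis of $\mathfrak{so}(3)$ and the sampled parameter values are arbitrary choices) checks antisymmetry is inherited and the Jacobi identity holds fibrewise; note that at $x=0$ the fibre is abelian, hence not isomorphic to $\mathfrak{so}(3)$, which is why the bundle fails to be locally trivial.

```python
import numpy as np

# A basis of so(3): skew-symmetric 3x3 matrices with [E1, E2] = E3 (cyclically)
E1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
E2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
E3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)

def bracket(X, Y, x):
    """Deformed fibrewise bracket [X, Y]_x = x * (XY - YX)."""
    return x * (X @ Y - Y @ X)

def jacobi(X, Y, Z, x):
    return (bracket(X, bracket(Y, Z, x), x)
            + bracket(Y, bracket(Z, X, x), x)
            + bracket(Z, bracket(X, Y, x), x))

for x in (0.0, 1.0, -2.5):                 # sample fibres over the base R
    assert np.allclose(jacobi(E1, E2, E3, x), 0)
print("Jacobi identity holds in every sampled fibre; the fibre at x = 0 is abelian.")
```

Since each deformed Jacobi sum is $x^2$ times the undeformed one, the identity holds in every fibre at once.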
Lie's third theorem states that every bundle of Lie algebras can locally be integrated to a bundle of Lie groups; in general, however, the total space might fail to be Hausdorff globally. If all fibres of a real Lie algebra bundle over a topological space are mutually isomorphic as Lie algebras, then it is a locally trivial Lie algebra bundle. This result was proved by showing that the real orbit of a real point under an algebraic group is open in the real part of its complex orbit. If the base space is Hausdorff and the fibres of the total space are isomorphic as Lie algebras, then there exists a Hausdorff Lie group bundle over the same base space whose Lie algebra bundle is isomorphic to the given Lie algebra bundle. Every semisimple Lie algebra bundle is locally trivial, and hence there exists a Hausdorff Lie group bundle over the same base space whose Lie algebra bundle is isomorphic to the given Lie algebra bundle.
== See also ==
Algebra bundle
Adjoint bundle
== References ==
Douady, Adrien; Lazard, Michel (1966). "Espaces fibrés en algèbres de Lie et en groupes". Inventiones Mathematicae. 1 (2): 133–151. Bibcode:1966InMat...1..133D. doi:10.1007/BF01389725.
Kiranagi, B. S.; Kumar, Ranjitha; Prema, G. (2015). "On completely semisimple Lie algebra bundles". Journal of Algebra and Its Applications. 14 (2): 1550009. doi:10.1142/S0219498815500097.
In mathematics, Frobenius' theorem gives necessary and sufficient conditions for finding a maximal set of independent solutions of an overdetermined system of first-order homogeneous linear partial differential equations. In modern geometric terms, given a family of vector fields, the theorem gives necessary and sufficient integrability conditions for the existence of a foliation by maximal integral manifolds whose tangent bundles are spanned by the given vector fields. The theorem generalizes the existence theorem for ordinary differential equations, which guarantees that a single vector field always gives rise to integral curves; Frobenius gives compatibility conditions under which the integral curves of r vector fields mesh into coordinate grids on r-dimensional integral manifolds. The theorem is foundational in differential topology and calculus on manifolds.
Contact geometry studies 1-forms that maximally violate the assumptions of Frobenius' theorem. An example is shown on the right.
== Introduction ==
=== One-form version ===
Suppose we are to find the trajectory of a particle in a subset of 3D space, but we do not know its trajectory formula. Instead, we know only that its trajectory satisfies $a\,dx+b\,dy+c\,dz=0$, where $a,b,c$ are smooth functions of $(x,y,z)$. Thus, our only certainty is that if at some moment in time the particle is at location $(x_0,y_0,z_0)$, then its velocity at that moment is restricted within the plane with equation
$$a(x_0,y_0,z_0)[x-x_0]+b(x_0,y_0,z_0)[y-y_0]+c(x_0,y_0,z_0)[z-z_0]=0.$$
In other words, we can draw a "local plane" at each point in 3D space, and we know that the particle's trajectory must be tangent to the local plane at all times.
If we have two equations
$$\begin{cases}a\,dx+b\,dy+c\,dz=0\\a'\,dx+b'\,dy+c'\,dz=0\end{cases}$$
then we can draw two local planes at each point, and their intersection is generically a line, allowing us to uniquely solve for the curve starting at any point. In other words, with two 1-forms, we can foliate the domain into curves.
If we have only one equation $a\,dx+b\,dy+c\,dz=0$, then we might be able to foliate $\mathbb{R}^3$ into surfaces, in which case we can be sure that a curve starting at a certain surface must remain within that surface. If not, then a curve starting at any point might end up at any other point in $\mathbb{R}^3$. One can imagine starting with a cloud of little planes, and quilting them together to form a full surface. The main danger is that, if we quilt the little planes two at a time, we might go around a cycle and return to where we began, but shifted by a small amount. If this happens, then we would not get a 2-dimensional surface, but a 3-dimensional blob. An example is shown in the diagram on the right.
If the one-form is integrable, then loops close exactly upon themselves, and each surface would be 2-dimensional. Frobenius' theorem states that this happens precisely when $\omega\wedge d\omega=0$ over all of the domain, where $\omega:=a\,dx+b\,dy+c\,dz$. The notation is defined in the article on one-forms.
During his development of axiomatic thermodynamics, Carathéodory proved that if $\omega$ is an integrable one-form on an open subset of $\mathbb{R}^n$, then $\omega=f\,dg$ for some scalar functions $f,g$ on the subset. This is usually called Carathéodory's theorem in axiomatic thermodynamics. One can prove this intuitively by first constructing the little planes according to $\omega$, quilting them together into a foliation, then assigning each surface in the foliation a scalar label. Now for each point $p$, define $g(p)$ to be the scalar label of the surface containing the point $p$.
Now, $dg$ is a one-form that has exactly the same planes as $\omega$. However, it has "even thickness" everywhere, while $\omega$ might have "uneven thickness". This can be fixed by a scalar scaling by $f$, giving $\omega=f\,dg$. This is illustrated on the right.
=== Multiple one-forms ===
In its most elementary form, the theorem addresses the problem of finding a maximal set of independent solutions of a regular system of first-order linear homogeneous partial differential equations. Let
$$\left\{f_k^i:\mathbf{R}^n\to\mathbf{R}\ :\ 1\leq i\leq n,\,1\leq k\leq r\right\}$$
be a collection of $C^1$ functions, with $r<n$, and such that the matrix $(f_k^i)$ has rank $r$ when evaluated at any point of $\mathbf{R}^n$. Consider the following system of partial differential equations for a $C^2$ function $u:\mathbf{R}^n\to\mathbf{R}$:
$$(1)\quad\begin{cases}L_1u\ \stackrel{\mathrm{def}}{=}\ \sum_i f_1^i(x)\dfrac{\partial u}{\partial x^i}={\vec f}_1\cdot\nabla u=0\\[4pt] L_2u\ \stackrel{\mathrm{def}}{=}\ \sum_i f_2^i(x)\dfrac{\partial u}{\partial x^i}={\vec f}_2\cdot\nabla u=0\\ \qquad\cdots\\ L_ru\ \stackrel{\mathrm{def}}{=}\ \sum_i f_r^i(x)\dfrac{\partial u}{\partial x^i}={\vec f}_r\cdot\nabla u=0\end{cases}$$
One seeks conditions on the existence of a collection of solutions $u_1,\ldots,u_{n-r}$ such that the gradients $\nabla u_1,\ldots,\nabla u_{n-r}$ are linearly independent.
The Frobenius theorem asserts that this problem admits a solution locally if, and only if, the operators $L_k$ satisfy a certain integrability condition known as involutivity. Specifically, they must satisfy relations of the form
$$L_iL_ju(x)-L_jL_iu(x)=\sum_k c_{ij}^k(x)\,L_ku(x)$$
for $1\leq i,j\leq r$, all $C^2$ functions $u$, and some coefficients $c_{ij}^k(x)$ that are allowed to depend on $x$. In other words, the commutators $[L_i,L_j]$ must lie in the linear span of the $L_k$ at every point. The involutivity condition is a generalization of the commutativity of partial derivatives. In fact, the strategy of proof of the Frobenius theorem is to form linear combinations among the operators $L_i$ so that the resulting operators do commute, and then to show that there is a coordinate system $y_i$ for which these are precisely the partial derivatives with respect to $y_1,\ldots,y_r$.
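Commutators of such operators are themselves first-order operators, so involutivity can be checked coefficientwise. The sketch below (an illustration, not from the text: the specific vector fields are arbitrary choices) computes $[L_i,L_j]$ from the coefficient vectors and exhibits one involutive and one non-involutive pair.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def commutator(f, g):
    """Coefficients of [L_f, L_g], where L_f = sum_i f^i d/dx^i."""
    return [sum(f[j]*sp.diff(g[i], coords[j]) - g[j]*sp.diff(f[i], coords[j])
                for j in range(3)) for i in range(3)]

# Involutive pair (they even commute): L1 = d/dx + d/dy, L2 = d/dy + d/dz
print(commutator([1, 1, 0], [0, 1, 1]))   # [0, 0, 0]

# Non-involutive pair: [d/dx, x d/dy] = d/dy, not a combination of the two
print(commutator([1, 0, 0], [0, x, 0]))   # [0, 1, 0]
```

The first pair has vanishing commutator, so the involutivity relations hold with $c_{ij}^k\equiv 0$; the second pair's commutator $\partial/\partial y$ lies outside the span of the given operators at every point.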
=== From analysis to geometry ===
Even though the system is overdetermined, there are typically infinitely many solutions. For example, the system of differential equations
$$\begin{cases}\dfrac{\partial f}{\partial x}+\dfrac{\partial f}{\partial y}=0\\[4pt]\dfrac{\partial f}{\partial y}+\dfrac{\partial f}{\partial z}=0\end{cases}$$
clearly permits multiple solutions. Nevertheless, these solutions still have enough structure that they may be completely described. The first observation is that, even if $f_1$ and $f_2$ are two different solutions, the level surfaces of $f_1$ and $f_2$ must overlap. In fact, the level surfaces for this system are all planes in $\mathbb{R}^3$ of the form $x-y+z=C$, for $C$ a constant. The second observation is that, once the level surfaces are known, all solutions can then be given in terms of an arbitrary function. Since the value of a solution $f$ on a level surface is constant by definition, define a function $C(t)$ by:
$$f(x,y,z)=C(t)\quad{\text{whenever }}x-y+z=t.$$
Conversely, if a function C(t) is given, then each function f given by this expression is a solution of the original equation. Thus, because of the existence of a family of level surfaces, solutions of the original equation are in a one-to-one correspondence with arbitrary functions of one variable.
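The converse direction of this correspondence can be checked symbolically. The sketch below verifies that $f=C(x-y+z)$ solves both equations for an arbitrary function $C$ of one variable.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
C = sp.Function('C')

# General solution built from an arbitrary function of one variable
f = C(x - y + z)
eq1 = sp.diff(f, x) + sp.diff(f, y)   # first equation of the system
eq2 = sp.diff(f, y) + sp.diff(f, z)   # second equation of the system
print(sp.simplify(eq1), sp.simplify(eq2))  # 0 0
```

By the chain rule the three partial derivatives are $C'$, $-C'$ and $C'$, so both sums cancel identically, confirming the one-to-one correspondence with functions of one variable.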
Frobenius' theorem allows one to establish a similar correspondence for the more general case of solutions of (1). Suppose that $u_1,\ldots,u_{n-r}$ are solutions of the problem (1) satisfying the independence condition on the gradients. Consider the level sets of $(u_1,\ldots,u_{n-r})$ as a function with values in $\mathbf{R}^{n-r}$. If $v_1,\ldots,v_{n-r}$ is another such collection of solutions, one can show (using some linear algebra and the mean value theorem) that this has the same family of level sets, but with a possibly different choice of constants for each set. Thus, even though the independent solutions of (1) are not unique, the equation (1) nonetheless determines a unique family of level sets. Just as in the case of the example, general solutions $u$ of (1) are in a one-to-one correspondence with (continuously differentiable) functions on the family of level sets.
The level sets corresponding to the maximal independent solution sets of (1) are called the integral manifolds because functions on the collection of all integral manifolds correspond in some sense to constants of integration. Once one of these constants of integration is known, then the corresponding solution is also known.
== Frobenius' theorem in modern language ==
The Frobenius theorem can be restated more economically in modern language. Frobenius' original version of the theorem was stated in terms of Pfaffian systems, which today can be translated into the language of differential forms. An alternative formulation, which is somewhat more intuitive, uses vector fields.
=== Formulation using vector fields ===
In the vector field formulation, the theorem states that a subbundle of the tangent bundle of a manifold is integrable (or involutive) if and only if it arises from a regular foliation. In this context, the Frobenius theorem relates integrability to foliation; to state the theorem, both concepts must be clearly defined.
One begins by noting that an arbitrary smooth vector field $X$ on a manifold $M$ defines a family of curves, its integral curves $u:I\to M$ (for intervals $I$). These are the solutions of ${\dot {u}}(t)=X_{u(t)}$, which is a system of first-order ordinary differential equations, whose solvability is guaranteed by the Picard–Lindelöf theorem. If the vector field $X$ is nowhere zero then it defines a one-dimensional subbundle of the tangent bundle of $M$, and the integral curves form a regular foliation of $M$. Thus, one-dimensional subbundles are always integrable.
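To make the integral-curve picture concrete, here is a small Python sketch (the rotational field $X(x,y)=(-y,x)$ is my illustrative choice, not from the article) that integrates ${\dot u}(t)=X_{u(t)}$ with a classical fourth-order Runge–Kutta step:

```python
import math

def rk4_integral_curve(X, u0, t_end, n_steps=1000):
    """Approximate the integral curve of the vector field X through u0,
    i.e. the solution of u'(t) = X(u(t)), u(0) = u0, on [0, t_end]."""
    h = t_end / n_steps
    u = list(u0)
    for _ in range(n_steps):
        k1 = X(u)
        k2 = X([u[i] + 0.5 * h * k1[i] for i in range(len(u))])
        k3 = X([u[i] + 0.5 * h * k2[i] for i in range(len(u))])
        k4 = X([u[i] + h * k3[i] for i in range(len(u))])
        u = [u[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(len(u))]
    return u

# Rotational vector field on R^2: its integral curves are circles
# about the origin, so they foliate the punctured plane.
X = lambda u: [-u[1], u[0]]
p = rk4_integral_curve(X, [1.0, 0.0], math.pi / 2)
print(p)  # a quarter turn carries (1, 0) close to (0, 1)
```

The nonzero field here spans a one-dimensional subbundle of the tangent bundle of the punctured plane, and the computed circles are exactly the leaves of the corresponding foliation.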
If the subbundle has dimension greater than one, a condition needs to be imposed.
One says that a subbundle $E\subset TM$ of the tangent bundle $TM$ is integrable (or involutive) if, for any two vector fields $X$ and $Y$ taking values in $E$, the Lie bracket $[X,Y]$ takes values in $E$ as well. This notion of integrability need only be defined locally; that is, the existence of the vector fields $X$ and $Y$ and their integrability need only be defined on subsets of $M$.
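As an illustrative sketch (my own, not from the article), involutivity can be probed numerically: in coordinates the bracket is $[X,Y](p)=DY(p)\,X(p)-DX(p)\,Y(p)$, which can be approximated with finite differences and tested for membership in the span of $X$ and $Y$. The plane field on $\mathbf{R}^3$ spanned by $\partial_x$ and $\partial_y + x\,\partial_z$ is the standard non-involutive example:

```python
def jacobian(V, p, h=1e-5):
    """Finite-difference Jacobian J[i][j] = dV_i/dx_j of a vector field at p."""
    n = len(p)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        pp, pm = list(p), list(p)
        pp[j] += h
        pm[j] -= h
        vp, vm = V(pp), V(pm)
        for i in range(n):
            J[i][j] = (vp[i] - vm[i]) / (2 * h)
    return J

def lie_bracket(X, Y, p):
    """[X, Y](p) = DY(p)·X(p) - DX(p)·Y(p), approximated numerically."""
    JX, JY = jacobian(X, p), jacobian(Y, p)
    x, y = X(p), Y(p)
    n = len(p)
    return [sum(JY[i][j] * x[j] - JX[i][j] * y[j] for j in range(n))
            for i in range(n)]

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# Distribution spanned by X = d/dx and Y = d/dy + x d/dz on R^3:
# [X, Y] = d/dz, which leaves the span, so it is not involutive.
X = lambda q: [1.0, 0.0, 0.0]
Y = lambda q: [0.0, 1.0, q[0]]
p = [0.3, -0.7, 0.2]
br = lie_bracket(X, Y, p)
print(br)                       # close to [0, 0, 1]
print(det3(X(p), Y(p), br))     # nonzero: the bracket escapes span{X, Y}
```

A zero determinant at every point (for every pair of spanning fields) is what involutivity demands; here it fails, so by Frobenius' theorem no foliation is tangent to this plane field.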
Several definitions of foliation exist. Here we use the following:
Definition. A $p$-dimensional, class $C^{r}$ foliation of an $n$-dimensional manifold $M$ is a decomposition of $M$ into a union of disjoint connected submanifolds $\{L_{\alpha }\}_{\alpha \in A}$, called the leaves of the foliation, with the following property: every point in $M$ has a neighborhood $U$ and a system of local, class $C^{r}$ coordinates $x=(x^{1},\dots ,x^{n}):U\to \mathbf {R} ^{n}$ such that for each leaf $L_{\alpha }$, the components of $U\cap L_{\alpha }$ are described by the equations $x^{p+1}=\text{constant},\dots ,x^{n}=\text{constant}$. A foliation is denoted by $\mathcal {F}=\{L_{\alpha }\}_{\alpha \in A}$.
Trivially, any foliation of $M$ defines an integrable subbundle, since if $p\in M$ and $N\subset M$ is the leaf of the foliation passing through $p$, then the subbundle defined by $E_{p}=T_{p}N$ is integrable. Frobenius' theorem states that the converse is also true: given the above definitions, a subbundle $E$ is integrable if and only if it arises from a regular foliation of $M$.
=== Differential forms formulation ===
Let U be an open set in a manifold M, Ω1(U) be the space of smooth 1-forms on U, and F be a submodule of Ω1(U) of rank r, the rank being constant over U. The Frobenius theorem states that F is integrable if and only if for every p in U the stalk Fp is generated by r exact differential forms.
Geometrically, the theorem states that an integrable module of 1-forms of rank r is the same thing as a codimension-r foliation. The correspondence to the definition in terms of vector fields given in the introduction follows from the close relationship between differential forms and Lie derivatives. Frobenius' theorem is one of the basic tools for the study of vector fields and foliations.
There are thus two forms of the theorem: one which operates with distributions, that is, smooth subbundles D of the tangent bundle TM; and one which operates with subbundles of the graded ring Ω(M) of all forms on M. These two forms are related by duality. If D is a smooth tangent distribution on M, then the annihilator of D, I(D), consists of all forms $\alpha \in \Omega ^{k}(M)$ (for any $k\in \{1,\dots ,\dim M\}$) such that $\alpha (v_{1},\dots ,v_{k})=0$ for all $v_{1},\dots ,v_{k}\in D$. The set I(D) forms a subring and, in fact, an ideal in Ω(M). Furthermore, using the definition of the exterior derivative, it can be shown that I(D) is closed under exterior differentiation (it is a differential ideal) if and only if D is involutive. Consequently, the Frobenius theorem takes on the equivalent form that I(D) is closed under exterior differentiation if and only if D is integrable.
== Generalizations ==
The theorem may be generalized in a variety of ways.
=== Infinite dimensions ===
One infinite-dimensional generalization is as follows. Let X and Y be Banach spaces, and A ⊂ X, B ⊂ Y a pair of open sets. Let

$$F:A\times B\to L(X,Y)$$

be a continuously differentiable function of the Cartesian product (which inherits a differentiable structure from its inclusion into X × Y) into the space L(X, Y) of continuous linear transformations of X into Y. A differentiable mapping u : A → B is a solution of the differential equation

$$(1)\quad y'=F(x,y)$$

if

$$\forall x\in A:\quad u'(x)=F(x,u(x)).$$
The equation (1) is completely integrable if for each $(x_{0},y_{0})\in A\times B$, there is a neighborhood U of x0 such that (1) has a unique solution u(x) defined on U with u(x0) = y0.
The conditions of the Frobenius theorem depend on whether the underlying field is R or C. If it is R, then assume F is continuously differentiable. If it is C, then assume F is twice continuously differentiable. Then (1) is completely integrable at each point of A × B if and only if
$$D_{1}F(x,y)\cdot (s_{1},s_{2})+D_{2}F(x,y)\cdot (F(x,y)\cdot s_{1},s_{2})=D_{1}F(x,y)\cdot (s_{2},s_{1})+D_{2}F(x,y)\cdot (F(x,y)\cdot s_{2},s_{1})$$
for all s1, s2 ∈ X. Here D1 (resp. D2) denotes the partial derivative with respect to the first (resp. second) variable; the dot product denotes the action of the linear operator F(x, y) ∈ L(X, Y), as well as the actions of the operators D1F(x, y) ∈ L(X, L(X, Y)) and D2F(x, y) ∈ L(Y, L(X, Y)).
==== Banach manifolds ====
The infinite-dimensional version of the Frobenius theorem also holds on Banach manifolds. The statement is essentially the same as the finite-dimensional version.
Let M be a Banach manifold of class at least C2. Let E be a subbundle of the tangent bundle of M. The bundle E is involutive if, for each point p ∈ M and pair of sections X and Y of E defined in a neighborhood of p, the Lie bracket of X and Y evaluated at p, lies in Ep:
$$[X,Y]_{p}\in E_{p}.$$
On the other hand, E is integrable if, for each p ∈ M, there is an immersed submanifold φ : N → M whose image contains p, such that the differential of φ is an isomorphism of TN with φ−1E.
The Frobenius theorem states that a subbundle E is integrable if and only if it is involutive.
=== Holomorphic forms ===
The statement of the theorem remains true for holomorphic 1-forms on complex manifolds — manifolds over C with biholomorphic transition functions.
Specifically, if $\omega ^{1},\dots ,\omega ^{r}$ are r linearly independent holomorphic 1-forms on an open set in Cn such that

$$d\omega ^{j}=\sum _{i=1}^{r}\psi _{i}^{j}\wedge \omega ^{i}$$

for some system of holomorphic 1-forms $\psi _{i}^{j}$, 1 ≤ i, j ≤ r, then there exist holomorphic functions $f_{i}^{j}$ and $g^{i}$ such that, on a possibly smaller domain,

$$\omega ^{j}=\sum _{i=1}^{r}f_{i}^{j}\,dg^{i}.$$
This result holds locally in the same sense as the other versions of the Frobenius theorem. In particular, the fact that it has been stated for domains in Cn is not restrictive.
=== Higher degree forms ===
The statement does not generalize to higher degree forms, although there are a number of partial results such as Darboux's theorem and the Cartan–Kähler theorem.
== History ==
Despite being named for Ferdinand Georg Frobenius, the theorem was first proven by Alfred Clebsch and Feodor Deahna. Deahna was the first to establish the sufficient conditions for the theorem, and Clebsch developed the necessary conditions. Frobenius is responsible for applying the theorem to Pfaffian systems, thus paving the way for its usage in differential topology.
== Applications ==
In classical mechanics, the integrability of a system's constraint equations determines whether the system is holonomic or nonholonomic.
In microeconomic theory, Frobenius' theorem can be used to prove the existence of a solution to the problem of integrability of demand functions.
=== Carathéodory's axiomatic thermodynamics ===
In classical thermodynamics, Frobenius' theorem can be used to construct entropy and temperature in Carathéodory's formalism.
Specifically, Carathéodory considered a thermodynamic system (concretely one can imagine a piston of gas) that can interact with the outside world by either heat conduction (such as setting the piston on fire) or mechanical work (pushing on the piston). He then defined "adiabatic process" as any process that the system may undergo without heat conduction, and defined a relation of "adiabatic accessibility" thus: if the system can go from state A to state B after an adiabatic process, then
$B$ is adiabatically accessible from $A$. Write it as $A\succeq B$.
Now assume that:

For any pair of states $A,B$, at least one of $A\succeq B$ and $B\succeq A$ holds.

For any state $A$, and any neighborhood of $A$, there exists a state $B$ in the neighborhood such that $B$ is adiabatically inaccessible from $A$.
Then, we can foliate the state space into subsets of states that are mutually adiabatically accessible. With mild assumptions on the smoothness of $\succeq $, each subset is a manifold of codimension 1. Call these manifolds "adiabatic surfaces".
By the first law of thermodynamics, there exists a scalar function $U$ ("internal energy") on the state space, such that

$$dU=\delta W+\delta Q=\sum _{i}X_{i}\,dx_{i}+\delta Q$$

where $X_{1}\,dx_{1},\dots ,X_{n}\,dx_{n}$ are the possible ways to perform mechanical work on the system. For example, if the system is a tank of ideal gas, then $\delta W=-p\,dV$.
Now, define the one-form on the state space

$$\omega :=dU-\sum _{i}X_{i}\,dx_{i}.$$

Since the adiabatic surfaces are tangent to $\omega $ at every point in state space, $\omega $ is integrable, so by Carathéodory's theorem there exist two scalar functions $T,S$ on state space such that $\omega =T\,dS$. These are the temperature and entropy functions, up to a multiplicative constant.
By plugging in the ideal gas laws, and noting that Joule expansion is an (irreversible) adiabatic process, we can fix the sign of $dS$, and find that $A\succeq B$ means $S(A)\leq S(B)$. That is, entropy is preserved in reversible adiabatic processes, and increases during irreversible adiabatic processes.
== See also ==
Integrability conditions for differential systems
Domain-straightening theorem
Newlander–Nirenberg theorem
== Notes ==
== References == | Wikipedia/Frobenius_theorem_(differential_topology) |
In the mathematical field of differential geometry, Euler's theorem is a result on the curvature of curves on a surface. The theorem establishes the existence of principal curvatures and associated principal directions which give the directions in which the surface curves the most and the least. The theorem is named for Leonhard Euler who proved the theorem in (Euler 1760).
More precisely, let M be a surface in three-dimensional Euclidean space, and p a point on M. A normal plane through p is a plane passing through the point p containing the normal vector to M. Through each (unit) tangent vector to M at p, there passes a normal plane PX which cuts out a curve in M. That curve has a certain curvature κX when regarded as a curve inside PX. Provided not all κX are equal, there is some unit vector X1 for which k1 = κX1 is as large as possible, and another unit vector X2 for which k2 = κX2 is as small as possible. Euler's theorem asserts that X1 and X2 are perpendicular and that, moreover, if X is any vector making an angle θ with X1, then

$$(1)\quad \kappa _{X}=k_{1}\cos ^{2}\theta +k_{2}\sin ^{2}\theta .$$
The quantities k1 and k2 are called the principal curvatures, and X1 and X2 are the corresponding principal directions. Equation (1) is sometimes called Euler's equation (Eisenhart 2004, p. 124).
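As a quick numerical illustration (a sketch of mine, not from the source), Euler's formula κ(θ) = k1 cos²θ + k2 sin²θ can be sampled over a full turn of tangent directions to confirm that the normal curvature ranges exactly between the two principal curvatures:

```python
import math

def normal_curvature(k1, k2, theta):
    """Euler's formula: curvature of the normal section at angle theta
    from the first principal direction."""
    return k1 * math.cos(theta) ** 2 + k2 * math.sin(theta) ** 2

# Sample a full turn of directions for principal curvatures k1 > k2.
k1, k2 = 3.0, -1.0
thetas = [i * math.pi / 1800 for i in range(3600)]
curvs = [normal_curvature(k1, k2, t) for t in thetas]
print(max(curvs), min(curvs))  # extremes occur at theta = 0 and pi/2
```

The maximum is attained along X1 (θ = 0) and the minimum along X2 (θ = π/2), matching the perpendicularity statement of the theorem.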
== See also ==
Differential geometry of surfaces
Dupin indicatrix
== References ==
Eisenhart, Luther P. (2004), A Treatise on the Differential Geometry of Curves and Surfaces, Dover, ISBN 0-486-43820-1 Full 1909 text (now out of copyright)
Euler, Leonhard (1760), "Recherches sur la courbure des surfaces", Mémoires de l'Académie des Sciences de Berlin, 16 (published 1767): 119–143.
Spivak, Michael (1999), A comprehensive introduction to differential geometry, Volume II, Publish or Perish Press, ISBN 0-914098-71-3 | Wikipedia/Euler's_theorem_(differential_geometry) |
In geometry, a geodesic () is a curve representing in some sense the locally shortest path (arc) between two points in a surface, or more generally in a Riemannian manifold. The term also has meaning in any differentiable manifold with a connection. It is a generalization of the notion of a "straight line".
The noun geodesic and the adjective geodetic come from geodesy, the science of measuring the size and shape of Earth, though many of the underlying principles can be applied to any ellipsoidal geometry. In the original sense, a geodesic was the shortest route between two points on the Earth's surface. For a spherical Earth, it is a segment of a great circle (see also great-circle distance). The term has since been generalized to more abstract mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph.
In a Riemannian manifold or submanifold, geodesics are characterised by the property of having vanishing geodesic curvature. More generally, in the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. Applying this to the Levi-Civita connection of a Riemannian metric recovers the previous notion.
Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free falling test particles.
== Introduction ==
A locally shortest path between two given points in a curved space, assumed to be a Riemannian manifold, can be defined by using the equation for the length of a curve (a function f from an open interval of R to the space), and then minimizing this length between the points using the calculus of variations. This has some minor technical problems because there is an infinite-dimensional space of different ways to parameterize the shortest path. It is simpler to restrict the set of curves to those that are parameterized "with constant speed" 1, meaning that the distance from f(s) to f(t) along the curve equals |s−t|. Equivalently, a different quantity may be used, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimization). Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its width, and in so doing will minimize its energy. The resulting shape of the band is a geodesic.
It is possible that several different curves between two points minimize the distance, as is the case for two diametrically opposite points on a sphere. In such a case, any of these curves is a geodesic.
A contiguous segment of a geodesic is again a geodesic.
In general, geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parameterized with "constant speed". Going the "long way round" on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map $t\to t^{2}$ from the unit interval on the real number line to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant.
Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry. In general relativity, geodesics in spacetime describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, an orbiting satellite, or the shape of a planetary orbit are all geodesics in curved spacetime. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and their movement is constrained in various ways.
This article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian manifolds. The article Levi-Civita connection discusses the more general case of a pseudo-Riemannian manifold and geodesic (general relativity) discusses the special case of general relativity in greater detail.
=== Examples ===
The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points, then there are infinitely many shortest paths between them. Geodesics on an ellipsoid behave in a more complicated way than on a sphere; in particular, they are not closed in general (see figure).
=== Triangles ===
A geodesic triangle is formed by the geodesics joining each pair out of three points on a given surface. On the sphere, the geodesics are great circle arcs, forming a spherical triangle.
== Metric geometry ==
In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve γ : I → M from an interval I of the reals to the metric space M is a geodesic if there is a constant v ≥ 0 such that for any t ∈ I there is a neighborhood J of t in I such that for any t1, t2 ∈ J we have
$$d(\gamma (t_{1}),\gamma (t_{2}))=v\left|t_{1}-t_{2}\right|.$$
This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parameterization, i.e. in the above identity v = 1 and
$$d(\gamma (t_{1}),\gamma (t_{2}))=\left|t_{1}-t_{2}\right|.$$
If the last equality is satisfied for all t1, t2 ∈ I, the geodesic is called a minimizing geodesic or shortest path.
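A minimal sketch (my illustration, with curves of my choosing): a constant-speed straight line in the Euclidean plane satisfies the defining identity d(γ(t1), γ(t2)) = v·|t1 − t2| exactly, while the same segment traversed at non-constant speed fails it:

```python
import math
import random

def dist(a, b):
    """Euclidean distance in the plane."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def speed_ratio(gamma, t1, t2):
    """d(gamma(t1), gamma(t2)) / |t1 - t2|; constant for a metric geodesic."""
    return dist(gamma(t1), gamma(t2)) / abs(t1 - t2)

line = lambda t: (3 * t, 4 * t)          # constant speed v = 5: a geodesic
slow = lambda t: (3 * t * t, 4 * t * t)  # same image, non-constant speed

random.seed(0)
pairs = [(random.random(), random.random()) for _ in range(5)]
pairs = [(t1, t2) for t1, t2 in pairs if abs(t1 - t2) > 1e-6]
print([round(speed_ratio(line, t1, t2), 6) for t1, t2 in pairs])  # all 5.0
print({round(speed_ratio(slow, t1, t2), 6) for t1, t2 in pairs})  # varies
```

The second curve traces the same segment, so it minimizes length, but its ratio depends on the pair (t1, t2), so it is not a geodesic in the metric sense.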
In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic. The metric Hopf-Rinow theorem provides situations where a length space is automatically a geodesic space.
Common examples of geodesic metric spaces that are often not manifolds include metric graphs, (locally compact) metric polyhedral complexes, infinite-dimensional pre-Hilbert spaces, and real trees.
== Riemannian geometry ==
In a Riemannian manifold $M$ with metric tensor $g$, the length $L$ of a continuously differentiable curve $\gamma :[a,b]\to M$ is defined by

$$L(\gamma )=\int _{a}^{b}{\sqrt {g_{\gamma (t)}({\dot {\gamma }}(t),{\dot {\gamma }}(t))}}\,dt.$$
The distance $d(p,q)$ between two points $p$ and $q$ of $M$ is defined as the infimum of the length taken over all continuous, piecewise continuously differentiable curves $\gamma :[a,b]\to M$ such that $\gamma (a)=p$ and $\gamma (b)=q$. In Riemannian geometry, all geodesics are locally distance-minimizing paths, but the converse is not true. In fact, only paths that are both locally distance minimizing and parameterized proportionately to arc length are geodesics.
Another equivalent way of defining geodesics on a Riemannian manifold is to define them as the minima of the following action or energy functional

$$E(\gamma )={\frac {1}{2}}\int _{a}^{b}g_{\gamma (t)}({\dot {\gamma }}(t),{\dot {\gamma }}(t))\,dt.$$

All minima of $E$ are also minima of $L$, but $L$ is a bigger set since paths that are minima of $L$ can be arbitrarily re-parameterized (without changing their length), while minima of $E$ cannot.
For a piecewise $C^{1}$ curve (more generally, a $W^{1,2}$ curve), the Cauchy–Schwarz inequality gives

$$L(\gamma )^{2}\leq 2(b-a)E(\gamma )$$

with equality if and only if $g(\gamma ',\gamma ')$ is equal to a constant a.e.; the path should be travelled at constant speed. It happens that minimizers of $E(\gamma )$ also minimize $L(\gamma )$, because they turn out to be affinely parameterized, and the inequality is an equality. The usefulness of this approach is that the problem of seeking minimizers of $E$ is a more robust variational problem. Indeed, $E(\gamma )$ is a "convex function" of $\gamma $, so that within each isotopy class of "reasonable functions", one ought to expect existence, uniqueness, and regularity of minimizers. In contrast, "minimizers" of the functional $L(\gamma )$ are generally not very regular, because arbitrary reparameterizations are allowed.
The Euler–Lagrange equations of motion for the functional $E$ are then given in local coordinates by

$${\frac {d^{2}x^{\lambda }}{dt^{2}}}+\Gamma _{\mu \nu }^{\lambda }{\frac {dx^{\mu }}{dt}}{\frac {dx^{\nu }}{dt}}=0,$$

where $\Gamma _{\mu \nu }^{\lambda }$ are the Christoffel symbols of the metric. This is the geodesic equation, discussed below.
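As a concrete illustration (a sketch of mine, not from the article), the geodesic equation can be integrated numerically on the unit sphere with metric dθ² + sin²θ dφ², whose nonzero Christoffel symbols are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ:

```python
import math

def geodesic_step(state, h):
    """One RK4 step for the sphere's geodesic equation in coordinates
    (theta, phi). state = [theta, phi, dtheta, dphi]."""
    def rhs(s):
        th, ph, dth, dph = s
        # theta'' = sin(th)cos(th) phi'^2 ; phi'' = -2 cot(th) theta' phi'
        return [dth, dph,
                math.sin(th) * math.cos(th) * dph * dph,
                -2.0 * dth * dph / math.tan(th)]
    k1 = rhs(state)
    k2 = rhs([state[i] + 0.5 * h * k1[i] for i in range(4)])
    k3 = rhs([state[i] + 0.5 * h * k2[i] for i in range(4)])
    k4 = rhs([state[i] + h * k3[i] for i in range(4)])
    return [state[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
            for i in range(4)]

def integrate(state, t_end, n=2000):
    h = t_end / n
    for _ in range(n):
        state = geodesic_step(state, h)
    return state

# A geodesic launched along the equator stays on it (a great circle).
eq = integrate([math.pi / 2, 0.0, 0.0, 1.0], math.pi)
print(eq[0], eq[1])  # stays near theta = pi/2, reaches phi = pi

# The speed g = theta'^2 + sin^2(theta) phi'^2 is conserved along geodesics.
speed2 = lambda s: s[2] ** 2 + math.sin(s[0]) ** 2 * s[3] ** 2
s0 = [1.2, 0.0, 0.4, 0.9]
s = integrate(s0, 2.0)
print(speed2(s0), speed2(s))  # nearly equal
```

The conserved speed reflects the fact that solutions of the geodesic equation are automatically affinely parameterized.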
=== Calculus of variations ===
Techniques of the classical calculus of variations can be applied to examine the energy functional $E$. The first variation of energy is defined in local coordinates by

$$\delta E(\gamma )(\varphi )=\left.{\frac {\partial }{\partial t}}\right|_{t=0}E(\gamma +t\varphi ).$$
The critical points of the first variation are precisely the geodesics. The second variation is defined by

$$\delta ^{2}E(\gamma )(\varphi ,\psi )=\left.{\frac {\partial ^{2}}{\partial s\,\partial t}}\right|_{s=t=0}E(\gamma +t\varphi +s\psi ).$$

In an appropriate sense, zeros of the second variation along a geodesic $\gamma $ arise along Jacobi fields. Jacobi fields are thus regarded as variations through geodesics.
By applying variational techniques from classical mechanics, one can also regard geodesics as Hamiltonian flows. They are solutions of the associated Hamilton equations, with (pseudo-)Riemannian metric taken as Hamiltonian.
== Affine geodesics ==
A geodesic on a smooth manifold $M$ with an affine connection $\nabla $ is defined as a curve $\gamma (t)$ such that parallel transport along the curve preserves the tangent vector to the curve, so

$$(1)\quad \nabla _{\dot {\gamma }}{\dot {\gamma }}=0$$

at each point along the curve, where ${\dot {\gamma }}$ is the derivative with respect to $t$. More precisely, in order to define the covariant derivative of ${\dot {\gamma }}$ it is necessary first to extend ${\dot {\gamma }}$ to a continuously differentiable vector field in an open set. However, the resulting value of (1) is independent of the choice of extension.
Using local coordinates on $M$, we can write the geodesic equation (using the summation convention) as

$${\frac {d^{2}\gamma ^{\lambda }}{dt^{2}}}+\Gamma _{\mu \nu }^{\lambda }{\frac {d\gamma ^{\mu }}{dt}}{\frac {d\gamma ^{\nu }}{dt}}=0,$$

where $\gamma ^{\mu }=x^{\mu }\circ \gamma (t)$ are the coordinates of the curve $\gamma (t)$ and $\Gamma _{\mu \nu }^{\lambda }$ are the Christoffel symbols of the connection $\nabla $. This is an ordinary differential equation for the coordinates. It has a unique solution, given an initial position and an initial velocity. Therefore, from the point of view of classical mechanics, geodesics can be thought of as trajectories of free particles in a manifold. Indeed, the equation $\nabla _{\dot {\gamma }}{\dot {\gamma }}=0$ means that the acceleration vector of the curve has no components in the direction of the surface (and therefore it is perpendicular to the tangent plane of the surface at each point of the curve). So, the motion is completely determined by the bending of the surface. This is also the idea of general relativity, where particles move on geodesics and the bending is caused by gravity.
=== Existence and uniqueness ===
The local existence and uniqueness theorem for geodesics states that geodesics on a smooth manifold with an affine connection exist, and are unique. More precisely:
For any point p in M and for any vector V in TpM (the tangent space to M at p) there exists a unique geodesic $\gamma :I\to M$ such that $\gamma (0)=p$ and ${\dot {\gamma }}(0)=V$,
The proof of this theorem follows from the theory of ordinary differential equations, by noticing that the geodesic equation is a second-order ODE. Existence and uniqueness then follow from the Picard–Lindelöf theorem for the solutions of ODEs with prescribed initial conditions. γ depends smoothly on both p and V.
In general, I may not be all of R as for example for an open disc in R2. Any γ extends to all of ℝ if and only if M is geodesically complete.
=== Geodesic flow ===
Geodesic flow is a local R-action on the tangent bundle TM of a manifold M defined in the following way:

$$G^{t}(V)={\dot {\gamma }}_{V}(t)$$

where t ∈ R, V ∈ TM and $\gamma _{V}$ denotes the geodesic with initial data ${\dot {\gamma }}_{V}(0)=V$. Thus, $G^{t}(V)=\exp(tV)$ is the exponential map of the vector tV. A closed orbit of the geodesic flow corresponds to a closed geodesic on M.
On a (pseudo-)Riemannian manifold, the geodesic flow is identified with a Hamiltonian flow on the cotangent bundle. The Hamiltonian is then given by the inverse of the (pseudo-)Riemannian metric, evaluated against the canonical one-form. In particular the flow preserves the (pseudo-)Riemannian metric $g$, i.e.

$$g(G^{t}(V),G^{t}(V))=g(V,V).$$

In particular, when V is a unit vector, $\gamma _{V}$ remains unit speed throughout, so the geodesic flow is tangent to the unit tangent bundle. Liouville's theorem implies invariance of a kinematic measure on the unit tangent bundle.
=== Geodesic spray ===
The geodesic flow defines a family of curves in the tangent bundle. The derivatives of these curves define a vector field on the total space of the tangent bundle, known as the geodesic spray.
More precisely, an affine connection gives rise to a splitting of the double tangent bundle TTM into horizontal and vertical bundles:
$$TTM=H\oplus V.$$
The geodesic spray is the unique horizontal vector field W satisfying
$$\pi _{*}W_{v}=v$$
at each point v ∈ TM; here π∗ : TTM → TM denotes the pushforward (differential) along the projection π : TM → M associated to the tangent bundle.
More generally, the same construction allows one to construct a vector field for any Ehresmann connection on the tangent bundle. For the resulting vector field to be a spray (on the deleted tangent bundle TM \ {0}) it is enough that the connection be equivariant under positive rescalings: it need not be linear. That is, (cf. Ehresmann connection#Vector bundles and covariant derivatives) it is enough that the horizontal distribution satisfy
$$H_{\lambda X}=d(S_{\lambda })_{X}H_{X}$$
for every X ∈ TM \ {0} and λ > 0. Here d(Sλ) is the pushforward along the scalar homothety
$$S_{\lambda }:X\mapsto \lambda X.$$
A particular case of a non-linear connection arising in this manner is that associated to a Finsler manifold.
=== Affine and projective geodesics ===
Equation (1) is invariant under affine reparameterizations; that is, parameterizations of the form
$$t\mapsto at+b$$
where a and b are constant real numbers. Thus apart from specifying a certain class of embedded curves, the geodesic equation also determines a preferred class of parameterizations on each of the curves. Accordingly, solutions of (1) are called geodesics with affine parameter.
An affine connection is determined by its family of affinely parameterized geodesics, up to torsion (Spivak 1999, Chapter 6, Addendum I). The torsion itself does not, in fact, affect the family of geodesics, since the geodesic equation depends only on the symmetric part of the connection. More precisely, if $\nabla ,{\bar {\nabla }}$ are two connections such that the difference tensor

$$D(X,Y)=\nabla _{X}Y-{\bar {\nabla }}_{X}Y$$

is skew-symmetric, then $\nabla $ and ${\bar {\nabla }}$ have the same geodesics, with the same affine parameterizations. Furthermore, there is a unique connection having the same geodesics as $\nabla $, but with vanishing torsion.
Geodesics without a particular parameterization are described by a projective connection.
== Computational methods ==
Efficient solvers for the minimal geodesic problem on surfaces have been proposed by Mitchell, Kimmel, Crane, and others.
== Ribbon test ==
A ribbon "test" is a way of finding a geodesic on a physical surface. The idea is to fit a strip of paper containing a straight line (a ribbon) onto a curved surface as closely as possible without stretching or squishing the ribbon (that is, without changing its internal geometry).
For example, when a ribbon is wound as a ring around a cone, the ribbon would not lie on the cone's surface but stick out, so that circle is not a geodesic on the cone. If the ribbon is adjusted so that all its parts touch the cone's surface, it would give an approximation to a geodesic.
Mathematically the ribbon test can be formulated as finding a mapping
$f:N(\ell )\to S$ of a neighborhood $N$ of a line $\ell $ in a plane into a surface $S$ so that the mapping $f$ "doesn't change the distances around $\ell $ by much"; that is, at the distance $\varepsilon $ from $\ell $ we have

$$g_{N}-f^{*}(g_{S})=O(\varepsilon ^{2})$$

where $g_{N}$ and $g_{S}$ are the metrics on $N$ and $S$.
== Examples of applications ==
While geometric in nature, the idea of a shortest path is so general that it easily finds extensive use in nearly all sciences, and in some other disciplines as well.
=== Topology and geometric group theory ===
In a surface with negative Euler characteristic, any (free) homotopy class determines a unique (closed) geodesic for a hyperbolic metric. These geodesics contribute significantly to the geometric understanding of the action of mapping classes.
Geodesic metric spaces and length spaces behave particularly well under isometric group actions (Švarc-Milnor lemma, Hopf-Rinow theorem, Morse lemma, ...). They are often an adequate framework for generalizing results from Riemannian geometry to constructions that reflect the geometry of a group. For instance, Gromov hyperbolicity can be understood in terms of the thinness of geodesic triangles, and the CAT(0) condition can be stated in terms of angles between geodesics.
=== Probability, statistics and machine learning ===
Optimal transport can be understood as the problem of finding geodesic paths in spaces of measures.
In information geometry, divergences such as the Kullback-Leibler divergence play a role analogous to that of a Riemannian metric, allowing analogies for connections and geodesics.
=== Physics ===
In classical mechanics, trajectories minimize an energy according to the Hamilton-Jacobi equation, which can be regarded as a similar idea to geodesics. In some special cases, the two notions actually coincide.
Relativity theory models spacetime as a Lorentzian manifold, where light follows Lorentzian geodesics.
=== Biology ===
The study of how the nervous system optimizes muscular movement may be approached by endowing a configuration space of the body with a Riemannian metric that measures the effort, so that the problem can be stated in terms of geodesics.
Geodesic distance is often used to measure the length of paths for signal propagation in neurons.
The structure of geodesics in large molecules plays a role in the study of protein folding.
=== Engineering ===
Geodesics serve as the basis to calculate:
geodesic airframes; see geodesic airframe or geodetic airframe
geodesic structures – for example geodesic domes
horizontal distances on or near Earth; see Earth geodesics
mapping images on surfaces, for rendering; see UV mapping
robot motion planning (e.g., when painting car parts); see Shortest path problem
geodesic shortest path (GSP) correction over Poisson surface reconstruction (e.g. in digital dentistry); without GSP, reconstruction often results in self-intersections within the surface
== See also ==
== Notes ==
== References ==
Spivak, Michael (1999), A Comprehensive introduction to differential geometry (Volume 2), Houston, TX: Publish or Perish, ISBN 978-0-914098-71-3
== Further reading ==
Adler, Ronald; Bazin, Maurice; Schiffer, Menahem (1975), Introduction to General Relativity (2nd ed.), New York: McGraw-Hill, ISBN 978-0-07-000423-8. See chapter 2.
Abraham, Ralph H.; Marsden, Jerrold E. (1978), Foundations of mechanics, London: Benjamin-Cummings, Bibcode:1978fome.book.....A, ISBN 978-0-8053-0102-1. See section 2.7.
Jost, Jürgen (2002), Riemannian Geometry and Geometric Analysis, Berlin, New York: Springer-Verlag, ISBN 978-3-540-42627-1. See section 1.4.
Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, vol. 1 (New ed.), Wiley-Interscience, ISBN 0-471-15733-3.
Landau, L. D.; Lifshitz, E. M. (1975), Classical Theory of Fields, Oxford: Pergamon, Bibcode:1975ctf..book.....L, ISBN 978-0-08-018176-9. See section 87.
Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973), Gravitation, W. H. Freeman, ISBN 978-0-7167-0344-0
Ortín, Tomás (2004), Gravity and strings, Cambridge University Press, ISBN 978-0-521-82475-0. Note especially pages 7 and 10.
Volkov, Yu.A. (2001) [1994], "Geodesic line", Encyclopedia of Mathematics, EMS Press.
Weinberg, Steven (1972), Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, New York: John Wiley & Sons, Bibcode:1972gcpa.book.....W, ISBN 978-0-471-92567-5. See chapter 3.
== External links ==
Geodesics Revisited — Introduction to geodesics including two ways of derivation of the equation of geodesic with applications in geometry (geodesic on a sphere and on a torus), mechanics (brachistochrone) and optics (light beam in inhomogeneous medium).
Totally geodesic submanifold at the Manifold Atlas
In mathematics, an almost complex manifold is a smooth manifold equipped with a smooth linear complex structure on each tangent space. Every complex manifold is an almost complex manifold, but there are almost complex manifolds that are not complex manifolds. Almost complex structures have important applications in symplectic geometry.
The concept is due to Charles Ehresmann and Heinz Hopf in the 1940s.
== Formal definition ==
Let M be a smooth manifold. An almost complex structure J on M is a linear complex structure (that is, a linear map which squares to −1) on each tangent space of the manifold, which varies smoothly on the manifold. In other words, we have a smooth tensor field J of degree (1, 1) such that
J² = −1 when regarded as a vector bundle isomorphism J : TM → TM on the tangent bundle. A manifold equipped with an almost complex structure is called an almost complex manifold.
If M admits an almost complex structure, it must be even-dimensional. This can be seen as follows. Suppose M is n-dimensional, and let J : TM → TM be an almost complex structure. If J² = −1 then (det J)² = (−1)ⁿ. But if M is a real manifold, then det J is a real number – thus n must be even if M has an almost complex structure. One can show that it must be orientable as well.
An easy exercise in linear algebra shows that any even dimensional vector space admits a linear complex structure. Therefore, an even dimensional manifold always admits a (1, 1)-rank tensor pointwise (which is just a linear transformation on each tangent space) such that J_p² = −1 at each point p. Only when this local tensor can be patched together to be defined globally does the pointwise linear complex structure yield an almost complex structure, which is then uniquely determined. The possibility of this patching, and therefore existence of an almost complex structure on a manifold M is equivalent to a reduction of the structure group of the tangent bundle from GL(2n, R) to GL(n, C). The existence question is then a purely algebraic topological one and is fairly well understood.
== Examples ==
For every integer n, the flat space R2n admits an almost complex structure. An example for such an almost complex structure is (1 ≤ j, k ≤ 2n):
J_{jk} = −δ_{j,k−1} for odd j, and J_{jk} = δ_{j,k+1} for even j.
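Written out as a matrix, this standard J on R^{2n} consists of 2×2 rotation blocks along the diagonal. A minimal numpy sketch (the function name and index conventions are my own) constructs it and verifies that J² = −I:

```python
import numpy as np

def standard_J(n):
    """Standard almost complex structure on R^{2n} as a 2n x 2n matrix,
    built from 2x2 blocks [[0, -1], [1, 0]] along the diagonal."""
    J = np.zeros((2 * n, 2 * n))
    for m in range(n):
        J[2 * m, 2 * m + 1] = -1.0   # odd row j (1-based): J_{j, j+1} = -1
        J[2 * m + 1, 2 * m] = 1.0    # even row j: J_{j, j-1} = +1
    return J

J = standard_J(3)
assert np.allclose(J @ J, -np.eye(6))   # J squares to minus the identity
```

Each 2×2 block is a quarter turn, which is the linear-algebra model of multiplication by i on R² ≅ C.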
The only spheres which admit almost complex structures are S2 and S6 (Borel & Serre (1953)). In particular, S4 cannot be given an almost complex structure (Ehresmann and Hopf). In the case of S2, the almost complex structure comes from an honest complex structure on the Riemann sphere. The 6-sphere, S6, when considered as the set of unit norm imaginary octonions, inherits an almost complex structure from the octonion multiplication; the question of whether it has a complex structure is known as the Hopf problem, after Heinz Hopf.
== Differential topology of almost complex manifolds ==
Just as a complex structure on a vector space V allows a decomposition of VC into V+ and V− (the eigenspaces of J corresponding to +i and −i, respectively), so an almost complex structure on M allows a decomposition of the complexified tangent bundle TMC (which is the vector bundle of complexified tangent spaces at each point) into TM+ and TM−. A section of TM+ is called a vector field of type (1, 0), while a section of TM− is a vector field of type (0, 1). Thus J corresponds to multiplication by i on the (1, 0)-vector fields of the complexified tangent bundle, and multiplication by −i on the (0, 1)-vector fields.
Just as we build differential forms out of exterior powers of the cotangent bundle, we can build exterior powers of the complexified cotangent bundle (which is canonically isomorphic to the bundle of dual spaces of the complexified tangent bundle). The almost complex structure induces the decomposition of each space of r-forms
Ω^r(M)^C = ⊕_{p+q=r} Ω^{(p,q)}(M).
In other words, each Ωr(M)C admits a decomposition into a sum of Ω(p, q)(M), with r = p + q.
As with any direct sum, there is a canonical projection πp,q from Ωr(M)C to Ω(p,q). We also have the exterior derivative d which maps Ωr(M)C to Ωr+1(M)C. Thus we may use the almost complex structure to refine the action of the exterior derivative to the forms of definite type
∂ = π_{p+1,q} ∘ d and ∂̄ = π_{p,q+1} ∘ d,
so that ∂ is a map which increases the holomorphic part of the type by one (taking forms of type (p, q) to forms of type (p + 1, q)), and ∂̄ is a map which increases the antiholomorphic part of the type by one. These operators are called the Dolbeault operators.
Since the sum of all the projections must be the identity map, we note that the exterior derivative can be written
d = Σ_{r+s=p+q+1} π_{r,s} ∘ d = ∂ + ∂̄ + ⋯.
== Integrable almost complex structures ==
Every complex manifold is itself an almost complex manifold. In local holomorphic coordinates
z^μ = x^μ + iy^μ
one can define the maps
J(∂/∂x^μ) = ∂/∂y^μ,  J(∂/∂y^μ) = −∂/∂x^μ
(just like a counterclockwise rotation of π/2) or
J(∂/∂z^μ) = i ∂/∂z^μ,  J(∂/∂z̄^μ) = −i ∂/∂z̄^μ.
One easily checks that this map defines an almost complex structure. Thus any complex structure on a manifold yields an almost complex structure, which is said to be 'induced' by the complex structure, and the complex structure is said to be 'compatible with' the almost complex structure.
The converse question, whether the almost complex structure implies the existence of a complex structure is much less trivial, and not true in general. On an arbitrary almost complex manifold one can always find coordinates for which the almost complex structure takes the above canonical form at any given point p. In general, however, it is not possible to find coordinates so that J takes the canonical form on an entire neighborhood of p. Such coordinates, if they exist, are called 'local holomorphic coordinates for J'. If M admits local holomorphic coordinates for J around every point then these patch together to form a holomorphic atlas for M giving it a complex structure, which moreover induces J. J is then said to be 'integrable'. If J is induced by a complex structure, then it is induced by a unique complex structure.
Given any linear map A on each tangent space of M; i.e., A is a tensor field of rank (1, 1), then the Nijenhuis tensor is a tensor field of rank (1,2) given by
N_A(X, Y) = −A²[X, Y] + A([AX, Y] + [X, AY]) − [AX, AY],
or, for the usual case of an almost complex structure A = J with J² = −Id,
N_J(X, Y) = [X, Y] + J([JX, Y] + [X, JY]) − [JX, JY].
The individual expressions on the right depend on the choice of the smooth vector fields X and Y, but the left side actually depends only on the pointwise values of X and Y, which is why NA is a tensor. This is also clear from the component formula
−(N_A)^k_{ij} = A^m_i ∂_m A^k_j − A^m_j ∂_m A^k_i − A^k_m (∂_i A^m_j − ∂_j A^m_i).
In terms of the Frölicher–Nijenhuis bracket, which generalizes the Lie bracket of vector fields, the Nijenhuis tensor NA is just one-half of [A, A].
The Newlander–Nirenberg theorem states that an almost complex structure J is integrable if and only if NJ = 0. The compatible complex structure is unique, as discussed above. Since the existence of an integrable almost complex structure is equivalent to the existence of a complex structure, this is sometimes taken as the definition of a complex structure.
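The component formula can be checked symbolically. The sketch below (my own construction, not from the source) uses a made-up position-dependent J on R² satisfying J² = −I pointwise; since every almost complex structure in two dimensions is integrable, its Nijenhuis tensor must vanish identically, which the formula confirms:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
# A position-dependent almost complex structure on R^2 (hypothetical example):
# J = [[x, -(1+x^2)], [1, -x]] satisfies J^2 = -I at every point.
J = sp.Matrix([[x, -(1 + x**2)], [1, -x]])
assert (J * J + sp.eye(2)).applyfunc(sp.expand) == sp.zeros(2, 2)

def nijenhuis(A, coords):
    """Component formula: -(N_A)^k_{ij} = A^m_i d_m A^k_j - A^m_j d_m A^k_i
    - A^k_m (d_i A^m_j - d_j A^m_i), with A[k, i] standing for A^k_i."""
    n = len(coords)
    N = {}
    for k in range(n):
        for i in range(n):
            for j in range(n):
                s = sum(
                    A[m, i] * sp.diff(A[k, j], coords[m])
                    - A[m, j] * sp.diff(A[k, i], coords[m])
                    - A[k, m] * (sp.diff(A[m, j], coords[i])
                                 - sp.diff(A[m, i], coords[j]))
                    for m in range(n)
                )
                N[(k, i, j)] = sp.simplify(-s)
    return N

# In dimension 2 every almost complex structure is integrable, so N_J = 0:
N = nijenhuis(J, coords)
assert all(v == 0 for v in N.values())
```

In four or more dimensions the same function would detect non-integrability by producing nonzero components.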
There are several other criteria which are equivalent to the vanishing of the Nijenhuis tensor, and which therefore furnish methods for checking the integrability of an almost complex structure (and in fact each of these can be found in the literature):
The Lie bracket of any two (1, 0)-vector fields is again of type (1, 0)
d = ∂ + ∂̄
∂̄² = 0.
Any of these conditions implies the existence of a unique compatible complex structure.
The existence of an almost complex structure is a topological question and is relatively easy to answer, as discussed above. The existence of an integrable almost complex structure, on the other hand, is a much more difficult analytic question. For example, it is still not known whether S6 admits an integrable almost complex structure, despite a long history of ultimately unverified claims. Smoothness issues are important. For real-analytic J, the Newlander–Nirenberg theorem follows from the Frobenius theorem; for C∞ (and less smooth) J, analysis is required (with more difficult techniques as the regularity hypothesis weakens).
== Compatible triples ==
Suppose M is equipped with a symplectic form ω, a Riemannian metric g, and an almost complex structure J. Since ω and g are nondegenerate, each induces a bundle isomorphism TM → T*M, where the first map, denoted φω, is given by the interior product φω(u) = iuω = ω(u, •) and the other, denoted φg, is given by the analogous operation for g. With this understood, the three structures (g, ω, J) form a compatible triple when each structure can be specified by the two others as follows:
g(u, v) = ω(u, Jv)
ω(u, v) = g(Ju, v)
J(u) = (φg)−1(φω(u)).
In each of these equations, the two structures on the right hand side are called compatible when the corresponding construction yields a structure of the type specified. For example, ω and J are compatible if and only if ω(•, J•) is a Riemannian metric. The bundle on M whose sections are the almost complex structures compatible to ω has contractible fibres: the complex structures on the tangent fibres compatible with the restriction to the symplectic forms.
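For the standard flat structures on R², these compatibility identities can be verified directly; the numpy sketch below uses sign conventions of my own choosing (ω(u, v) = u₁v₂ − u₂v₁, J a quarter turn) rather than any fixed in the source:

```python
import numpy as np

# Standard compatible triple on R^2: Euclidean metric g, symplectic form ω,
# and J = counterclockwise rotation by 90 degrees.
g = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # ω(u, v) = u^T Ω v

rng = np.random.default_rng(1)
u, v = rng.normal(size=2), rng.normal(size=2)

# g(u, v) = ω(u, Jv) and ω(u, v) = g(Ju, v):
assert np.isclose(u @ g @ v, u @ omega @ (J @ v))
assert np.isclose(u @ omega @ v, (J @ u) @ g @ v)
```

Any one of the three structures can then be recovered from the other two, as in J = (φ_g)⁻¹ ∘ φ_ω.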
Using elementary properties of the symplectic form ω, one can show that a compatible almost complex structure J is an almost Kähler structure for the Riemannian metric ω(u, Jv). Also, if J is integrable, then (M, ω, J) is a Kähler manifold.
These triples are related to the 2 out of 3 property of the unitary group.
== Generalized almost complex structure ==
Nigel Hitchin introduced the notion of a generalized almost complex structure on the manifold M, which was elaborated in the doctoral dissertations of his students Marco Gualtieri and Gil Cavalcanti. An ordinary almost complex structure is a choice of a half-dimensional subspace of each fiber of the complexified tangent bundle TM. A generalized almost complex structure is a choice of a half-dimensional isotropic subspace of each fiber of the direct sum of the complexified tangent and cotangent bundles. In both cases one demands that the direct sum of the subbundle and its complex conjugate yield the original bundle.
An almost complex structure integrates to a complex structure if the half-dimensional subspace is closed under the Lie bracket. A generalized almost complex structure integrates to a generalized complex structure if the subspace is closed under the Courant bracket. If furthermore this half-dimensional space is the annihilator of a nowhere vanishing pure spinor then M is a generalized Calabi–Yau manifold.
== See also ==
Almost quaternionic manifold – Concept in geometry
Chern class – Characteristic classes of vector bundles
Frölicher–Nijenhuis bracket
Kähler manifold – Manifold with Riemannian, complex and symplectic structure
Poisson manifold – Mathematical structure in differential geometry
Rizza manifold – Almost complex manifold equipped with a compatible Finsler structure
Symplectic manifold – Type of manifold in differential geometry
== References ==
Newlander, August; Nirenberg, Louis (1957). "Complex analytic coordinates in almost complex manifolds". Annals of Mathematics. Second Series. 65 (3): 391–404. doi:10.2307/1970051. ISSN 0003-486X. JSTOR 1970051. MR 0088770.
Cannas da Silva, Ana (2001). Lectures on Symplectic Geometry. Springer. ISBN 3-540-42195-5. Information on compatible triples, Kähler and Hermitian manifolds, etc.
Wells, Raymond O. (1980). Differential Analysis on Complex Manifolds. New York: Springer-Verlag. ISBN 0-387-90419-0. Short section which introduces standard basic material.
Rubei, Elena (2014). Algebraic Geometry, a concise dictionary. Berlin/Boston: Walter De Gruyter. ISBN 978-3-11-031622-3.
Borel, Armand; Serre, Jean-Pierre (1953). "Groupes de Lie et puissances réduites de Steenrod". American Journal of Mathematics. 75 (3): 409–448. doi:10.2307/2372495. JSTOR 2372495. MR 0058213.
In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is.
== Usage in science and engineering ==
In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions: a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero-order approximation is also common. Cardinal numerals are occasionally used in expressions like an order-zero approximation, an order-one approximation, etc.
The omission of the word order leads to phrases that have less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity. The phrase to a zeroth approximation indicates a wild guess. The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or to the order of magnitude. However, this may be confusing, as these formal expressions do not directly refer to the order of derivatives.
The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. A higher order of approximation is not always more useful than a lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy.
In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree. The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion. This affects accuracy. The error usually varies within the interval. Thus the terms (zeroth, first, second, etc.) used above do not directly give information about percent error or significant figures. For example, in the Taylor series expansion of the exponential function,
e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + …,
the zeroth-order term is 1, the first-order term is x, the second-order term is x²/2, and so forth. If |x| < 1, each higher-order term is smaller than the previous one. If |x| ≪ 1, then the first-order approximation, e^x ≈ 1 + x, is often sufficient. But at x = 1, the first-order term, x, is not smaller than the zeroth-order term, 1. And at x = 2, even the third-order term, 2³/3! = 4/3, is greater than the zeroth-order term.
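This behavior is easy to reproduce numerically. The helper below (a sketch; the function name is my own) sums the Taylor series of e^x up to a given order:

```python
import math

def exp_taylor(x, order):
    """Partial sum of the Taylor series of e^x up to the given order."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

# For |x| << 1 the first-order approximation 1 + x is already close:
assert abs(exp_taylor(0.01, 1) - math.exp(0.01)) < 1e-3
# At x = 2 low orders are poor, but the error shrinks as the order grows:
errs = [abs(exp_taylor(2.0, n) - math.exp(2.0)) for n in range(6)]
assert all(errs[n + 1] < errs[n] for n in range(5))
```

Raising the order always helps eventually for e^x (the series converges everywhere), but how many terms are needed for a given accuracy depends strongly on x.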
=== Zeroth-order ===
Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined.
A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0. For example,
x = [0, 1, 2],  y = [3, 3, 5],  y ~ f(x) = 3.67
could be – if data point accuracy were reported – an approximate fit to the data, obtained by simply averaging the y values. However, data points represent results of measurements and they do differ from points in Euclidean geometry. Thus quoting an average value containing three significant digits in the output with just one significant digit in the input data could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth-order approximation could at best yield the result for y of ~3.7 ± 2.0 in the interval of x from −0.5 to 2.5, considering the standard deviation.
If the data points are reported as x = [0.00, 1.00, 2.00] and y = [3.00, 3.00, 5.00], the zeroth-order approximation results in y ~ f(x) = 3.67.
The accuracy of the result justifies an attempt to derive a multiplicative function for that average, for example, y ~ x + 2.67.
One should be careful though, because the multiplicative function will be defined for the whole interval. If only three data points are available, one has no knowledge about the rest of the interval, which may be a large part of it. This means that y could have another component which equals 0 at the ends and in the middle of the interval. A number of functions having this property are known, for example y = sin πx. Taylor series are useful and help predict analytic solutions, but the approximations alone do not provide conclusive evidence.
=== First-order ===
First-order approximation is the term scientists use for a slightly better answer. Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4×103, or four thousand, residents"). In the case of a first-order approximation, at least one number given is exact. In the zeroth-order example above, the quantity "a few" was given, but in the first-order example, the number "4" is given.
A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation, straight line with a slope: a polynomial of degree 1. For example:
x = [0.00, 1.00, 2.00],  y = [3.00, 3.00, 5.00],  y ~ f(x) = x + 2.67
is an approximate fit to the data.
In this example there is a zeroth-order approximation that is the same as the first-order, but the method of getting there is different; i.e. a wild stab in the dark at a relationship happened to be as good as an "educated guess".
=== Second-order ===
Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3.9×103, or thirty-nine hundred, residents") is generally given. As in the examples above, the term "2nd order" refers to the number of exact numerals given for the imprecise quantity. In this case, "3" and "9" are given as the two successive levels of precision, instead of simply the "4" from the first order, or "a few" from the zeroth order found in the examples above.
A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically, a parabola: a polynomial of degree 2. For example:
x = [0.00, 1.00, 2.00],  y = [3.00, 3.00, 5.00],  y ~ f(x) = x² − x + 3
is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, the data points for most of the interval are not available, which advises caution (see "zeroth order").
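The zeroth-, first-, and second-order fits used in these examples can be reproduced with numpy's polynomial least-squares fit (a sketch; with three points, the degree-2 fit passes through every point exactly):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

p0 = np.polyfit(x, y, 0)   # zeroth order: the mean of y, ~3.67
p1 = np.polyfit(x, y, 1)   # first order: x + 2.67
p2 = np.polyfit(x, y, 2)   # second order: x^2 - x + 3 (exact fit)

assert np.isclose(p0[0], y.mean())
assert np.allclose(p1, [1.0, 8.0 / 3.0])
assert np.allclose(p2, [1.0, -1.0, 3.0])
```

Each coefficient array lists the polynomial from the highest degree down, matching the fits quoted in the text.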
=== Higher-order ===
While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number.
Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation.
== Colloquial usage ==
These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g. "Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it." or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration.") In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect. The terminology, in this context, represents a high level of precision required to account for an effect which is inferred to be very small when compared to the overall subject matter. The higher the order, the more precision is required to measure the effect, and therefore the smallness of the effect in comparison to the overall measurement.
== See also ==
Linearization
Perturbation theory
Taylor series
Chapman–Enskog method
Big O notation
Order of accuracy
== References ==
In algebraic geometry and theoretical physics, mirror symmetry is a relationship between geometric objects called Calabi–Yau manifolds. The term refers to a situation where two Calabi–Yau manifolds look very different geometrically but are nevertheless equivalent when employed as extra dimensions of string theory.
Early cases of mirror symmetry were discovered by physicists. Mathematicians became interested in this relationship around 1990 when Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that it could be used as a tool in enumerative geometry, a branch of mathematics concerned with counting the number of solutions to geometric questions. Candelas and his collaborators showed that mirror symmetry could be used to count rational curves on a Calabi–Yau manifold, thus solving a longstanding problem. Although the original approach to mirror symmetry was based on physical ideas that were not understood in a mathematically precise way, some of its mathematical predictions have since been proven rigorously.
Today, mirror symmetry is a major research topic in pure mathematics, and mathematicians are working to develop a mathematical understanding of the relationship based on physicists' intuition. Mirror symmetry is also a fundamental tool for doing calculations in string theory, and it has been used to understand aspects of quantum field theory, the formalism that physicists use to describe elementary particles. Major approaches to mirror symmetry include the homological mirror symmetry program of Maxim Kontsevich, and the SYZ conjecture of Andrew Strominger, Shing-Tung Yau, and Eric Zaslow and its algebraic analog — the Gross-Siebert program of Mark Gross and Bernd Siebert.
== Overview ==
=== Strings and compactification ===
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. These strings look like small segments or loops of ordinary string. String theory describes how strings propagate through space and interact with each other. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. Splitting and recombination of strings correspond to particle emission and absorption, giving rise to the interactions between particles.
There are notable differences between the world described by string theory and the everyday world. In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time (later/earlier). Thus, in the language of modern physics, one says that spacetime is four-dimensional. One of the peculiar features of string theory is that it requires extra dimensions of spacetime for its mathematical consistency. In superstring theory, the version of the theory that incorporates a theoretical idea called supersymmetry, there are six extra dimensions of spacetime in addition to the four that are familiar from everyday experience.
One of the goals of current research in string theory is to develop models in which the strings represent particles observed in high energy physics experiments. For such a model to be consistent with observations, its spacetime must be four-dimensional at the relevant distance scales, so one must look for ways to restrict the extra dimensions to smaller scales. In most realistic models of physics based on string theory, this is accomplished by a process called compactification, in which the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions.
=== Calabi–Yau manifolds ===
Compactification can be used to construct models in which spacetime is effectively four-dimensional. However, not every way of compactifying the extra dimensions produces a model with the right properties to describe nature. In a viable model of particle physics, the compact extra dimensions must be shaped like a Calabi–Yau manifold. A Calabi–Yau manifold is a special space which is typically taken to be six-dimensional in applications to string theory. It is named after mathematicians Eugenio Calabi and Shing-Tung Yau.
After Calabi–Yau manifolds had entered physics as a way to compactify extra dimensions, many physicists began studying these manifolds. In the late 1980s, Lance Dixon, Wolfgang Lerche, Cumrun Vafa, and Nick Warner noticed that given such a compactification of string theory, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, two different versions of string theory called type IIA string theory and type IIB can be compactified on completely different Calabi–Yau manifolds giving rise to the same physics. In this situation, the manifolds are called mirror manifolds, and the relationship between the two physical theories is called mirror symmetry.
The mirror symmetry relationship is a particular example of what physicists call a physical duality. In general, the term physical duality refers to a situation where two seemingly different physical theories turn out to be equivalent in a nontrivial way. If one theory can be transformed so it looks just like another theory, the two are said to be dual under that transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena. Such dualities play an important role in modern physics, especially in string theory.
Regardless of whether Calabi–Yau compactifications of string theory provide a correct description of nature, the existence of the mirror duality between different string theories has significant mathematical consequences. The Calabi–Yau manifolds used in string theory are of interest in pure mathematics, and mirror symmetry allows mathematicians to solve problems in enumerative algebraic geometry, a branch of mathematics concerned with counting the numbers of solutions to geometric questions. A classical problem of enumerative geometry is to enumerate the rational curves on a Calabi–Yau manifold such as the one illustrated above. By applying mirror symmetry, mathematicians have translated this problem into an equivalent problem for the mirror Calabi–Yau, which turns out to be easier to solve.
In physics, mirror symmetry is justified on physical grounds. However, mathematicians generally require rigorous proofs that do not require an appeal to physical intuition. From a mathematical point of view, the version of mirror symmetry described above is still only a conjecture, but there is another version of mirror symmetry in the context of topological string theory, a simplified version of string theory introduced by Edward Witten, which has been rigorously proven by mathematicians. In the context of topological string theory, mirror symmetry states that two theories called the A-model and B-model are equivalent in the sense that there is a duality relating them. Today mirror symmetry is an active area of research in mathematics, and mathematicians are working to develop a more complete mathematical understanding of mirror symmetry based on physicists' intuition.
== History ==
The idea of mirror symmetry can be traced back to the mid-1980s, when it was noticed that a string propagating on a circle of radius $R$ is physically equivalent to a string propagating on a circle of radius $1/R$ in appropriate units. This phenomenon is now known as T-duality and is understood to be closely related to mirror symmetry. In a paper from 1985, Philip Candelas, Gary Horowitz, Andrew Strominger, and Edward Witten showed that by compactifying string theory on a Calabi–Yau manifold, one obtains a theory roughly similar to the standard model of particle physics that also consistently incorporates an idea called supersymmetry. Following this development, many physicists began studying Calabi–Yau compactifications, hoping to construct realistic models of particle physics based on string theory. Cumrun Vafa and others noticed that given such a physical model, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, there are two Calabi–Yau manifolds that give rise to the same physics.
By studying the relationship between Calabi–Yau manifolds and certain conformal field theories called Gepner models, Brian Greene and Ronen Plesser found nontrivial examples of the mirror relationship. Further evidence for this relationship came from the work of Philip Candelas, Monika Lynker, and Rolf Schimmrigk, who surveyed a large number of Calabi–Yau manifolds by computer and found that they came in mirror pairs.
Mathematicians became interested in mirror symmetry around 1990 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to solve problems in enumerative geometry that had resisted solution for decades or more. These results were presented to mathematicians at a conference at the Mathematical Sciences Research Institute (MSRI) in Berkeley, California in May 1991. During this conference, it was noticed that one of the numbers Candelas had computed for the counting of rational curves disagreed with the number obtained by Norwegian mathematicians Geir Ellingsrud and Stein Arild Strømme using ostensibly more rigorous techniques. Many mathematicians at the conference assumed that Candelas's work contained a mistake since it was not based on rigorous mathematical arguments. However, after examining their solution, Ellingsrud and Strømme discovered an error in their computer code and, upon fixing the code, they got an answer that agreed with the one obtained by Candelas and his collaborators.
In 1990, Edward Witten introduced topological string theory, a simplified version of string theory, and physicists showed that there is a version of mirror symmetry for topological string theory. This statement about topological string theory is usually taken as the definition of mirror symmetry in the mathematical literature. In an address at the International Congress of Mathematicians in 1994, mathematician Maxim Kontsevich presented a new mathematical conjecture based on the physical idea of mirror symmetry in topological string theory. Known as homological mirror symmetry, this conjecture formalizes mirror symmetry as an equivalence of two mathematical structures: the derived category of coherent sheaves on a Calabi–Yau manifold and the Fukaya category of its mirror.
Also around 1995, Kontsevich analyzed the results of Candelas, which gave a general formula for the problem of counting rational curves on a quintic threefold, and he reformulated these results as a precise mathematical conjecture. In 1996, Alexander Givental posted a paper that claimed to prove this conjecture of Kontsevich. Initially, many mathematicians found this paper hard to understand, so there were doubts about its correctness. Subsequently, Bong Lian, Kefeng Liu, and Shing-Tung Yau published an independent proof in a series of papers. Despite controversy over who had published the first proof, these papers are now collectively seen as providing a mathematical proof of the results originally obtained by physicists using mirror symmetry. In 2000, Kentaro Hori and Cumrun Vafa gave another physical proof of mirror symmetry based on T-duality.
Work on mirror symmetry continues today with major developments in the context of strings on surfaces with boundaries. In addition, mirror symmetry has been related to many active areas of mathematics research, such as the McKay correspondence, topological quantum field theory, and the theory of stability conditions. At the same time, basic questions continue to vex. For example, mathematicians still lack an understanding of how to construct examples of mirror Calabi–Yau pairs, though there has been progress in understanding this issue.
== Applications ==
=== Enumerative geometry ===
Many of the important mathematical applications of mirror symmetry belong to the branch of mathematics called enumerative geometry. In enumerative geometry, one is interested in counting the number of solutions to geometric questions, typically using the techniques of algebraic geometry. One of the earliest problems of enumerative geometry was posed around the year 200 BCE by the ancient Greek mathematician Apollonius, who asked how many circles in the plane are tangent to three given circles. In general, the solution to the problem of Apollonius is that there are eight such circles.
Enumerative problems in mathematics often concern a class of geometric objects called algebraic varieties which are defined by the vanishing of polynomials. For example, the Clebsch cubic (see the illustration) is defined using a certain polynomial of degree three in four variables. A celebrated result of nineteenth-century mathematicians Arthur Cayley and George Salmon states that there are exactly 27 straight lines that lie entirely on such a surface.
Generalizing this problem, one can ask how many lines can be drawn on a quintic Calabi–Yau manifold, such as the one illustrated above, which is defined by a polynomial of degree five. This problem was solved by the nineteenth-century German mathematician Hermann Schubert, who found that there are exactly 2,875 such lines. In 1986, geometer Sheldon Katz proved that the number of curves, such as circles, that are defined by polynomials of degree two and lie entirely in the quintic is 609,250.
By the year 1991, most of the classical problems of enumerative geometry had been solved and interest in enumerative geometry had begun to diminish. According to mathematician Mark Gross, "As the old problems had been solved, people went back to check Schubert's numbers with modern techniques, but that was getting pretty stale." The field was reinvigorated in May 1991 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to count the number of degree three curves on a quintic Calabi–Yau. Candelas and his collaborators found that these six-dimensional Calabi–Yau manifolds can contain exactly 317,206,375 curves of degree three.
In addition to counting degree-three curves on a quintic three-fold, Candelas and his collaborators obtained a number of more general results for counting rational curves which went far beyond the results obtained by mathematicians. Although the methods used in this work were based on physical intuition, mathematicians have gone on to prove rigorously some of the predictions of mirror symmetry. In particular, the enumerative predictions of mirror symmetry have now been rigorously proven.
=== Theoretical physics ===
In addition to its applications in enumerative geometry, mirror symmetry is a fundamental tool for doing calculations in string theory. In the A-model of topological string theory, physically interesting quantities are expressed in terms of infinitely many numbers called Gromov–Witten invariants, which are extremely difficult to compute. In the B-model, the calculations can be reduced to classical integrals and are much easier. By applying mirror symmetry, theorists can translate difficult calculations in the A-model into equivalent but technically easier calculations in the B-model. These calculations are then used to determine the probabilities of various physical processes in string theory. Mirror symmetry can be combined with other dualities to translate calculations in one theory into equivalent calculations in a different theory. By outsourcing calculations to different theories in this way, theorists can calculate quantities that are impossible to calculate without the use of dualities.
Outside of string theory, mirror symmetry is used to understand aspects of quantum field theory, the formalism that physicists use to describe elementary particles. For example, gauge theories are a class of highly symmetric physical theories appearing in the standard model of particle physics and other parts of theoretical physics. Some gauge theories which are not part of the standard model, but which are nevertheless important for theoretical reasons, arise from strings propagating on a nearly singular background. For such theories, mirror symmetry is a useful computational tool. Indeed, mirror symmetry can be used to perform calculations in an important gauge theory in four spacetime dimensions that was studied by Nathan Seiberg and Edward Witten and is also familiar in mathematics in the context of Donaldson invariants. There is also a generalization of mirror symmetry called 3D mirror symmetry which relates pairs of quantum field theories in three spacetime dimensions.
== Approaches ==
=== Homological mirror symmetry ===
In string theory and related theories in physics, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane.
In string theory, a string may be open (forming a segment with two endpoints) or closed (forming a closed loop). D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to a condition that it satisfies, the Dirichlet boundary condition.
Mathematically, branes can be described using the notion of a category. This is a mathematical structure consisting of objects, and for any pair of objects, a set of morphisms between them. In most examples, the objects are mathematical structures (such as sets, vector spaces, or topological spaces) and the morphisms are functions between these structures. One can also consider categories where the objects are D-branes and the morphisms between two branes $\alpha$ and $\beta$ are states of open strings stretched between $\alpha$ and $\beta$.
In the B-model of topological string theory, the D-branes are complex submanifolds of a Calabi–Yau together with additional data that arise physically from having charges at the endpoints of strings. Intuitively, one can think of a submanifold as a surface embedded inside the Calabi–Yau, although submanifolds can also exist in dimensions different from two. In mathematical language, the category having these branes as its objects is known as the derived category of coherent sheaves on the Calabi–Yau. In the A-model, the D-branes can again be viewed as submanifolds of a Calabi–Yau manifold. Roughly speaking, they are what mathematicians call special Lagrangian submanifolds. This means among other things that they have half the dimension of the space in which they sit, and they are length-, area-, or volume-minimizing. The category having these branes as its objects is called the Fukaya category.
The derived category of coherent sheaves is constructed using tools from complex geometry, a branch of mathematics that describes geometric curves in algebraic terms and solves geometric problems using algebraic equations. On the other hand, the Fukaya category is constructed using symplectic geometry, a branch of mathematics that arose from studies of classical physics. Symplectic geometry studies spaces equipped with a symplectic form, a mathematical tool that can be used to compute area in two-dimensional examples.
The homological mirror symmetry conjecture of Maxim Kontsevich states that the derived category of coherent sheaves on one Calabi–Yau manifold is equivalent in a certain sense to the Fukaya category of its mirror. This equivalence provides a precise mathematical formulation of mirror symmetry in topological string theory. In addition, it provides an unexpected bridge between two branches of geometry, namely complex and symplectic geometry.
=== Strominger–Yau–Zaslow conjecture ===
Another approach to understanding mirror symmetry was suggested by Andrew Strominger, Shing-Tung Yau, and Eric Zaslow in 1996. According to their conjecture, now known as the SYZ conjecture, mirror symmetry can be understood by dividing a Calabi–Yau manifold into simpler pieces and then transforming them to get the mirror Calabi–Yau.
The simplest example of a Calabi–Yau manifold is a two-dimensional torus or donut shape. Consider a circle on this surface that goes once through the hole of the donut. An example is the red circle in the figure. There are infinitely many circles like it on a torus; in fact, the entire surface is a union of such circles.
One can choose an auxiliary circle $B$ (the pink circle in the figure) such that each of the infinitely many circles decomposing the torus passes through a point of $B$. This auxiliary circle is said to parametrize the circles of the decomposition, meaning that there is a correspondence between them and points of $B$. The circle $B$ is more than just a list, however, because it also determines how these circles are arranged on the torus. This auxiliary space plays an important role in the SYZ conjecture.
The idea of dividing a torus into pieces parametrized by an auxiliary space can be generalized. Increasing the dimension from two to four real dimensions, the Calabi–Yau becomes a K3 surface. Just as the torus was decomposed into circles, a four-dimensional K3 surface can be decomposed into two-dimensional tori. In this case the space $B$ is an ordinary sphere. Each point on the sphere corresponds to one of the two-dimensional tori, except for twenty-four "bad" points corresponding to "pinched" or singular tori.
The Calabi–Yau manifolds of primary interest in string theory have six dimensions. One can divide such a manifold into 3-tori (three-dimensional objects that generalize the notion of a torus) parametrized by a 3-sphere $B$ (a three-dimensional generalization of a sphere). Each point of $B$ corresponds to a 3-torus, except for infinitely many "bad" points which form a grid-like pattern of segments on the Calabi–Yau and correspond to singular tori.
Once the Calabi–Yau manifold has been decomposed into simpler parts, mirror symmetry can be understood in an intuitive geometric way. As an example, consider the torus described above. Imagine that this torus represents the "spacetime" for a physical theory. The fundamental objects of this theory will be strings propagating through the spacetime according to the rules of quantum mechanics. One of the basic dualities of string theory is T-duality, which states that a string propagating around a circle of radius $R$ is equivalent to a string propagating around a circle of radius $1/R$ in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum $p$ and winding number $n$ in one description, it will have momentum $n$ and winding number $p$ in the dual description. By applying T-duality simultaneously to all of the circles that decompose the torus, the radii of these circles become inverted, and one is left with a new torus which is "fatter" or "skinnier" than the original. This torus is the mirror of the original Calabi–Yau.
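This exchange of momentum and winding can be sketched numerically. The following is an illustrative check, not a formula from this article: it uses the standard textbook momentum/winding contribution to the closed-string mass-squared on a circle, with the string scale set to 1 and oscillator contributions omitted, and verifies that the spectrum is unchanged under swapping $p \leftrightarrow n$ together with $R \to 1/R$.

```python
# Momentum/winding contribution to the closed-string mass-squared on a circle
# of radius R (string scale set to 1, oscillator contributions omitted):
#     M^2 = (p / R)^2 + (n * R)^2
# T-duality swaps momentum p with winding n while inverting the radius R -> 1/R.

def mass_squared(p, n, R):
    """Contribution of momentum p and winding n to M^2 on a circle of radius R."""
    return (p / R) ** 2 + (n * R) ** 2

R = 2.0
for p in range(-3, 4):
    for n in range(-3, 4):
        original = mass_squared(p, n, R)
        dual = mass_squared(n, p, 1.0 / R)  # swapped quantum numbers, inverted radius
        assert abs(original - dual) < 1e-12

print("spectrum is T-duality invariant for R =", R)
```

The check passes state by state: the dual description assigns the same mass to the string with the roles of momentum and winding exchanged.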
T-duality can be extended from circles to the two-dimensional tori appearing in the decomposition of a K3 surface or to the three-dimensional tori appearing in the decomposition of a six-dimensional Calabi–Yau manifold. In general, the SYZ conjecture states that mirror symmetry is equivalent to the simultaneous application of T-duality to these tori. In each case, the space $B$ provides a kind of blueprint that describes how these tori are assembled into a Calabi–Yau manifold.
== See also ==
Donaldson–Thomas theory
Wall-crossing
== Notes ==
== References ==
== Further reading ==
=== Popularizations ===
Yau, Shing-Tung; Nadis, Steve (2010). The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions. Basic Books. ISBN 978-0-465-02023-2.
Zaslow, Eric (2005). "Physmatics". arXiv:physics/0506153.
Zaslow, Eric (2008). "Mirror Symmetry". In Gowers, Timothy (ed.). The Princeton Companion to Mathematics. Princeton University Press. ISBN 978-0-691-11880-2.
=== Textbooks ===
Aspinwall, Paul; Bridgeland, Tom; Craw, Alastair; Douglas, Michael; Gross, Mark; Kapustin, Anton; Moore, Gregory; Segal, Graeme; Szendröi, Balázs; Wilson, P.M.H., eds. (2009). Dirichlet Branes and Mirror Symmetry. American Mathematical Society. ISBN 978-0-8218-3848-8.
Cox, David; Katz, Sheldon (1999). Mirror symmetry and algebraic geometry. American Mathematical Society. ISBN 978-0-8218-2127-5.
Hori, Kentaro; Katz, Sheldon; Klemm, Albrecht; Pandharipande, Rahul; Thomas, Richard; Vafa, Cumrun; Vakil, Ravi; Zaslow, Eric, eds. (2003). Mirror Symmetry (PDF). American Mathematical Society. ISBN 0-8218-2955-6. Archived from the original (PDF) on 2006-09-19. | Wikipedia/Mirror_symmetry_(string_theory) |
In differential geometry, a discipline within mathematics, a distribution on a manifold $M$ is an assignment $x \mapsto \Delta_x \subseteq T_xM$ of vector subspaces satisfying certain properties. In the most common situations, a distribution is asked to be a vector subbundle of the tangent bundle $TM$.
Distributions satisfying a further integrability condition give rise to foliations, i.e. partitions of the manifold into smaller submanifolds. These notions have applications in many fields of mathematics, including integrable systems, Poisson geometry, non-commutative geometry, sub-Riemannian geometry, and differential topology.
Even though they share the same name, distributions presented in this article have nothing to do with distributions in the sense of analysis.
== Definition ==
Let $M$ be a smooth manifold; a (smooth) distribution $\Delta$ assigns to any point $x \in M$ a vector subspace $\Delta_x \subset T_xM$ in a smooth way. More precisely, $\Delta$ consists of a collection $\{\Delta_x \subset T_xM\}_{x \in M}$ of vector subspaces with the following property: around any $x \in M$ there exist a neighbourhood $N_x \subset M$ and a collection of vector fields $X_1, \ldots, X_k$ such that, for any point $y \in N_x$,
$$\operatorname{span}\{X_1(y), \ldots, X_k(y)\} = \Delta_y.$$
The set of smooth vector fields $\{X_1, \ldots, X_k\}$ is also called a local basis of $\Delta$. These vector fields need not be linearly independent at every point, and so do not formally form a basis at every point; the term local generating set is therefore more appropriate. The notation $\Delta$ is used to denote both the assignment $x \mapsto \Delta_x$ and the subset $\Delta = \coprod_{x \in M} \Delta_x \subseteq TM$.
=== Regular distributions ===
Given an integer $n \leq m = \dim(M)$, a smooth distribution $\Delta$ on $M$ is called regular of rank $n$ if all the subspaces $\Delta_x \subset T_xM$ have the same dimension $n$. Locally, this amounts to asking that every local basis be given by $n$ linearly independent vector fields.
More compactly, a regular distribution is a vector subbundle $\Delta \subset TM$ of rank $n$ (this is actually the most commonly used definition). A rank-$n$ distribution is sometimes called an $n$-plane distribution, and when $n = m - 1$, one talks about hyperplane distributions.
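The distinction between smooth and regular can be made concrete with a small (hypothetical, not from this article) example: on $\mathbb{R}^2$, the distribution spanned by $X = \partial_x$ and $Y = x\,\partial_y$ has rank 2 away from the line $x = 0$ but rank 1 on it, so it is smooth but not regular. A minimal sketch of the pointwise rank computation:

```python
# Pointwise rank of the distribution on R^2 spanned by X = d/dx and Y = x d/dy.
# The rank jumps from 2 (where x != 0) to 1 (on the line x = 0), so this
# smooth distribution is not regular. Illustrative example, not from the article.

def pointwise_rank(vectors, tol=1e-12):
    """Rank of two 2D vectors via the 2x2 determinant and a zero check."""
    (a, b), (c, d) = vectors
    if abs(a * d - b * c) > tol:
        return 2
    return 1 if any(abs(v) > tol for v in (a, b, c, d)) else 0

X = lambda x, y: (1.0, 0.0)   # d/dx
Y = lambda x, y: (0.0, x)     # x d/dy

assert pointwise_rank([X(2.0, 1.0), Y(2.0, 1.0)]) == 2  # generic point
assert pointwise_rank([X(0.0, 1.0), Y(0.0, 1.0)]) == 1  # on the line x = 0
```

By contrast, a regular distribution would return the same rank at every point.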
== Special classes of distributions ==
Unless stated otherwise, by "distribution" we mean a smooth regular distribution (in the sense explained above).
=== Involutive distributions ===
Given a distribution $\Delta$, its sections consist of vector fields on $M$, forming a vector subspace $\Gamma(\Delta) \subseteq \Gamma(TM) = \mathfrak{X}(M)$ of the space of all vector fields on $M$. (Notation: $\Gamma(TM)$ is the space of sections of $TM$.) A distribution $\Delta$ is called involutive if $\Gamma(\Delta) \subseteq \mathfrak{X}(M)$ is also a Lie subalgebra: in other words, for any two vector fields $X, Y \in \Gamma(\Delta)$, the Lie bracket $[X, Y]$ belongs to $\Gamma(\Delta)$.
Locally, this condition means that for every point $x \in M$ there exists a local basis $\{X_1, \ldots, X_n\}$ of the distribution in a neighbourhood of $x$ such that, for all $1 \leq i, j \leq n$, the Lie bracket $[X_i, X_j]$ is in the span of $\{X_1, \ldots, X_n\}$, i.e. $[X_i, X_j]$ is a linear combination of $\{X_1, \ldots, X_n\}$.
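This local criterion can be tested directly in coordinates. The sketch below (an illustrative example, not taken from the article) computes the Lie bracket componentwise as $[X,Y]^k = X^i\,\partial_i Y^k - Y^i\,\partial_i X^k$ using central finite differences, and checks at a sample point whether the bracket of two spanning vector fields on $\mathbb{R}^3$ stays in their span:

```python
# Pointwise involutivity check for a rank-2 distribution on R^3.
# The Lie bracket is computed componentwise, [X, Y]^k = X^i d_i Y^k - Y^i d_i X^k,
# with partial derivatives approximated by central differences.

def lie_bracket(X, Y, p, h=1e-5):
    def partial(F, i, k):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        return (F(q1)[k] - F(q2)[k]) / (2 * h)
    Xp, Yp = X(p), Y(p)
    return [sum(Xp[i] * partial(Y, i, k) - Yp[i] * partial(X, i, k) for i in range(3))
            for k in range(3)]

def in_span(V, A, B, tol=1e-8):
    """True if det(A, B, V) vanishes, i.e. V lies in the plane spanned by A and B."""
    M = [A, B, V]
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
           - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
           + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return abs(det) < tol

p = [0.3, 0.7, -1.2]

# Involutive: span{d/dx, d/dy} — the bracket vanishes identically.
X1 = lambda q: (1.0, 0.0, 0.0)
Y1 = lambda q: (0.0, 1.0, 0.0)
print(in_span(lie_bracket(X1, Y1, p), X1(p), Y1(p)))  # True

# Not involutive: the contact distribution span{d/dx + y d/dz, d/dy},
# whose bracket is -d/dz, transverse to the distribution.
X2 = lambda q: (1.0, 0.0, q[1])
Y2 = lambda q: (0.0, 1.0, 0.0)
print(in_span(lie_bracket(X2, Y2, p), X2(p), Y2(p)))  # False
```

The second pair fails the criterion at every point, which is exactly why the contact distribution admits no integral surfaces.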
Involutive distributions are a fundamental ingredient in the study of integrable systems. A related idea occurs in Hamiltonian mechanics: two functions $f$ and $g$ on a symplectic manifold are said to be in mutual involution if their Poisson bracket vanishes.
=== Integrable distributions and foliations ===
An integral manifold for a rank-$n$ distribution $\Delta$ is a submanifold $N \subset M$ of dimension $n$ such that $T_xN = \Delta_x$ for every $x \in N$. A distribution is called integrable if through any point $x \in M$ there passes an integral manifold. The manifold is then partitioned into disjoint, maximal, connected integral manifolds, also called leaves; that is, $\Delta$ defines an $n$-dimensional foliation of $M$.
Locally, integrability means that for every point $x \in M$ there exists a local chart $(U, \{\chi_1, \ldots, \chi_n\})$ such that, for every $y \in U$, the space $\Delta_y$ is spanned by the coordinate vectors $\frac{\partial}{\partial \chi_1}(y), \ldots, \frac{\partial}{\partial \chi_n}(y)$. In other words, every point admits a foliation chart, i.e. the distribution $\Delta$ is tangent to the leaves of a foliation. Moreover, this local characterisation coincides with the definition of integrability for $G$-structures, when $G$ is the group of real invertible upper-triangular block matrices (with $(n \times n)$- and $((m-n) \times (m-n))$-blocks).
It is easy to see that any integrable distribution is automatically involutive. The converse is less trivial, but holds by the Frobenius theorem.
=== Weakly regular distributions ===
Given any distribution $\Delta \subseteq TM$, the associated Lie flag is a grading, defined as
$$\Delta^{(0)} \subseteq \Delta^{(1)} \subseteq \ldots \subseteq \Delta^{(i)} \subseteq \Delta^{(i+1)} \subseteq \ldots$$
where $\Delta^{(0)} := \Gamma(\Delta)$, $\Delta^{(1)} := \langle [\Delta^{(0)}, \Delta^{(0)}] \rangle_{\mathcal{C}^\infty(M)}$ and $\Delta^{(i+1)} := \langle [\Delta^{(i)}, \Delta^{(0)}] \rangle_{\mathcal{C}^\infty(M)}$. In other words, $\Delta^{(i)} \subseteq \mathfrak{X}(M)$ denotes the set of vector fields spanned by the $i$-iterated Lie brackets of elements in $\Gamma(\Delta)$. Some authors use a negative decreasing grading for the definition.
Then $\Delta$ is called weakly regular (or just regular by some authors) if there exists a sequence $\{T^iM \subseteq TM\}_i$ of nested vector subbundles such that $\Gamma(T^iM) = \Delta^{(i)}$ (hence $T^0M = \Delta$). Note that, in such a case, the associated Lie flag stabilises at a certain point $m \in \mathbb{N}$, since the ranks of $T^iM$ are bounded from above by $\operatorname{rank}(TM) = \dim(M)$. The string of integers $(\operatorname{rank}(\Delta^{(0)}), \operatorname{rank}(\Delta^{(1)}), \ldots, \operatorname{rank}(\Delta^{(m)}))$ is then called the growth vector of $\Delta$.
Any weakly regular distribution has an associated graded vector bundle
$$\operatorname{gr}(TM) := T^0M \oplus \Big( \bigoplus_{i=0}^{m-1} T^{i+1}M/T^iM \Big) \oplus TM/T^mM.$$
Moreover, the Lie bracket of vector fields descends, for any $i, j = 0, \ldots, m$, to a $\mathcal{C}^\infty(M)$-linear bundle morphism $\operatorname{gr}_i(TM) \times \operatorname{gr}_j(TM) \to \operatorname{gr}_{i+j+1}(TM)$, called the $(i,j)$-curvature. In particular, the $(0,0)$-curvature vanishes identically if and only if the distribution is involutive.
Patching together the curvatures, one obtains a morphism $\mathcal{L}: \operatorname{gr}(TM) \times \operatorname{gr}(TM) \to \operatorname{gr}(TM)$, also called the Levi bracket, which makes $\operatorname{gr}(TM)$ into a bundle of nilpotent Lie algebras; for this reason, $(\operatorname{gr}(TM), \mathcal{L})$ is also called the nilpotentisation of $\Delta$.
The bundle $\operatorname{gr}(TM) \to M$, however, is in general not locally trivial, since the Lie algebras $\operatorname{gr}_i(T_xM) := T_x^iM/T_x^{i+1}M$ may fail to be isomorphic when varying the point $x \in M$. If they are all isomorphic, the weakly regular distribution $\Delta$ is also called regular (or strongly regular by some authors). Note that the names (strongly, weakly) regular used here are completely unrelated to the notion of regularity discussed above (which is always assumed), i.e. the dimension of the spaces $\Delta_x$ being constant.
=== Bracket-generating distributions ===
A distribution Δ ⊆ TM is called bracket-generating (or non-holonomic, or said to satisfy the Hörmander condition) if taking a finite number of Lie brackets of elements in Γ(Δ) is enough to generate the entire space of vector fields on M. With the notation introduced above, this condition can be written as Δ^{(m)} = 𝔛(M) for some m ∈ ℕ; one then also says that Δ is bracket-generating in m + 1 steps, or has depth m + 1.
Clearly, the associated Lie flag of a bracket-generating distribution stabilises at the point m. Even though being weakly regular and being bracket-generating are two independent properties (see the examples below), when a distribution satisfies both of them, the integer m from the two definitions is the same.
Thanks to the Chow–Rashevskii theorem, given a bracket-generating distribution Δ ⊆ TM on a connected manifold, any two points in M can be joined by a path tangent to the distribution.
== Examples of regular distributions ==
=== Integrable distributions ===
Any vector field X on M defines a rank 1 distribution, by setting Δ_x := ⟨X_x⟩ ⊆ T_xM, which is automatically integrable: the image of any integral curve γ : I → M is an integral manifold.
The trivial distribution of rank k on M = ℝ^n is generated by the first k coordinate vector fields ∂/∂x_1, …, ∂/∂x_k. It is automatically integrable, and the integral manifolds are defined by the equations {x_i = c_i}, i = k + 1, …, n, for any constants c_i ∈ ℝ.
In general, any involutive/integrable distribution is weakly regular (with Δ^{(i)} = Γ(Δ) for every i), but it is never bracket-generating.
=== Non-integrable distributions ===
The Martinet distribution on M = ℝ^3 is given by Δ = ker(ω) ⊆ TM, for ω = dy − z^2 dx ∈ Ω^1(M); equivalently, it is generated by the vector fields ∂/∂x + z^2 ∂/∂y and ∂/∂z. It is bracket-generating, since Δ^{(2)} = 𝔛(M), but it is not weakly regular: Δ^{(1)} has rank 3 everywhere except on the surface z = 0.
The contact distribution on M = ℝ^{2n+1} is given by Δ = ker(ω) ⊆ TM, for ω = dz + Σ_{i=1}^{n} x_i dy_i ∈ Ω^1(M); equivalently, it is generated by the vector fields ∂/∂x_i and ∂/∂y_i − x_i ∂/∂z, for i = 1, …, n. It is weakly regular, with growth vector (2n, 2n + 1), and bracket-generating, with Δ^{(1)} = 𝔛(M). One can also define an abstract contact structure on a manifold M^{2n+1} as a hyperplane distribution which is maximally non-integrable, i.e. as far from being involutive as possible. An analogue of the Darboux theorem shows that such a structure has the unique local model described above.
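The same kind of symbolic check works for the contact distribution in the lowest dimension, n = 1. Here the generators are taken directly from the kernel of ω = dz + x dy (a routine verification, with the `lie_bracket` helper again our own assumption):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def lie_bracket(X, Y):
    # [X, Y]^k = sum_i ( X^i dY^k/dx_i - Y^i dX^k/dx_i )
    return [sum(X[i]*sp.diff(Y[k], coords[i]) - Y[i]*sp.diff(X[k], coords[i])
                for i in range(3)) for k in range(3)]

# Contact distribution on R^3: ker(dz + x dy), spanned by
X1 = [1, 0, 0]       # d/dx
X2 = [0, 1, -x]      # d/dy - x d/dz
B = lie_bracket(X1, X2)
print(B)             # [0, 0, -1], i.e. -d/dz

# A single bracket already fills out TM at every point: growth vector (2, 3)
M = sp.Matrix([X1, X2, B])
print(M.rank())      # 3
```

Unlike the Martinet case, the rank of Δ^{(1)} here is 3 at every point, which is the weak regularity claimed above.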
The Engel distribution on M = ℝ^4 is given by Δ = ker(ω_1) ∩ ker(ω_2) ⊆ TM, for ω_1 = dz − w dx ∈ Ω^1(M) and ω_2 = dy − z dx ∈ Ω^1(M); equivalently, it is generated by the vector fields ∂/∂x + z ∂/∂y + w ∂/∂z and ∂/∂w. It is weakly regular, with growth vector (2, 3, 4), and bracket-generating. One can also define an abstract Engel structure on a manifold M^4 as a weakly regular rank 2 distribution Δ ⊆ TM such that Δ^{(1)} has rank 3 and Δ^{(2)} has rank 4; Engel proved that such a structure has the unique local model described above.
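The growth vector (2, 3, 4) can likewise be verified by iterating the bracket: one bracket produces ∂/∂z, a second produces ∂/∂y. A minimal sketch, with the coordinate-formula helper `lie_bracket` assumed as before:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
coords = (x, y, z, w)

def lie_bracket(X, Y):
    # [X, Y]^k = sum_i ( X^i dY^k/dx_i - Y^i dX^k/dx_i )
    return [sum(X[i]*sp.diff(Y[k], coords[i]) - Y[i]*sp.diff(X[k], coords[i])
                for i in range(4)) for k in range(4)]

# Engel distribution on R^4
X1 = [1, z, w, 0]    # d/dx + z d/dy + w d/dz
X2 = [0, 0, 0, 1]    # d/dw

B1 = lie_bracket(X2, X1)   # expect d/dz
B2 = lie_bracket(B1, X1)   # expect d/dy
print(B1, B2)

# Ranks of Delta, Delta^(1), Delta^(2): the growth vector (2, 3, 4)
print(sp.Matrix([X1, X2]).rank(),
      sp.Matrix([X1, X2, B1]).rank(),
      sp.Matrix([X1, X2, B1, B2]).rank())   # 2 3 4
```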
In general, a Goursat structure on a manifold M^{k+2} is a rank 2 distribution which is weakly regular and bracket-generating, with growth vector (2, 3, …, k + 1, k + 2). For k = 1 and k = 2 one recovers, respectively, contact distributions on 3-dimensional manifolds and Engel distributions. Goursat structures are locally diffeomorphic to the Cartan distribution of the jet bundles J^k(ℝ, ℝ).
== Singular distributions ==
A singular distribution, generalised distribution, or Stefan–Sussmann distribution, is a smooth distribution which is not regular. This means that the subspaces Δ_x ⊂ T_xM may have different dimensions, and therefore the subset Δ ⊂ TM is no longer a smooth subbundle.
In particular, the number of elements in a local basis spanning Δ_x will change with x, and those vector fields will no longer be linearly independent everywhere. It is not hard to see that the dimension of Δ_x is lower semicontinuous, so that at special points the dimension is lower than at nearby points.
=== Integrability and singular foliations ===
The definitions of integral manifolds and of integrability given above apply also to the singular case (removing the requirement of fixed dimension). However, the Frobenius theorem does not hold in this context, and involutivity is in general not sufficient for integrability (counterexamples in low dimensions exist).
After several partial results, the integrability problem for singular distributions was fully solved by a theorem proved independently by Stefan and Sussmann. It states that a singular distribution Δ is integrable if and only if the following two properties hold:

Δ is generated by a family F ⊆ 𝔛(M) of vector fields;
Δ is invariant with respect to every X ∈ F, i.e. (φ_X^t)_*(Δ_y) ⊆ Δ_{φ_X^t(y)}, where φ_X^t is the flow of X, t ∈ ℝ and y ∈ dom(X).
Similarly to the regular case, an integrable singular distribution defines a singular foliation, which intuitively consists of a partition of M into submanifolds (the maximal integral manifolds of Δ) of different dimensions.
The definition of singular foliation can be made precise in several equivalent ways. Indeed, the literature contains a plethora of variations, reformulations and generalisations of the Stefan–Sussmann theorem, using different notions of singular foliation according to the applications one has in mind, e.g. Poisson geometry or non-commutative geometry.
=== Examples ===
Given a Lie group action on a manifold M, its infinitesimal generators span a singular distribution which is always integrable; the leaves of the associated singular foliation are precisely the orbits of the group action. The distribution/foliation is regular if and only if the action is free.
Given a Poisson manifold (M, π), the image of π^♯ = ι_π : T*M → TM is a singular distribution which is always integrable; the leaves of the associated singular foliation are precisely the symplectic leaves of (M, π). The distribution/foliation is regular if and only if the Poisson manifold is regular.
More generally, the image of the anchor map ρ : A → TM of any Lie algebroid A → M defines a singular distribution which is automatically integrable, and the leaves of the associated singular foliation are precisely the leaves of the Lie algebroid. The distribution/foliation is regular if and only if ρ has constant rank, i.e. the Lie algebroid is regular. Considering, respectively, the action Lie algebroid M × 𝔤 and the cotangent Lie algebroid T*M, one recovers the two examples above.
In dynamical systems, a singular distribution arises from the set of vector fields that commute with a given one.
There are also examples and applications in control theory, where the generalised distribution represents infinitesimal constraints of the system.
== References ==
== Books, lecture notes and external links ==
William M. Boothby. Section IV. 8 in An Introduction to Differentiable Manifolds and Riemannian Geometry, Academic Press, San Diego, California, 2003.
John M. Lee, Chapter 19 in Introduction to Smooth Manifolds, Graduate Texts in Mathematics, Springer-Verlag, 2003.
Richard Montgomery, Chapters 2, 4 and 6 in A tour of subriemannian geometries, their geodesics and applications. Mathematical Surveys and Monographs 91. Amer. Math. Soc., Providence, RI, 2002.
Álvaro del Pino, Topological aspects in the study of tangent distributions. Textos de Matemática. Série B, 48. Universidade de Coimbra, 2019.
"Involutive distribution", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
This article incorporates material from Distribution on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
There are many ways to derive the Lorentz transformations using a variety of physical principles, ranging from Maxwell's equations to Einstein's postulates of special relativity, and mathematical tools, spanning from elementary algebra and hyperbolic functions, to linear algebra and group theory.
This article provides a few of the easier ones to follow in the context of special relativity, for the simplest case of a Lorentz boost in standard configuration, i.e. two inertial frames moving relative to each other at constant (uniform) relative velocity less than the speed of light, and using Cartesian coordinates so that the x and x′ axes are collinear.
== Lorentz transformation ==
In the fundamental branches of modern physics, namely general relativity and its widely applicable subset special relativity, as well as relativistic quantum mechanics and relativistic quantum field theory, the Lorentz transformation is the transformation rule under which all four-vectors and tensors containing physical quantities transform from one frame of reference to another.
The prime examples of such four-vectors are the four-position and four-momentum of a particle, and for fields the electromagnetic tensor and stress–energy tensor. The fact that these objects transform according to the Lorentz transformation is what mathematically defines them as vectors and tensors; see tensor for a definition.
Given the components of the four-vectors or tensors in some frame, the "transformation rule" allows one to determine the altered components of the same four-vectors or tensors in another frame, which could be boosted or accelerated relative to the original frame. A "boost" should not be conflated with spatial translation; rather, it is characterized by the relative velocity between frames. The transformation rule itself depends on the relative motion of the frames. In the simplest case of two inertial frames, the relative velocity between them enters the transformation rule. For rotating reference frames or general non-inertial reference frames, more parameters are needed, including the relative velocity (magnitude and direction), the rotation axis and the angle turned through.
== Historical background ==
The usual treatment (e.g., Albert Einstein's original work) is based on the invariance of the speed of light. However, this is not necessarily the starting point: indeed (as is described, for example, in the second volume of the Course of Theoretical Physics by Landau and Lifshitz), what is really at stake is the locality of interactions: one supposes that the influence that one particle, say, exerts on another can not be transmitted instantaneously. Hence, there exists a theoretical maximal speed of information transmission which must be invariant, and it turns out that this speed coincides with the speed of light in vacuum. Newton had himself called the idea of action at a distance philosophically "absurd", and held that gravity had to be transmitted by some agent according to certain laws.
Michelson and Morley in 1887 designed an experiment, employing an interferometer and a half-silvered mirror, that was accurate enough to detect aether flow. The mirror system reflected the light back into the interferometer. If there were an aether drift, it would produce a phase shift and a change in the interference that would be detected. However, no phase shift was ever found. The negative outcome of the Michelson–Morley experiment left the concept of aether (or its drift) undermined. There was consequent perplexity as to why light evidently behaves like a wave, without any detectable medium through which wave activity might propagate.
In a 1964 paper, Erik Christopher Zeeman showed that the causality-preserving property, a condition that is weaker in a mathematical sense than the invariance of the speed of light, is enough to assure that the coordinate transformations are the Lorentz transformations. Norman Goldstein's paper shows a similar result using inertiality (the preservation of time-like lines) rather than causality.
== Physical principles ==
Einstein based his theory of special relativity on two fundamental postulates. First, all physical laws are the same for all inertial frames of reference, regardless of their relative state of motion; and second, the speed of light in free space is the same in all inertial frames of reference, again, regardless of the relative velocity of each reference frame. The Lorentz transformation is fundamentally a direct consequence of this second postulate.
=== The second postulate ===
Assume the second postulate of special relativity stating the constancy of the speed of light, independent of reference frame, and consider a collection of reference systems moving with respect to each other with constant velocity, i.e. inertial systems, each endowed with its own set of Cartesian coordinates labeling the points, i.e. the events, of spacetime. To express the invariance of the speed of light in mathematical form, fix two events in spacetime, to be recorded in each reference frame. Let the first event be the emission of a light signal, and the second event be its absorption.
Pick any reference frame in the collection. In its coordinates, the first event will be assigned coordinates x_1, y_1, z_1, ct_1, and the second x_2, y_2, z_2, ct_2. The spatial distance between emission and absorption is √((x_2 − x_1)^2 + (y_2 − y_1)^2 + (z_2 − z_1)^2), but this is also the distance c(t_2 − t_1) traveled by the signal. One may therefore set up the equation

c^2(t_2 − t_1)^2 − (x_2 − x_1)^2 − (y_2 − y_1)^2 − (z_2 − z_1)^2 = 0.

Every other coordinate system will record, in its own coordinates, the same equation. This is the immediate mathematical consequence of the invariance of the speed of light. The quantity on the left is called the spacetime interval. The interval is, for events separated by light signals, the same (zero) in all reference frames, and is therefore called invariant.
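The vanishing of the interval for light-separated events is easy to illustrate numerically; in the sketch below, the sample events and the helper `interval2` are our own choices, not part of the derivation.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def interval2(e1, e2):
    """Squared spacetime interval c^2 Δt^2 - Δx^2 - Δy^2 - Δz^2
    between events given as (t, x, y, z) tuples."""
    (t1, x1, y1, z1), (t2, x2, y2, z2) = e1, e2
    return c**2*(t2 - t1)**2 - (x2 - x1)**2 - (y2 - y1)**2 - (z2 - z1)**2

# Emission at the origin; absorption where the light arrives at (3, 4, 12) km,
# at the time the signal needs to cover that distance.
x, y, z = 3e3, 4e3, 12e3
t = math.sqrt(x*x + y*y + z*z) / c
print(interval2((0, 0, 0, 0), (t, x, y, z)))  # zero, up to rounding
```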
=== Invariance of interval ===
For the Lorentz transformation to have the physical significance realized by nature, it is crucial that the interval is an invariant measure for any two events, not just for those separated by light signals. To establish this, one considers an infinitesimal interval

ds^2 = c^2 dt^2 − dx^2 − dy^2 − dz^2,

as recorded in a system K. Let K′ be another system assigning the interval ds′^2 to the same two infinitesimally separated events. Since if ds^2 = 0 then the interval is also zero in any other system (second postulate), and since ds^2 and ds′^2 are infinitesimals of the same order, they must be proportional to each other,

ds^2 = a ds′^2.
On what may a depend? It may not depend on the positions of the two events in spacetime, because that would violate the postulated homogeneity of spacetime. It might depend on the relative velocity V between K and K′, but only on the speed, not on the direction, because the latter would violate the isotropy of space.
Now bring in systems K_1 and K_2:

ds^2 = a(V_1) ds_1^2, ds^2 = a(V_2) ds_2^2, ds_1^2 = a(V_12) ds_2^2.
From these it follows that

a(V_2)/a(V_1) = a(V_12).
Now, one observes that the right-hand side depends on V_12, and hence on both V_1 and V_2, as well as on the angle between the vectors V_1 and V_2. However, the left-hand side does not depend on this angle. Thus, the only way for the equation to hold is if the function a(V) is a constant. Further, by the same equation this constant is unity. Thus ds^2 = ds′^2 for all systems K′. Since this holds for all infinitesimal intervals, it holds for all intervals.
Most, if not all, derivations of the Lorentz transformations take this for granted, using only the constancy of the speed of light (i.e. the invariance of light-like separated events). This result ensures that the Lorentz transformation is the correct transformation.
==== Rigorous Statement and Proof of Proportionality of ds^2 and ds′^2 ====
Theorem:
Let n, p ≥ 1 be integers, d := n + p, and V a vector space over ℝ of dimension d. Let h be an indefinite inner product on V with signature type (n, p). Suppose g is a symmetric bilinear form on V such that the null set of the associated quadratic form of h is contained in that of g (i.e. suppose that for every v ∈ V, if h(v, v) = 0 then g(v, v) = 0). Then there exists a constant C ∈ ℝ such that g = Ch. Furthermore, if we assume n ≠ p and that g also has signature type (n, p), then C > 0.
Remarks.
In the section above, the term "infinitesimal" in relation to ds^2 actually refers (pointwise) to a quadratic form over a four-dimensional real vector space (namely the tangent space at a point of the spacetime manifold). The argument above is copied almost verbatim from Landau and Lifshitz, where the proportionality of ds^2 and ds′^2 is merely stated as an 'obvious' fact, even though the statement is neither formulated in a mathematically precise fashion nor proven. This is a non-obvious mathematical fact which needs justification; fortunately the proof is relatively simple, amounting to basic algebraic observations and manipulations.
The above assumptions on h mean the following: h : V × V → ℝ is a bilinear form which is symmetric and non-degenerate, such that there exists an ordered basis {v_1, …, v_n, v_{n+1}, …, v_d} of V for which

h(v_a, v_b) = −1 if a = b with a, b ∈ {1, …, n}; 1 if a = b with a, b ∈ {n + 1, …, d}; 0 otherwise.

An equivalent way of saying this is that h has the block-diagonal matrix representation diag(−I_n, I_p) relative to the ordered basis {v_1, …, v_d}.
If we consider the special case n = 1, p = 3, then we are dealing with a Lorentzian signature in 4 dimensions, which is what relativity is based on (or one could adopt the opposite convention with an overall minus sign, which clearly does not affect the truth of the theorem). Also, in this case, if we assume that the quadratic forms of g and h have the same null set (in physics terminology, that g and h give rise to the same light cone), then the theorem tells us that there is a constant C > 0 such that g = Ch. Modulo some differences in notation, this is precisely what was used in the section above.
Proof of Theorem.
Fix a basis {v_1, …, v_d} of V relative to which h has the matrix representation [h] = diag(−I_n, I_p). The point is that V can be decomposed into subspaces V⁻ (the span of the first n basis vectors) and V⁺ (the span of the other p basis vectors) such that each vector in V can be written uniquely as v + w, with v ∈ V⁻ and w ∈ V⁺; moreover h(v, v) ≤ 0, h(w, w) ≥ 0 and h(v, w) = 0. So, by bilinearity,

h(v + w, v + w) = h(v, v) + h(w, w).

Since the first summand on the right is non-positive and the second is non-negative, for any v ∈ V⁻ and w ∈ V⁺ we can find a scalar α such that h(v + αw, v + αw) = 0.
From now on, always consider v ∈ V⁻ and w ∈ V⁺. By bilinearity,

g(v + w, v + w) = g(v, v) + g(w, w) + 2g(v, w),
g(v − w, v − w) = g(v, v) + g(w, w) − 2g(v, w).
If h(v + w, v + w) = 0, then also h(v − w, v − w) = 0, and the same is true for g (since the null set of h is contained in that of g). In that case, subtracting the two expressions above (and dividing by 4) yields

0 = g(v, w).
As above, for each v ∈ V⁻ and w ∈ V⁺ there is a scalar α such that h(v + αw, v + αw) = 0, so g(v, αw) = 0, which by bilinearity means g(v, w) = 0.
Now consider nonzero v, v′ ∈ V⁻ such that h(v, v) = h(v′, v′). We can find w ∈ V⁺ such that

0 = h(v + w, v + w) = h(v, v) + h(w, w) = h(v′ + w, v′ + w).

By the expressions above,

g(v, v) = −g(w, w) = g(v′, v′).
Analogously, for w, w′ ∈ V⁺, one can show that if h(w, w) = h(w′, w′), then also g(w, w) = g(w′, w′). So the statement holds for all vectors in V.
For u, u′ ∈ V, if g(u, u) = C h(u, u) ≠ 0 and g(u′, u′) = C′ h(u′, u′) ≠ 0 for some C, C′ ∈ ℝ, we can (scaling one of them if necessary) assume h(u, u) = h(u′, u′), which by the above means that C = C′. So g = Ch.
Finally, if we assume that g, h both have signature type (n, p) and n ≠ p, then C > 0. Indeed, we cannot have C = 0, because that would mean g = 0, which is impossible since a form of signature type (n, p) is nonzero. And if C < 0, then g would have n positive diagonal entries and p negative diagonal entries, i.e. it would be of signature (p, n) ≠ (n, p), since we assumed n ≠ p; so this is also not possible. This leaves C > 0 as the only option, completing the proof of the theorem.
== Standard configuration ==
The invariant interval can be seen as a non-positive-definite distance function on spacetime. The set of transformations sought must leave this distance invariant. Since the reference frames use Cartesian coordinates, one concludes that, as in the Euclidean case, the possible transformations are made up of translations and rotations, where a slightly broader meaning should be allowed for the term rotation.
The interval is quite trivially invariant under translation. For rotations, there are four coordinates. Hence there are six planes of rotation. Three of those are rotations in spatial planes. The interval is invariant under ordinary rotations too.
It remains to find a "rotation" in the three remaining coordinate planes that leaves the interval invariant. Equivalently, to find a way to assign coordinates so that they coincide with the coordinates corresponding to a moving frame.
The general problem is to find a transformation such that

c^2(t_2 − t_1)^2 − (x_2 − x_1)^2 − (y_2 − y_1)^2 − (z_2 − z_1)^2
= c^2(t_2′ − t_1′)^2 − (x_2′ − x_1′)^2 − (y_2′ − y_1′)^2 − (z_2′ − z_1′)^2.
To solve the general problem, one may use the knowledge about invariance of the interval of translations and ordinary rotations to assume, without loss of generality, that the frames F and F′ are aligned in such a way that their coordinate axes all meet at t = t′ = 0 and that the x and x′ axes are permanently aligned and system F′ has speed V along the positive x-axis. Call this the standard configuration. It reduces the general problem to finding a transformation such that
c^2(t_2 − t_1)^2 − (x_2 − x_1)^2 = c^2(t_2′ − t_1′)^2 − (x_2′ − x_1′)^2.
The standard configuration is used in most examples below. A linear solution of the simpler problem

(ct)^2 − x^2 = (ct′)^2 − x′^2

solves the more general problem, since coordinate differences then transform the same way. Linearity is often assumed or argued somehow in the literature when this simpler problem is considered. If the solution to the simpler problem is not linear, then it does not solve the original problem, because of the cross terms appearing when expanding the squares.
== The solutions ==
As mentioned, the general problem is solved by translations in spacetime. These do not appear as a solution to the simpler problem posed, while the boosts do (and sometimes rotations, depending on the angle of attack). Even more solutions exist if one insists only on invariance of the interval for lightlike separated events; these are nonlinear conformal ("angle preserving") transformations.
Some equations of physics are conformal invariant, e.g. Maxwell's equations in source-free space, but not all. The relevance of the conformal transformations in spacetime is not known at present, but the conformal group in two dimensions is highly relevant in conformal field theory and statistical mechanics. It is thus the Poincaré group that is singled out by the postulates of special relativity. It is the presence of Lorentz boosts (for which velocity addition differs from mere vector addition, which would allow speeds greater than the speed of light) as opposed to ordinary boosts that separates it from the Galilean group of Galilean relativity. Spatial rotations, spatial and temporal inversions and translations are present in both groups and have the same consequences in both theories (conservation laws of momentum, energy, and angular momentum). Not all accepted theories respect symmetry under the inversions.
== Using the geometry of spacetime ==
=== Landau & Lifshitz solution ===
These three hyperbolic function formulae (H1–H3) are referenced below:
{\displaystyle \cosh ^{2}\Psi -\sinh ^{2}\Psi =1,}
{\displaystyle \sinh \Psi ={\frac {\tanh \Psi }{\sqrt {1-\tanh ^{2}\Psi }}},}
{\displaystyle \cosh \Psi ={\frac {1}{\sqrt {1-\tanh ^{2}\Psi }}},}
The problem posed in standard configuration for a boost in the x-direction, where the primed coordinates refer to the moving system, is solved by finding a linear solution to the simpler problem
{\displaystyle (ct)^{2}-x^{2}=(ct')^{2}-x'^{2}.}
The most general solution is, as can be verified by direct substitution using (H1),
{\displaystyle x=x'\cosh \Psi +ct'\sinh \Psi ,\quad ct=x'\sinh \Psi +ct'\cosh \Psi .\qquad (1)}
To find the role of Ψ in the physical setting, record the progression of the origin of F′, i.e. x′ = 0, x = vt. The equations become (using first x′ = 0),
{\displaystyle x=ct'\sinh \Psi ,\quad ct=ct'\cosh \Psi .}
Now divide:
{\displaystyle {\frac {x}{ct}}=\tanh \Psi ={\frac {v}{c}}\Rightarrow \quad \sinh \Psi ={\frac {\frac {v}{c}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},\quad \cosh \Psi ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},}
where x = vt was used in the first step, (H2) and (H3) in the second, which, when plugged back in (1), gives
{\displaystyle x={\frac {x'+vt'}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},\quad t={\frac {t'+{\frac {v}{c^{2}}}x'}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},}
or, with the usual abbreviations β = v/c and γ = 1/√(1 − β²),
{\displaystyle x=\gamma \left(x'+\beta ct'\right),\quad ct=\gamma \left(ct'+\beta x'\right).}
This calculation is repeated in more detail in the section on hyperbolic rotation.
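The chain of substitutions above can be verified numerically. The following sketch (our own check, with names of our choosing) sets tanh Ψ = v/c, recovers sinh Ψ = βγ and cosh Ψ = γ via (H2) and (H3), and confirms that the resulting transformation preserves the interval.

```python
import math

c = 1.0
v = 0.8
beta = v / c
psi = math.atanh(beta)                      # rapidity: tanh(psi) = v/c
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# (H2) and (H3): sinh(psi) = beta*gamma, cosh(psi) = gamma
sinh_check = abs(math.sinh(psi) - beta * gamma)
cosh_check = abs(math.cosh(psi) - gamma)

# Plugging back into (1): x = gamma*(x' + v t'), t = gamma*(t' + v x'/c^2)
xp, tp = 3.0, 5.0                           # arbitrary primed coordinates
x = gamma * (xp + v * tp)
t = gamma * (tp + v * xp / c**2)
interval_diff = abs((c * t)**2 - x**2 - ((c * tp)**2 - xp**2))
```

The transformation is exactly a hyperbolic rotation by the angle Ψ, which is why the interval, the hyperbolic analogue of a squared radius, is unchanged.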
=== Hyperbolic rotation ===
The Lorentz transformations can also be derived by simple application of the special relativity postulates and using hyperbolic identities.
Relativity postulates
Start from the equations of the spherical wave front of a light pulse, centred at the origin:
{\displaystyle (ct)^{2}-(x^{2}+y^{2}+z^{2})=(ct')^{2}-(x'^{2}+y'^{2}+z'^{2})=0}
which take the same form in both frames because of the special relativity postulates. Next, consider relative motion along the x-axes of each frame, in standard configuration above, so that y = y′, z = z′, which simplifies to
{\displaystyle (ct)^{2}-x^{2}=(ct')^{2}-x'^{2}}
Linearity
Now assume that the transformations take the linear form:
{\displaystyle {\begin{aligned}x'&=Ax+Bct\\ct'&=Cx+Dct\end{aligned}}}
where A, B, C, D are to be found. If they were non-linear, they would not take the same form for all observers, since fictitious forces (hence accelerations) would occur in one frame even if the velocity was constant in another, which is inconsistent with inertial frame transformations.
Substituting into the previous result:
{\displaystyle (ct)^{2}-x^{2}=[(Cx)^{2}+(Dct)^{2}+2CDcxt]-[(Ax)^{2}+(Bct)^{2}+2ABcxt]}
and comparing coefficients of x2, t2, xt:
{\displaystyle {\begin{aligned}-1=C^{2}-A^{2}&\Rightarrow &A^{2}-C^{2}=1\\c^{2}=(Dc)^{2}-(Bc)^{2}&\Rightarrow &D^{2}-B^{2}=1\\2CDc-2ABc=0&\Rightarrow &AB=CD\end{aligned}}}
Hyperbolic rotation
The equations suggest the hyperbolic identity
{\displaystyle \cosh ^{2}\phi -\sinh ^{2}\phi =1.}
Introducing the rapidity parameter ϕ as a hyperbolic angle allows the consistent identifications
{\displaystyle A=D=\cosh \phi \,,\quad C=B=-\sinh \phi }
where the signs after the square roots are chosen so that x′ and t′ increase when x and t increase, respectively. Solving for the transformation yields the hyperbolic form:
{\displaystyle {\begin{aligned}x'&=x\cosh \phi -ct\sinh \phi \\ct'&=-x\sinh \phi +ct\cosh \phi \end{aligned}}}
If the signs were chosen differently, the position and time coordinates would need to be replaced by −x and/or −t so that x and t increase rather than decrease.
To find how ϕ relates to the relative velocity, from the standard configuration the origin of the primed frame x′ = 0 is measured in the unprimed frame to be x = vt (or the equivalent and opposite way round; the origin of the unprimed frame is x = 0 and in the primed frame it is at x′ = −vt):
{\displaystyle 0=vt\cosh \phi -ct\sinh \phi \,\Rightarrow \,\tanh \phi ={\frac {v}{c}}=\beta }
and hyperbolic identities
{\displaystyle \sinh \Psi ={\frac {\tanh \Psi }{\sqrt {1-\tanh ^{2}\Psi }}},\,\cosh \Psi ={\frac {1}{\sqrt {1-\tanh ^{2}\Psi }}}}
leads to the relations between β, γ, and ϕ,
{\displaystyle \cosh \phi =\gamma ,\,\quad \sinh \phi =\beta \gamma \,.}
== From physical principles ==
The problem is usually restricted to two dimensions by using a velocity along the x axis such that the y and z coordinates do not intervene, as described in standard configuration above.
=== Time dilation and length contraction ===
The transformation equations can be derived from time dilation and length contraction, which in turn can be derived from first principles. With O and O′ representing the spatial origins of the frames F and F′, and some event M, the relation between the position vectors (which here reduce to oriented segments OM, OO′ and O′M) in both frames is given by OM = OO′ + O′M.
Using coordinates (x,t) in F and (x′,t′) in F′ for event M, in frame F the segments are OM = x, OO′ = vt and O′M = x′/γ (since x′ is O′M as measured in F′):
{\displaystyle x=vt+x'/\gamma .}
Likewise, in frame F′, the segments are OM = x/γ (since x is OM as measured in F), OO′ = vt′ and O′M = x′:
{\displaystyle x/\gamma =vt'+x'.}
By rearranging the first equation, we get
{\displaystyle x'=\gamma (x-vt),}
which is the space part of the Lorentz transformation. The second relation gives
{\displaystyle x=\gamma (x'+vt'),}
which is the inverse of the space part. Eliminating x′ between the two space part equations gives
{\displaystyle t'=\gamma t+{\frac {\left(1-{\gamma ^{2}}\right)x}{\gamma v}}.}
which, given that
{\displaystyle \gamma ^{2}={\frac {1}{1-v^{2}/c^{2}}},}
simplifies to:
{\displaystyle t'=\gamma (t-vx/c^{2}),}
which is the time part of the transformation, the inverse of which is found by a similar elimination of x:
{\displaystyle t=\gamma (t'+vx'/c^{2}).}
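The elimination step can be checked numerically. This sketch (ours) starts from the two segment relations x = vt + x′/γ and x/γ = vt′ + x′, solves the second for t′ after substituting x′ = γ(x − vt) from the first, and confirms the result matches t′ = γ(t − vx/c²).

```python
import math

c = 1.0
v = 0.5
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

x, t = 2.0, 3.0                       # arbitrary event coordinates in F
xp = gamma * (x - v * t)              # first relation rearranged: x' = gamma*(x - v t)
tp = (x / gamma - xp) / v             # second relation solved for t'
tp_formula = gamma * (t - v * x / c**2)   # the time part of the transformation
```

The agreement reflects the algebraic identity (1 − γ²)/(γv) = −γv/c², which is exactly the simplification invoked in the text.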
=== Spherical wavefronts of light ===
The following derivation is similar to Einstein's.
As in the Galilean transformation, the Lorentz transformation is linear, since the relative velocity of the reference frames is constant as a vector; otherwise, inertial forces would appear. Such frames are called inertial or Galilean reference frames. According to relativity no Galilean reference frame is privileged. Another condition is that the speed of light must be independent of the reference frame, in particular independent of the velocity of the light source.
Consider two inertial frames of reference O and O′, assuming O to be at rest while O′ is moving with a velocity v with respect to O in the positive x-direction. The origins of O and O′ initially coincide with each other. A light signal is emitted from the common origin and travels as a spherical wave front. Consider a point P on a spherical wavefront at a distance r and r′ from the origins of O and O′ respectively. According to the second postulate of the special theory of relativity the speed of light is the same in both frames, so for the point P:
{\displaystyle {\begin{aligned}r&=ct\\r'&=ct'.\end{aligned}}}
The equation of a sphere in frame O is given by
{\displaystyle x^{2}+y^{2}+z^{2}=r^{2}.}
For the spherical wavefront that becomes
{\displaystyle x^{2}+y^{2}+z^{2}=(ct)^{2}.}
Similarly, the equation of a sphere in frame O′ is given by
{\displaystyle x'^{2}+y'^{2}+z'^{2}=r'^{2},}
so the spherical wavefront satisfies
{\displaystyle x'^{2}+y'^{2}+z'^{2}=(ct')^{2}.}
The origin O′ is moving along the x-axis. Therefore,
{\displaystyle {\begin{aligned}y'&=y\\z'&=z.\end{aligned}}}
x′ must vary linearly with x and t. Therefore, the transformation has the form
{\displaystyle x'=\gamma x+\sigma t.}
For the origin of O′, x′ and x are given by
{\displaystyle {\begin{aligned}x'&=0\\x&=vt,\end{aligned}}}
so, for all t,
{\displaystyle 0=\gamma vt+\sigma t}
and thus
{\displaystyle \sigma =-\gamma v.}
This simplifies the transformation to
{\displaystyle x'=\gamma \left(x-vt\right)}
where γ is to be determined. At this point γ is not necessarily a constant, but is required to reduce to 1 for v ≪ c.
The inverse transformation is the same except that the sign of v is reversed:
{\displaystyle x=\gamma \left(x'+vt'\right).}
The above two equations give the relation between t and t′ as:
{\displaystyle x=\gamma \left[\gamma \left(x-vt\right)+vt'\right]}
or
{\displaystyle t'=\gamma t+{\frac {\left(1-{\gamma ^{2}}\right)x}{\gamma v}}.}
Replacing x′, y′, z′ and t′ in the spherical wavefront equation in the O′ frame,
{\displaystyle x'^{2}+y'^{2}+z'^{2}=(ct')^{2},}
with their expressions in terms of x, y, z and t produces:
{\displaystyle {\gamma ^{2}}\left(x-vt\right)^{2}+y^{2}+z^{2}=c^{2}\left[\gamma t+{\frac {\left(1-{\gamma ^{2}}\right)x}{\gamma v}}\right]^{2}}
and therefore,
{\displaystyle \gamma ^{2}x^{2}+\gamma ^{2}v^{2}t^{2}-2\gamma ^{2}vtx+y^{2}+z^{2}=c^{2}{\gamma ^{2}}t^{2}+{\frac {\left(1-{\gamma ^{2}}\right)^{2}c^{2}x^{2}}{{\gamma ^{2}}v^{2}}}+2{\frac {\left(1-{\gamma ^{2}}\right)txc^{2}}{v}}}
which implies,
{\displaystyle \left[{\gamma ^{2}}-{\frac {\left(1-{\gamma ^{2}}\right)^{2}c^{2}}{{\gamma ^{2}}v^{2}}}\right]x^{2}-2{\gamma ^{2}}vtx+y^{2}+z^{2}=\left(c^{2}{\gamma ^{2}}-v^{2}{\gamma ^{2}}\right)t^{2}+2{\frac {\left[1-{\gamma ^{2}}\right]txc^{2}}{v}}}
or
{\displaystyle \left[{\gamma ^{2}}-{\frac {\left(1-{\gamma ^{2}}\right)^{2}c^{2}}{{\gamma ^{2}}v^{2}}}\right]x^{2}-\left[2{\gamma ^{2}}v+2{\frac {\left(1-{\gamma ^{2}}\right)c^{2}}{v}}\right]tx+y^{2}+z^{2}=\left[c^{2}{\gamma ^{2}}-v^{2}{\gamma ^{2}}\right]t^{2}}
Comparing the coefficient of t2 in the above equation with the coefficient of t2 in the spherical wavefront equation for frame O produces:
{\displaystyle c^{2}{\gamma ^{2}}-v^{2}{\gamma ^{2}}=c^{2}}
Equivalent expressions for γ can be obtained by matching the x2 coefficients or setting the tx coefficient to zero. Rearranging:
{\displaystyle {\gamma ^{2}}={\frac {1}{1-{\frac {v^{2}}{c^{2}}}}}}
or, choosing the positive root to ensure that the x and x' axes and the time axes point in the same direction,
{\displaystyle {\gamma }={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}}
which is called the Lorentz factor. This produces the Lorentz transformation from the above expression. It is given by
{\displaystyle {\begin{aligned}x'&=\gamma \left(x-vt\right)\\t'&=\gamma \left(t-{\frac {vx}{c^{2}}}\right)\\y'&=y\\z'&=z\end{aligned}}}
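As a sanity check (ours, not part of the derivation), one can verify numerically that this transformation carries a point on the light sphere in O to a point on the light sphere in O′, which was the condition the whole derivation enforced.

```python
import math

c = 1.0
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

# Pick a point on the spherical wavefront x^2 + y^2 + z^2 = (ct)^2 in O
t = 2.0
x, y = 1.2, 1.0
z = math.sqrt((c * t)**2 - x**2 - y**2)

# Apply the derived Lorentz transformation
xp = gamma * (x - v * t)
tp = gamma * (t - v * x / c**2)
yp, zp = y, z

lhs = xp**2 + yp**2 + zp**2   # should equal (c t')^2
rhs = (c * tp)**2
```

The point lands exactly on the primed wavefront, confirming that the second postulate is respected event by event, not only at the origin.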
The Lorentz transformation is not the only transformation leaving invariant the shape of spherical waves, as there is a wider set of spherical wave transformations in the context of conformal geometry, leaving invariant the expression
{\displaystyle \lambda \left(\delta x^{2}+\delta y^{2}+\delta z^{2}-c^{2}\delta t^{2}\right).}
However, scale-changing conformal transformations cannot be used to symmetrically describe all laws of nature including mechanics, whereas the Lorentz transformations (the only ones implying {\displaystyle \lambda =1}) represent a symmetry of all laws of nature and reduce to Galilean transformations at {\displaystyle v\ll c}.
=== Galilean and Einstein's relativity ===
==== Galilean reference frames ====
In classical kinematics, the total displacement x in the R frame is the sum of the relative displacement x′ in frame R′ and of the distance between the two origins x − x′. If v is the relative velocity of R′ relative to R, the transformation is: x = x′ + vt, or x′ = x − vt. This relationship is linear for a constant v, that is when R and R′ are Galilean frames of reference.
In Einstein's relativity, the main difference from Galilean relativity is that space and time coordinates are intertwined, and in different inertial frames t ≠ t′.
Since space is assumed to be homogeneous, the transformation must be linear. The most general linear relationship is obtained with four constant coefficients, A, B, γ, and b:
{\displaystyle x'=\gamma x+bt}
{\displaystyle t'=Ax+Bt.}
The linear transformation becomes the Galilean transformation when γ = B = 1, b = −v and A = 0.
An object at rest in the R′ frame at position x′ = 0 moves with constant velocity v in the R frame. Hence the transformation must yield x′ = 0 if x = vt. Therefore, b = −γv and the first equation is written as
{\displaystyle x'=\gamma \left(x-vt\right).}
==== Using the principle of relativity ====
According to the principle of relativity, there is no privileged Galilean frame of reference: therefore the inverse transformation for the position from frame R′ to frame R must have the same form as the original but with the velocity in the opposite direction, in other words replacing v with −v:
{\displaystyle x=\gamma \left(x'-(-v)t'\right),}
and thus
{\displaystyle x=\gamma \left(x'+vt'\right).}
==== Determining the constants of the first equation ====
Since the speed of light is the same in all frames of reference, for the case of a light signal, the transformation must guarantee that t = x/c when t′ = x′/c.
Substituting for t and t′ in the preceding equations gives:
{\displaystyle x'=\gamma \left(1-v/c\right)x,}
{\displaystyle x=\gamma \left(1+v/c\right)x'.}
Multiplying these two equations together gives,
{\displaystyle xx'=\gamma ^{2}\left(1-v^{2}/c^{2}\right)xx'.}
At any time after t = t′ = 0, xx′ is not zero, so dividing both sides of the equation by xx′ results in
{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},}
which is called the "Lorentz factor".
When the transformation equations are required to satisfy the light signal equations in the form x = ct and x′ = ct′, by substituting the x and x'-values, the same technique produces the same expression for the Lorentz factor.
==== Determining the constants of the second equation ====
The transformation equation for time can be easily obtained by considering the special case of a light signal, again satisfying x = ct and x′ = ct′, by substituting term by term into the earlier obtained equation for the spatial coordinate
{\displaystyle x'=\gamma (x-vt),\,}
giving
{\displaystyle ct'=\gamma \left(ct-{\frac {v}{c}}x\right),}
so that
{\displaystyle t'=\gamma \left(t-{\frac {v}{c^{2}}}x\right),}
which, when identified with
{\displaystyle t'=Ax+Bt,\,}
determines the transformation coefficients A and B as
{\displaystyle A=-\gamma v/c^{2},\,}
{\displaystyle B=\gamma .\,}
So A and B are the unique constant coefficients necessary to preserve the constancy of the speed of light in the primed system of coordinates.
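A quick numeric sketch (ours) of this conclusion: with A = −γv/c² and B = γ, a light signal x = ct in R is mapped to a signal satisfying x′ = ct′ in R′, exactly as the constancy of c demands.

```python
import math

c = 1.0
v = 0.4
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
A = -gamma * v / c**2   # derived time coefficient
B = gamma               # derived time coefficient

t = 3.0
x = c * t               # a light signal in R
xp = gamma * (x - v * t)   # spatial part of the transformation
tp = A * x + B * t         # temporal part with the derived A, B
```

The residual x′ − ct′ vanishes identically, since tp = γ(t − vx/c²) equals xp/c whenever x = ct.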
=== Einstein's popular derivation ===
In his popular book Einstein derived the Lorentz transformation by arguing that there must be two non-zero coupling constants λ and μ such that
{\displaystyle {\begin{cases}x'-ct'=\lambda \left(x-ct\right)\\x'+ct'=\mu \left(x+ct\right)\,\end{cases}}}
that correspond to light traveling along the positive and negative x-axis, respectively.
For light x = ct if and only if x′ = ct′. Adding and subtracting the two equations and defining
{\displaystyle {\begin{cases}\gamma =\left(\lambda +\mu \right)/2\\b=\left(\lambda -\mu \right)/2,\,\end{cases}}}
gives
{\displaystyle {\begin{cases}x'=\gamma x-bct\\ct'=\gamma ct-bx.\,\end{cases}}}
Substituting x′ = 0 corresponding to x = vt and noting that the relative velocity is v = bc/γ, this gives
{\displaystyle {\begin{cases}x'=\gamma \left(x-vt\right)\\t'=\gamma \left(t-{\frac {v}{c^{2}}}x\right)\,\end{cases}}}
The constant γ can be evaluated by demanding c2t2 − x2 = c2t′2 − x′2 as per standard configuration.
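The evaluation of γ can be sketched numerically (our check, not Einstein's text): multiplying the two coupling equations gives x′² − c²t′² = λμ(x² − c²t²), so invariance of the interval forces λμ = 1, which is equivalent to γ² − b² = 1.

```python
import math

c = 1.0
lam = 2.0
mu = 1.0 / lam           # the invariance condition lambda * mu = 1
gamma = (lam + mu) / 2
b = (lam - mu) / 2

x, t = 1.0, 3.0          # arbitrary event
xp = gamma * x - b * c * t
ctp = gamma * c * t - b * x
gap = abs((c * t)**2 - x**2 - (ctp**2 - xp**2))
```

With λμ = 1 the interval is preserved exactly, and γ² − b² = (λ + μ)²/4 − (λ − μ)²/4 = λμ = 1.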
== Using group theory ==
=== From group postulates ===
Following is a classical derivation (see, e.g., [1] and references therein) based on group postulates and isotropy of the space.
Coordinate transformations as a group
The coordinate transformations between inertial frames form a group (called the proper Lorentz group) with the group operation being the composition of transformations (performing one transformation after another). Indeed, the four group axioms are satisfied:
Closure: the composition of two transformations is a transformation: consider a composition of transformations from the inertial frame K to inertial frame K′, (denoted as K → K′), and then from K′ to inertial frame K′′, [K′ → K′′], there exists a transformation, [K → K′] [K′ → K′′], directly from an inertial frame K to inertial frame K′′.
Associativity: the transformations ( [K → K′] [K′ → K′′] ) [K′′ → K′′′] and [K → K′] ( [K′ → K′′] [K′′ → K′′′] ) are identical.
Identity element: there is an identity element, a transformation K → K.
Inverse element: for any transformation K → K′ there exists an inverse transformation K′ → K.
Transformation matrices consistent with group axioms
Consider two inertial frames, K and K′, the latter moving with velocity v with respect to the former. By rotations and shifts we can choose the x and x′ axes along the relative velocity vector and also arrange that the events (t, x) = (0,0) and (t′, x′) = (0,0) coincide. Since the velocity boost is along the x (and x′) axes, nothing happens to the perpendicular coordinates and we can omit them for brevity. Now, since the transformation we are looking for connects two inertial frames, it has to transform a linear motion in (t, x) into a linear motion in (t′, x′) coordinates. Therefore, it must be a linear transformation. The general form of a linear transformation is
{\displaystyle {\begin{bmatrix}t'\\x'\end{bmatrix}}={\begin{bmatrix}\gamma &\delta \\\beta &\alpha \end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}},}
where α, β, γ and δ are some yet unknown functions of the relative velocity v.
Let us now consider the motion of the origin of the frame K′. In the K′ frame it has coordinates (t′, x′ = 0), while in the K frame it has coordinates (t, x = vt). These two points are connected by the transformation
{\displaystyle {\begin{bmatrix}t'\\0\end{bmatrix}}={\begin{bmatrix}\gamma &\delta \\\beta &\alpha \end{bmatrix}}{\begin{bmatrix}t\\vt\end{bmatrix}},}
from which we get
{\displaystyle \beta =-v\alpha \,.}
Analogously, considering the motion of the origin of the frame K, we get
{\displaystyle {\begin{bmatrix}t'\\-vt'\end{bmatrix}}={\begin{bmatrix}\gamma &\delta \\\beta &\alpha \end{bmatrix}}{\begin{bmatrix}t\\0\end{bmatrix}},}
from which we get
{\displaystyle \beta =-v\gamma \,.}
Combining these two gives α = γ, and the transformation matrix simplifies to
{\displaystyle {\begin{bmatrix}t'\\x'\end{bmatrix}}={\begin{bmatrix}\gamma &\delta \\-v\gamma &\gamma \end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}}.}
Now consider the group postulate inverse element. There are two ways we can go from the K′ coordinate system to the K coordinate system. The first is to apply the inverse of the transform matrix to the K′ coordinates:
{\displaystyle {\begin{bmatrix}t\\x\end{bmatrix}}={\frac {1}{\gamma ^{2}+v\delta \gamma }}{\begin{bmatrix}\gamma &-\delta \\v\gamma &\gamma \end{bmatrix}}{\begin{bmatrix}t'\\x'\end{bmatrix}}.}
The second is, considering that the K′ coordinate system is moving at a velocity v relative to the K coordinate system, the K coordinate system must be moving at a velocity −v relative to the K′ coordinate system. Replacing v with −v in the transformation matrix gives:
{\displaystyle {\begin{bmatrix}t\\x\end{bmatrix}}={\begin{bmatrix}\gamma (-v)&\delta (-v)\\v\gamma (-v)&\gamma (-v)\end{bmatrix}}{\begin{bmatrix}t'\\x'\end{bmatrix}},}
Now the function γ cannot depend upon the direction of v, because it is the factor that defines the relativistic contraction and time dilation, and these (in our isotropic world) cannot depend upon the direction of v. Thus, γ(−v) = γ(v), and comparing the two matrices, we get
{\displaystyle \gamma ^{2}+v\delta \gamma =1.}
According to the closure group postulate a composition of two coordinate transformations is also a coordinate transformation, thus the product of two of our matrices should also be a matrix of the same form. Transforming K to K′ and from K′ to K′′ gives the following transformation matrix to go from K to K′′:
{\displaystyle {\begin{aligned}{\begin{bmatrix}t''\\x''\end{bmatrix}}&={\begin{bmatrix}\gamma (v')&\delta (v')\\-v'\gamma (v')&\gamma (v')\end{bmatrix}}{\begin{bmatrix}\gamma (v)&\delta (v)\\-v\gamma (v)&\gamma (v)\end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}}\\&={\begin{bmatrix}\gamma (v')\gamma (v)-v\delta (v')\gamma (v)&\gamma (v')\delta (v)+\delta (v')\gamma (v)\\-(v'+v)\gamma (v')\gamma (v)&-v'\gamma (v')\delta (v)+\gamma (v')\gamma (v)\end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}}.\end{aligned}}}
In the original transform matrix, the main diagonal elements are both equal to γ, hence, for the combined transform matrix above to be of the same form as the original transform matrix, the main diagonal elements must also be equal. Equating these elements and rearranging gives:
{\displaystyle {\begin{aligned}\gamma (v')\gamma (v)-v\delta (v')\gamma (v)&=-v'\gamma (v')\delta (v)+\gamma (v')\gamma (v)\\v\delta (v')\gamma (v)&=v'\gamma (v')\delta (v)\\{\frac {\delta (v)}{v\gamma (v)}}&={\frac {\delta (v')}{v'\gamma (v')}}.\end{aligned}}}
The denominator will be nonzero for nonzero v, because γ(v) is always nonzero;
{\displaystyle \gamma ^{2}+v\delta \gamma =1.}
If v = 0 we obtain the identity matrix, which coincides with setting v = 0 in the matrix obtained at the end of this derivation for the other values of v, making the final matrix valid for all nonnegative v.
For nonzero v, this combination of functions must be a universal constant, one and the same for all inertial frames. Define this constant as δ(v)/(v γ(v)) = κ, where κ has the dimension of 1/v². Solving
{\displaystyle 1=\gamma ^{2}+v\delta \gamma =\gamma ^{2}(1+\kappa v^{2})}
we finally get
{\displaystyle \gamma =1/{\sqrt {1+\kappa v^{2}}}}
and thus the transformation matrix, consistent with the group axioms, is given by
{\displaystyle {\begin{bmatrix}t'\\x'\end{bmatrix}}={\frac {1}{\sqrt {1+\kappa v^{2}}}}{\begin{bmatrix}1&\kappa v\\-v&1\end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}}.}
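The closure postulate can be checked directly. In this sketch (our own verification), composing two such matrices yields a matrix of the same form whose velocity parameter is (v + v′)/(1 − κvv′); for κ = −1/c² this is exactly the relativistic velocity addition law.

```python
import math

def kappa_matrix(v, kappa):
    """The 2x2 (t, x) transformation matrix consistent with the group axioms."""
    g = 1.0 / math.sqrt(1.0 + kappa * v * v)
    return [[g, g * kappa * v], [-g * v, g]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c = 1.0
kappa = -1.0 / c**2
v1, v2 = 0.5, 0.4
composed = matmul(kappa_matrix(v2, kappa), kappa_matrix(v1, kappa))

v12 = (v1 + v2) / (1.0 - kappa * v1 * v2)   # = (v1 + v2)/(1 + v1*v2/c^2) here
expected = kappa_matrix(v12, kappa)
err = max(abs(composed[i][j] - expected[i][j])
          for i in range(2) for j in range(2))
```

Note that v12 ≈ 0.75c < c: the composition of two subluminal boosts is again subluminal, which is the group-theoretic face of the speed limit.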
If κ > 0, then there would be transformations (with κv2 ≫ 1) which transform time into a spatial coordinate and vice versa. We exclude this on physical grounds, because time can only run in the positive direction. Thus two types of transformation matrices are consistent with group postulates:
Galilean transformations
If κ = 0 then we get the Galilean-Newtonian kinematics with the Galilean transformation,
{\displaystyle {\begin{bmatrix}t'\\x'\end{bmatrix}}={\begin{bmatrix}1&0\\-v&1\end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}}\;,}
where time is absolute, t′ = t, and the relative velocity v of two inertial frames is not limited.
Lorentz transformations
If κ < 0, then we set
{\displaystyle c=1/{\sqrt {-\kappa }}}
which becomes the invariant speed, the speed of light in vacuum. This yields κ = −1/c2 and thus we get special relativity with Lorentz transformation
{\displaystyle {\begin{bmatrix}t'\\x'\end{bmatrix}}={\frac {1}{\sqrt {1-{v^{2} \over c^{2}}}}}{\begin{bmatrix}1&{-v \over c^{2}}\\-v&1\end{bmatrix}}{\begin{bmatrix}t\\x\end{bmatrix}}\;,}
where the speed of light is a finite universal constant determining the highest possible relative velocity between inertial frames.
If v ≪ c the Galilean transformation is a good approximation to the Lorentz transformation.
Only experiment can answer the question which of the two possibilities, κ = 0 or κ < 0, is realized in our world. The experiments measuring the speed of light, first performed by the Danish astronomer Ole Rømer, show that it is finite, and the Michelson–Morley experiment showed that it is an absolute speed, and thus that κ < 0.
=== Boost from generators ===
Using rapidity ϕ to parametrize the Lorentz transformation, the boost in the x direction is
{\displaystyle {\begin{bmatrix}ct'\\x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\cosh \phi &-\sinh \phi &0&0\\-\sinh \phi &\cosh \phi &0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}{\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}},}
likewise for a boost in the y-direction
{\displaystyle {\begin{bmatrix}ct'\\x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\cosh \phi &0&-\sinh \phi &0\\0&1&0&0\\-\sinh \phi &0&\cosh \phi &0\\0&0&0&1\\\end{bmatrix}}{\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}},}
and the z-direction
{\displaystyle {\begin{bmatrix}ct'\\x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\cosh \phi &0&0&-\sinh \phi \\0&1&0&0\\0&0&1&0\\-\sinh \phi &0&0&\cosh \phi \\\end{bmatrix}}{\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}}\,.}
Here ex, ey, ez are the Cartesian basis vectors, a set of mutually perpendicular unit vectors along their indicated directions. If one frame is boosted with velocity v relative to another, it is convenient to introduce a unit vector n = v/v = β/β in the direction of relative motion. The general boost is
{\displaystyle {\begin{bmatrix}c\,t'\\x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\cosh \phi &-n_{x}\sinh \phi &-n_{y}\sinh \phi &-n_{z}\sinh \phi \\-n_{x}\sinh \phi &1+(\cosh \phi -1)n_{x}^{2}&(\cosh \phi -1)n_{x}n_{y}&(\cosh \phi -1)n_{x}n_{z}\\-n_{y}\sinh \phi &(\cosh \phi -1)n_{y}n_{x}&1+(\cosh \phi -1)n_{y}^{2}&(\cosh \phi -1)n_{y}n_{z}\\-n_{z}\sinh \phi &(\cosh \phi -1)n_{z}n_{x}&(\cosh \phi -1)n_{z}n_{y}&1+(\cosh \phi -1)n_{z}^{2}\\\end{bmatrix}}{\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}}\,.}
Notice that the matrix depends on the direction of the relative motion as well as the rapidity, three numbers in all (two for the direction, one for the rapidity).
We can cast each of the boost matrices in another form as follows. First consider the boost in the x direction. The Taylor expansion of the boost matrix about ϕ = 0 is
{\displaystyle B(\mathbf {e} _{x},\phi )=\sum _{n=0}^{\infty }{\frac {\phi ^{n}}{n!}}\left.{\frac {\partial ^{n}B(\mathbf {e} _{x},\phi )}{\partial \phi ^{n}}}\right|_{\phi =0}}
where the derivatives of the matrix with respect to ϕ are given by differentiating each entry of the matrix separately, and the notation |ϕ = 0 indicates ϕ is set to zero after the derivatives are evaluated. Expanding to first order gives the infinitesimal transformation
{\displaystyle B(\mathbf {e} _{x},\phi )=I+\phi \left.{\frac {\partial B}{\partial \phi }}\right|_{\phi =0}={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}-\phi {\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}}}
which is valid if ϕ is small (hence ϕ2 and higher powers are negligible), and can be interpreted as no boost (the first term I is the 4×4 identity matrix), followed by a small boost. The matrix
{\displaystyle K_{x}={\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}}}
is the generator of the boost in the x direction, so the infinitesimal boost is
{\displaystyle B(\mathbf {e} _{x},\phi )=I-\phi K_{x}}
Now, ϕ is small, so dividing by a positive integer N gives an even smaller increment of rapidity ϕ/N, and N of these infinitesimal boosts give back the boost with rapidity ϕ,
{\displaystyle B(\mathbf {e} _{x},\phi )=\left(I-{\frac {\phi K_{x}}{N}}\right)^{N}}
In the limit of an infinite number of infinitely small steps, we obtain the finite boost transformation
{\displaystyle B(\mathbf {e} _{x},\phi )=\lim _{N\to \infty }\left(I-{\frac {\phi K_{x}}{N}}\right)^{N}=e^{-\phi K_{x}}}
which is the limit definition of the exponential due to Leonhard Euler, and is now true for any ϕ.
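The exponential relation can be verified numerically. This sketch (our own check) sums the power series for exp(−ϕKx), truncated after 24 terms, and compares it with the x-boost matrix whose (ct, x) block is [[cosh ϕ, −sinh ϕ], [−sinh ϕ, cosh ϕ]].

```python
import math

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I4 = [[float(i == j) for j in range(4)] for i in range(4)]
Kx = [[0.0, 1.0, 0.0, 0.0],
      [1.0, 0.0, 0.0, 0.0],
      [0.0, 0.0, 0.0, 0.0],
      [0.0, 0.0, 0.0, 0.0]]

phi = 0.5
# exp(-phi*Kx) = sum_n (-phi*Kx)^n / n!, accumulated term by term
series = [row[:] for row in I4]
term = [row[:] for row in I4]
for n in range(1, 25):
    term = matmul(term, [[-phi * e for e in row] for row in Kx])
    term = [[e / n for e in row] for row in term]   # running factor 1/n!
    series = [[series[i][j] + term[i][j] for j in range(4)] for i in range(4)]

ch, sh = math.cosh(phi), math.sinh(phi)
boost = [[ch, -sh, 0.0, 0.0],
         [-sh, ch, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]
err = max(abs(series[i][j] - boost[i][j]) for i in range(4) for j in range(4))
```

The series collapses to cosh/sinh entries because Kx² is the identity on the (ct, x) block: even powers feed cosh ϕ, odd powers feed −sinh ϕ.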
Repeating the process for the boosts in the y and z directions obtains the other generators
{\displaystyle K_{y}={\begin{bmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{bmatrix}}\,,\quad K_{z}={\begin{bmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{bmatrix}}}
and the boosts are
{\displaystyle B(\mathbf {e} _{y},\phi )=e^{-\phi K_{y}}\,,\quad B(\mathbf {e} _{z},\phi )=e^{-\phi K_{z}}\,.}
For any direction, the infinitesimal transformation is (small ϕ and expansion to first order)
{\displaystyle B(\mathbf {n} ,\phi )=I+\phi \left.{\frac {\partial B}{\partial \phi }}\right|_{\phi =0}={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}-\phi {\begin{bmatrix}0&n_{x}&n_{y}&n_{z}\\n_{x}&0&0&0\\n_{y}&0&0&0\\n_{z}&0&0&0\end{bmatrix}}}
where
{\displaystyle {\begin{bmatrix}0&n_{x}&n_{y}&n_{z}\\n_{x}&0&0&0\\n_{y}&0&0&0\\n_{z}&0&0&0\end{bmatrix}}=n_{x}K_{x}+n_{y}K_{y}+n_{z}K_{z}=\mathbf {n} \cdot \mathbf {K} }
is the generator of the boost in direction n. It is the full boost generator, a vector of matrices K = (Kx, Ky, Kz), projected into the direction of the boost n. The infinitesimal boost is
{\displaystyle B(\mathbf {n} ,\phi )=I-\phi (\mathbf {n} \cdot \mathbf {K} )}
Then in the limit of an infinite number of infinitely small steps, we obtain the finite boost transformation
{\displaystyle B(\mathbf {n} ,\phi )=\lim _{N\to \infty }\left(I-{\frac {\phi (\mathbf {n} \cdot \mathbf {K} )}{N}}\right)^{N}=e^{-\phi (\mathbf {n} \cdot \mathbf {K} )}}
which is now true for any ϕ. Expanding the matrix exponential of −ϕ(n ⋅ K) in its power series
{\displaystyle e^{-\phi \mathbf {n} \cdot \mathbf {K} }=\sum _{n=0}^{\infty }{\frac {1}{n!}}(-\phi \mathbf {n} \cdot \mathbf {K} )^{n}}
we now need the powers of the generator. The square is
{\displaystyle (\mathbf {n} \cdot \mathbf {K} )^{2}={\begin{bmatrix}1&0&0&0\\0&n_{x}^{2}&n_{x}n_{y}&n_{x}n_{z}\\0&n_{y}n_{x}&n_{y}^{2}&n_{y}n_{z}\\0&n_{z}n_{x}&n_{z}n_{y}&n_{z}^{2}\end{bmatrix}}}
but the cube (n ⋅ K)3 returns to (n ⋅ K), and as always the zeroth power is the 4×4 identity, (n ⋅ K)0 = I. In general the odd powers n = 1, 3, 5, ... are
{\displaystyle (\mathbf {n} \cdot \mathbf {K} )^{n}=(\mathbf {n} \cdot \mathbf {K} )}
while the even powers n = 2, 4, 6, ... are
{\displaystyle (\mathbf {n} \cdot \mathbf {K} )^{n}=(\mathbf {n} \cdot \mathbf {K} )^{2}}
therefore the explicit form of the boost matrix depends only on the generator and its square. Splitting the power series into an odd power series and an even power series, using the odd and even powers of the generator, and the Taylor series of sinh ϕ and cosh ϕ about ϕ = 0 obtains a more compact but detailed form of the boost matrix
{\displaystyle {\begin{aligned}e^{-\phi \mathbf {n} \cdot \mathbf {K} }&=-\sum _{n=1,3,5\ldots }^{\infty }{\frac {1}{n!}}\phi ^{n}(\mathbf {n} \cdot \mathbf {K} )^{n}+\sum _{n=0,2,4\ldots }^{\infty }{\frac {1}{n!}}\phi ^{n}(\mathbf {n} \cdot \mathbf {K} )^{n}\\&=-\left[\phi +{\frac {\phi ^{3}}{3!}}+{\frac {\phi ^{5}}{5!}}+\cdots \right](\mathbf {n} \cdot \mathbf {K} )+I+\left[-1+1+{\frac {1}{2!}}\phi ^{2}+{\frac {1}{4!}}\phi ^{4}+{\frac {1}{6!}}\phi ^{6}+\cdots \right](\mathbf {n} \cdot \mathbf {K} )^{2}\\&=-\sinh \phi (\mathbf {n} \cdot \mathbf {K} )+I+(-1+\cosh \phi )(\mathbf {n} \cdot \mathbf {K} )^{2}\end{aligned}}}
where 0 = −1 + 1 is introduced for the even power series to complete the Taylor series for cosh ϕ. The boost is similar to Rodrigues' rotation formula,
{\displaystyle B(\mathbf {n} ,\phi )=e^{-\phi \mathbf {n} \cdot \mathbf {K} }=I-\sinh \phi (\mathbf {n} \cdot \mathbf {K} )+(\cosh \phi -1)(\mathbf {n} \cdot \mathbf {K} )^{2}\,.}
Negating the rapidity in the exponential gives the inverse transformation matrix,
{\displaystyle B(\mathbf {n} ,-\phi )=e^{\phi \mathbf {n} \cdot \mathbf {K} }=I+\sinh \phi (\mathbf {n} \cdot \mathbf {K} )+(\cosh \phi -1)(\mathbf {n} \cdot \mathbf {K} )^{2}\,.}
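Both closed-form expressions, and the power pattern of the generator used to derive them, can be sanity-checked numerically. In this NumPy sketch the direction n and rapidity ϕ are arbitrary test values:

```python
import numpy as np

# Unit direction n and the projected generator n·K from the text.
n = np.array([0.6, 0.0, 0.8])
phi = 1.2
nK = np.zeros((4, 4))
nK[0, 1:] = n
nK[1:, 0] = n
I = np.eye(4)

# Power pattern of the generator: odd powers give n·K, even powers (n·K)^2.
nK2 = nK @ nK
print(np.allclose(nK2 @ nK, nK))      # → True   ((n·K)^3 == n·K)

# Closed-form boost and its inverse from the two formulas above.
B     = I - np.sinh(phi) * nK + (np.cosh(phi) - 1.0) * nK2
B_inv = I + np.sinh(phi) * nK + (np.cosh(phi) - 1.0) * nK2

print(np.isclose(B[0, 0], np.cosh(phi)))         # → True  (time-time entry)
print(np.allclose(B[0, 1:], -np.sinh(phi) * n))  # → True  (time-space entries)
print(np.allclose(B @ B_inv, I))                 # → True  (opposite rapidities cancel)
```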
In quantum mechanics, relativistic quantum mechanics, and quantum field theory, a different convention is used for the boost generators; all of the boost generators are multiplied by a factor of the imaginary unit i = √−1.
== From experiments ==
Howard Percy Robertson and others showed that the Lorentz transformation can also be derived empirically. In order to achieve this, it's necessary to write down coordinate transformations that include experimentally testable parameters. For instance, let there be given a single "preferred" inertial frame (X, Y, Z, T) in which the speed of light is constant, isotropic, and independent of the velocity of the source. It is also assumed that Einstein synchronization and synchronization by slow clock transport are equivalent in this frame. Then assume another frame (x, y, z, t) in relative motion, in which clocks and rods have the same internal constitution as in the preferred frame. The following relations, however, are left undefined:
a(v): differences in time measurements,
b(v): differences in measured longitudinal lengths,
d(v): differences in measured transverse lengths,
ε(v): depends on the clock synchronization procedure in the moving frame,
then the transformation formulas (assumed to be linear) between those frames are given by:
{\displaystyle {\begin{aligned}t&=a(v)T+\varepsilon (v)x\\x&=b(v)(X-vT)\\y&=d(v)Y\\z&=d(v)Z\end{aligned}}}
Since ε(v) depends on the synchronization convention and is not determined experimentally, it obtains the value −v/c² by using Einstein synchronization in both frames. The ratio between b(v) and d(v) is determined by the Michelson–Morley experiment, the ratio between a(v) and b(v) is determined by the Kennedy–Thorndike experiment, and a(v) alone is determined by the Ives–Stilwell experiment. In this way, they have been determined with great precision to 1/a(v) = b(v) = γ and d(v) = 1, which converts the above transformation into the Lorentz transformation.
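The parameter bookkeeping can be verified numerically. This sketch (with c = 1 and an arbitrary test velocity and event) plugs the experimentally determined values into the transformation above and compares with the standard Lorentz transformation:

```python
import numpy as np

c = 1.0
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)

# Experimentally determined values: 1/a = b = gamma, d = 1,
# and epsilon = -v/c^2 under Einstein synchronization.
a, b, d, eps = 1.0 / gamma, gamma, 1.0, -v / c**2

# Transform an arbitrary event (T, X, Y, Z) of the preferred frame.
T, X, Y, Z = 2.0, 1.5, 0.3, -0.7
x = b * (X - v * T)
t = a * T + eps * x
y, z = d * Y, d * Z

# Compare with the standard Lorentz transformation.
print(np.isclose(t, gamma * (T - v * X / c**2)))  # → True
print(np.isclose(x, gamma * (X - v * T)))         # → True
```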
== See also ==
== Notes ==
== References ==
Greiner, W.; Bromley, D. A. (2000). Relativistic Quantum Mechanics (3rd ed.). Springer. ISBN 9783540674573.
Landau, L.D.; Lifshitz, E.M. (2002) [1939]. The Classical Theory of Fields. Course of Theoretical Physics. Vol. 2 (4th ed.). Butterworth–Heinemann. ISBN 0-7506-2768-9.
Weinberg, S. (2002), The Quantum Theory of Fields, vol. 1, Cambridge University Press, ISBN 0-521-55001-7
In physics, geometrothermodynamics (GTD) is a formalism developed in 2007 by Hernando Quevedo to describe the properties of thermodynamic systems in terms of concepts of differential geometry.
Consider a thermodynamic system in the framework of classical equilibrium thermodynamics. The states of thermodynamic equilibrium are considered as points of an abstract equilibrium space in which a Riemannian metric can be introduced in several ways. In particular, one can introduce Hessian metrics like the Fisher information metric, the Weinhold metric, the Ruppeiner metric and others, whose components are calculated as the Hessian of a particular thermodynamic potential.
Another possibility is to introduce metrics which are independent of the thermodynamic potential, a property which is shared by all thermodynamic systems in classical thermodynamics. Since a change of thermodynamic potential is equivalent to a Legendre transformation, and Legendre transformations do not act in the equilibrium space, it is necessary to introduce an auxiliary space to correctly handle the Legendre transformations. This is the so-called thermodynamic phase space. If the phase space is equipped with a Legendre invariant Riemannian metric, a smooth map can be introduced that induces a thermodynamic metric in the equilibrium manifold. The thermodynamic metric can then be used with different thermodynamic potentials without changing the geometric properties of the equilibrium manifold. One expects the geometric properties of the equilibrium manifold to be related to the macroscopic physical properties.
The details of this relation can be summarized in three main points:
Curvature is a measure of the thermodynamical interaction.
Curvature singularities correspond to phase transitions.
Thermodynamic geodesics correspond to quasi-static processes.
== Geometric aspects ==
The main ingredient of GTD is a (2n + 1)-dimensional manifold 𝒯 with coordinates Z^A = {Φ, E^a, I^a}, where Φ is an arbitrary thermodynamic potential, E^a, a = 1, 2, …, n, are the extensive variables, and I^a the intensive variables. It is also possible to introduce in a canonical manner the fundamental one-form
{\displaystyle \Theta =d\Phi -\delta _{ab}I^{a}dE^{b}}
(summation over repeated indices) with δ_ab = diag(+1, …, +1), which satisfies the condition Θ ∧ (dΘ)^n ≠ 0, where n is the number of thermodynamic degrees of freedom of the system, and is invariant with respect to Legendre transformations
{\displaystyle \{Z^{A}\}\longrightarrow \{{\widetilde {Z}}^{A}\}=\{{\tilde {\Phi }},{\tilde {E}}^{a},{\tilde {I}}^{a}\}\ ,\quad \Phi ={\tilde {\Phi }}-\delta _{kl}{\tilde {E}}^{k}{\tilde {I}}^{l},\quad E^{i}=-{\tilde {I}}^{i},\quad E^{j}={\tilde {E}}^{j},\quad I^{i}={\tilde {E}}^{i},\quad I^{j}={\tilde {I}}^{j}\ ,}
where i ∪ j is any disjoint decomposition of the set of indices {1, …, n}, and k, l = 1, …, i. In particular, for i = {1, …, n} and i = ∅ we obtain the total Legendre transformation and the identity, respectively.
It is also assumed that in 𝒯 there exists a metric G which is also invariant with respect to Legendre transformations. The triad (𝒯, Θ, G) defines a Riemannian contact manifold which is called the thermodynamic phase space (phase manifold). The space of thermodynamic equilibrium states (equilibrium manifold) is an n-dimensional Riemannian submanifold ℰ ⊂ 𝒯 induced by a smooth map φ : ℰ → 𝒯, i.e. φ : {E^a} ↦ {Φ, E^a, I^a}, with Φ = Φ(E^a) and I^a = I^a(E^a), such that
{\displaystyle \varphi ^{*}(\Theta )=\varphi ^{*}(d\Phi -\delta _{ab}I^{a}dE^{b})=0}
holds, where φ* is the pullback of φ. The manifold ℰ is naturally equipped with the Riemannian metric g = φ*(G). The purpose of GTD is to demonstrate that the geometric properties of ℰ are related to the thermodynamic properties of a system with fundamental thermodynamic equation Φ = Φ(E^a).
The condition of invariance with respect to total Legendre transformations leads to the metrics
{\displaystyle G^{I}=(d\Phi -\delta _{ab}I^{a}dE^{b})^{2}+\Lambda \,(\xi _{ab}E^{a}I^{b})\left(\delta _{cd}dE^{c}dI^{d}\right)\ ,\quad \delta _{ab}={\rm {diag}}(1,\ldots ,1)}
{\displaystyle G^{II}=(d\Phi -\delta _{ab}I^{a}dE^{b})^{2}+\Lambda \,(\xi _{ab}E^{a}I^{b})\left(\eta _{cd}dE^{c}dI^{d}\right)\ ,\quad \eta _{ab}={\rm {diag}}(-1,1,\ldots ,1)}
where ξ_ab is a constant diagonal matrix that can be expressed in terms of δ_ab and η_ab, and Λ is an arbitrary Legendre invariant function of Z^A. The metrics G^I and G^II have been used to describe thermodynamic systems with first and second order phase transitions, respectively. The most general metric which is invariant with respect to partial Legendre transformations is
{\displaystyle G^{III}=(d\Phi -\delta _{ab}I^{a}dE^{b})^{2}+\Lambda \,(E_{a}I_{a})^{2k+1}\left(dE^{a}dI^{a}\right)\ ,\quad E_{a}=\delta _{ab}E^{b}\ ,\quad I_{a}=\delta _{ab}I^{b}\ .}
The components of the corresponding metric for the equilibrium manifold ℰ can be computed as
{\displaystyle g_{ab}={\frac {\partial Z^{A}}{\partial E^{a}}}{\frac {\partial Z^{B}}{\partial E^{b}}}G_{AB}\ .}
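To make the pullback concrete, here is a minimal numerical sketch. It assumes one degree of freedom (n = 1), the metric G^I with Λ = 1 and ξ_{11} = 1, and a toy fundamental equation Φ(E) = E²; none of these choices come from the text above:

```python
import numpy as np

# Toy setup (hypothetical): Phi(E) = E^2, so I(E) = dPhi/dE = 2E on the
# equilibrium manifold.
Phi  = lambda E: E**2
I_of = lambda E: 2.0 * E      # I = dPhi/dE
dI   = lambda E: 2.0          # dI/dE

def g_EE(E, Lam=1.0, xi=1.0):
    """Induced metric g = phi^*(G^I), computed via g_ab = (dZ^A/dE^a)(dZ^B/dE^b) G_AB."""
    I = I_of(E)
    # Jacobian of the embedding E -> Z = (Phi(E), E, I(E)):
    J = np.array([I_of(E), 1.0, dI(E)])
    # Symmetrized components G_AB of G^I in coordinates (Phi, E, I):
    # (dPhi - I dE)^2 gives the upper-left 2x2 block, and
    # Lam*(xi*E*I)*(dE dI) gives the symmetrized E-I entry.
    G = np.array([
        [1.0, -I,                          0.0],
        [-I,   I**2,                       0.5 * Lam * xi * E * I],
        [0.0,  0.5 * Lam * xi * E * I,     0.0],
    ])
    return J @ G @ J

# On the equilibrium manifold the first term of G^I vanishes, so the
# analytic result is g = Lam*xi*E*Phi'(E)*Phi''(E) = 4E^2 for this potential.
E = 1.5
print(np.isclose(g_EE(E), 4.0 * E**2))   # → True
```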
== Applications ==
GTD has been applied to describe laboratory systems like the ideal gas, van der Waals gas, the Ising model, etc., more exotic systems like black holes in different gravity theories, in the context of relativistic cosmology, and to describe chemical reactions.
== References ==
In mathematics, a vector-valued differential form on a manifold M is a differential form on M with values in a vector space V. More generally, it is a differential form with values in some vector bundle E over M. Ordinary differential forms can be viewed as R-valued differential forms.
An important case of vector-valued differential forms are Lie algebra-valued forms. (A connection form is an example of such a form.)
== Definition ==
Let M be a smooth manifold and E → M be a smooth vector bundle over M. We denote the space of smooth sections of a bundle E by Γ(E). An E-valued differential form of degree p is a smooth section of the tensor product bundle of E with Λp(T ∗M), the p-th exterior power of the cotangent bundle of M. The space of such forms is denoted by
{\displaystyle \Omega ^{p}(M,E)=\Gamma (E\otimes \Lambda ^{p}T^{*}M).}
Because Γ is a strong monoidal functor, this can also be interpreted as
{\displaystyle \Gamma (E\otimes \Lambda ^{p}T^{*}M)=\Gamma (E)\otimes _{\Omega ^{0}(M)}\Gamma (\Lambda ^{p}T^{*}M)=\Gamma (E)\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M),}
where the latter two tensor products are the tensor product of modules over the ring Ω0(M) of smooth R-valued functions on M (see the seventh example here). By convention, an E-valued 0-form is just a section of the bundle E. That is,
{\displaystyle \Omega ^{0}(M,E)=\Gamma (E).\,}
Equivalently, an E-valued differential form can be defined as a bundle morphism
{\displaystyle TM\otimes \cdots \otimes TM\to E}
which is totally skew-symmetric.
Let V be a fixed vector space. A V-valued differential form of degree p is a differential form of degree p with values in the trivial bundle M × V. The space of such forms is denoted Ωp(M, V). When V = R one recovers the definition of an ordinary differential form. If V is finite-dimensional, then one can show that the natural homomorphism
{\displaystyle \Omega ^{p}(M)\otimes _{\mathbb {R} }V\to \Omega ^{p}(M,V),}
where the first tensor product is of vector spaces over R, is an isomorphism.
== Operations on vector-valued forms ==
=== Pullback ===
One can define the pullback of vector-valued forms by smooth maps just as for ordinary forms. The pullback of an E-valued form on N by a smooth map φ : M → N is an (φ*E)-valued form on M, where φ*E is the pullback bundle of E by φ.
The formula is given just as in the ordinary case. For any E-valued p-form ω on N the pullback φ*ω is given by
{\displaystyle (\varphi ^{*}\omega )_{x}(v_{1},\cdots ,v_{p})=\omega _{\varphi (x)}(\mathrm {d} \varphi _{x}(v_{1}),\cdots ,\mathrm {d} \varphi _{x}(v_{p})).}
=== Wedge product ===
Just as for ordinary differential forms, one can define a wedge product of vector-valued forms. The wedge product of an E1-valued p-form with an E2-valued q-form is naturally an (E1⊗E2)-valued (p+q)-form:
{\displaystyle \wedge :\Omega ^{p}(M,E_{1})\times \Omega ^{q}(M,E_{2})\to \Omega ^{p+q}(M,E_{1}\otimes E_{2}).}
The definition is just as for ordinary forms with the exception that real multiplication is replaced with the tensor product:
{\displaystyle (\omega \wedge \eta )(v_{1},\cdots ,v_{p+q})={\frac {1}{p!q!}}\sum _{\sigma \in S_{p+q}}\operatorname {sgn}(\sigma )\omega (v_{\sigma (1)},\cdots ,v_{\sigma (p)})\otimes \eta (v_{\sigma (p+1)},\cdots ,v_{\sigma (p+q)}).}
In particular, the wedge product of an ordinary (R-valued) p-form with an E-valued q-form is naturally an E-valued (p+q)-form (since the tensor product of E with the trivial bundle M × R is naturally isomorphic to E).
In terms of local frames {eα} and {lβ} for E1 and E2 respectively, the wedge product of an E1-valued p-form ω = ωα eα, and an E2-valued q-form η = ηβ lβ is
{\displaystyle \omega \wedge \eta =\sum _{\alpha ,\beta }(\omega ^{\alpha }\wedge \eta ^{\beta })(e_{\alpha }\otimes l_{\beta }),}
where ωα ∧ ηβ is the ordinary wedge product of R-valued forms.
For ω ∈ Ωp(M) and η ∈ Ωq(M, E) one has the usual commutativity relation:
{\displaystyle \omega \wedge \eta =(-1)^{pq}\eta \wedge \omega .}
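The wedge-product formula and this commutativity relation can be illustrated numerically by representing forms as multilinear functions whose values are combined with an outer product. A NumPy sketch (the specific forms and vectors are arbitrary test data):

```python
import math
import numpy as np
from itertools import permutations

def perm_sign(perm):
    # Sign of a permutation, via its inversion count.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def wedge(omega, p, eta, q):
    """Wedge of a p-form and a q-form; values are combined with
    np.multiply.outer (the tensor product of the value spaces),
    following the 1/(p! q!) antisymmetrization formula above."""
    def result(*v):
        total = 0
        for s in permutations(range(p + q)):
            term = np.multiply.outer(omega(*(v[i] for i in s[:p])),
                                     eta(*(v[i] for i in s[p:])))
            total = total + perm_sign(s) * term
        return total / (math.factorial(p) * math.factorial(q))
    return result

# An ordinary (R-valued) 1-form and an R^2-valued 1-form on R^3.
om = lambda v: v[0]
et = lambda v: np.array([v[1], v[2]])

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([-1.0, 0.5, 4.0])
lhs = wedge(om, 1, et, 1)(v1, v2)
rhs = (-1) ** (1 * 1) * wedge(et, 1, om, 1)(v1, v2)
print(np.allclose(lhs, rhs))   # → True
```

Here R ⊗ E is silently identified with E, as the text notes for the trivial bundle factor.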
In general, the wedge product of two E-valued forms is not another E-valued form, but rather an (E⊗E)-valued form. However, if E is an algebra bundle (i.e. a bundle of algebras rather than just vector spaces) one can compose with multiplication in E to obtain an E-valued form. If E is a bundle of commutative, associative algebras then, with this modified wedge product, the set of all E-valued differential forms
{\displaystyle \Omega (M,E)=\bigoplus _{p=0}^{\dim M}\Omega ^{p}(M,E)}
becomes a graded-commutative associative algebra. If the fibers of E are not commutative then Ω(M,E) will not be graded-commutative.
=== Exterior derivative ===
For any vector space V there is a natural exterior derivative on the space of V-valued forms. This is just the ordinary exterior derivative acting component-wise relative to any basis of V. Explicitly, if {eα} is a basis for V then the differential of a V-valued p-form ω = ωαeα is given by
{\displaystyle d\omega =(d\omega ^{\alpha })e_{\alpha }.\,}
The exterior derivative on V-valued forms is completely characterized by the usual relations:
{\displaystyle {\begin{aligned}&d(\omega +\eta )=d\omega +d\eta \\&d(\omega \wedge \eta )=d\omega \wedge \eta +(-1)^{p}\,\omega \wedge d\eta \qquad (p=\deg \omega )\\&d(d\omega )=0.\end{aligned}}}
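Since the exterior derivative acts component-wise, d(dω) = 0 follows from the symmetry of mixed partial derivatives in each component. A small SymPy check on an R²-valued 0-form over R³ (the particular component functions are arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]

# A V-valued 0-form on R^3 with V = R^2: just a pair of functions.
f = sp.Matrix([x*y + z, sp.sin(x)*z**2])

# Componentwise exterior derivative: d acts on each R-valued component,
# giving the components (df)_{a,i} = d f_a / d x_i.
df = sp.Matrix([[sp.diff(f[a], X[i]) for i in range(3)] for a in range(2)])

# d(df) has components d_i (df)_{a,j} - d_j (df)_{a,i}, which all vanish
# by the symmetry of mixed partial derivatives:
ddf = [sp.simplify(sp.diff(df[a, j], X[i]) - sp.diff(df[a, i], X[j]))
       for a in range(2) for i in range(3) for j in range(3)]
print(all(c == 0 for c in ddf))   # → True
```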
More generally, the above remarks apply to E-valued forms where E is any flat vector bundle over M (i.e. a vector bundle whose transition functions are constant). The exterior derivative is defined as above on any local trivialization of E.
If E is not flat then there is no natural notion of an exterior derivative acting on E-valued forms. What is needed is a choice of connection on E. A connection on E is a linear differential operator taking sections of E to E-valued one-forms:
{\displaystyle \nabla :\Omega ^{0}(M,E)\to \Omega ^{1}(M,E).}
If E is equipped with a connection ∇ then there is a unique covariant exterior derivative
{\displaystyle d_{\nabla }:\Omega ^{p}(M,E)\to \Omega ^{p+1}(M,E)}
extending ∇. The covariant exterior derivative is characterized by linearity and the equation
{\displaystyle d_{\nabla }(\omega \wedge \eta )=d_{\nabla }\omega \wedge \eta +(-1)^{p}\,\omega \wedge d\eta }
where ω is an E-valued p-form and η is an ordinary q-form. In general, one need not have d∇² = 0. In fact, this happens if and only if the connection ∇ is flat (i.e. has vanishing curvature).
== Basic or tensorial forms on principal bundles ==
Let E → M be a smooth vector bundle of rank k over M and let π : F(E) → M be the (associated) frame bundle of E, which is a principal GLk(R) bundle over M. The pullback of E by π is canonically isomorphic to F(E) ×ρ Rk via the inverse of [u, v] →u(v), where ρ is the standard representation. Therefore, the pullback by π of an E-valued form on M determines an Rk-valued form on F(E). It is not hard to check that this pulled back form is right-equivariant with respect to the natural action of GLk(R) on F(E) × Rk and vanishes on vertical vectors (tangent vectors to F(E) which lie in the kernel of dπ). Such vector-valued forms on F(E) are important enough to warrant special terminology: they are called basic or tensorial forms on F(E).
Let π : P → M be a (smooth) principal G-bundle and let V be a fixed vector space together with a representation ρ : G → GL(V). A basic or tensorial form on P of type ρ is a V-valued form ω on P that is equivariant and horizontal in the sense that
{\displaystyle (R_{g})^{*}\omega =\rho (g^{-1})\omega \,}
for all g ∈ G, and
{\displaystyle \omega (v_{1},\ldots ,v_{p})=0}
whenever at least one of the vi is vertical (i.e., dπ(vi) = 0).
Here Rg denotes the right action of G on P for some g ∈ G. Note that for 0-forms the second condition is vacuously true.
Example: If ρ is the adjoint representation of G on the Lie algebra, then the connection form ω satisfies the first condition (but not the second). The associated curvature form Ω satisfies both; hence Ω is a tensorial form of adjoint type. The "difference" of two connection forms is a tensorial form.
Given P and ρ as above one can construct the associated vector bundle E = P ×ρ V. Tensorial q-forms on P are in a natural one-to-one correspondence with E-valued q-forms on M. As in the case of the principal bundle F(E) above, given a q-form
{\displaystyle {\overline {\phi }}}
on M with values in E, define φ on P fiberwise by, say at u,
{\displaystyle \phi =u^{-1}\pi ^{*}{\overline {\phi }}}
where u is viewed as a linear isomorphism {\displaystyle V{\overset {\simeq }{\to }}E_{\pi (u)}=(\pi ^{*}E)_{u},v\mapsto [u,v]}. φ is then a tensorial form of type ρ. Conversely, given a tensorial form φ of type ρ, the same formula defines an E-valued form
{\displaystyle {\overline {\phi }}}
on M (cf. the Chern–Weil homomorphism.) In particular, there is a natural isomorphism of vector spaces
{\displaystyle \Gamma (M,E)\simeq \{f:P\to V|f(ug)=\rho (g)^{-1}f(u)\},\,{\overline {f}}\leftrightarrow f}.
Example: Let E be the tangent bundle of M. Then the identity bundle map idE : E → E is an E-valued one-form on M. The tautological one-form is a unique one-form on the frame bundle of E that corresponds to idE. Denoted by θ, it is a tensorial form of standard type.
Now, suppose there is a connection on P so that there is an exterior covariant differentiation D on (various) vector-valued forms on P. Through the above correspondence, D also acts on E-valued forms: define ∇ by
{\displaystyle \nabla {\overline {\phi }}={\overline {D\phi }}.}
In particular for zero-forms,
{\displaystyle \nabla :\Gamma (M,E)\to \Gamma (M,T^{*}M\otimes E)}.
This is exactly the covariant derivative for the connection on the vector bundle E.
== Examples ==
Siegel modular forms arise as vector-valued differential forms on Siegel modular varieties.
== Notes ==
== References ==
Shoshichi Kobayashi and Katsumi Nomizu (1963) Foundations of Differential Geometry, Vol. 1, Wiley Interscience.
In mathematics, specifically linear algebra, a degenerate bilinear form f (x, y ) on a vector space V is a bilinear form such that the map from V to V∗ (the dual space of V ) given by v ↦ (x ↦ f (x, v )) is not an isomorphism. An equivalent definition when V is finite-dimensional is that it has a non-trivial kernel: there exists some non-zero x in V such that f(x, y) = 0 for all y ∈ V.
== Nondegenerate forms ==
A nondegenerate or nonsingular form is a bilinear form that is not degenerate, meaning that v ↦ (x ↦ f(x, v)) is an isomorphism, or equivalently in finite dimensions, if and only if f(x, y) = 0 for all y ∈ V implies that x = 0.
== Using the determinant ==
If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero – if and only if the matrix is singular, and accordingly degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis.
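As a concrete illustration of the determinant criterion (a small NumPy example with an arbitrary rank-1 matrix):

```python
import numpy as np

# Bilinear form f(x, y) = x^T A y; it is degenerate iff det(A) == 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, hence singular/degenerate
print(np.isclose(np.linalg.det(A), 0.0))   # → True

# A non-zero kernel vector x, i.e. f(x, y) = 0 for every y:
x = np.array([2.0, -1.0])
print(np.allclose(A.T @ x, 0.0))           # → True  (x^T A = 0)
```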
== Related notions ==
If for a quadratic form Q there is a non-zero vector v ∈ V such that Q(v) = 0, then Q is an isotropic quadratic form. If Q has the same sign for all non-zero vectors, it is a definite quadratic form or an anisotropic quadratic form.
There is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings.
== Examples ==
The study of real, quadratic algebras shows the distinction between types of quadratic forms. The product zz* is a quadratic form for each of the complex numbers, split-complex numbers, and dual numbers. For z = x + ε y, the dual number form is x2 which is a degenerate quadratic form. The split-complex case is an isotropic form, and the complex case is a definite form.
The most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map V → V∗ be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold.
== Infinite dimensions ==
Note that in an infinite-dimensional space, we can have a bilinear form ƒ for which v ↦ (x ↦ f(x, v)) is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, the form
{\displaystyle f(\phi ,\psi )=\int \psi (x)\phi (x)\,dx}
is not surjective: for instance, the Dirac delta functional is in the dual space but not of the required form. On the other hand, this bilinear form satisfies the property that f(φ, ψ) = 0 for all φ implies ψ = 0.
In such a case where ƒ satisfies injectivity (but not necessarily surjectivity), ƒ is said to be weakly nondegenerate.
== Terminology ==
If f vanishes identically on all vectors it is said to be totally degenerate. Given any bilinear form f on V the set of vectors
{\displaystyle \{x\in V\mid f(x,y)=0{\mbox{ for all }}y\in V\}}
forms a totally degenerate subspace of V. The map f is nondegenerate if and only if this subspace is trivial.
Geometrically, an isotropic line of the quadratic form corresponds to a point of the associated quadric hypersurface in projective space. Such a line is additionally isotropic for the bilinear form if and only if the corresponding point is a singularity. Hence, over an algebraically closed field, Hilbert's Nullstellensatz guarantees that the quadratic form always has isotropic lines, while the bilinear form has them if and only if the surface is singular.
== See also ==
Indefinite inner product space – generalization of Hilbert space with indefinite signature
Dual system
Linear form – Linear map from a vector space to its field of scalars
== References ==
In the mathematical field of differential geometry, the Riemann curvature tensor or Riemann–Christoffel tensor (after Bernhard Riemann and Elwin Bruno Christoffel) is the most common way used to express the curvature of Riemannian manifolds. It assigns a tensor to each point of a Riemannian manifold (i.e., it is a tensor field). It is a local invariant of Riemannian metrics that measures the failure of the second covariant derivatives to commute. A Riemannian manifold has zero curvature if and only if it is flat, i.e. locally isometric to the Euclidean space. The curvature tensor can also be defined for any pseudo-Riemannian manifold, or indeed any manifold equipped with an affine connection.
It is a central mathematical tool in the theory of general relativity, the modern theory of gravity. The curvature of spacetime is in principle observable via the geodesic deviation equation. The curvature tensor represents the tidal force experienced by a rigid body moving along a geodesic in a sense made precise by the Jacobi equation.
== Definition ==
Let (M, g) be a Riemannian or pseudo-Riemannian manifold, and 𝔛(M) be the space of all vector fields on M. We define the Riemann curvature tensor as a map 𝔛(M) × 𝔛(M) × 𝔛(M) → 𝔛(M) by the following formula, where ∇ is the Levi-Civita connection:
{\displaystyle R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z}
or equivalently
R
(
X
,
Y
)
=
[
∇
X
,
∇
Y
]
−
∇
[
X
,
Y
]
{\displaystyle R(X,Y)=[\nabla _{X},\nabla _{Y}]-\nabla _{[X,Y]}}
where {\displaystyle [X,Y]} is the Lie bracket of vector fields and {\displaystyle [\nabla _{X},\nabla _{Y}]} is a commutator of differential operators. It turns out that the right-hand side actually only depends on the value of the vector fields {\displaystyle X,Y,Z} at a given point, which is notable since the covariant derivative of a vector field also depends on the field values in a neighborhood of the point. Hence, {\displaystyle R} is a {\displaystyle (1,3)}-tensor field. For fixed {\displaystyle X,Y}, the linear transformation {\displaystyle Z\mapsto R(X,Y)Z} is also called the curvature transformation or endomorphism. Occasionally, the curvature tensor is defined with the opposite sign.
The curvature tensor measures noncommutativity of the covariant derivative, and as such is the integrability obstruction for the existence of an isometry with Euclidean space (called, in this context, flat space).
Since the Levi-Civita connection is torsion-free, its curvature can also be expressed in terms of the second covariant derivative
{\textstyle \nabla _{X,Y}^{2}Z=\nabla _{X}\nabla _{Y}Z-\nabla _{\nabla _{X}Y}Z}
which depends only on the values of {\displaystyle X,Y} at a point.
The curvature can then be written as
{\displaystyle R(X,Y)=\nabla _{X,Y}^{2}-\nabla _{Y,X}^{2}}
Thus, the curvature tensor measures the noncommutativity of the second covariant derivative. In abstract index notation,
{\displaystyle R^{d}{}_{cab}Z^{c}=\nabla _{a}\nabla _{b}Z^{d}-\nabla _{b}\nabla _{a}Z^{d}.}
The Riemann curvature tensor is also the commutator of the covariant derivative of an arbitrary covector {\displaystyle A_{\nu }} with itself:
{\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }.}
This formula is often called the Ricci identity. This is the classical method used by Ricci and Levi-Civita to obtain an expression for the Riemann curvature tensor. This identity can be generalized to get the commutators for two covariant derivatives of arbitrary tensors as follows
{\displaystyle {\begin{aligned}&\nabla _{\delta }\nabla _{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\nabla _{\gamma }\nabla _{\delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}\\[3pt]={}&R^{\alpha _{1}}{}_{\rho \delta \gamma }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\ldots +R^{\alpha _{r}}{}_{\rho \delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}-R^{\sigma }{}_{\beta _{1}\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}-\ldots -R^{\sigma }{}_{\beta _{s}\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\end{aligned}}}
This formula also applies to tensor densities without alteration, because for the Levi-Civita (not generic) connection one gets:
{\displaystyle \nabla _{\mu }\left({\sqrt {g}}\right)\equiv \left({\sqrt {g}}\right)_{;\mu }=0,}
where
{\displaystyle g=\left|\det \left(g_{\mu \nu }\right)\right|.}
It is sometimes convenient to also define the purely covariant version of the curvature tensor by
{\displaystyle R_{\sigma \mu \nu \rho }=g_{\rho \zeta }R^{\zeta }{}_{\sigma \mu \nu }.}
== Geometric meaning ==
=== Informally ===
One can see the effects of curved space by comparing a tennis court and the Earth. Start at the lower right corner of the tennis court, with a racket held out towards north. Then while walking around the outline of the court, at each step make sure the tennis racket is maintained in the same orientation, parallel to its previous positions. Once the loop is complete the tennis racket will be parallel to its initial starting position. This is because tennis courts are built so the surface is flat. On the other hand, the surface of the Earth is curved: we can complete a loop on the surface of the Earth. Starting at the equator, point a tennis racket north along the surface of the Earth. Once again the tennis racket should always remain parallel to its previous position, using the local plane of the horizon as a reference. For this path, first walk to the north pole, then walk sideways (i.e. without turning), then down to the equator, and finally walk backwards to your starting position. Now the tennis racket will be pointing towards the west, even though when you began your journey it pointed north and you never turned your body. This process is akin to parallel transporting a vector along the path and the difference identifies how lines which appear "straight" are only "straight" locally. Each time a loop is completed the tennis racket will be deflected further from its initial position by an amount depending on the distance and the curvature of the surface. It is possible to identify paths along a curved surface where parallel transport works as it does on flat space. These are the geodesics of the space, for example any segment of a great circle of a sphere.
The concept of a curved space in mathematics differs from conversational usage. For example, if the above process was completed on a cylinder one would find that it is not curved overall as the curvature around the cylinder cancels with the flatness along the cylinder, which is a consequence of Gaussian curvature and Gauss's Theorema Egregium. A familiar example of this is a floppy pizza slice, which will remain rigid along its length if it is curved along its width.
The Riemann curvature tensor is a way to capture a measure of the intrinsic curvature. When you write it down in terms of its components (like writing down the components of a vector), it consists of a multi-dimensional array of sums and products of partial derivatives (some of those partial derivatives can be thought of as akin to capturing the curvature imposed upon someone walking in straight lines on a curved surface).
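The size of the deflection in the Earth example can be made quantitative: by the Gauss–Bonnet theorem, the angle by which a vector rotates under parallel transport around a simple loop equals the integral of the Gaussian curvature over the enclosed region. A minimal numerical sketch (the unit-sphere octant path is an illustrative choice, not part of the article):

```python
import math

# Unit sphere: Gaussian curvature K = 1 everywhere.
K = 1.0

# The loop described above (equator -> north pole -> back down to the
# equator -> along the equator to the start) bounds one octant of the
# sphere: a triangle with three right angles.
octant_area = 4.0 * math.pi / 8.0  # one eighth of the total area 4*pi

# Gauss-Bonnet: holonomy angle = integral of K over the enclosed region.
holonomy = K * octant_area

# The transported racket returns rotated by 90 degrees (north -> west).
assert math.isclose(holonomy, math.pi / 2.0)
```

On a flat surface the enclosed curvature integral vanishes, which is why the tennis-court loop produces no deflection at all.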
=== Formally ===
When a vector in a Euclidean space is parallel transported around a loop, it will again point in the initial direction after returning to its original position. However, this property does not hold in the general case. The Riemann curvature tensor directly measures the failure of this in a general Riemannian manifold. This failure is known as the non-holonomy of the manifold.
Let {\displaystyle x_{t}} be a curve in a Riemannian manifold {\displaystyle M}. Denote by {\displaystyle \tau _{x_{t}}:T_{x_{0}}M\to T_{x_{t}}M} the parallel transport map along {\displaystyle x_{t}}. The parallel transport maps are related to the covariant derivative by
{\displaystyle \nabla _{{\dot {x}}_{0}}Y=\lim _{h\to 0}{\frac {1}{h}}\left(\tau _{x_{h}}^{-1}\left(Y_{x_{h}}\right)-Y_{x_{0}}\right)=\left.{\frac {d}{dt}}\left(\tau _{x_{t}}^{-1}(Y_{x_{t}})\right)\right|_{t=0}}
for each vector field {\displaystyle Y} defined along the curve.
Suppose that {\displaystyle X} and {\displaystyle Y} are a pair of commuting vector fields. Each of these fields generates a one-parameter group of diffeomorphisms in a neighborhood of {\displaystyle x_{0}}. Denote by {\displaystyle \tau _{tX}} and {\displaystyle \tau _{tY}}, respectively, the parallel transports along the flows of {\displaystyle X} and {\displaystyle Y} for time {\displaystyle t}. Parallel transport of a vector {\displaystyle Z\in T_{x_{0}}M} around the quadrilateral with sides {\displaystyle tY}, {\displaystyle sX}, {\displaystyle -tY}, {\displaystyle -sX} is given by
{\displaystyle \tau _{sX}^{-1}\tau _{tY}^{-1}\tau _{sX}\tau _{tY}Z.}
The difference between this and {\displaystyle Z} measures the failure of parallel transport to return {\displaystyle Z} to its original position in the tangent space {\displaystyle T_{x_{0}}M}. Shrinking the loop by sending {\displaystyle s,t\to 0} gives the infinitesimal description of this deviation:
{\displaystyle \left.{\frac {d}{ds}}{\frac {d}{dt}}\tau _{sX}^{-1}\tau _{tY}^{-1}\tau _{sX}\tau _{tY}Z\right|_{s=t=0}=\left(\nabla _{X}\nabla _{Y}-\nabla _{Y}\nabla _{X}-\nabla _{[X,Y]}\right)Z=R(X,Y)Z}
where {\displaystyle R} is the Riemann curvature tensor.
== Coordinate expression ==
Converting to the tensor index notation, the Riemann curvature tensor is given by
{\displaystyle R^{\rho }{}_{\sigma \mu \nu }=dx^{\rho }\left(R\left(\partial _{\mu },\partial _{\nu }\right)\partial _{\sigma }\right)}
where {\displaystyle \partial _{\mu }=\partial /\partial x^{\mu }} are the coordinate vector fields. The above expression can be written using Christoffel symbols:
{\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\partial _{\mu }\Gamma ^{\rho }{}_{\nu \sigma }-\partial _{\nu }\Gamma ^{\rho }{}_{\mu \sigma }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }}
(See also List of formulas in Riemannian geometry).
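The coordinate formula can be checked on a concrete example by computing the Christoffel symbols of a metric and then the curvature components. A hedged sketch in Python with SymPy for the round unit 2-sphere (the coordinate names and helper function are illustrative, not part of the article):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # unit 2-sphere metric
ginv = g.inv()
n = 2

# Christoffel symbols of the Levi-Civita connection:
# Gamma^r_{ms} = (1/2) g^{rl} (d_m g_{ls} + d_s g_{lm} - d_l g_{ms})
Gamma = [[[sp.simplify(sum(
    ginv[r, l] * (sp.diff(g[l, s], x[m]) + sp.diff(g[l, m], x[s])
                  - sp.diff(g[m, s], x[l]))
    for l in range(n)) / 2)
    for s in range(n)] for m in range(n)] for r in range(n)]

# Curvature from the coordinate formula above:
# R^r_{s m n} = d_m Gamma^r_{n s} - d_n Gamma^r_{m s}
#             + Gamma^r_{m l} Gamma^l_{n s} - Gamma^r_{n l} Gamma^l_{m s}
def riemann(r, s, m, nu):
    val = sp.diff(Gamma[r][nu][s], x[m]) - sp.diff(Gamma[r][m][s], x[nu])
    val += sum(Gamma[r][m][l] * Gamma[l][nu][s]
               - Gamma[r][nu][l] * Gamma[l][m][s] for l in range(n))
    return sp.simplify(val)

# For the unit sphere, R^theta_{phi theta phi} = sin^2(theta),
# i.e. constant Gaussian curvature K = 1.
assert sp.simplify(riemann(0, 1, 0, 1) - sp.sin(theta)**2) == 0
```

Lowering the first index with g then gives R_{θφθφ} = sin²θ, the single independent component expected in two dimensions.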
== Symmetries and identities ==
The Riemann curvature tensor has the following symmetries and identities:
where the bracket {\displaystyle \langle ,\rangle } refers to the inner product on the tangent space induced by the metric tensor and
the brackets and parentheses on the indices denote the antisymmetrization and symmetrization operators, respectively. If there is nonzero torsion, the Bianchi identities involve the torsion tensor.
The first (algebraic) Bianchi identity was discovered by Ricci, but is often called the first Bianchi identity or algebraic Bianchi identity, because it looks similar to the differential Bianchi identity.
The first three identities form a complete list of symmetries of the curvature tensor, i.e. given any tensor which satisfies the identities above, one can find a Riemannian manifold with such a curvature tensor at some point. Simple calculations show that such a tensor has {\displaystyle n^{2}\left(n^{2}-1\right)/12} independent components. Interchange symmetry follows from these. The algebraic symmetries are also equivalent to saying that R belongs to the image of the Young symmetrizer corresponding to the partition 2+2.
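The component count can be verified by brute force: impose the algebraic symmetries as linear constraints on a generic rank-4 array and compute the dimension of the solution space. A small NumPy sketch (an illustrative check, not an efficient method):

```python
import itertools
import numpy as np

def independent_components(n):
    """Dimension of the space of rank-4 arrays satisfying the
    algebraic Riemann symmetries in n dimensions."""
    idx = list(itertools.product(range(n), repeat=4))
    pos = {t: k for k, t in enumerate(idx)}
    rows = []
    for (a, b, c, d) in idx:
        for terms in (
            [((a, b, c, d), 1), ((b, a, c, d), 1)],   # antisymmetry, first pair
            [((a, b, c, d), 1), ((a, b, d, c), 1)],   # antisymmetry, second pair
            [((a, b, c, d), 1), ((c, d, a, b), -1)],  # pair interchange
            # first (algebraic) Bianchi identity
            [((a, b, c, d), 1), ((a, c, d, b), 1), ((a, d, b, c), 1)],
        ):
            row = np.zeros(len(idx))
            for t, coeff in terms:
                row[pos[t]] += coeff
            rows.append(row)
    A = np.array(rows)
    return len(idx) - np.linalg.matrix_rank(A)

# Matches n^2 (n^2 - 1) / 12: 1 component for n=2, 6 for n=3, 20 for n=4.
for n in (2, 3, 4):
    assert independent_components(n) == n**2 * (n**2 - 1) // 12
```

The interchange constraint is listed for clarity even though, as noted above, it already follows from the other three.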
On a Riemannian manifold one has the covariant derivative {\displaystyle \nabla _{u}R} and the Bianchi identity (often called the second Bianchi identity or differential Bianchi identity) takes the form of the last identity in the table.
== Ricci curvature ==
The Ricci curvature tensor is the contraction of the first and third indices of the Riemann tensor.
{\displaystyle \underbrace {R_{ab}} _{\text{Ricci}}\equiv R^{c}{}_{acb}=g^{cd}\underbrace {R_{cadb}} _{\text{Riemann}}}
== Special cases ==
=== Surfaces ===
For a two-dimensional surface, the Bianchi identities imply that the Riemann tensor has only one independent component, which means that the Ricci scalar completely determines the Riemann tensor. There is only one valid expression for the Riemann tensor which fits the required symmetries:
{\displaystyle R_{abcd}=f(R)\left(g_{ac}g_{db}-g_{ad}g_{cb}\right)}
and by contracting with the metric twice we find the explicit form:
{\displaystyle R_{abcd}=K\left(g_{ac}g_{db}-g_{ad}g_{cb}\right),}
where {\displaystyle g_{ab}} is the metric tensor and {\displaystyle K=R/2} is a function called the Gaussian curvature and {\displaystyle a}, {\displaystyle b}, {\displaystyle c} and {\displaystyle d} take values either 1 or 2. The Riemann tensor has only one functionally independent component. The Gaussian curvature coincides with the sectional curvature of the surface. It is also exactly half the scalar curvature of the 2-manifold, while the Ricci curvature tensor of the surface is simply given by
{\displaystyle R_{ab}=Kg_{ab}.}
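These contractions can be checked symbolically: starting from R_{abcd} = K(g_{ac}g_{db} − g_{ad}g_{cb}) for an arbitrary 2×2 metric, the Ricci tensor comes out as K g_{ab} and the scalar curvature as 2K. A hedged SymPy sketch (the symbol names are illustrative):

```python
import sympy as sp

K = sp.symbols('K')
# An arbitrary symmetric, invertible 2x2 metric
g11, g12, g22 = sp.symbols('g11 g12 g22')
g = sp.Matrix([[g11, g12], [g12, g22]])
ginv = g.inv()

# The 2d Riemann tensor determined by the Gaussian curvature K
def R(a, b, c, d):
    return K * (g[a, c] * g[d, b] - g[a, d] * g[c, b])

# Ricci tensor: R_ab = g^{cd} R_{cadb}
ricci = sp.Matrix(2, 2, lambda a, b: sum(
    ginv[c, d] * R(c, a, d, b) for c in range(2) for d in range(2)))

# R_ab = K g_ab
assert all(sp.simplify(ricci[i, j] - K * g[i, j]) == 0
           for i in range(2) for j in range(2))

# Scalar curvature: R = g^{ab} R_ab = 2K
scalar = sp.simplify(sum(ginv[a, b] * ricci[a, b]
                         for a in range(2) for b in range(2)))
assert sp.simplify(scalar - 2 * K) == 0
```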
=== Space forms ===
A Riemannian manifold is a space form if its sectional curvature is equal to a constant {\displaystyle K}. The Riemann tensor of a space form is given by
{\displaystyle R_{abcd}=K\left(g_{ac}g_{db}-g_{ad}g_{cb}\right).}
Conversely, except in dimension 2, if the curvature of a Riemannian manifold has this form for some function {\displaystyle K}, then the Bianchi identities imply that {\displaystyle K} is constant and thus that the manifold is (locally) a space form.
== See also ==
Introduction to the mathematics of general relativity
Decomposition of the Riemann curvature tensor
Curvature of Riemannian manifolds
Ricci curvature tensor
== Citations ==
== References ==
Yang–Mills theory is a quantum field theory for nuclear binding devised by Chen Ning Yang and Robert Mills in 1953, as well as a generic term for the class of similar theories. The Yang–Mills theory is a gauge theory based on a special unitary group SU(n), or more generally any compact Lie group. A Yang–Mills theory seeks to describe the behavior of elementary particles using these non-abelian Lie groups and is at the core of the unification of the electromagnetic force and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of the understanding of the Standard Model of particle physics.
== History and qualitative description ==
=== Gauge theory in electrodynamics ===
All known fundamental interactions can be described in terms of gauge theories, but working this out took decades. Hermann Weyl's pioneering work on this project started in 1915 when his colleague Emmy Noether proved that every conserved physical quantity has a matching symmetry, and culminated in 1928 when he published his book applying the geometrical theory of symmetry (group theory) to quantum mechanics.: 194 Weyl named the relevant symmetry in Noether's theorem the "gauge symmetry", by analogy to distance standardization in railroad gauges.
Erwin Schrödinger in 1922, three years before working on his equation, connected Weyl's group concept to electron charge. Schrödinger showed that the group {\displaystyle U(1)} produced a phase shift {\displaystyle e^{i\theta }} in electromagnetic fields that matched the conservation of electric charge.: 198  As the theory of quantum electrodynamics developed in the 1930s and 1940s, the {\displaystyle U(1)} group transformations played a central role. Many physicists thought there must be an analog for the dynamics of nucleons. Chen Ning Yang in particular was obsessed with this possibility.
=== Yang and Mills find the nuclear force gauge theory ===
Yang's core idea was to look for a conserved quantity in nuclear physics comparable to electric charge and use it to develop a corresponding gauge theory comparable to electrodynamics. He settled on conservation of isospin, a quantum number that distinguishes a neutron from a proton, but he made no progress on a theory.: 200 Taking a break from Princeton in the summer of 1953, Yang met a collaborator who could help: Robert Mills. As Mills himself describes:"During the academic year 1953–1954, Yang was a visitor to Brookhaven National Laboratory ... I was at Brookhaven also ... and was assigned to the same office as Yang. Yang, who has demonstrated on a number of occasions his generosity to physicists beginning their careers, told me about his idea of generalizing gauge invariance and we discussed it at some length ... I was able to contribute something to the discussions, especially with regard to the quantization procedures, and to a small degree in working out the formalism; however, the key ideas were Yang's."
In the summer of 1953, Yang and Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to non-abelian groups, selecting the group SU(2) to provide an explanation for isospin conservation in collisions involving the strong interactions. Yang's presentation of the work at Princeton in February 1954 was challenged by Pauli, asking about the mass in the field developed with the gauge invariance idea.: 202  Pauli knew that this might be an issue as he had worked on applying gauge invariance but chose not to publish it, viewing the massless excitations of the theory to be "unphysical 'shadow particles'".: 13  Yang and Mills published in October 1954; near the end of the paper, they admit:
We next come to the question of the mass of the {\displaystyle b} quantum, to which we do not have a satisfactory answer.
This problem of unphysical massless excitation blocked further progress.
The idea was set aside until 1960, when the concept of particles acquiring mass through symmetry breaking in massless theories was put forward, initially by Jeffrey Goldstone, Yoichiro Nambu, and Giovanni Jona-Lasinio. This prompted a significant restart of Yang–Mills theory studies that proved successful in the formulation of both electroweak unification and quantum chromodynamics (QCD). The electroweak interaction is described by the gauge group SU(2) × U(1), while QCD is an SU(3) Yang–Mills theory. The massless gauge bosons of the electroweak SU(2) × U(1) mix after spontaneous symmetry breaking to produce the three massive bosons of the weak interaction (W+, W−, and Z0) as well as the still-massless photon field. The dynamics of the photon field and its interactions with matter are, in turn, governed by the U(1) gauge theory of quantum electrodynamics. The Standard Model combines the strong interaction with the unified electroweak interaction (unifying the weak and electromagnetic interaction) through the symmetry group SU(3) × SU(2) × U(1). In the current epoch the strong interaction is not unified with the electroweak interaction, but from the observed running of the coupling constants it is believed they all converge to a single value at very high energies.
Phenomenology at lower energies in quantum chromodynamics is not completely understood due to the difficulties of managing such a theory with a strong coupling. This may be the reason why confinement has not been theoretically proven, though it is a consistent experimental observation. This shows why QCD confinement at low energy is a mathematical problem of great relevance, and why the Yang–Mills existence and mass gap problem is a Millennium Prize Problem.
=== Parallel work on non-Abelian gauge theories ===
In 1953, in a private correspondence, Wolfgang Pauli formulated a six-dimensional theory of Einstein's field equations of general relativity, extending the five-dimensional theory of Kaluza, Klein, Fock, and others to a higher-dimensional internal space. However, there is no evidence that Pauli developed the Lagrangian of a gauge field or the quantization of it. Because Pauli found that his theory "leads to some rather unphysical shadow particles", he refrained from publishing his results formally. Although Pauli did not publish his six-dimensional theory, he gave two seminar lectures about it in Zürich in November 1953.
In January 1954 Ronald Shaw, a graduate student at the University of Cambridge also developed a non-Abelian gauge theory for nuclear forces.
However, the theory needed massless particles in order to maintain gauge invariance. Since no such massless particles were known at the time, Shaw and his supervisor Abdus Salam chose not to publish their work.
Shortly after Yang and Mills published their paper in October 1954, Salam encouraged Shaw to publish his work to mark his contribution. Shaw declined, and instead it only forms a chapter of his PhD thesis published in 1956.
== Mathematical overview ==
Yang–Mills theories are special examples of gauge theories with a non-abelian symmetry group given by the Lagrangian
{\displaystyle \ {\mathcal {L}}_{\mathrm {gf} }=-{\tfrac {1}{2}}\operatorname {tr} (F^{2})=-{\tfrac {1}{4}}F^{a\mu \nu }F_{\mu \nu }^{a}\ }
with the generators {\displaystyle \ T^{a}\ } of the Lie algebra, indexed by a, corresponding to the F-quantities (the curvature or field-strength form) satisfying
{\displaystyle \ \operatorname {tr} \left(T^{a}\ T^{b}\right)={\tfrac {1}{2}}\delta ^{ab}\ ,\qquad \left[T^{a},\ T^{b}\right]=i\ f^{abc}\ T^{c}~.}
Here, the f abc are structure constants of the Lie algebra (totally antisymmetric if the generators of the Lie algebra are normalised such that {\displaystyle \ \operatorname {tr} (T^{a}\ T^{b})\ } is proportional to {\displaystyle \ \delta ^{ab}\ }), the covariant derivative is defined as
{\displaystyle \ D_{\mu }=I\ \partial _{\mu }-i\ g\ T^{a}\ A_{\mu }^{a}\ ,}
I is the identity matrix (matching the size of the generators), {\displaystyle \ A_{\mu }^{a}\ } is the vector potential, and g is the coupling constant. In four dimensions, the coupling constant g is a pure number and for a SU(n) group one has {\displaystyle \ a,b,c=1\ldots n^{2}-1~.}
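For SU(2) these algebraic relations can be checked concretely: taking the generators T^a = σ^a/2 built from the Pauli matrices, the structure constants are f^{abc} = ε^{abc}. A small numerical sketch (an illustrative check, not part of the article):

```python
import itertools
import numpy as np

# Pauli matrices; su(2) generators T^a = sigma^a / 2
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

def eps(a, b, c):
    # Levi-Civita symbol on indices 0,1,2: the su(2) structure constants
    return (a - b) * (b - c) * (c - a) / 2

for a, b in itertools.product(range(3), repeat=2):
    # Normalisation: tr(T^a T^b) = delta^{ab} / 2
    assert np.isclose(np.trace(T[a] @ T[b]), 0.5 if a == b else 0.0)
    # Commutation: [T^a, T^b] = i f^{abc} T^c
    comm = T[a] @ T[b] - T[b] @ T[a]
    rhs = sum(1j * eps(a, b, c) * T[c] for c in range(3))
    assert np.allclose(comm, rhs)
```

Here n = 2, so the indices a, b, c run over n² − 1 = 3 values, as stated above.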
The relation
{\displaystyle \ F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+g\ f^{abc}\ A_{\mu }^{b}\ A_{\nu }^{c}\ }
can be derived from the commutator
{\displaystyle \ \left[D_{\mu },D_{\nu }\right]=-i\ g\ T^{a}\ F_{\mu \nu }^{a}~.}
The field has the property of being self-interacting and the equations of motion that one obtains are said to be semilinear, as nonlinearities occur both with and without derivatives. This means that one can generally manage this theory only through perturbation theory with small nonlinearities.
Note that the transition between "upper" ("contravariant") and "lower" ("covariant") vector or tensor components is trivial for a indices (e.g. {\displaystyle \ f^{abc}=f_{abc}\ }), whereas for μ and ν it is nontrivial, corresponding e.g. to the usual Lorentz signature, {\displaystyle \ \eta _{\mu \nu }={\rm {diag}}(+---)~.}
From the given Lagrangian one can derive the equations of motion given by
{\displaystyle \ \partial ^{\mu }F_{\mu \nu }^{a}+g\ f^{abc}\ A^{\mu b}\ F_{\mu \nu }^{c}=0~.}
Putting
{\displaystyle \ F_{\mu \nu }=T^{a}F_{\mu \nu }^{a}\ ,}
these can be rewritten as
{\displaystyle \ \left(D^{\mu }F_{\mu \nu }\right)^{a}=0~.}
A Bianchi identity holds
{\displaystyle \ \left(D_{\mu }\ F_{\nu \kappa }\right)^{a}+\left(D_{\kappa }\ F_{\mu \nu }\right)^{a}+\left(D_{\nu }\ F_{\kappa \mu }\right)^{a}=0\ }
which is equivalent to the Jacobi identity
{\displaystyle \ \left[D_{\mu },\left[D_{\nu },D_{\kappa }\right]\right]+\left[D_{\kappa },\left[D_{\mu },D_{\nu }\right]\right]+\left[D_{\nu },\left[D_{\kappa },D_{\mu }\right]\right]=0\ }
since
{\displaystyle \ \left[D_{\mu },F_{\nu \kappa }^{a}\right]=D_{\mu }\ F_{\nu \kappa }^{a}~.}
Define the dual strength tensor
{\displaystyle \ {\tilde {F}}^{\mu \nu }={\tfrac {1}{2}}\varepsilon ^{\mu \nu \rho \sigma }F_{\rho \sigma }\ ,}
then the Bianchi identity can be rewritten as
{\displaystyle \ D_{\mu }{\tilde {F}}^{\mu \nu }=0~.}
A source {\displaystyle \ J_{\mu }^{a}\ } enters into the equations of motion as
{\displaystyle \ \partial ^{\mu }F_{\mu \nu }^{a}+g\ f^{abc}\ A^{b\mu }\ F_{\mu \nu }^{c}=-J_{\nu }^{a}~.}
Note that the currents must properly change under gauge group transformations.
We give here some comments about the physical dimensions of the coupling. In D dimensions, the field scales as
{\displaystyle \ \left[A\right]=\left[L^{\left({\tfrac {2-D}{2}}\right)}\right]\ }
and so the coupling must scale as
{\displaystyle \ \left[g^{2}\right]=\left[L^{\left(D-4\right)}\right]~.}
This implies that Yang–Mills theory is not renormalizable for dimensions greater than four. Furthermore, for D = 4, the coupling is dimensionless, and both the field and the square of the coupling have the same dimensions as the field and the coupling of a massless quartic scalar field theory. So, these theories share the scale invariance at the classical level.
== Quantization ==
A method of quantizing the Yang–Mills theory is by functional methods, i.e. path integrals. One introduces a generating functional for n-point functions as
{\displaystyle \ Z[j]=\int [\mathrm {d} A]\ \exp \left[-{\tfrac {i}{2}}\int \mathrm {d} ^{4}x\ \operatorname {tr} \left(F^{\mu \nu }\ F_{\mu \nu }\right)+i\ \int \mathrm {d} ^{4}x\ j_{\mu }^{a}(x)\ A^{a\mu }(x)\right]\ ,}
but this integral has no meaning as it is because the potential vector can be arbitrarily chosen due to the gauge freedom. This problem was already known for quantum electrodynamics but here becomes more severe due to non-abelian properties of the gauge group. A way out has been given by Ludvig Faddeev and Victor Popov with the introduction of a ghost field (see Faddeev–Popov ghost) that has the property of being unphysical since, although it agrees with Fermi–Dirac statistics, it is a complex scalar field, which violates the spin–statistics theorem. So, we can write the generating functional as
{\displaystyle {\begin{aligned}Z[j,{\bar {\varepsilon }},\varepsilon ]&=\int [\mathrm {d} \ A][\mathrm {d} \ {\bar {c}}][\mathrm {d} \ c]\ \exp {\Bigl \{}i\ S_{F}\ \left[\partial A,A\right]+i\ S_{gf}\left[\partial A\right]+i\ S_{g}\left[\partial c,\partial {\bar {c}},c,{\bar {c}},A\right]{\Bigr \}}\\&\exp \left\{i\int \mathrm {d} ^{4}x\ j_{\mu }^{a}(x)A^{a\mu }(x)+i\int \mathrm {d} ^{4}x\ \left[{\bar {c}}^{a}(x)\ \varepsilon ^{a}(x)+{\bar {\varepsilon }}^{a}(x)\ c^{a}(x)\right]\right\}\end{aligned}}}
being
{\displaystyle S_{F}=-{\tfrac {1}{2}}\int \mathrm {d} ^{4}x\ \operatorname {tr} \left(F^{\mu \nu }\ F_{\mu \nu }\right)\ }
for the field,
{\displaystyle S_{gf}=-{\frac {1}{2\xi }}\int \mathrm {d} ^{4}x\ (\partial \cdot A)^{2}\ }
for the gauge fixing and
{\displaystyle \ S_{g}=-\int \mathrm {d} ^{4}x\ \left({\bar {c}}^{a}\ \partial _{\mu }\partial ^{\mu }c^{a}+g\ {\bar {c}}^{a}\ f^{abc}\ \partial _{\mu }\ A^{b\mu }\ c^{c}\right)\ }
for the ghost. This is the expression commonly used to derive Feynman's rules (see Feynman diagram). Here we have ca for the ghost field while ξ fixes the gauge's choice for the quantization. Feynman's rules obtained from this functional are the following
These rules for Feynman's diagrams can be obtained when the generating functional given above is rewritten as
{\displaystyle {\begin{aligned}Z[j,{\bar {\varepsilon }},\varepsilon ]&=\exp \left(-i\ g\int \mathrm {d} ^{4}x\ {\frac {\delta }{i\ \delta \ {\bar {\varepsilon }}^{a}(x)}}\ f^{abc}\partial _{\mu }\ {\frac {i\ \delta }{\delta \ j_{\mu }^{b}(x)}}\ {\frac {i\ \delta }{\delta \ \varepsilon ^{c}(x)}}\right)\\&\qquad \times \exp \left(-i\ g\int \mathrm {d} ^{4}x\ f^{abc}\partial _{\mu }{\frac {i\ \delta }{\delta \ j_{\nu }^{a}(x)}}{\frac {i\ \delta }{\delta \ j_{\mu }^{b}(x)}}\ {\frac {i\ \delta }{\delta \ j^{c\nu }(x)}}\right)\\&\qquad \qquad \times \exp \left(-i\ {\frac {g^{2}}{4}}\int \mathrm {d} ^{4}x\ f^{abc}\ f^{ars}{\frac {i\ \delta }{\delta \ j_{\mu }^{b}(x)}}\ {\frac {i\ \delta }{\delta \ j_{\nu }^{c}(x)}}\ {\frac {\ i\delta }{\delta \ j^{r\mu }(x)}}{\frac {i\ \delta }{\delta \ j^{s\nu }(x)}}\right)\\&\qquad \qquad \qquad \times Z_{0}[j,{\bar {\varepsilon }},\varepsilon ]\end{aligned}}}
with
{\displaystyle Z_{0}[j,{\bar {\varepsilon }},\varepsilon ]=\exp \left(-\int \mathrm {d} ^{4}x\ \mathrm {d} ^{4}y\ {\bar {\varepsilon }}^{a}(x)\ C^{ab}(x-y)\ \varepsilon ^{b}(y)\right)\exp \left({\tfrac {1}{2}}\int \mathrm {d} ^{4}x\ \mathrm {d} ^{4}y\ j_{\mu }^{a}(x)\ D^{ab\mu \nu }(x-y)\ j_{\nu }^{b}(y)\right)\ }
being the generating functional of the free theory. Expanding in g and computing the functional derivatives, we are able to obtain all the n-point functions with perturbation theory. Using LSZ reduction formula we get from the n-point functions the corresponding process amplitudes, cross sections and decay rates. The theory is renormalizable and corrections are finite at any order of perturbation theory.
For quantum electrodynamics the ghost field decouples because the gauge group is abelian. This can be seen from the coupling between the gauge field and the ghost field that is
{\displaystyle \ {\bar {c}}^{a}\ f^{abc}\ \partial _{\mu }A^{b\mu }\ c^{c}~.}
For the abelian case, all the structure constants {\displaystyle \ f^{abc}\ } are zero and so there is no coupling. In the non-abelian case, the ghost field appears as a useful way to rewrite the quantum field theory without physical consequences on the observables of the theory such as cross sections or decay rates.
One of the most important results obtained for Yang–Mills theory is asymptotic freedom. This result can be obtained by assuming that the coupling constant g is small (so small nonlinearities), as for high energies, and applying perturbation theory. The relevance of this result is due to the fact that a Yang–Mills theory that describes strong interaction and asymptotic freedom permits proper treatment of experimental results coming from deep inelastic scattering.
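Asymptotic freedom shows up at one loop in the sign of the beta function. Quoting the standard one-loop result for SU(N) gauge theory with n_f quark flavours (not derived in this article), β(g) = −g³/(16π²)(11N/3 − 2n_f/3), the coupling decreases at high energies whenever the bracket is positive. A small illustrative check:

```python
# One-loop beta-function coefficient for SU(N) Yang-Mills with n_f
# fundamental fermion flavours (standard result, quoted for illustration):
#   beta(g) = -(g^3 / (16 pi^2)) * b0,   b0 = 11 N / 3 - 2 n_f / 3
def b0(N, n_f):
    return 11 * N / 3 - 2 * n_f / 3

# QCD (N = 3) with the six known quark flavours is asymptotically free,
assert b0(3, 6) > 0
# and would remain so for up to 16 flavours, but not 17.
assert b0(3, 16) > 0 and b0(3, 17) < 0
```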
To obtain the behavior of the Yang–Mills theory at high energies, and so to prove asymptotic freedom, one applies perturbation theory assuming a small coupling. This is verified a posteriori in the ultraviolet limit. In the opposite limit, the infrared limit, the situation is the opposite, as the coupling is too large for perturbation theory to be reliable. Most of the difficulty research meets lies in managing the theory at low energies. That is the interesting case, being inherent to the description of hadronic matter and, more generally, to all the observed bound states of gluons and quarks and their confinement (see hadrons). The most used method to study the theory in this limit is to try to solve it on computers (see lattice gauge theory). In this case, large computational resources are needed to be sure the correct limit of infinite volume (smaller lattice spacing) is obtained. This is the limit the results must be compared with. Smaller spacing and larger coupling are not independent of each other, and larger computational resources are needed for each. As of today, the situation appears somewhat satisfactory for the hadronic spectrum and the computation of the gluon and ghost propagators, but the glueball and hybrid spectra are still a questioned matter in view of the experimental observation of such exotic states. Indeed, the σ resonance is not seen in any such lattice computations, and contrasting interpretations have been put forward. This is a hotly debated issue.
== Open problems ==
Yang–Mills theories met with general acceptance in the physics community after Gerard 't Hooft, in 1972, worked out their renormalization, relying on a formulation of the problem worked out by his advisor Martinus Veltman.
Renormalizability is obtained even if the gauge bosons described by this theory are massive, as in the electroweak theory, provided the mass is only an "acquired" one, generated by the Higgs mechanism.
The mathematics of the Yang–Mills theory is a very active field of research, yielding e.g. invariants of differentiable structures on four-dimensional manifolds via work of Simon Donaldson. Furthermore, the field of Yang–Mills theories was included in the Clay Mathematics Institute's list of "Millennium Prize Problems". Here the prize-problem consists, especially, in a proof of the conjecture that the lowest excitations of a pure Yang–Mills theory (i.e. without matter fields) have a finite mass-gap with regard to the vacuum state. Another open problem, connected with this conjecture, is a proof of the confinement property in the presence of additional fermions.
In physics, the study of Yang–Mills theories does not usually start from perturbation analysis or analytical methods, but more recently from the systematic application of numerical methods to lattice gauge theories.
== See also ==
== References ==
== Further reading ==
== External links ==
"Yang-Mills field", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Yang–Mills theory". DispersiveWiki. Archived from the original on 2021-06-03. Retrieved 2018-08-30.
"The Millennium Prize Problems". The Clay Mathematics Institute. Archived from the original on 2009-01-16. Retrieved 2008-11-24.
In mathematics, projective differential geometry is the study of differential geometry, from the point of view of properties of mathematical objects such as functions, diffeomorphisms, and submanifolds, that are invariant under transformations of the projective group. This is a mixture of the approaches from Riemannian geometry of studying invariances, and of the Erlangen program of characterizing geometries according to their group symmetries.
The area was much studied by mathematicians from around 1890 for a generation (by J. G. Darboux, George Henri Halphen, Ernest Julius Wilczynski, E. Bompiani, G. Fubini, Eduard Čech, amongst others), without a comprehensive theory of differential invariants emerging.
Élie Cartan formulated the idea of a general projective connection, as part of his method of moving frames; abstractly speaking, this is the level of generality at which the Erlangen program can be reconciled with differential geometry, while it also develops the oldest part of the theory (for the projective line), namely the Schwarzian derivative, the simplest projective differential invariant.
Further work from the 1930s onwards was carried out by J. Kanitani, Shiing-Shen Chern, A. P. Norden, G. Bol, S. P. Finikov and G. F. Laptev. Even the basic results on osculation of curves, a manifestly projective-invariant topic, lack any comprehensive theory. The ideas of projective differential geometry recur in mathematics and its applications, but the formulations given are still rooted in the language of the early twentieth century.
== See also ==
Affine geometry of curves
== References ==
Ernest Julius Wilczynski, Projective differential geometry of curves and ruled surfaces (Leipzig: B.G. Teubner, 1906)
== Further reading ==
Notes on Projective Differential Geometry by Michael Eastwood
Spacetime topology is the topological structure of spacetime, a topic studied primarily in general relativity. This physical theory models gravitation as the curvature of a four dimensional Lorentzian manifold (a spacetime) and the concepts of topology thus become important in analysing local as well as global aspects of spacetime. The study of spacetime topology is especially important in physical cosmology.
== Types of topology ==
There are two main types of topology for a spacetime M.
=== Manifold topology ===
As with any manifold, a spacetime possesses a natural manifold topology. Here the open sets are the image of open sets in ℝ⁴.
=== Path or Zeeman topology ===
Definition: The topology ρ in which a subset E ⊂ M is open if for every timelike curve c there is a set O in the manifold topology such that E ∩ c = O ∩ c.
It is the finest topology which induces the same topology as M does on timelike curves.
==== Properties ====
Strictly finer than the manifold topology. It is therefore Hausdorff and separable, but not locally compact.
A base for the topology is sets of the form Y⁺(p, U) ∪ Y⁻(p, U) ∪ {p} for some point p ∈ M and some convex normal neighbourhood U ⊂ M. (Y± denote the chronological past and future).
=== Alexandrov topology ===
The Alexandrov topology on spacetime is the coarsest topology such that both Y⁺(E) and Y⁻(E) are open for all subsets E ⊂ M.
Here the base of open sets for the topology are sets of the form Y⁺(x) ∩ Y⁻(y) for some points x, y ∈ M.
This topology coincides with the manifold topology if and only if the manifold is strongly causal but it is coarser in general.
Note that in mathematics, an Alexandrov topology on a partial order is usually taken to be the coarsest topology in which only the upper sets Y⁺(E) are required to be open. This topology goes back to Pavel Alexandrov.
Nowadays, the correct mathematical term for the Alexandrov topology on spacetime (which goes back to Alexandr D. Alexandrov) would be the interval topology, but when Kronheimer and Penrose introduced the term this difference in nomenclature was not as clear, and in physics the term Alexandrov topology remains in use.
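As an illustrative sketch (not part of the article; coordinates and function names are hypothetical), the basic open sets Y⁺(x) ∩ Y⁻(y) of the Alexandrov topology can be tested numerically in two-dimensional Minkowski space with signature (+, −):

```python
# Sketch: Alexandrov basic open sets in 2D Minkowski space, signature (+, -).
# An event q lies in the chronological future Y+(p) iff q - p is timelike
# and future-pointing; the basic open sets are Y+(x) ∩ Y-(y).

def chronological_future(p, q):
    """True if q lies in Y+(p): (dt)^2 > (dx)^2 and dt > 0."""
    dt, dx = q[0] - p[0], q[1] - p[1]
    return dt > 0 and dt * dt > dx * dx

def in_alexandrov_set(x, y, q):
    """True if q lies in the basic open set Y+(x) ∩ Y-(y)."""
    return chronological_future(x, q) and chronological_future(q, y)

# The midpoint of a timelike interval lies in the basic set...
assert in_alexandrov_set((0, 0), (2, 0), (1, 0))
# ...but a simultaneous, spacelike-separated event does not.
assert not in_alexandrov_set((0, 0), (2, 0), (1, 1.5))
```

In a strongly causal spacetime, sets of this diamond shape generate the manifold topology, which is the content of the coincidence stated above.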
== Planar spacetime ==
Events connected by light have zero separation. The plenum of spacetime in the plane is split into four quadrants, each of which has the topology of R2. The dividing lines are the trajectory of inbound and outbound photons at (0,0). The planar-cosmology topological segmentation is the future F, the past P, space left L, and space right D. The homeomorphism of F with R2 amounts to polar decomposition of split-complex numbers:
z = exp(a + jb) = e^a (cosh b + j sinh b) → (a, b),
so that z → (a, b) is the split-complex logarithm and the required homeomorphism F → R2. Note that b is the rapidity parameter for relative motion in F.
F is in bijective correspondence with each of P, L, and D under the mappings z → –z, z → jz, and z → – j z, so each acquires the same topology. The union U = F ∪ P ∪ L ∪ D then has a topology nearly covering the plane, leaving out only the null cone on (0,0). Hyperbolic rotation of the plane does not mingle the quadrants, in fact, each one is an invariant set under the unit hyperbola group.
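The polar decomposition above can be checked numerically. In this sketch (function names are hypothetical), an event (t, x) in the future quadrant F (t > |x|) is sent to its split-complex logarithm (a, b), and the exponential recovers it:

```python
import math

# Sketch: the split-complex logarithm as the homeomorphism F -> R^2.
# An event z = t + j x (with j^2 = +1) in the future quadrant satisfies
# t > |x|, and z = e^a (cosh b + j sinh b) with b the rapidity.

def split_log(t, x):
    """Split-complex logarithm on the future quadrant t > |x|."""
    assert t > abs(x), "event must lie in F"
    a = 0.5 * math.log(t * t - x * x)   # log of the Minkowski 'modulus'
    b = math.atanh(x / t)               # rapidity of the event
    return a, b

def split_exp(a, b):
    """Inverse map R^2 -> F."""
    return math.exp(a) * math.cosh(b), math.exp(a) * math.sinh(b)

a, b = split_log(5.0, 3.0)
t, x = split_exp(a, b)
assert abs(t - 5.0) < 1e-12 and abs(x - 3.0) < 1e-12  # round trip
```

Hyperbolic rotation of the plane shifts b by a constant rapidity, so each quadrant is preserved, matching the invariance noted above.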
== See also ==
4-manifold
Clifford-Klein form
Closed timelike curve
Complex spacetime
Geometrodynamics
Gravitational singularity
Hantzsche-Wendt manifold
Spacetime curvature
Wormhole
== Notes ==
== References ==
Zeeman, E. C. (1964). "Causality Implies the Lorentz Group". Journal of Mathematical Physics. 5 (4): 490–493. Bibcode:1964JMP.....5..490Z. doi:10.1063/1.1704140.
Hawking, S. W.; King, A. R.; McCarthy, P. J. (1976). "A new topology for curved space–time which incorporates the causal, differential, and conformal structures" (PDF). Journal of Mathematical Physics. 17 (2): 174–181. Bibcode:1976JMP....17..174H. doi:10.1063/1.522874.
In mathematics, particularly topology, an atlas is a concept used to describe a manifold. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fiber bundles.
== Charts ==
The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism
φ from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair (U, φ).
When a coordinate system is chosen in the Euclidean space, this defines coordinates on U: the coordinates of a point P of U are defined as the coordinates of φ(P).
The pair formed by a chart and such a coordinate system is called a local coordinate system, coordinate chart, coordinate patch, coordinate map, or local frame.
== Formal definition of atlas ==
An atlas for a topological space M is an indexed family {(U_α, φ_α) : α ∈ I} of charts on M which covers M (that is, ⋃_{α∈I} U_α = M). If for some fixed n, the image of each chart is an open subset of n-dimensional Euclidean space, then M is said to be an n-dimensional manifold.
The plural of atlas is atlases, although some authors use atlantes.
An atlas (U_i, φ_i)_{i∈I} on an n-dimensional manifold M is called an adequate atlas if the following conditions hold:
The image of each chart is either ℝⁿ or ℝⁿ₊, where ℝⁿ₊ is the closed half-space,
(U_i)_{i∈I} is a locally finite open cover of M, and
M = ⋃_{i∈I} φ_i⁻¹(B₁), where B₁ is the open ball of radius 1 centered at the origin.
Every second-countable manifold admits an adequate atlas. Moreover, if 𝒱 = (V_j)_{j∈J} is an open covering of the second-countable manifold M, then there is an adequate atlas (U_i, φ_i)_{i∈I} on M such that (U_i)_{i∈I} is a refinement of 𝒱.
== Transition maps ==
A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.)
To be more precise, suppose that (U_α, φ_α) and (U_β, φ_β) are two charts for a manifold M such that U_α ∩ U_β is non-empty.
The transition map τ_{α,β} : φ_α(U_α ∩ U_β) → φ_β(U_α ∩ U_β) is the map defined by
τ_{α,β} = φ_β ∘ φ_α⁻¹.
Note that since φ_α and φ_β are both homeomorphisms, the transition map τ_{α,β} is also a homeomorphism.
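A standard concrete example (a sketch, not taken from the article) is the pair of stereographic charts on the unit circle S¹. Projecting from the north pole (0, 1) and from the south pole (0, −1) gives two charts whose transition map on the overlap turns out to be u → 1/u:

```python
# Sketch: two stereographic charts on the unit circle and their transition map.
# phi_a projects from the north pole (0, 1), phi_b from the south pole (0, -1);
# each chart misses only its own projection point.

def phi_a(x, y):            # chart away from (0, 1)
    return x / (1 - y)

def phi_a_inv(u):           # inverse of phi_a, landing on the unit circle
    return 2 * u / (1 + u * u), (u * u - 1) / (1 + u * u)

def phi_b(x, y):            # chart away from (0, -1)
    return x / (1 + y)

def transition(u):
    """tau_ab = phi_b o phi_a^{-1}, defined on the overlap u != 0."""
    return phi_b(*phi_a_inv(u))

# On the overlap the transition map is u -> 1/u, a smooth homeomorphism.
for u in (0.5, -2.0, 3.0):
    assert abs(transition(u) - 1 / u) < 1e-12
```

Because u → 1/u is smooth away from 0, these two charts form a smooth atlas in the sense of the next section.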
== More structure ==
One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives.
If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be C^k.
Very generally, if each transition function belongs to a pseudogroup 𝒢 of homeomorphisms of Euclidean space, then the atlas is called a 𝒢-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle.
== See also ==
Smooth atlas
Smooth frame
== References ==
== External links ==
Atlas by Rowland, Todd
In physics and mathematics, and especially differential geometry and gauge theory, the Yang–Mills equations are a system of partial differential equations for a connection on a vector bundle or principal bundle. They arise in physics as the Euler–Lagrange equations of the Yang–Mills action functional. They have also found significant use in mathematics.
Solutions of the equations are called Yang–Mills connections or instantons. The moduli space of instantons was used by Simon Donaldson to prove Donaldson's theorem.
== Motivation ==
=== Physics ===
In their foundational paper on the topic of gauge theories, Robert Mills and Chen-Ning Yang developed (essentially independent of the mathematical literature) the theory of principal bundles and connections in order to explain the concept of gauge symmetry and gauge invariance as it applies to physical theories. The gauge theories Yang and Mills discovered, now called Yang–Mills theories, generalised the classical work of James Maxwell on Maxwell's equations, which had been phrased in the language of a
U(1) gauge theory by Wolfgang Pauli and others. The novelty of the work of Yang and Mills was to define gauge theories for an arbitrary choice of Lie group G, called the structure group (or in physics the gauge group, see Gauge group (mathematics) for more details). This group could be non-Abelian, as opposed to the case G = U(1) corresponding to electromagnetism, and the right framework to discuss such objects is the theory of principal bundles.
The essential points of the work of Yang and Mills are as follows. One assumes that the fundamental description of a physical model is through the use of fields, and derives that under a local gauge transformation (change of local trivialisation of principal bundle), these physical fields must transform in precisely the way that a connection
A (in physics, a gauge field) on a principal bundle transforms. The gauge field strength is the curvature F_A of the connection, and the energy of the gauge field is given (up to a constant) by the Yang–Mills action functional
YM(A) = ∫_X ‖F_A‖² dvol_g.
The principle of least action dictates that the correct equations of motion for this physical theory should be given by the Euler–Lagrange equations of this functional, which are the Yang–Mills equations derived below:
d_A ⋆ F_A = 0.
=== Mathematics ===
In addition to the physical origins of the theory, the Yang–Mills equations are of important geometric interest. There is in general no natural choice of connection on a vector bundle or principal bundle. In the special case where this bundle is the tangent bundle to a Riemannian manifold, there is such a natural choice, the Levi-Civita connection, but in general there is an infinite-dimensional space of possible choices. A Yang–Mills connection gives some kind of natural choice of a connection for a general fibre bundle, as we now describe.
A connection is defined by its local forms
A_α ∈ Ω¹(U_α, ad(P)) for a trivialising open cover {U_α} for the bundle P → X. The first attempt at choosing a canonical connection might be to demand that these forms vanish. However, this is not possible unless the trivialisation is flat, in the sense that the transition functions g_{αβ} : U_α ∩ U_β → G are constant functions. Not every bundle is flat, so this is not possible in general. Instead one might ask that the local connection forms A_α are themselves constant. On a principal bundle the correct way to phrase this condition is that the curvature
F_A = dA + ½[A, A]
vanishes. However, by Chern–Weil theory if the curvature F_A vanishes (that is to say, A is a flat connection), then the underlying principal bundle must have trivial Chern classes, which is a topological obstruction to the existence of flat connections: not every principal bundle can have a flat connection.
The best one can hope for is then to ask that instead of vanishing curvature, the bundle has curvature as small as possible. The Yang–Mills action functional described above is precisely (the square of) the
L²-norm of the curvature, and its Euler–Lagrange equations describe the critical points of this functional, either the absolute minima or local minima. That is to say, Yang–Mills connections are precisely those that minimize their curvature. In this sense they are the natural choice of connection on a principal or vector bundle over a manifold from a mathematical point of view.
== Definition ==
Let X be a compact, oriented, Riemannian manifold. The Yang–Mills equations can be phrased for a connection on a vector bundle or principal G-bundle over X, for some compact Lie group G. Here the latter convention is presented. Let P denote a principal G-bundle over X. Then a connection on P may be specified by a Lie algebra-valued differential form A on the total space of the principal bundle. This connection has a curvature form F_A, which is a two-form on X with values in the adjoint bundle ad(P) of P. Associated to the connection A is an exterior covariant derivative d_A, defined on the adjoint bundle. Additionally, since G is compact, its associated compact Lie algebra admits an invariant inner product under the adjoint representation.
Since X is Riemannian, there is an inner product on the cotangent bundle, and combined with the invariant inner product on ad(P) there is an inner product on the bundle ad(P) ⊗ Λ²T*X of ad(P)-valued two-forms on X. Since X is oriented, there is an L²-inner product on the sections of this bundle. Namely,
⟨s, t⟩_{L²} = ∫_X ⟨s, t⟩ dvol_g
where inside the integral the fiber-wise inner product is being used, and dvol_g is the Riemannian volume form of X. Using this L²-inner product, the formal adjoint operator of d_A is defined by
⟨d_A s, t⟩_{L²} = ⟨s, d_A* t⟩_{L²}.
Explicitly this is given by
d_A* = ±⋆ d_A ⋆
where ⋆ is the Hodge star operator acting on two-forms.
Assuming the above set up, the Yang–Mills equations are a system of (in general non-linear) partial differential equations given by
d_A* F_A = 0.   (1)
Since the Hodge star is an isomorphism, by the explicit formula for d_A* the Yang–Mills equations can equivalently be written
d_A ⋆ F_A = 0.   (2)
A connection satisfying (1) or (2) is called a Yang–Mills connection.
Every connection automatically satisfies the Bianchi identity
d_A F_A = 0, so Yang–Mills connections can be seen as a non-linear analogue of harmonic differential forms, which satisfy dω = d*ω = 0.
In this sense the search for Yang–Mills connections can be compared to Hodge theory, which seeks a harmonic representative in the de Rham cohomology class of a differential form. The analogy being that a Yang–Mills connection is like a harmonic representative in the set of all possible connections on a principal bundle.
== Derivation ==
The Yang–Mills equations are the Euler–Lagrange equations of the Yang–Mills functional, defined by
YM(A) = ∫_X ‖F_A‖² dvol_g.   (3)
To derive the equations from the functional, recall that the space 𝒜 of all connections on P is an affine space modelled on the vector space Ω¹(P; 𝔤). Given a small deformation A + ta of a connection A in this affine space, the curvatures are related by
F_{A+ta} = F_A + t d_A a + t² a ∧ a.
To determine the critical points of (3), compute
d/dt (YM(A + ta))|_{t=0}
 = d/dt ( ∫_X ⟨F_A + t d_A a + t² a ∧ a, F_A + t d_A a + t² a ∧ a⟩ dvol_g )|_{t=0}
 = d/dt ( ∫_X ‖F_A‖² + 2t ⟨F_A, d_A a⟩ + 2t² ⟨F_A, a ∧ a⟩ + t⁴ ‖a ∧ a‖² dvol_g )|_{t=0}
 = 2 ∫_X ⟨d_A* F_A, a⟩ dvol_g.
The connection A is a critical point of the Yang–Mills functional if and only if this vanishes for every a, and this occurs precisely when (1) is satisfied.
== Moduli space of Yang–Mills connections ==
The Yang–Mills equations are gauge invariant. Mathematically, a gauge transformation is an automorphism g of the principal bundle P, and since the inner product on ad(P) is invariant, the Yang–Mills functional satisfies
YM(g · A) = ∫_X ‖g F_A g⁻¹‖² dvol_g = ∫_X ‖F_A‖² dvol_g = YM(A)
and so if A satisfies (1), so does g · A.
There is a moduli space of Yang–Mills connections modulo gauge transformations. Denote by 𝒢 the gauge group of automorphisms of P. The set ℬ = 𝒜/𝒢 classifies all connections modulo gauge transformations, and the moduli space ℳ of Yang–Mills connections is a subset. In general neither ℬ nor ℳ is Hausdorff or a smooth manifold. However, by restricting to irreducible connections, that is, connections A whose holonomy group is given by all of G, one does obtain Hausdorff spaces. The space of irreducible connections is denoted 𝒜*, and so the moduli spaces are denoted ℬ* and ℳ*.
Moduli spaces of Yang–Mills connections have been intensively studied in specific circumstances. Michael Atiyah and Raoul Bott studied the Yang–Mills equations for bundles over compact Riemann surfaces. There the moduli space obtains an alternative description as a moduli space of holomorphic vector bundles. This is the Narasimhan–Seshadri theorem, which was proved in this form relating Yang–Mills connections to holomorphic vector bundles by Donaldson. In this setting the moduli space has the structure of a compact Kähler manifold. Moduli of Yang–Mills connections have been most studied when the dimension of the base manifold
X is four. Here the Yang–Mills equations admit a simplification from a second-order PDE to a first-order PDE, the anti-self-duality equations.
== Anti-self-duality equations ==
When the dimension of the base manifold X is four, a coincidence occurs: the Hodge star operator maps two-forms to two-forms, ⋆ : Ω²(X) → Ω²(X).
The Hodge star operator squares to the identity in this case, and so has eigenvalues 1 and −1. In particular, there is a decomposition
Ω²(X) = Ω₊(X) ⊕ Ω₋(X)
into the positive and negative eigenspaces of ⋆, the self-dual and anti-self-dual two-forms. If a connection A on a principal G-bundle over a four-manifold X satisfies either F_A = ⋆F_A or F_A = −⋆F_A, then by (2), the connection is a Yang–Mills connection. These connections are called either self-dual connections or anti-self-dual connections, and the equations the self-duality (SD) equations and the anti-self-duality (ASD) equations. The spaces of self-dual and anti-self-dual connections are denoted by 𝒜⁺ and 𝒜⁻, and similarly for ℬ± and ℳ±.
The moduli space of ASD connections, or instantons, was most intensively studied by Donaldson in the case where G = SU(2) and X is simply-connected. In this setting, the principal SU(2)-bundle is classified by its second Chern class, c₂(P) ∈ H⁴(X, ℤ) ≅ ℤ. For various choices of principal bundle, one obtains moduli spaces with interesting properties. These spaces are Hausdorff, even when allowing reducible connections, and are generically smooth. It was shown by Donaldson that the smooth part is orientable. By the Atiyah–Singer index theorem, one may compute the dimension of ℳₖ⁻, the moduli space of ASD connections when c₂(P) = k, to be
dim ℳₖ⁻ = 8k − 3(1 − b₁(X) + b₊(X))
where b₁(X) is the first Betti number of X, and b₊(X) is the dimension of the positive-definite subspace of H₂(X, ℝ) with respect to the intersection form on X. For example, when X = S⁴ and k = 1, the intersection form is trivial and the moduli space has dimension dim ℳ₁⁻(S⁴) = 8 − 3 = 5. This agrees with the existence of the BPST instanton, which is the unique ASD instanton on S⁴ up to a 5-parameter family defining its centre in ℝ⁴ and its scale. Such instantons on ℝ⁴ may be extended across the point at infinity using Uhlenbeck's removable singularity theorem. More generally, for positive k, the moduli space has dimension 8k − 3.
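The dimension count is simple enough to encode directly. This sketch (the function name is hypothetical) evaluates the index-theorem formula dim ℳₖ⁻ = 8k − 3(1 − b₁ + b₊) and recovers the 5-dimensional BPST family on the 4-sphere:

```python
# Sketch: the expected dimension of the moduli space of charge-k ASD
# SU(2)-instantons on a simply-connected 4-manifold, from the index theorem.

def asd_moduli_dim(k, b1, b_plus):
    """dim M_k^- = 8k - 3(1 - b1 + b+), with b1, b+ the Betti data of X."""
    return 8 * k - 3 * (1 - b1 + b_plus)

# The 4-sphere has b1 = 0 and b+ = 0, so charge-1 instantons (BPST) form
# a 5-dimensional family: 4 centre coordinates plus 1 scale.
assert asd_moduli_dim(1, 0, 0) == 5
assert asd_moduli_dim(2, 0, 0) == 13   # 8k - 3 for higher charge on S^4
```

For positive k on S⁴ the formula reduces to 8k − 3, matching the statement above.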
== Applications ==
=== Donaldson's theorem ===
The moduli space of Yang–Mills equations was used by Donaldson to prove Donaldson's theorem about the intersection form of simply-connected four-manifolds. Using analytical results of Clifford Taubes and Karen Uhlenbeck, Donaldson was able to show that in specific circumstances (when the intersection form is definite) the moduli space of ASD instantons on a smooth, compact, oriented, simply-connected four-manifold
X gives a cobordism between a copy of the manifold itself, and a disjoint union of copies of the complex projective plane ℂP². We can count the number of copies of ℂP² in two ways: once using that signature is a cobordism invariant, and another using a Hodge-theoretic interpretation of reducible connections. Interpreting these counts carefully, one can conclude that such a smooth manifold has diagonalisable intersection form.
The moduli space of ASD instantons may be used to define further invariants of four-manifolds. Donaldson defined polynomials on the second homology group of a suitably restricted class of four-manifolds, arising from pairings of cohomology classes on the moduli space. This work has subsequently been surpassed by Seiberg–Witten invariants.
=== Dimensional reduction and other moduli spaces ===
Through the process of dimensional reduction, the Yang–Mills equations may be used to derive other important equations in differential geometry and gauge theory. Dimensional reduction is the process of taking the Yang–Mills equations over a four-manifold, typically
R
4
{\displaystyle \mathbb {R} ^{4}}
, and imposing that the solutions be invariant under a symmetry group. For example:
By requiring the anti-self-duality equations to be invariant under translations in a single direction of
R
4
{\displaystyle \mathbb {R} ^{4}}
, one obtains the Bogomolny equations which describe magnetic monopoles on
R
3
{\displaystyle \mathbb {R} ^{3}}
.
By requiring the self-duality equations to be invariant under translation in two directions, one obtains Hitchin's equations first investigated by Hitchin. These equations naturally lead to the study of Higgs bundles and the Hitchin system.
By requiring the anti-self-duality equations to be invariant in three directions, one obtains the Nahm equations on an interval.
There is a duality between solutions of the dimensionally reduced ASD equations on
R
3
{\displaystyle \mathbb {R} ^{3}}
and
R
{\displaystyle \mathbb {R} }
called the Nahm transform, after Werner Nahm, who first described how to construct monopoles from Nahm equation data. Hitchin showed the converse, and Donaldson proved that solutions to the Nahm equations could further be linked to moduli spaces of rational maps from the complex projective line to itself.
The duality observed for these solutions is theorized to hold for arbitrary dual groups of symmetries of a four-manifold. Indeed, there is a similar duality between instantons invariant under dual lattices inside $\mathbb{R}^4$ and instantons on dual four-dimensional tori, and the ADHM construction can be thought of as a duality between instantons on $\mathbb{R}^4$ and dual algebraic data over a single point.
Symmetry reductions of the ASD equations also lead to a number of integrable systems, and Ward's conjecture is that, in fact, all known integrable ODEs and PDEs come from symmetry reduction of ASDYM. For example, reductions of SU(2) ASDYM give the sine-Gordon and Korteweg–de Vries equations, reduction of $\mathrm{SL}(3,\mathbb{R})$ ASDYM gives the Tzitzeica equation, and a particular reduction to $2+1$ dimensions gives the integrable chiral model of Ward. In this sense ASDYM is a 'master theory' for integrable systems, allowing many known systems to be recovered by picking appropriate parameters, such as the choice of gauge group and symmetry reduction scheme. Other such master theories are four-dimensional Chern–Simons theory and the affine Gaudin model.
=== Chern–Simons theory ===
The moduli space of Yang–Mills equations over a compact Riemann surface $\Sigma$ can be viewed as the configuration space of Chern–Simons theory on a cylinder $\Sigma \times [0,1]$. In this case the moduli space admits a geometric quantization, discovered independently by Nigel Hitchin and Axelrod–Della Pietra–Witten.
== See also ==
Connection (vector bundle)
Connection (principal bundle)
Donaldson theory
Stable Yang–Mills connection
F-Yang–Mills equations
Bi-Yang–Mills equations
Hermitian Yang–Mills equations
Deformed Hermitian Yang–Mills equations
Yang–Mills–Higgs equations
== Notes ==
== References ==
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes.
The shapes studied in geometric modeling are mostly two- or three-dimensional (solid figures), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing. Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering, architecture, geology and medical image processing.
Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth.
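The contrast between parametric and implicit descriptions mentioned above can be sketched concretely. This is a minimal illustration, not tied to any particular modeling package; the function names are invented for the example:

```python
import math

# A circle of radius r can be described in two ways:
# parametrically, as the image of t -> (r cos t, r sin t), or
# implicitly, as the zero set of F(x, y) = x^2 + y^2 - r^2.

def circle_point(r, t):
    """Parametric form: the point on the circle for parameter t."""
    return (r * math.cos(t), r * math.sin(t))

def on_circle(r, x, y, tol=1e-9):
    """Implicit form: test membership via the defining equation."""
    return abs(x * x + y * y - r * r) < tol

# Every point produced by the parametric form satisfies the implicit equation.
x, y = circle_point(2.0, math.pi / 3)
print(on_circle(2.0, x, y))  # True
```

The parametric form is convenient for generating points along the shape (e.g. for rendering), while the implicit form is convenient for membership and intersection tests; geometric modeling systems routinely convert between the two.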
Notable awards of the area are the John A. Gregory Memorial Award and the Bézier award.
== See also ==
2D geometric modeling
Architectural geometry
Computational conformal geometry
Computational topology
Computer-aided engineering
Computer-aided manufacturing
Digital geometry
Geometric modeling kernel
List of interactive geometry software
Parametric equation
Parametric surface
Solid modeling
Space partitioning
== References ==
== Further reading ==
General textbooks:
Jean Gallier (1999). Curves and Surfaces in Geometric Modeling: Theory and Algorithms. Morgan Kaufmann. This book is out of print and freely available from the author.
Gerald E. Farin (2002). Curves and Surfaces for CAGD: A Practical Guide (5th ed.). Morgan Kaufmann. ISBN 978-1-55860-737-8.
Michael E. Mortenson (2006). Geometric Modeling (3rd ed.). Industrial Press. ISBN 978-0-8311-3298-9.
Ronald Goldman (2009). An Integrated Introduction to Computer Graphics and Geometric Modeling (1st ed.). CRC Press. ISBN 978-1-4398-0334-9.
Nikolay N. Golovanov (2014). Geometric Modeling: The mathematics of shapes. CreateSpace Independent Publishing Platform. ISBN 978-1497473195.
For multi-resolution (multiple levels of detail) geometric modeling:
Armin Iske; Ewald Quak; Michael S. Floater (2002). Tutorials on Multiresolution in Geometric Modelling: Summer School Lecture Notes. Springer Science & Business Media. ISBN 978-3-540-43639-3.
Neil Dodgson; Michael S. Floater; Malcolm Sabin (2006). Advances in Multiresolution for Geometric Modelling. Springer Science & Business Media. ISBN 978-3-540-26808-6.
Subdivision methods (such as subdivision surfaces):
Joseph D. Warren; Henrik Weimer (2002). Subdivision Methods for Geometric Design: A Constructive Approach. Morgan Kaufmann. ISBN 978-1-55860-446-9.
Jörg Peters; Ulrich Reif (2008). Subdivision Surfaces. Springer Science & Business Media. ISBN 978-3-540-76405-2.
Lars-Erik Andersson; Neil Frederick Stewart (2010). Introduction to the Mathematics of Subdivision Surfaces. SIAM. ISBN 978-0-89871-761-7.
== External links ==
Geometry and Algorithms for CAD (Lecture Note, TU Darmstadt)
Computer-aided design (CAD) is the use of computers (or workstations) to aid in the creation, modification, analysis, or optimization of a design. This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and create a database for manufacturing. Designs made through CAD software help protect products and inventions when used in patent applications. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The terms computer-aided drafting (CAD) and computer-aided design and drafting (CADD) are also used.
Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design it is known as mechanical design automation (MDA), which includes the process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids in three-dimensional (3D) space.
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design (building information modeling), prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals, often called DCC digital content creation. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.
The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).
== Overview ==
Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.
CAD is one part of the whole digital product development (DPD) activity within the product lifecycle management (PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:
Computer-aided engineering (CAE) and finite element analysis (FEA, FEM)
Computer-aided manufacturing (CAM) including instructions to computer numerical control (CNC) machines
Photorealistic rendering and motion simulation
Document management and revision control using product data management (PDM)
CAD is also used for the accurate creation of photo simulations that are often required in the preparation of environmental impact reports, in which computer-aided designs of intended buildings are superimposed into photographs of existing environments to represent what that locale will be like, where the proposed facilities are allowed to be built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.
== Types ==
There are several different types of CAD, each requiring the operator to think differently about how to use them and design their virtual components in a different manner. Virtually all CAD tools rely on constraint concepts that are used to define geometric or non-geometric elements of a model.
=== 2D CAD ===
There are many producers of the lower-end 2D sketching systems, including a number of free and open-source programs. These provide an approach to the drawing process where scale and placement on the drawing sheet can easily be adjusted in the final draft as required, unlike in hand drafting.
=== 3D CAD ===
3D wireframe is an extension of 2D drafting into a three-dimensional space. Each line has to be manually inserted into the drawing. The final product has no mass properties associated with it and cannot have features directly added to it, such as holes. The operator approaches these in a similar fashion to the 2D systems, although many 3D systems allow using the wireframe model to make the final engineering drawing views.
3D "dumb" solids are created in a way analogous to manipulations of real-world objects. Basic three-dimensional geometric forms (e.g., prisms, cylinders, spheres, or rectangles) have solid volumes added or subtracted from them as if assembling or cutting real-world objects. Two-dimensional projected views can easily be generated from the models. Basic 3D solids do not usually include tools to easily allow the motion of the components, set their limits to their motion, or identify interference between components.
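One common way to realize this add/subtract behaviour is constructive solid geometry (CSG) over signed distance functions. The sketch below is a hypothetical miniature for illustration, not how any specific CAD kernel is implemented:

```python
import math

# Solids are represented as signed distance functions (negative inside the
# solid), and volumes are "added" or "subtracted" with min/max, mimicking
# the assembly/cutting of real-world objects described above.

def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def union(a, b):        # add material
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def subtract(a, b):     # cut the material of b out of a
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A ball of radius 2 with a smaller ball cut out of its side.
part = subtract(sphere(0, 0, 0, 2.0), sphere(2.0, 0, 0, 1.0))
print(part(0, 0, 0) < 0)    # True: the centre is still inside the part
print(part(1.8, 0, 0) < 0)  # False: this point was removed by the cut
```

Real kernels use boundary representations rather than pure distance fields, but the algebra of combining solids is the same idea.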
There are several types of 3D solid modeling:
Parametric modeling allows the operator to use what is referred to as "design intent". The objects and features are created modifiable. Any future modifications can be made by changing how the original part was created. If a feature was intended to be located from the center of the part, the operator should locate it from the center of the model. The feature could be located using any geometric object already available in the part, but such arbitrary placement would defeat the design intent. If the operator designs the part as it functions, the parametric modeler is able to make changes to the part while maintaining geometric and functional relationships.
Direct or explicit modeling provides the ability to edit geometry without a history tree. With direct modeling, once a sketch is used to create geometry it is incorporated into the new geometry, and the designer only has to modify the geometry afterward without needing the original sketch. As with parametric modeling, direct modeling has the ability to include the relationships between selected geometry (e.g., tangency, concentricity).
Assembly modelling is a process which incorporates the results of the previous single-part modelling into a final product containing several parts. Assemblies can be hierarchical, depending on the specific CAD software vendor, and highly complex models can be achieved (e.g. in building engineering by using computer-aided architectural design software).
==== Freeform CAD ====
Top-end CAD systems offer the capability to incorporate more organic, aesthetic and ergonomic features into the designs. Freeform surface modeling is often combined with solids to allow the designer to create products that fit the human form and visual requirements and that interface well with the machine.
== Technology ==
Originally software for CAD systems was developed with computer languages such as Fortran and ALGOL, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modeler and freeform surface systems are built around a number of key C modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
Unexpected capabilities of these associative relationships have led to a new form of prototyping called digital prototyping, which, in contrast to physical prototypes, does not add manufacturing time to the design process. That said, CAD models can be generated by a computer after the physical prototype has been scanned using an industrial CT scanning machine. Depending on the nature of the business, digital or physical prototypes can be initially chosen according to specific needs.
Today, CAD systems exist for all the major platforms (Windows, Linux, UNIX and Mac OS X); some packages support multiple platforms.
Currently, no special hardware is required for most CAD software. However, some CAD systems can do graphically and computationally intensive tasks, so a modern graphics card, high speed (and possibly multiple) CPUs and large amounts of RAM may be recommended.
The human-machine interface is generally via a computer mouse but can also be via a pen and digitizing graphics tablet. Manipulation of the view of the model on the screen is also sometimes done with the use of a Spacemouse/SpaceBall. Some systems also support stereoscopic glasses for viewing the 3D model. Technologies that in the past were limited to larger installations or specialist applications have become available to a wide group of users. These include the CAVE or HMDs and interactive devices like motion-sensing technology.
== Software ==
Starting with the IBM Drafting System in the mid-1960s, computer-aided design systems began to provide more capabilities than just an ability to reproduce manual drafting with electronic drafting, and the cost-benefit for companies to switch to CAD became apparent. The software automated many tasks that are taken for granted from computer systems today, such as automated generation of bills of materials, auto layout in integrated circuits, interference checking, and many others. Eventually, CAD provided the designer with the ability to perform engineering calculations. During this transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD was a revolutionary change in the engineering industry, where draftsman, designer, and engineer roles that had previously been separate began to merge. CAD is an example of the pervasive effect computers were beginning to have on the industry.
Current computer-aided design software packages range from 2D vector-based drafting systems to 3D solid and surface modelers. Modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. Some CAD software is capable of dynamic mathematical modeling.
CAD technology is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories).
CAD is mainly used for detailed design of 3D models or 2D drawings of physical components, but it is also used throughout the engineering process from conceptual design and layout of products, through strength and dynamic analysis of assemblies to definition of manufacturing methods of components. It can also be used to design objects such as jewelry, furniture, appliances, etc. Furthermore, many CAD applications now offer advanced rendering and animation capabilities so engineers can better visualize their product designs. 4D BIM is a type of virtual construction engineering simulation incorporating time or schedule-related information for project management.
CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.
=== License management software ===
In the 2000s, some CAD software vendors shipped their distributions with dedicated license manager software that controlled how often or by how many users the CAD system could be used. It could run either on a local machine (loaded from a local storage device) or on a local network fileserver, and in the latter case was usually tied to a specific IP address.
== List of software packages ==
CAD software enables engineers and architects to design, inspect and manage engineering projects within an integrated graphical user interface (GUI) on a personal computer system. Most applications support solid modeling with boundary representation (B-Rep) and NURBS geometry, and enable the same to be published in a variety of formats.
Based on market statistics, commercial software from Autodesk, Dassault Systèmes, Siemens PLM Software, and PTC dominates the CAD industry. The following is a list of major CAD applications, grouped by usage statistics.
=== Commercial software ===
ABViewer
AC3D
Alibre Design
ArchiCAD (Graphisoft)
AutoCAD (Autodesk)
AutoTURN
AxSTREAM
BricsCAD
CATIA (Dassault Systèmes)
Cobalt
CorelCAD
EAGLE
Fusion 360 (Autodesk)
IntelliCAD
Inventor (Autodesk)
IRONCAD
KeyCreator (Kubotek)
Landscape Express
MEDUSA4
MicroStation (Bentley Systems)
Modelur (AgiliCity)
Onshape (PTC)
NX (Siemens Digital Industries Software)
PTC Creo (successor to Pro/ENGINEER) (PTC)
PunchCAD
Remo 3D
Revit (Autodesk)
Rhinoceros 3D
SketchUp
Solid Edge (Siemens Digital Industries Software)
SOLIDWORKS (Dassault Systèmes)
SpaceClaim
T-FLEX CAD
TranslateCAD
TurboCAD
Vectorworks (Nemetschek)
=== Open-source software ===
Blender
BRL-CAD
FreeCAD
LibreCAD
LeoCAD
OpenSCAD
QCAD
Salome (software)
SolveSpace
=== Freeware ===
BricsCAD Shape
Tinkercad (successor to Autodesk 123D)
=== CAD kernels ===
ACIS by (Spatial Corp owned by Dassault Systèmes)
C3D Toolkit by C3D Labs
Open CASCADE Open Source
Parasolid by (Siemens Digital Industries Software)
ShapeManager by (Autodesk)
== See also ==
== References ==
== External links ==
MIT 1982 CAD lab
Learning materials related to Computer-aided design at Wikiversity
Learning materials related to Computer-aided Geometric Design at Wikiversity
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.
Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero.
In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, it is equivalent to Newton's laws of motion; indeed, the Euler–Lagrange equations will produce the same equations as Newton's laws. This is particularly useful when analyzing systems whose force vectors are particularly complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.
== History ==
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.
Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766.
== Statement ==
Let $(X, L)$ be a real dynamical system with $n$ degrees of freedom. Here $X$ is the configuration space and $L = L(t, \boldsymbol{q}(t), \boldsymbol{v}(t))$ the Lagrangian, i.e. a smooth real-valued function such that $\boldsymbol{q}(t) \in X$, and $\boldsymbol{v}(t)$ is an $n$-dimensional "vector of speed". (For those familiar with differential geometry, $X$ is a smooth manifold, and $L : \mathbb{R}_t \times X \times TX \to \mathbb{R}$, where $TX$ is the tangent bundle of $X$.)
Let $\mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ be the set of smooth paths $\boldsymbol{q} : [a, b] \to X$ for which $\boldsymbol{q}(a) = \boldsymbol{x}_a$ and $\boldsymbol{q}(b) = \boldsymbol{x}_b$. The action functional $S : \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b) \to \mathbb{R}$ is defined via
$$S[\boldsymbol{q}] = \int_a^b L(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t))\, dt.$$
A path $\boldsymbol{q} \in \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ is a stationary point of $S$ if and only if it satisfies the Euler–Lagrange equations
$$\frac{\partial L}{\partial q^i}(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)) - \frac{d}{dt} \frac{\partial L}{\partial v^i}(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)) = 0, \qquad i = 1, \dots, n.$$
Here, $\dot{\boldsymbol{q}}(t)$ is the time derivative of $\boldsymbol{q}(t)$. When we say stationary point, we mean a stationary point of $S$ with respect to any small perturbation in $\boldsymbol{q}$. See proofs below for more rigorous detail.
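A hedged numerical illustration of this statement, not part of the original article: discretizing the action for the free-particle Lagrangian $L = \tfrac{1}{2}\dot{q}^2$ shows that the straight-line path between fixed endpoints is stationary, and in fact minimal:

```python
import random

# For L = (1/2) qdot^2, the Euler-Lagrange equation gives qddot = 0, so the
# stationary path between fixed endpoints is the straight line.  We check
# this on a Riemann-sum discretization of the action.

def action(q, h):
    """Discrete S[q] = sum over steps of (1/2) * ((q[i+1]-q[i])/h)^2 * h."""
    return sum(0.5 * ((q[i + 1] - q[i]) / h) ** 2 * h for i in range(len(q) - 1))

n, h = 100, 0.01                           # 100 steps on [0, 1]
straight = [i * h for i in range(n + 1)]   # q(t) = t, from q(0)=0 to q(1)=1

s0 = action(straight, h)
random.seed(0)
for _ in range(5):
    # Perturb interior points only; the endpoints stay fixed.
    bumped = [q + (0.01 * random.uniform(-1, 1) if 0 < i < n else 0.0)
              for i, q in enumerate(straight)]
    assert action(bumped, h) >= s0         # every perturbation raises the action
print(round(s0, 6))
```

Because the discrete action is a convex quadratic in the interior points, its unique minimizer is the linear interpolant, which matches the continuous result.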
== Example ==
A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible:
$$s = \int_a^b \sqrt{dx^2 + dy^2} = \int_a^b \sqrt{1 + y'^2}\, dx,$$
the integrand function being $L(x, y, y') = \sqrt{1 + y'^2}$.
The partial derivatives of L are:
$$\frac{\partial L(x, y, y')}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}} \quad \text{and} \quad \frac{\partial L(x, y, y')}{\partial y} = 0.$$
By substituting these into the Euler–Lagrange equation, we obtain
$$\begin{aligned} \frac{d}{dx} \frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= 0 \\ \frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= C = \text{constant} \\ \Rightarrow y'(x) &= \frac{C}{\sqrt{1 - C^2}} =: A \\ \Rightarrow y(x) &= Ax + B \end{aligned}$$
that is, the function must have a constant first derivative, and thus its graph is a straight line.
== Generalizations ==
=== Single function of single variable with higher derivatives ===
The stationary values of the functional
$$I[f] = \int_{x_0}^{x_1} \mathcal{L}(x, f, f', f'', \dots, f^{(k)})~dx; \qquad f' := \frac{df}{dx},\quad f'' := \frac{d^2 f}{dx^2},\quad f^{(k)} := \frac{d^k f}{dx^k}$$
can be obtained from the Euler–Lagrange equation
$$\frac{\partial \mathcal{L}}{\partial f} - \frac{d}{dx}\left(\frac{\partial \mathcal{L}}{\partial f'}\right) + \frac{d^2}{dx^2}\left(\frac{\partial \mathcal{L}}{\partial f''}\right) - \dots + (-1)^k \frac{d^k}{dx^k}\left(\frac{\partial \mathcal{L}}{\partial f^{(k)}}\right) = 0$$
under fixed boundary conditions for the function itself as well as for the first $k - 1$ derivatives (i.e. for all $f^{(i)},\ i \in \{0, \dots, k - 1\}$). The endpoint values of the highest derivative $f^{(k)}$ remain flexible.
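For instance (a standard illustration, added here for concreteness): with $k = 2$ and $\mathcal{L} = \tfrac{1}{2}(f'')^2$, the first two terms of the equation above vanish and it reduces to
$$\frac{d^2}{dx^2}\left(\frac{\partial \mathcal{L}}{\partial f''}\right) = \frac{d^2}{dx^2} f'' = f^{(4)} = 0,$$
whose solutions are the cubic polynomials; this is the Euler–Bernoulli beam equation with no load.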
=== Several functions of single variable with single derivative ===
If the problem involves finding several functions ($f_1, f_2, \dots, f_m$) of a single independent variable ($x$) that define an extremum of the functional
$$I[f_1, f_2, \dots, f_m] = \int_{x_0}^{x_1} \mathcal{L}(x, f_1, f_2, \dots, f_m, f_1', f_2', \dots, f_m')~dx; \qquad f_i' := \frac{df_i}{dx}$$
then the corresponding Euler–Lagrange equations are
$$\frac{\partial \mathcal{L}}{\partial f_i} - \frac{d}{dx}\left(\frac{\partial \mathcal{L}}{\partial f_i'}\right) = 0; \qquad i = 1, 2, \dots, m$$
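As a quick illustration (a standard one, added here): for two functions of a single variable with $\mathcal{L} = \tfrac{1}{2}\left(f_1'^2 + f_2'^2\right)$, the system gives
$$-\frac{d}{dx} f_i' = -f_i'' = 0, \qquad i = 1, 2,$$
so each component is affine and the extremal curve $(f_1(x), f_2(x))$ traces a straight line in the plane.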
=== Single function of several variables with single derivative ===
A multi-dimensional generalization comes from considering a function on n variables. If
Ω
{\displaystyle \Omega }
is some surface, then
I
[
f
]
=
∫
Ω
L
(
x
1
,
…
,
x
n
,
f
,
f
1
,
…
,
f
n
)
d
x
;
f
j
:=
∂
f
∂
x
j
{\displaystyle I[f]=\int _{\Omega }{\mathcal {L}}(x_{1},\dots ,x_{n},f,f_{1},\dots ,f_{n})\,\mathrm {d} \mathbf {x} \,\!~;~~f_{j}:={\cfrac {\partial f}{\partial x_{j}}}}
is extremized only if f satisfies the partial differential equation
∂
L
∂
f
−
∑
j
=
1
n
∂
∂
x
j
(
∂
L
∂
f
j
)
=
0.
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial f}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{j}}}\right)=0.}
When n = 2 and functional
I
{\displaystyle {\mathcal {I}}}
is the energy functional, this leads to the soap-film minimal surface problem.
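A worked instance (standard, added for illustration): taking $\mathcal{L} = \tfrac{1}{2}\sum_{j=1}^n f_j^2$, the Dirichlet energy density, gives $\partial \mathcal{L}/\partial f = 0$ and $\partial \mathcal{L}/\partial f_j = f_j$, so the equation above becomes
$$-\sum_{j=1}^n \frac{\partial}{\partial x_j} f_j = -\Delta f = 0,$$
Laplace's equation: extremals of the Dirichlet energy are exactly the harmonic functions.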
=== Several functions of several variables with single derivative ===
If there are several unknown functions to be determined and several variables such that
$$I[f_1, f_2, \dots, f_m] = \int_\Omega \mathcal{L}(x_1, \dots, x_n, f_1, \dots, f_m, f_{1,1}, \dots, f_{1,n}, \dots, f_{m,1}, \dots, f_{m,n})\, d\mathbf{x}; \qquad f_{i,j} := \frac{\partial f_i}{\partial x_j}$$
the system of Euler–Lagrange equations is
$$\frac{\partial \mathcal{L}}{\partial f_i} - \sum_{j=1}^n \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{i,j}}\right) = 0; \qquad i = 1, 2, \dots, m.$$
=== Single function of two variables with higher derivatives ===
If there is a single unknown function f to be determined that is dependent on two variables x1 and x2 and if the functional depends on higher derivatives of f up to n-th order such that
I
[
f
]
=
∫
Ω
L
(
x
1
,
x
2
,
f
,
f
1
,
f
2
,
f
11
,
f
12
,
f
22
,
…
,
f
22
…
2
)
d
x
f
i
:=
∂
f
∂
x
i
,
f
i
j
:=
∂
2
f
∂
x
i
∂
x
j
,
…
{\displaystyle {\begin{aligned}I[f]&=\int _{\Omega }{\mathcal {L}}(x_{1},x_{2},f,f_{1},f_{2},f_{11},f_{12},f_{22},\dots ,f_{22\dots 2})\,\mathrm {d} \mathbf {x} \\&\qquad \quad f_{i}:={\cfrac {\partial f}{\partial x_{i}}}\;,\quad f_{ij}:={\cfrac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}\;,\;\;\dots \end{aligned}}}
then the Euler–Lagrange equation is
∂
L
∂
f
−
∂
∂
x
1
(
∂
L
∂
f
1
)
−
∂
∂
x
2
(
∂
L
∂
f
2
)
+
∂
2
∂
x
1
2
(
∂
L
∂
f
11
)
+
∂
2
∂
x
1
∂
x
2
(
∂
L
∂
f
12
)
+
∂
2
∂
x
2
2
(
∂
L
∂
f
22
)
−
⋯
+
(
−
1
)
n
∂
n
∂
x
2
n
(
∂
L
∂
f
22
…
2
)
=
0
{\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial f}}&-{\frac {\partial }{\partial x_{1}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{1}}}\right)-{\frac {\partial }{\partial x_{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{2}}}\right)+{\frac {\partial ^{2}}{\partial x_{1}^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{11}}}\right)+{\frac {\partial ^{2}}{\partial x_{1}\partial x_{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{12}}}\right)+{\frac {\partial ^{2}}{\partial x_{2}^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{22}}}\right)\\&-\dots +(-1)^{n}{\frac {\partial ^{n}}{\partial x_{2}^{n}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{22\dots 2}}}\right)=0\end{aligned}}}
which can be represented shortly as:
∂
L
∂
f
+
∑
j
=
1
n
∑
μ
1
≤
…
≤
μ
j
(
−
1
)
j
∂
j
∂
x
μ
1
…
∂
x
μ
j
(
∂
L
∂
f
μ
1
…
μ
j
)
=
0
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial f}}+\sum _{j=1}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}{\frac {\partial ^{j}}{\partial x_{\mu _{1}}\dots \partial x_{\mu _{j}}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{\mu _{1}\dots \mu _{j}}}}\right)=0}
wherein {\displaystyle \mu _{1}\dots \mu _{j}} are indices that span the number of variables, that is, here they go from 1 to 2. The summation over the {\displaystyle \mu _{1}\dots \mu _{j}} indices is restricted to {\displaystyle \mu _{1}\leq \mu _{2}\leq \ldots \leq \mu _{j}} in order to avoid counting the same partial derivative multiple times; for example, {\displaystyle f_{12}=f_{21}} appears only once in the previous equation.
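As a concrete check of the formula above, one can let a computer algebra system derive the Euler–Lagrange equation. This sketch (assuming SymPy is available; `euler_equations` lives in `sympy.calculus.euler`) applies it to the Dirichlet energy density in two independent variables, which yields Laplace's equation:

```python
# Sketch (assumes SymPy is installed): derive the Euler-Lagrange equation
# for a Lagrangian density in two independent variables.
from sympy import symbols, Function, Derivative
from sympy.calculus.euler import euler_equations

x1, x2 = symbols('x1 x2')
f = Function('f')(x1, x2)

# Dirichlet energy density: L = (f_1^2 + f_2^2) / 2
L = Derivative(f, x1)**2 / 2 + Derivative(f, x2)**2 / 2

eqs = euler_equations(L, [f], [x1, x2])
# The single resulting equation is -f_11 - f_22 = 0, i.e. Laplace's equation.
print(eqs)
```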
=== Several functions of several variables with higher derivatives ===
If there are p unknown functions fi to be determined that are dependent on m variables x1 ... xm and if the functional depends on higher derivatives of the fi up to n-th order such that
{\displaystyle {\begin{aligned}I[f_{1},\ldots ,f_{p}]&=\int _{\Omega }{\mathcal {L}}(x_{1},\ldots ,x_{m};f_{1},\ldots ,f_{p};f_{1,1},\ldots ,f_{p,m};f_{1,11},\ldots ,f_{p,mm};\ldots ;f_{p,1\ldots 1},\ldots ,f_{p,m\ldots m})\,\mathrm {d} \mathbf {x} \\&\qquad \quad f_{i,\mu }:={\cfrac {\partial f_{i}}{\partial x_{\mu }}}\;,\quad f_{i,\mu _{1}\mu _{2}}:={\cfrac {\partial ^{2}f_{i}}{\partial x_{\mu _{1}}\partial x_{\mu _{2}}}}\;,\;\;\dots \end{aligned}}}
where {\displaystyle \mu _{1}\dots \mu _{j}} are indices that span the number of variables, that is, they go from 1 to m. Then the Euler–Lagrange equation is
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial f_{i}}}+\sum _{j=1}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}{\frac {\partial ^{j}}{\partial x_{\mu _{1}}\dots \partial x_{\mu _{j}}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i,\mu _{1}\dots \mu _{j}}}}\right)=0}
where the summation over the {\displaystyle \mu _{1}\dots \mu _{j}} avoids counting the same derivative {\displaystyle f_{i,\mu _{1}\mu _{2}}=f_{i,\mu _{2}\mu _{1}}} several times, just as in the previous subsection. This can be expressed more compactly as
∑
j
=
0
n
∑
μ
1
≤
…
≤
μ
j
(
−
1
)
j
∂
μ
1
…
μ
j
j
(
∂
L
∂
f
i
,
μ
1
…
μ
j
)
=
0
{\displaystyle \sum _{j=0}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}\partial _{\mu _{1}\ldots \mu _{j}}^{j}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i,\mu _{1}\dots \mu _{j}}}}\right)=0}
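The general several-functions formula can likewise be checked symbolically. A sketch, again assuming SymPy is available, for two coupled functions of one variable with an illustrative Lagrangian:

```python
# Sketch (assumes SymPy is installed): Euler-Lagrange equations for several
# unknown functions of one variable, here two coupled oscillators.
from sympy import symbols, Function, Derivative, simplify
from sympy.calculus.euler import euler_equations

t, k = symbols('t k')
f1, f2 = Function('f1')(t), Function('f2')(t)

# Illustrative Lagrangian: L = (f1'^2 + f2'^2)/2 - k*(f1 - f2)^2/2
L = (Derivative(f1, t)**2 + Derivative(f2, t)**2) / 2 - k * (f1 - f2)**2 / 2

eqs = euler_equations(L, [f1, f2], t)
# One equation per unknown function:
#   -k*(f1 - f2) - f1'' = 0   and   k*(f1 - f2) - f2'' = 0
```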
=== Field theories ===
== Generalization to manifolds ==
Let {\displaystyle M} be a smooth manifold, and let {\displaystyle C^{\infty }([a,b])} denote the space of smooth functions {\displaystyle f\colon [a,b]\to M}. Then, for functionals {\displaystyle S\colon C^{\infty }([a,b])\to \mathbb {R} } of the form
{\displaystyle S[f]=\int _{a}^{b}(L\circ {\dot {f}})(t)\,\mathrm {d} t}
where {\displaystyle L\colon TM\to \mathbb {R} } is the Lagrangian, the statement {\displaystyle \mathrm {d} S_{f}=0} is equivalent to the statement that, for all {\displaystyle t\in [a,b]}, each coordinate frame trivialization {\displaystyle (x^{i},X^{i})} of a neighborhood of {\displaystyle {\dot {f}}(t)} yields the following {\displaystyle \dim M} equations:
{\displaystyle \forall i:{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial X^{i}}}{\bigg |}_{{\dot {f}}(t)}={\frac {\partial L}{\partial x^{i}}}{\bigg |}_{{\dot {f}}(t)}.}
Euler–Lagrange equations can also be written in a coordinate-free form as
{\displaystyle {\mathcal {L}}_{\Delta }\theta _{L}=dL}
where {\displaystyle \theta _{L}} is the canonical momentum 1-form corresponding to the Lagrangian {\displaystyle L}. The vector field generating time translations is denoted by {\displaystyle \Delta } and the Lie derivative is denoted by {\displaystyle {\mathcal {L}}}. One can use local charts {\displaystyle (q^{\alpha },{\dot {q}}^{\alpha })} in which
{\displaystyle \theta _{L}={\frac {\partial L}{\partial {\dot {q}}^{\alpha }}}dq^{\alpha }}
and
{\displaystyle \Delta :={\frac {d}{dt}}={\dot {q}}^{\alpha }{\frac {\partial }{\partial q^{\alpha }}}+{\ddot {q}}^{\alpha }{\frac {\partial }{\partial {\dot {q}}^{\alpha }}}}
and use coordinate expressions for the Lie derivative to verify equivalence with the coordinate expressions of the Euler–Lagrange equation. The coordinate-free form is particularly suitable for the geometrical interpretation of the Euler–Lagrange equations.
== See also ==
Lagrangian mechanics
Hamiltonian mechanics
Analytical mechanics
Beltrami identity
Functional derivative
== Notes ==
== References ==
"Lagrange equations (in mechanics)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Euler-Lagrange Differential Equation". MathWorld.
Calculus of Variations at PlanetMath.
Gelfand, Izrail Moiseevich (1963). Calculus of Variations. Dover. ISBN 0-486-41448-5.
Roubicek, T.: Calculus of variations. Chap. 17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588.
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
== Historical background ==
In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter.
In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction.
In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form.
In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973, David Gross and Frank Wilczek, and independently David Politzer, discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark.
The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons.
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted.
The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks.
The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France.
== Particle content ==
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Notes:
[†] An anti-electron (e+) is conventionally called a "positron".
=== Fermions ===
The Standard Model includes 12 elementary particles of spin 1⁄2, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle, a particle with the same properties but opposite charges. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of earlier generations. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.
There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe.
=== Gauge bosons ===
The Standard Model includes 4 kinds of gauge bosons of spin 1, bosons being quantum particles with integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains these forces as arising from fermions exchanging virtual force carrier particles; at a macroscopic scale, this manifests as a force. Because they are bosons, the gauge bosons do not follow the Pauli exclusion principle that constrains fermions, and so have no theoretical limit on their spatial density. The types of gauge bosons are described below.
Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED).
Strong Interactions: Gluons mediate the strong interactions, which bind quarks to each other by influencing the color charge, with the interactions being described by the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves.
Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions, being responsible for radioactivity. They are massive, with the Z being more massive than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles. The W± carries an electric charge of +1 and −1 and couples to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons, along with the photon, are grouped together as collectively mediating the electroweak interaction.
Gravity: It is currently unexplained in the Standard Model, as the hypothetical mediating particle graviton has been proposed, but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime.
The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section.
=== Higgs boson ===
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two 7 TeV proton beams) was built to answer the question of whether the Higgs boson actually exists.
On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c2 (about 133 proton masses, on the order of 10−25 kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson.
== Theoretical aspects ==
=== Construction of the Standard Model Lagrangian ===
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time.
The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries.
The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table (made visible by clicking "show") above.
==== Quantum chromodynamics sector ====
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, which is a Yang–Mills gauge theory with SU(3) symmetry, generated by
{\displaystyle T^{a}=\lambda ^{a}/2}. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by
{\displaystyle {\mathcal {L}}_{\text{QCD}}={\overline {\psi }}i\gamma ^{\mu }D_{\mu }\psi -{\frac {1}{4}}G_{\mu \nu }^{a}G_{a}^{\mu \nu },}
where {\displaystyle \psi } is a three-component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green) and summation over flavor (i.e. up, down, strange, etc.) is implied.
The gauge covariant derivative of QCD is defined by
{\displaystyle D_{\mu }\equiv \partial _{\mu }-ig_{\text{s}}{\frac {1}{2}}\lambda ^{a}G_{\mu }^{a}}, where
γμ are the Dirac matrices,
Gaμ is the 8-component ({\displaystyle a=1,2,\dots ,8}) SU(3) gauge field,
λa are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group,
Gaμν represents the gluon field strength tensor, and
gs is the strong coupling constant.
The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form
{\displaystyle \psi \rightarrow \psi '=U\psi }, where {\displaystyle U=e^{-ig_{\text{s}}\lambda ^{a}\phi ^{a}(x)}} is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and {\displaystyle \phi ^{a}(x)} is an arbitrary function of spacetime.
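The generators {\displaystyle T^{a}=\lambda ^{a}/2} built from the Gell-Mann matrices are traceless, Hermitian, and normalized so that Tr(T^a T^b) = δ^{ab}/2. A small numerical check of these properties (assuming NumPy is available):

```python
# Sketch (assumes NumPy): verify tracelessness, hermiticity, and the
# normalization Tr(T^a T^b) = delta^{ab}/2 of the SU(3) generators.
import numpy as np

s3 = np.sqrt(3.0)
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],            # lambda_1
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],         # lambda_2
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],           # lambda_3
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],            # lambda_4
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],         # lambda_5
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],            # lambda_6
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],         # lambda_7
    [[1/s3, 0, 0], [0, 1/s3, 0], [0, 0, -2/s3]],  # lambda_8
], dtype=complex)

T = lam / 2  # generators T^a = lambda^a / 2

for a in range(8):
    assert abs(np.trace(T[a])) < 1e-12           # traceless
    assert np.allclose(T[a], T[a].conj().T)      # Hermitian
    for b in range(8):
        tr = np.trace(T[a] @ T[b])
        expected = 0.5 if a == b else 0.0
        assert abs(tr - expected) < 1e-12        # Tr(T^a T^b) = delta^{ab}/2
```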
==== Electroweak sector ====
The electroweak sector is a Yang–Mills gauge theory with the symmetry group U(1) × SU(2)L,
{\displaystyle {\mathcal {L}}_{\text{EW}}={\overline {Q}}_{{\text{L}}j}i\gamma ^{\mu }D_{\mu }Q_{{\text{L}}j}+{\overline {u}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }u_{{\text{R}}j}+{\overline {d}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }d_{{\text{R}}j}+{\overline {\ell }}_{{\text{L}}j}i\gamma ^{\mu }D_{\mu }\ell _{{\text{L}}j}+{\overline {e}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }e_{{\text{R}}j}-{\tfrac {1}{4}}W_{a}^{\mu \nu }W_{\mu \nu }^{a}-{\tfrac {1}{4}}B^{\mu \nu }B_{\mu \nu },}
where the subscript {\displaystyle j} sums over the three generations of fermions; {\displaystyle Q_{\text{L}},u_{\text{R}}}, and {\displaystyle d_{\text{R}}} are the left-handed doublet, right-handed singlet up type, and right-handed singlet down type quark fields; and {\displaystyle \ell _{\text{L}}} and {\displaystyle e_{\text{R}}} are the left-handed doublet and right-handed singlet lepton fields.
The electroweak gauge covariant derivative is defined as
{\displaystyle D_{\mu }\equiv \partial _{\mu }-ig'{\tfrac {1}{2}}Y_{\text{W}}B_{\mu }-ig{\tfrac {1}{2}}{\vec {\tau }}_{\text{L}}{\vec {W}}_{\mu }}, where
Bμ is the U(1) gauge field,
YW is the weak hypercharge – the generator of the U(1) group,
W→μ is the 3-component SU(2) gauge field,
→τL are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions,
g' and g are the U(1) and SU(2) coupling constants respectively,
{\displaystyle W^{a\mu \nu }} ({\displaystyle a=1,2,3}) and {\displaystyle B^{\mu \nu }} are the field strength tensors for the weak isospin and weak hypercharge fields.
Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form
{\displaystyle m{\overline {\psi }}\psi }
do not respect U(1) × SU(2)L gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field.
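Although mass terms are forbidden in the Lagrangian itself, the Higgs mechanism generates them; for the neutral gauge fields this amounts to diagonalizing a 2 × 2 mass-squared matrix in the (B, W3) basis, leaving one massless state (the photon) and one massive state (the Z). A numerical sketch, where the coupling values g and g′ and the vev are illustrative approximations rather than precision inputs:

```python
# Sketch (assumes NumPy): diagonalize the neutral gauge boson mass-squared
# matrix generated by the Higgs vev. Coupling values are illustrative
# approximations, not precision inputs.
import numpy as np

g_prime, g = 0.357, 0.652   # assumed U(1) and SU(2) couplings
v = 246.22                  # electroweak vev in GeV

# Mass-squared matrix in the (B, W3) basis from |D_mu <phi>|^2
M2 = (v**2 / 4) * np.array([[g_prime**2, -g * g_prime],
                            [-g * g_prime, g**2]])

masses = np.sqrt(np.maximum(np.linalg.eigvalsh(M2), 0.0))
print(masses)  # smallest eigenvalue ~0 (the photon); largest ~91 GeV (the Z)
```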
==== Higgs sector ====
In the Standard Model, the Higgs field is an SU(2)L doublet of complex scalar fields with four degrees of freedom:
{\displaystyle \varphi ={\begin{pmatrix}\varphi ^{+}\\\varphi ^{0}\end{pmatrix}}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}\varphi _{1}+i\varphi _{2}\\\varphi _{3}+i\varphi _{4}\end{pmatrix}},}
where the superscripts + and 0 indicate the electric charge {\displaystyle Q} of the components. The weak hypercharge {\displaystyle Y_{\text{W}}} of both components is 1. Before symmetry breaking, the Higgs Lagrangian is
{\displaystyle {\mathcal {L}}_{\text{H}}=\left(D_{\mu }\varphi \right)^{\dagger }\left(D^{\mu }\varphi \right)-V(\varphi ),}
where {\displaystyle D_{\mu }} is the electroweak gauge covariant derivative defined above and {\displaystyle V(\varphi )} is the potential of the Higgs field. The square of the covariant derivative leads to three and four point interactions between the electroweak gauge fields {\displaystyle W_{\mu }^{a}} and {\displaystyle B_{\mu }} and the scalar field {\displaystyle \varphi }. The scalar potential is given by
{\displaystyle V(\varphi )=-\mu ^{2}\varphi ^{\dagger }\varphi +\lambda \left(\varphi ^{\dagger }\varphi \right)^{2},}
where {\displaystyle \mu ^{2}>0}, so that {\displaystyle \varphi } acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and {\displaystyle \lambda >0}, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field {\displaystyle \varphi }.
The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when {\displaystyle \varphi ^{\dagger }\varphi ={\tfrac {\mu ^{2}}{2\lambda }}}. It is possible to perform a gauge transformation on {\displaystyle \varphi } such that the ground state is transformed to a basis where {\displaystyle \varphi _{1}=\varphi _{2}=\varphi _{4}=0} and {\displaystyle \varphi _{3}={\tfrac {\mu }{\sqrt {\lambda }}}\equiv v}. This breaks the symmetry of the ground state. The expectation value of {\displaystyle \varphi } now becomes
{\displaystyle \langle \varphi \rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}0\\v\end{pmatrix}},}
where {\displaystyle v} has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV/c2.
After symmetry breaking, the masses of the W and Z are given by
{\displaystyle m_{\text{W}}={\frac {1}{2}}gv} and {\displaystyle m_{\text{Z}}={\frac {1}{2}}{\sqrt {g^{2}+g'^{2}}}v}, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is {\displaystyle m_{\text{H}}={\sqrt {2\mu ^{2}}}={\sqrt {2\lambda }}v}. Since {\displaystyle \mu } and {\displaystyle \lambda } are free parameters, the Higgs boson's mass could not be predicted beforehand and had to be determined experimentally.
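These relations can be checked symbolically. A sketch, assuming SymPy is available, that restricts the potential to the real direction h (with φ†φ = h²/2, matching the vacuum expectation value above) and reads off the vev and the Higgs mass:

```python
# Sketch (assumes SymPy): minimize the Higgs potential along the real
# direction h, with phi^dagger phi = h^2 / 2, and read off the vev and m_H.
from sympy import symbols, diff, solve, sqrt, simplify

h, mu, lam = symbols('h mu lam', positive=True)

# V = -mu^2 phi^dag phi + lam (phi^dag phi)^2 restricted to phi^dag phi = h^2/2
V = -mu**2 * h**2 / 2 + lam * h**4 / 4

v = solve(diff(V, h), h)[0]                 # nontrivial stationary point
assert simplify(v - mu / sqrt(lam)) == 0    # v = mu / sqrt(lam), as above

mH2 = diff(V, h, 2).subs(h, v)              # curvature at the minimum = m_H^2
assert simplify(mH2 - 2 * mu**2) == 0       # m_H = sqrt(2) mu = sqrt(2 lam) v
```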
==== Yukawa sector ====
The Yukawa interaction terms are:
{\displaystyle {\mathcal {L}}_{\text{Yukawa}}=(Y_{\text{u}})_{mn}({\bar {Q}}_{\text{L}})_{m}{\tilde {\varphi }}(u_{\text{R}})_{n}+(Y_{\text{d}})_{mn}({\bar {Q}}_{\text{L}})_{m}\varphi (d_{\text{R}})_{n}+(Y_{\text{e}})_{mn}({\bar {\ell }}_{\text{L}})_{m}{\varphi }(e_{\text{R}})_{n}+\mathrm {h.c.} }
where {\displaystyle Y_{\text{u}}}, {\displaystyle Y_{\text{d}}}, and {\displaystyle Y_{\text{e}}} are 3 × 3 matrices of Yukawa couplings, with the mn term giving the coupling of the generations m and n, and h.c. means Hermitian conjugate of preceding terms. The fields {\displaystyle Q_{\text{L}}} and {\displaystyle \ell _{\text{L}}} are left-handed quark and lepton doublets. Likewise, {\displaystyle u_{\text{R}},d_{\text{R}}} and {\displaystyle e_{\text{R}}} are right-handed up-type quark, down-type quark, and lepton singlets. Finally {\displaystyle \varphi } is the Higgs doublet and {\displaystyle {\tilde {\varphi }}=i\tau _{2}\varphi ^{*}} is its charge conjugate state.
The Yukawa terms are invariant under the SU(2)L × U(1)Y gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking.
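After spontaneous symmetry breaking, a diagonal Yukawa coupling y_f gives a fermion a mass m_f = y_f v/√2, since the Higgs doublet acquires the expectation value (0, v/√2). A numeric illustration, where the top quark Yukawa value is an assumed round number close to unity:

```python
# Sketch: fermion mass from a Yukawa coupling after symmetry breaking,
# m_f = y_f * v / sqrt(2). The top Yukawa value is an illustrative assumption.
import math

v = 246.22       # electroweak vev in GeV
y_top = 0.99     # assumed top quark Yukawa coupling (close to 1)

m_top = y_top * v / math.sqrt(2)
print(round(m_top, 1))  # ~172.4, in the vicinity of the measured top mass in GeV
```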
== Fundamental interactions ==
The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles.
=== Gravity ===
Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist.
=== Electromagnetism ===
Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics.
=== Weak nuclear force ===
The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range, due to the fact that the weak mediating particles, W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from them being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions.
In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies.
=== Strong nuclear force ===
The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation, therefore quarks can only exist in hadrons and never in isolation, at low energies. Asymptotic freedom means that the strong force becomes weaker, as the energy scale increases. The strong force overpowers the electrostatic repulsion of protons and quarks in nuclei and hadrons respectively, at their respective scales.
While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue is "leaked", which appears as the exchange of virtual mesons, that causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model.
== Tests and predictions ==
The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed.
== Challenges ==
Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem.
Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done.
If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory.
This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 10^14 GeV, the neutrino masses can be of the right order of magnitude.
Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include:
The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe.
Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters.
The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles.
The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.
Currently, no proposed theory of everything has been widely accepted or verified.
== See also ==
== Notes ==
== References ==
== Further reading ==
Oerter, Robert (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Plume. ISBN 978-0-452-28786-0.
Schumm, Bruce A. (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 978-0-8018-7971-5.
"The Standard Model of Particle Physics Interactive Graphic".
=== Introductory textbooks ===
Robert Mann (2009). An Introduction to Particle Physics and the Standard Model. CRC Press. ISBN 9780429141225.
W. Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0.
J.E. Dodd; B.M. Gripaios (2020). The Ideas of Particle Physics: An Introduction for Scientists. Cambridge University Press. ISBN 978-1-108-72740-2.
D.J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 978-0-471-60386-3.
W. N. Cottingham and D. A. Greenwood (2023). An Introduction to the Standard Model of Particle Physics. Cambridge University Press. ISBN 9781009401685.
=== Advanced textbooks ===
T.P. Cheng; L.F. Li (2006). Gauge theory of elementary particle physics. Oxford University Press. ISBN 978-0-19-851961-4. Highlights the gauge theory aspects of the Standard Model.
J.F. Donoghue; E. Golowich; B.R. Holstein (1994). Dynamics of the Standard Model. Cambridge University Press. ISBN 978-0-521-47652-2. Highlights dynamical and phenomenological aspects of the Standard Model.
Ken J. Barnes (2010). Group Theory for the Standard Model of Particle Physics and Beyond. Taylor & Francis. ISBN 9780429184550.
Nagashima, Yorikiyo (2013). Elementary Particle Physics: Foundations of the Standard Model, Volume 2. Wiley. ISBN 978-3-527-64890-0. 920 pages.
Schwartz, Matthew D. (2014). Quantum Field Theory and the Standard Model. Cambridge University. ISBN 978-1-107-03473-0. 952 pages.
Langacker, Paul (2009). The Standard Model and Beyond. CRC Press. ISBN 978-1-4200-7907-4. 670 pages. Highlights group-theoretical aspects of the Standard Model.
=== Journal articles ===
E.S. Abers; B.W. Lee (1973). "Gauge theories". Physics Reports. 9 (1): 1–141. Bibcode:1973PhR.....9....1A. doi:10.1016/0370-1573(73)90027-6.
M. Baak; et al. (2012). "The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC". The European Physical Journal C. 72 (11): 2205. arXiv:1209.2716. Bibcode:2012EPJC...72.2205B. doi:10.1140/epjc/s10052-012-2205-9. S2CID 15052448.
Y. Hayato; et al. (1999). "Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". Physical Review Letters. 83 (8): 1529–1533. arXiv:hep-ex/9904020. Bibcode:1999PhRvL..83.1529H. doi:10.1103/PhysRevLett.83.1529. S2CID 118326409.
S.F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283.
D.P. Roy (1999). "Basic Constituents of Matter and their Interactions – A Progress Report". arXiv:hep-ph/9912523.
F. Wilczek (2004). "The Universe Is A Strange Place". Nuclear Physics B: Proceedings Supplements. 134: 3. arXiv:astro-ph/0401347. Bibcode:2004NuPhS.134....3W. doi:10.1016/j.nuclphysbps.2004.08.001. S2CID 28234516.
== External links ==
"The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast.
The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces.
Particle Physics: Standard Model, Leonard Susskind lectures (2010).
In algebra, a sextic (or hexic) polynomial is a polynomial of degree six.
A sextic equation is a polynomial equation of degree six—that is, an equation whose left hand side is a sextic polynomial and whose right hand side is zero. More precisely, it has the form:
{\displaystyle ax^{6}+bx^{5}+cx^{4}+dx^{3}+ex^{2}+fx+g=0,}
where a ≠ 0 and the coefficients a, b, c, d, e, f, g may be integers, rational numbers, real numbers, complex numbers or, more generally, members of any field.
A sextic function is a function defined by a sextic polynomial. Because they have an even degree, sextic functions appear similar to quartic functions when graphed, except they may possess an additional local maximum and local minimum each. The derivative of a sextic function is a quintic function.
Since a sextic function is defined by a polynomial with even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If the leading coefficient a is positive, then the function increases to positive infinity at both sides and thus the function has a global minimum. Likewise, if a is negative, the sextic function decreases to negative infinity and has a global maximum.
== Solvable sextics ==
Some sixth degree equations, such as ax6 + dx3 + g = 0, can be solved by factorizing into radicals, but other sextics cannot. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals which gave rise to the field of Galois theory.
It follows from Galois theory that a sextic equation is solvable in terms of radicals if and only if its Galois group is contained either in the group of order 48 which stabilizes a partition of the set of the roots into three subsets of two roots or in the group of order 72 which stabilizes a partition of the set of the roots into two subsets of three roots.
There are formulas to test each case and, if the equation is solvable, to compute the roots in terms of radicals.
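As a concrete illustration of the solvable trinomial case mentioned above, the equation ax⁶ + dx³ + g = 0 can be treated as a quadratic in u = x³: solve for u with the quadratic formula, then take the three complex cube roots of each value. The sketch below (function name and example coefficients are illustrative, not from this article) does this numerically:

```python
import cmath

def solve_trinomial_sextic(a, d, g):
    """Solve a*x**6 + d*x**3 + g = 0 by substituting u = x**3 and
    applying the quadratic formula (illustrative helper)."""
    disc = cmath.sqrt(d * d - 4 * a * g)
    roots = []
    for u in ((-d + disc) / (2 * a), (-d - disc) / (2 * a)):
        r, phi = cmath.polar(u)
        # the three complex cube roots of u give six roots in total
        for k in range(3):
            roots.append(cmath.rect(r ** (1 / 3), (phi + 2 * cmath.pi * k) / 3))
    return roots

roots = solve_trinomial_sextic(1, -9, 8)  # x**6 - 9x**3 + 8 = (x**3 - 1)(x**3 - 8)
assert len(roots) == 6
assert all(abs(x**6 - 9 * x**3 + 8) < 1e-9 for x in roots)
```

Here x⁶ − 9x³ + 8 factors as (x³ − 1)(x³ − 8), so the six roots are the cube roots of 1 and of 8.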
== Examples ==
Watt's curve, which arose in the context of early work on the steam engine, is a sextic in two variables.
One method of solving the cubic equation involves transforming variables to obtain a sextic equation having terms only of degrees 6, 3, and 0, which can be solved as a quadratic equation in the cube of the variable.
== Etymology ==
The describer "sextic" comes from the Latin stem for six or sixth ("sex-t-") and the Greek suffix meaning "pertaining to" ("-ic"). The much less common "hexic" uses Greek for both its stem (hex-, six) and its suffix (-ik-). In both cases, the prefix refers to the degree of the function. Often, these types of functions are simply referred to as "6th-degree functions".
== See also ==
Cayley's sextic
Cubic function
Septic equation
== References ==
In the mathematical field of complex analysis, elliptic functions are special kinds of meromorphic functions, that satisfy two periodicity conditions. They are named elliptic functions because they come from elliptic integrals. Those integrals are in turn named elliptic because they first were encountered for the calculation of the arc length of an ellipse.
Important elliptic functions are the Jacobi elliptic functions and the Weierstrass {\displaystyle \wp }-function.
Further development of this theory led to hyperelliptic functions and modular forms.
== Definition ==
A meromorphic function is called an elliptic function if there are two {\displaystyle \mathbb {R} }-linearly independent complex numbers {\displaystyle \omega _{1},\omega _{2}\in \mathbb {C} } such that
{\displaystyle f(z+\omega _{1})=f(z)} and {\displaystyle f(z+\omega _{2})=f(z),\quad \forall z\in \mathbb {C} }.
So elliptic functions have two periods and are therefore doubly periodic functions.
== Period lattice and fundamental domain ==
If {\displaystyle f} is an elliptic function with periods {\displaystyle \omega _{1},\omega _{2}} it also holds that
{\displaystyle f(z+\gamma )=f(z)}
for every linear combination {\displaystyle \gamma =m\omega _{1}+n\omega _{2}} with {\displaystyle m,n\in \mathbb {Z} }.
The abelian group
{\displaystyle \Lambda :=\langle \omega _{1},\omega _{2}\rangle _{\mathbb {Z} }:=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}:=\{m\omega _{1}+n\omega _{2}\mid m,n\in \mathbb {Z} \}}
is called the period lattice.
The parallelogram generated by {\displaystyle \omega _{1}} and {\displaystyle \omega _{2}},
{\displaystyle \{\mu \omega _{1}+\nu \omega _{2}\mid 0\leq \mu ,\nu \leq 1\}},
is a fundamental domain of {\displaystyle \Lambda } acting on {\displaystyle \mathbb {C} }.
Geometrically the complex plane is tiled with parallelograms. Everything that happens in one fundamental domain repeats in all the others. For that reason we can view elliptic functions as functions with the quotient group {\displaystyle \mathbb {C} /\Lambda } as their domain. This quotient group, called an elliptic curve, can be visualised as a parallelogram where opposite sides are identified, which topologically is a torus.
== Liouville's theorems ==
The following three theorems are known as Liouville's theorems (1847).
=== 1st theorem ===
A holomorphic elliptic function is constant.
This is the original form of Liouville's theorem and can be derived from it. A holomorphic elliptic function is bounded since it takes on all of its values on the fundamental domain which is compact. So it is constant by Liouville's theorem.
=== 2nd theorem ===
Every elliptic function has finitely many poles in {\displaystyle \mathbb {C} /\Lambda } and the sum of its residues is zero.
This theorem implies that there is no elliptic function not equal to zero with exactly one pole of order one or exactly one zero of order one in the fundamental domain.
=== 3rd theorem ===
A non-constant elliptic function takes on every value the same number of times in {\displaystyle \mathbb {C} /\Lambda }, counted with multiplicity.
== Weierstrass ℘-function ==
One of the most important elliptic functions is the Weierstrass {\displaystyle \wp }-function. For a given period lattice {\displaystyle \Lambda } it is defined by
{\displaystyle \wp (z)={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right).}
It is constructed in such a way that it has a pole of order two at every lattice point. The term {\displaystyle -{\frac {1}{\lambda ^{2}}}} is there to make the series convergent.
{\displaystyle \wp } is an even elliptic function; that is, {\displaystyle \wp (-z)=\wp (z)}.
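The defining lattice sum can be explored numerically by truncating it to a finite symmetric block of lattice points. The sketch below (a rough numerical experiment, not a production evaluator; the truncation converges slowly and the lattice, point, and cutoff are arbitrary choices) checks the evenness property ℘(−z) = ℘(z) and the order-two pole at the origin:

```python
def wp_truncated(z, w1, w2, N=20):
    """Weierstrass ℘ approximated by truncating the lattice sum to
    lattice points m*w1 + n*w2 with |m|, |n| <= N (numerical sketch)."""
    total = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            lam = m * w1 + n * w2
            total += 1 / (z - lam) ** 2 - 1 / lam**2
    return total

w1, w2 = 1.0, 1j  # a square period lattice
z = 0.3 + 0.2j
# ℘ is even: a symmetric truncation already satisfies ℘(-z) = ℘(z)
assert abs(wp_truncated(z, w1, w2) - wp_truncated(-z, w1, w2)) < 1e-9
# near the origin ℘(z) ≈ 1/z², reflecting the pole of order two
z0 = 1e-3 + 0j
assert abs(wp_truncated(z0, w1, w2) * z0**2 - 1) < 1e-3
```

The evenness check holds essentially to machine precision because negating z just permutes the (symmetrically truncated) lattice terms.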
Its derivative
{\displaystyle \wp '(z)=-2\sum _{\lambda \in \Lambda }{\frac {1}{(z-\lambda )^{3}}}}
is an odd function, i.e. {\displaystyle \wp '(-z)=-\wp '(z).}
One of the main results of the theory of elliptic functions is the following: Every elliptic function with respect to a given period lattice {\displaystyle \Lambda } can be expressed as a rational function in terms of {\displaystyle \wp } and {\displaystyle \wp '}.
The {\displaystyle \wp }-function satisfies the differential equation
{\displaystyle \wp '(z)^{2}=4\wp (z)^{3}-g_{2}\wp (z)-g_{3},}
where {\displaystyle g_{2}} and {\displaystyle g_{3}} are constants that depend on {\displaystyle \Lambda }. More precisely, {\displaystyle g_{2}(\omega _{1},\omega _{2})=60G_{4}(\omega _{1},\omega _{2})} and {\displaystyle g_{3}(\omega _{1},\omega _{2})=140G_{6}(\omega _{1},\omega _{2})}, where {\displaystyle G_{4}} and {\displaystyle G_{6}} are so-called Eisenstein series.
In algebraic language, the field of elliptic functions is isomorphic to the field
{\displaystyle \mathbb {C} (X)[Y]/(Y^{2}-4X^{3}+g_{2}X+g_{3})},
where the isomorphism maps {\displaystyle \wp } to {\displaystyle X} and {\displaystyle \wp '} to {\displaystyle Y}.
== Relation to elliptic integrals ==
The relation to elliptic integrals has mainly a historical background. Elliptic integrals had been studied by Legendre, whose work was taken on by Niels Henrik Abel and Carl Gustav Jacobi.
Abel discovered elliptic functions by taking the inverse function {\displaystyle \varphi } of the elliptic integral function
{\displaystyle \alpha (x)=\int _{0}^{x}{\frac {dt}{\sqrt {(1-c^{2}t^{2})(1+e^{2}t^{2})}}}}
with {\displaystyle x=\varphi (\alpha )}.
Additionally he defined the functions
{\displaystyle f(\alpha )={\sqrt {1-c^{2}\varphi ^{2}(\alpha )}}}
and
{\displaystyle F(\alpha )={\sqrt {1+e^{2}\varphi ^{2}(\alpha )}}}.
After continuation to the complex plane they turned out to be doubly periodic and are known as Abel elliptic functions.
Jacobi elliptic functions are similarly obtained as inverse functions of elliptic integrals.
Jacobi considered the integral function
{\displaystyle \xi (x)=\int _{0}^{x}{\frac {dt}{\sqrt {(1-t^{2})(1-k^{2}t^{2})}}}}
and inverted it: {\displaystyle x=\operatorname {sn} (\xi )}. {\displaystyle \operatorname {sn} } stands for sinus amplitudinis and is the name of the new function. He then introduced the functions cosinus amplitudinis and delta amplitudinis, which are defined as follows:
{\displaystyle \operatorname {cn} (\xi ):={\sqrt {1-x^{2}}}}
{\displaystyle \operatorname {dn} (\xi ):={\sqrt {1-k^{2}x^{2}}}.}
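Jacobi's construction can be imitated numerically: approximate ξ by quadrature, invert it by bisection to recover sn, and then form cn and dn from their defining square roots. The sketch below is illustrative only (the quadrature step count, bisection depth, and tolerances are arbitrary choices, not from the article):

```python
import math

def xi(x, k):
    """ξ(x) = ∫₀ˣ dt/√((1-t²)(1-k²t²)), via Simpson's rule (sketch)."""
    n = 2000
    h = x / n
    def integrand(t):
        return 1 / math.sqrt((1 - t * t) * (1 - k * k * t * t))
    s = integrand(0) + integrand(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return s * h / 3

def sn(u, k):
    """Invert ξ by bisection: sn(u) is the x in [0, 1) with ξ(x) = u."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(80):
        mid = (lo + hi) / 2
        if xi(mid, k) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k, u = 0.6, 0.8
s = sn(u, k)
cn = math.sqrt(1 - s * s)          # cosinus amplitudinis
dn = math.sqrt(1 - k * k * s * s)  # delta amplitudinis
assert abs(s * s + cn * cn - 1) < 1e-9
assert abs(dn * dn + k * k * s * s - 1) < 1e-9
assert abs(xi(s, k) - u) < 1e-6    # sn really inverts ξ
```

The identities sn² + cn² = 1 and dn² + k²·sn² = 1 hold by construction here; the substantive check is that the bisection really inverts the integral.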
Only by taking this step was Jacobi able to prove his general transformation formula for elliptic integrals in 1827.
== History ==
Shortly after the development of infinitesimal calculus the theory of elliptic functions was started by the Italian mathematician Giulio di Fagnano and the Swiss mathematician Leonhard Euler. When they tried to calculate the arc length of a lemniscate they encountered problems involving integrals that contained the square root of polynomials of degree 3 and 4. It was clear that those so-called elliptic integrals could not be solved using elementary functions. Fagnano observed an algebraic relation between elliptic integrals, which he published in 1750. Euler immediately generalized Fagnano's results and posed his algebraic addition theorem for elliptic integrals.
Except for a comment by Landen his ideas were not pursued until 1786, when Legendre published his paper Mémoires sur les intégrations par arcs d’ellipse. Legendre subsequently studied elliptic integrals and called them elliptic functions. Legendre introduced a three-fold classification – three kinds – which was a crucial simplification of the rather complicated theory at that time. Other important works of Legendre are: Mémoire sur les transcendantes elliptiques (1792), Exercices de calcul intégral (1811–1817), Traité des fonctions elliptiques (1825–1832). Legendre's work was mostly left untouched by mathematicians until 1826.
Subsequently, Niels Henrik Abel and Carl Gustav Jacobi resumed the investigations and quickly discovered new results. At first they inverted the elliptic integral function. Following a suggestion of Jacobi in 1829 these inverse functions are now called elliptic functions. One of Jacobi's most important works is Fundamenta nova theoriae functionum ellipticarum which was published 1829. The addition theorem Euler found was posed and proved in its general form by Abel in 1829. In those days the theory of elliptic functions and the theory of doubly periodic functions were considered to be different theories. They were brought together by Briot and Bouquet in 1856. Gauss discovered many of the properties of elliptic functions 30 years earlier but never published anything on the subject.
== See also ==
Elliptic integral
Elliptic curve
Modular group
Theta function
== References ==
== Literature ==
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 16". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 567, 627. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. See also chapter 18. (only considers the case of real invariants).
N. I. Akhiezer, Elements of the Theory of Elliptic Functions, (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island ISBN 0-8218-4532-2
Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, New York, 1976. ISBN 0-387-97127-0 (See Chapter 1.)
E. T. Whittaker and G. N. Watson. A course of modern analysis, Cambridge University Press, 1952
== External links ==
"Elliptic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
MAA, Translation of Abel's paper on elliptic functions.
Elliptic Functions and Elliptic Integrals on YouTube, lecture by William A. Schwalm (4 hours)
Johansson, Fredrik (2018). "Numerical Evaluation of Elliptic Functions, Elliptic Integrals and Modular Forms". arXiv:1806.06725 [cs.NA].
In algebraic geometry, a hyperelliptic curve is an algebraic curve of genus g > 1, given by an equation of the form
{\displaystyle y^{2}+h(x)y=f(x)}
where f(x) is a polynomial of degree n = 2g + 1 > 4 or n = 2g + 2 > 4 with n distinct roots, and h(x) is a polynomial of degree < g + 2 (if the characteristic of the ground field is not 2, one can take h(x) = 0).
A hyperelliptic function is an element of the function field of such a curve, or of the Jacobian variety on the curve; these two concepts are identical for elliptic functions, but different for hyperelliptic functions.
== Genus ==
The degree of the polynomial determines the genus of the curve: a polynomial of degree 2g + 1 or 2g + 2 gives a curve of genus g. When the degree is equal to 2g + 1, the curve is called an imaginary hyperelliptic curve. Meanwhile, a curve of degree 2g + 2 is termed a real hyperelliptic curve. This statement about genus remains true for g = 0 or 1, but those special cases are not called "hyperelliptic". In the case g = 1 (if one chooses a distinguished point), such a curve is called an elliptic curve.
== Formulation and choice of model ==
While this model is the simplest way to describe hyperelliptic curves, such an equation will have a singular point at infinity in the projective plane. This feature is specific to the case n > 3. Therefore, in giving such an equation to specify a non-singular curve, it is almost always assumed that a non-singular model (also called a smooth completion), equivalent in the sense of birational geometry, is meant.
To be more precise, the equation defines a quadratic extension of C(x), and it is that function field that is meant. The singular point at infinity can be removed (since this is a curve) by the normalization (integral closure) process. It turns out that after doing this, there is an open cover of the curve by two affine charts: the one already given by
{\displaystyle y^{2}=f(x)}
and another one given by
{\displaystyle w^{2}=v^{2g+2}f(1/v).}
The glueing maps between the two charts are given by
{\displaystyle (x,y)\mapsto (1/x,y/x^{g+1})}
and
{\displaystyle (v,w)\mapsto (1/v,w/v^{g+1}),}
wherever they are defined.
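A quick numerical sanity check of the glueing maps (with a hypothetical genus-2 example, f(x) = x⁶ + 1, and an arbitrary sample point, neither taken from this article): a point on the chart y² = f(x) should map to a point satisfying the second chart's equation w² = v^{2g+2} f(1/v).

```python
import math

# Hypothetical example: a genus-2 curve y² = f(x), where f(x) = x⁶ + 1
# has degree 2g + 2 = 6 with six distinct (complex) roots.
g = 2

def f(x):
    return x ** 6 + 1

x = 1.7
y = math.sqrt(f(x))              # a point on the first chart, y² = f(x)
v, w = 1 / x, y / x ** (g + 1)   # image under (x, y) ↦ (1/x, y/x^{g+1})
# the image satisfies the second chart's equation w² = v^{2g+2} f(1/v)
assert abs(w * w - v ** (2 * g + 2) * f(1 / v)) < 1e-12
```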
In fact geometric shorthand is assumed, with the curve C being defined as a ramified double cover of the projective line, the ramification occurring at the roots of f, and also for odd n at the point at infinity. In this way the cases n = 2g + 1 and 2g + 2 can be unified, since we might as well use an automorphism of the projective plane to move any ramification point away from infinity.
== Using Riemann–Hurwitz formula ==
Using the Riemann–Hurwitz formula, the hyperelliptic curve with genus g is defined by an equation of degree n = 2g + 2. Suppose f : X → P1 is a branched covering with ramification degree 2, where X is a curve of genus g and P1 is the Riemann sphere. Let g1 = g, and let g0 = 0 be the genus of P1; then the Riemann–Hurwitz formula becomes
{\displaystyle 2-2g_{1}=2(2-2g_{0})-\sum _{s\in X}(e_{s}-1)}
where the sum runs over all ramified points s on X. The number of ramified points is n, and at each ramified point s we have es = 2, so the formula becomes
{\displaystyle 2-2\times g=2(2-2\times 0)-n\times (2-1)}
so n = 2g + 2.
== Occurrence and applications ==
All curves of genus 2 are hyperelliptic, but for genus ≥ 3 the generic curve is not hyperelliptic. This is seen heuristically by a moduli space dimension check. Counting constants, with n = 2g + 2, the collection of n points subject to the action of the automorphisms of the projective line has (2g + 2) − 3 degrees of freedom, which is less than 3g − 3, the number of moduli of a curve of genus g, unless g is 2. Much more is known about the hyperelliptic locus in the moduli space of curves or abelian varieties, though it is harder to exhibit general non-hyperelliptic curves with simple models. One geometric characterization of hyperelliptic curves is via Weierstrass points. More detailed geometry of non-hyperelliptic curves is read from the theory of canonical curves, the canonical mapping being 2-to-1 on hyperelliptic curves but 1-to-1 otherwise for g > 2. Trigonal curves are those that correspond to taking a cube root, rather than a square root, of a polynomial.
The definition by quadratic extensions of the rational function field works for fields in general except in characteristic 2; in all cases the geometric definition as a ramified double cover of the projective line is available, if the extension is assumed to be separable.
Hyperelliptic curves can be used in hyperelliptic curve cryptography for cryptosystems based on the discrete logarithm problem.
Hyperelliptic curves also appear composing entire connected components of certain strata of the moduli space of Abelian differentials.
Hyperellipticity of genus-2 curves was used to prove Gromov's filling area conjecture in the case of fillings of genus 1.
=== Classification ===
Hyperelliptic curves of given genus g have a moduli space, closely related to the ring of invariants of a binary form of degree 2g+2.
== History ==
Hyperelliptic functions were first published by Adolph Göpel (1812-1847) in his last paper Abelsche Transcendenten erster Ordnung (Abelian transcendents of first order) (in Journal für die reine und angewandte Mathematik, vol. 35, 1847). Independently Johann G. Rosenhain worked on that matter and published Umkehrungen ultraelliptischer Integrale erster Gattung (in Mémoires des savants etc., vol. 11, 1851).
== See also ==
Bolza surface
Superelliptic curve
== References ==
"Hyper-elliptic curve", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
A user's guide to the local arithmetic of hyperelliptic curves
== Notes ==
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is
{\displaystyle ax^{2}+bx+c=0,}
where a ≠ 0.
A quadratic equation in a single unknown {\displaystyle x} can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, which is an algebraic fraction that can be evaluated as a decimal fraction only by applying an additional root-extraction algorithm.
If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions.
== Simple example ==
Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation
{\displaystyle x^{2}=2}
and manipulate it directly. Subtracting one from both sides we obtain
{\displaystyle x^{2}-1=1.}
This is easily factored into
{\displaystyle (x+1)(x-1)=1}
from which we obtain
{\displaystyle (x-1)={\frac {1}{1+x}}}
and finally
{\displaystyle x=1+{\frac {1}{1+x}}.}
Now comes the crucial step. We substitute this expression for x back into itself, recursively, to obtain
{\displaystyle x=1+{\cfrac {1}{1+\left(1+{\cfrac {1}{1+x}}\right)}}=1+{\cfrac {1}{2+{\cfrac {1}{1+x}}}}.}
But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite simple continued fraction
{\displaystyle x=1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+\ddots }}}}}}}}}}={\sqrt {2}}.}
By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ..., where each successive convergent is formed by taking the numerator plus the denominator of the preceding term as the denominator in the next term, then adding in the preceding denominator to form the new numerator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers.
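The rule just described for forming successive convergents can be written out directly. In the sketch below (the function name is illustrative), each step takes the new denominator as numerator plus denominator of the previous convergent, and the new numerator as the new denominator plus the previous denominator:

```python
from fractions import Fraction

def sqrt2_convergents(count):
    """Convergents of [1; 2, 2, 2, ...] via the rule stated in the text:
    next denominator = numerator + denominator, and
    next numerator = new denominator + old denominator."""
    p, q = 1, 1
    out = [Fraction(p, q)]
    for _ in range(count - 1):
        # new q = p + q; new p = (p + q) + q = p + 2q
        q, p = p + q, p + 2 * q
        out.append(Fraction(p, q))
    return out

convs = sqrt2_convergents(7)
assert [(c.numerator, c.denominator) for c in convs[:4]] == [
    (1, 1), (3, 2), (7, 5), (17, 12)]
assert abs(float(convs[-1]) - 2 ** 0.5) < 1e-4  # 239/169 ≈ √2
```

The denominators 1, 2, 5, 12, 29, ... that appear are the Pell numbers mentioned above.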
== Algebraic explanation ==
We can gain further insight into this simple example by considering the successive powers of
{\displaystyle \omega ={\sqrt {2}}-1.}
That sequence of successive powers is given by
{\displaystyle {\begin{aligned}\omega ^{2}&=3-2{\sqrt {2}},&\omega ^{3}&=5{\sqrt {2}}-7,&\omega ^{4}&=17-12{\sqrt {2}},\\\omega ^{5}&=29{\sqrt {2}}-41,&\omega ^{6}&=99-70{\sqrt {2}},&\omega ^{7}&=169{\sqrt {2}}-239,\end{aligned}}}
and so forth. Notice how the fractions derived as successive approximants to √2 appear in this geometric progression.
Since 0 < ω < 1, the sequence {ωn} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit.
We can also find these numerators and denominators appearing in the successive powers of
{\displaystyle \omega ^{-1}={\sqrt {2}}+1.}
The sequence of successive powers {ω−n} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example.
Notice also that the set obtained by forming all the combinations a + b√2, where a and b are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field.
== General quadratic equation ==
Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial
{\displaystyle x^{2}+bx+c=0}
which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that
{\displaystyle {\begin{aligned}x^{2}+bx&=-c\\x+b&={\frac {-c}{x}}\\x&=-b-{\frac {c}{x}}\end{aligned}}}
But now we can apply the last equation to itself recursively to obtain
{\displaystyle x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots }}}}}}}}}
If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial x2 + bx + c = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have imaginary parts. In particular, if b and c are real numbers and b2 − 4c < 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form u + iv (where v ≠ 0), which does not lie on the real number line.
== General theorem ==
By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients
{\displaystyle x^{2}+bx+c=0}
given by
{\displaystyle x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}}
either converges or diverges depending on both the coefficient b and the value of the discriminant, b2 − 4c.
If b = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and {\displaystyle \infty }. If b ≠ 0 we distinguish three cases.
If the discriminant is negative, the fraction diverges by oscillation, which means that its convergents wander around in a regular or even chaotic fashion, never approaching a finite limit.
If the discriminant is zero the fraction converges to the single root of multiplicity two.
If the discriminant is positive the equation has two real roots, and the continued fraction converges to the larger (in absolute value) of these. The rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
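The positive-discriminant case can be sketched numerically (a hypothetical example with illustrative coefficients, not from the original article): iterating x ← −b − c/x for x² − 6x + 8 = 0, whose roots are 2 and 4, drives the convergents toward the larger root.

```python
def cf_root(b, c, x0=1.0, steps=40):
    """Convergents of x = -b - c/x for the monic quadratic x^2 + b*x + c = 0."""
    x = x0
    for _ in range(steps):
        x = -b - c / x
    return x

# x^2 - 6x + 8 = 0 has roots 2 and 4; the continued fraction picks the larger.
print(cf_root(-6, 8))  # approximately 4.0
```

The rate of convergence here is governed by the root ratio 2/4 = 1/2, so the error roughly halves each step, consistent with the statement above.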
When the monic quadratic equation with real coefficients is of the form x2 = c, the general solution described above is useless because division by zero is not well defined. As long as c is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if
{\displaystyle x^{2}=c\qquad (c>0)}
just choose some positive real number p such that
{\displaystyle p^{2}<c.}
Then by direct manipulation we obtain
{\displaystyle {\begin{aligned}x^{2}-p^{2}&=c-p^{2}\\(x+p)(x-p)&=c-p^{2}\\x-p&={\frac {c-p^{2}}{p+x}}\\x&=p+{\frac {c-p^{2}}{p+x}}\\&=p+{\cfrac {c-p^{2}}{p+\left(p+{\cfrac {c-p^{2}}{p+x}}\right)}}\\&=p+{\cfrac {c-p^{2}}{2p+{\cfrac {c-p^{2}}{2p+{\cfrac {c-p^{2}}{2p+\ddots \,}}}}}}\,\end{aligned}}}
and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers.
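The transformed fraction can be evaluated directly (a hypothetical sketch, not code from the article): iterating x ← p + (c − p²)/(p + x), whose fixed point is √c, for c = 2 and p = 1 reproduces the √2 expansion discussed earlier.

```python
def sqrt_cf(c, p, steps):
    """Iterate x <- p + (c - p^2)/(p + x); the fixed point is sqrt(c)."""
    x = p  # start the continued-fraction tail at p
    for _ in range(steps):
        x = p + (c - p * p) / (p + x)
    return x

print(sqrt_cf(2, 1, 40))  # approximately 1.41421356..., i.e. sqrt(2)
```

Since c − p² and p + x stay positive throughout, every partial numerator and denominator is positive, which is exactly why convergence is guaranteed.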
== Complex coefficients ==
By the fundamental theorem of algebra, if the monic polynomial equation x2 + bx + c = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant b2 − 4c is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved.
The continued fraction solution to the general monic quadratic equation with complex coefficients
{\displaystyle x^{2}+bx+c=0\qquad (b\neq 0)}
given by
{\displaystyle x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}}
converges or not depending on the value of the discriminant, b2 − 4c, and on the relative magnitude of its two roots.
Denoting the two roots by r1 and r2 we distinguish three cases.
If the discriminant is zero the fraction converges to the single root of multiplicity two.
If the discriminant is not zero, and |r1| ≠ |r2|, the continued fraction converges to the root of maximum modulus (i.e., to the root with the greater absolute value).
If the discriminant is not zero, and |r1| = |r2|, the continued fraction diverges by oscillation.
In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements.
== See also ==
Lucas sequence
Methods of computing square roots
Pell's equation
== References ==
H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., 1948 ISBN 0-8284-0207-8 | Wikipedia/Solving_quadratic_equations_with_continued_fractions |
In chemistry, a solution is defined by IUPAC as "A liquid or solid phase containing more than one substance, when for convenience one (or more) substance, which is called the solvent, is treated differently from the other substances, which are called solutes. When, as is often but not necessarily the case, the sum of the mole fractions of solutes is small compared with unity, the solution is called a dilute solution. A superscript attached to the ∞ symbol for a property of a solution denotes the property in the limit of infinite dilution." One important parameter of a solution is the concentration, which is a measure of the amount of solute in a given amount of solution or solvent. The term "aqueous solution" is used when one of the solvents is water.
== Types ==
Homogeneous means that the components of the mixture form a single phase. Heterogeneous means that the components of the mixture are in different phases. The properties of the mixture (such as concentration, temperature, and density) can be uniformly distributed through the volume, but only in the absence of diffusion phenomena or after their completion. Usually, the substance present in the greatest amount is considered the solvent. Solvents can be gases, liquids, or solids. One or more components present in the solution other than the solvent are called solutes. The solution has the same physical state as the solvent.
=== Gaseous mixtures ===
If the solvent is a gas, only gases (non-condensable) or vapors (condensable) are dissolved under a given set of conditions. An example of a gaseous solution is air (oxygen and other gases dissolved in nitrogen). Since interactions between gaseous molecules play almost no role, non-condensable gases form rather trivial solutions. In the literature, they are not even classified as solutions, but simply addressed as homogeneous mixtures of gases. The Brownian motion and the permanent molecular agitation of gas molecules guarantee the homogeneity of the gaseous systems. Non-condensable gaseous mixtures (e.g., air/CO2, or air/xenon) do not spontaneously demix, nor sediment, as distinctly stratified and separate gas layers as a function of their relative density. Diffusion forces efficiently counteract gravitation forces under normal conditions prevailing on Earth. The case of condensable vapors is different: once the saturation vapor pressure at a given temperature is reached, vapor excess condenses into the liquid state.
=== Liquid solutions ===
Liquids dissolve gases, other liquids, and solids. An example of a dissolved gas is oxygen in water, which allows fish to breathe under water. An example of a dissolved liquid is ethanol in water, as found in alcoholic beverages. An example of a dissolved solid is sugar water, which contains dissolved sucrose.
=== Solid solutions ===
If the solvent is a solid, then gases, liquids, and solids can be dissolved.
Gas in solids:
Hydrogen dissolves rather well in metals, especially in palladium; this is studied as a means of hydrogen storage.
Liquid in solid:
Mercury in gold, forming an amalgam
Water in solid salt or sugar, forming moist solids
Hexane in paraffin wax
Polymers containing plasticizers such as phthalate (liquid) in PVC (solid)
Solid in solid:
Steel, basically a solution of carbon atoms in a crystalline matrix of iron atoms
Alloys like bronze and many others
Radium sulfate dissolved in barium sulfate: a true solid solution of Ra in BaSO4
== Solubility ==
The ability of one compound to dissolve in another compound is called solubility. When a liquid can completely dissolve in another liquid the two liquids are miscible. Two substances that can never mix to form a solution are said to be immiscible.
All solutions have a positive entropy of mixing. The interactions between different molecules or ions may be energetically favored or not. If interactions are unfavorable, then the free energy decreases with increasing solute concentration. At some point, the energy loss outweighs the entropy gain, and no more solute particles can be dissolved; the solution is said to be saturated. However, the point at which a solution can become saturated can change significantly with different environmental factors, such as temperature, pressure, and contamination. For some solute-solvent combinations, a supersaturated solution can be prepared by raising the solubility (for example by increasing the temperature) to dissolve more solute and then lowering it (for example by cooling).
Usually, the greater the temperature of the solvent, the more of a given solid solute it can dissolve. However, most gases and some compounds exhibit solubilities that decrease with increased temperature. Such behavior is a result of an exothermic enthalpy of solution. Some surfactants exhibit this behaviour. The solubility of liquids in liquids is generally less temperature-sensitive than that of solids or gases.
== Properties ==
The physical properties of compounds such as melting point and boiling point change when other compounds are added. Together they are called colligative properties. There are several ways to quantify the amount of one compound dissolved in the other compounds collectively called concentration. Examples include molarity, volume fraction, and mole fraction.
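For instance, molarity (one of the concentration measures just mentioned) is moles of solute per litre of solution; the following is a minimal sketch with illustrative values, not data from the article:

```python
def molarity(mass_g, molar_mass_g_per_mol, volume_L):
    """Molarity in mol/L: (mass / molar mass) gives moles; divide by volume."""
    return (mass_g / molar_mass_g_per_mol) / volume_L

# e.g. 58.44 g of NaCl (molar mass ~58.44 g/mol) dissolved to make 1.0 L
# of solution gives a 1.0 M (one molar) solution.
print(molarity(58.44, 58.44, 1.0))  # 1.0
```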
The properties of ideal solutions can be calculated by the linear combination of the properties of its components. If both solute and solvent exist in equal quantities (such as in a 50% ethanol, 50% water solution), the concepts of "solute" and "solvent" become less relevant, but the substance that is more often used as a solvent is normally designated as the solvent (in this example, water).
== Liquid solution characteristics ==
In principle, all types of liquids can behave as solvents: liquid noble gases, molten metals, molten salts, molten covalent networks, and molecular liquids. In the practice of chemistry and biochemistry, most solvents are molecular liquids. They can be classified into polar and non-polar, according to whether their molecules possess a permanent electric dipole moment. Another distinction is whether their molecules can form hydrogen bonds (protic and aprotic solvents). Water, the most commonly used solvent, is both polar and sustains hydrogen bonds.
Salts dissolve in polar solvents, forming positive and negative ions that are attracted to the negative and positive ends of the solvent molecule, respectively. If the solvent is water, hydration occurs when the charged solute ions become surrounded by water molecules. A standard example is aqueous saltwater. Such solutions are called electrolytes. Whenever salt dissolves in water ion association has to be taken into account.
Polar solutes dissolve in polar solvents, forming polar bonds or hydrogen bonds. As an example, all alcoholic beverages are aqueous solutions of ethanol. On the other hand, non-polar solutes dissolve better in non-polar solvents. Examples are hydrocarbons such as oil and grease that easily mix, while being incompatible with water.
An example of the immiscibility of oil and water is a leak of petroleum from a damaged tanker, that does not dissolve in the ocean water but rather floats on the surface.
== See also ==
Molar solution – Measure of concentration of a chemical
Percentage solution (disambiguation)
Solubility equilibrium – Thermodynamic equilibrium between a solid and a solution of the same compound
Total dissolved solids – Measurement in environmental chemistry
Upper critical solution temperature – Critical temperature of miscibility in a mixture
Lower critical solution temperature – Critical temperature below which components of a mixture are miscible for all compositions
Coil–globule transition – Collapse of a macromolecule from an expanded coil state to a collapsed globule state
== References ==
IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "solution". doi:10.1351/goldbook.S05746
== External links ==
Media related to Solutions at Wikimedia Commons | Wikipedia/Solution_(chemistry) |
Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.
Muller's method proceeds according to a third-order recurrence relation, similar to the second-order recurrence relation of the secant method. Whereas the secant method constructs a line through the two points on the graph of f corresponding to the last two iterative approximations and uses the line's root as the next approximation at every iteration, Muller's method uses the three points corresponding to the last three iterative approximations, constructs a parabola through them, and uses a root of the parabola as the next approximation at every iteration.
== Derivation ==
Muller's method uses three initial approximations of the root, {\displaystyle x_{0}}, {\displaystyle x_{1}} and {\displaystyle x_{2}}, and determines the next approximation {\displaystyle x_{3}} by considering the intersection of the x-axis with the parabola through {\displaystyle (x_{0},f(x_{0}))}, {\displaystyle (x_{1},f(x_{1}))} and {\displaystyle (x_{2},f(x_{2}))}.
Consider the quadratic polynomial
{\displaystyle y=a(x-x_{2})^{2}+b(x-x_{2})+c\qquad (1)}
that passes through {\displaystyle (x_{0},f(x_{0}))}, {\displaystyle (x_{1},f(x_{1}))} and {\displaystyle (x_{2},f(x_{2}))}. Define the differences
{\displaystyle h_{0}=x_{1}-x_{0},\quad h_{1}=x_{2}-x_{1}}
and
{\displaystyle \delta _{0}={\frac {f(x_{1})-f(x_{0})}{h_{0}}},\quad \delta _{1}={\frac {f(x_{2})-f(x_{1})}{h_{1}}}.}
Substituting each of the three points {\displaystyle (x_{0},f(x_{0}))}, {\displaystyle (x_{1},f(x_{1}))} and {\displaystyle (x_{2},f(x_{2}))} into equation (1) and solving simultaneously for {\displaystyle a,b} and {\displaystyle c} gives
{\displaystyle a={\frac {\delta _{1}-\delta _{0}}{h_{1}+h_{0}}},\quad b=ah_{1}+\delta _{1},\quad c=f(x_{2})}
The quadratic formula is then applied to (1) to determine {\displaystyle x_{3}} as
{\displaystyle x_{3}-x_{2}={\frac {-2c}{b\pm {\sqrt {b^{2}-4ac}}}}.}
The sign preceding the radical term is chosen to match the sign of {\displaystyle b} to ensure the next iterate is closest to {\displaystyle x_{2}}, giving
{\displaystyle x_{3}=x_{2}-{\frac {2c}{b+\operatorname {sign}(b){\sqrt {b^{2}-4ac}}}}.}
Once {\displaystyle x_{3}} is determined, the process is repeated. Note that due to the radical expression in the denominator, iterates can be complex even when the previous iterates are all real. This is in contrast with other root-finding algorithms like the secant method, Sidi's generalized secant method or Newton's method, whose iterates will remain real if one starts with real numbers. Having complex iterates can be an advantage (if one is looking for complex roots) or a disadvantage (if it is known that all roots are real), depending on the problem.
== Speed of convergence ==
For well-behaved functions, the order of convergence of Muller's method is approximately 1.839 and exactly the tribonacci constant. This can be compared with approximately 1.618, exactly the golden ratio, for the secant method and with exactly 2 for Newton's method. So, the secant method makes less progress per iteration than Muller's method and Newton's method makes more progress.
More precisely, if ξ denotes a single root of f (so f(ξ) = 0 and f'(ξ) ≠ 0), f is three times continuously differentiable, and the initial guesses x0, x1, and x2 are taken sufficiently close to ξ, then the iterates satisfy
{\displaystyle \lim _{k\to \infty }{\frac {|x_{k}-\xi |}{|x_{k-1}-\xi |^{\mu }}}=\left|{\frac {f'''(\xi )}{6f'(\xi )}}\right|^{(\mu -1)/2},}
where μ ≈ 1.84 is the positive solution of {\displaystyle x^{3}-x^{2}-x-1=0}, the defining equation for the tribonacci constant.
== Generalizations and related methods ==
Muller's method fits a parabola, i.e. a second-order polynomial, to the last three obtained points f(xk-1), f(xk-2) and f(xk-3) in each iteration. One can generalize this and fit a polynomial pk,m(x) of degree m to the last m+1 points in the kth iteration. Our parabola yk is written as pk,2 in this notation. The degree m must be 1 or larger. The next approximation xk is now one of the roots of the pk,m, i.e. one of the solutions of pk,m(x)=0. Taking m=1 we obtain the secant method whereas m=2 gives Muller's method.
Muller calculated that the sequence {xk} generated this way converges to the root ξ with an order μm, where μm is the positive solution of {\displaystyle x^{m+1}-x^{m}-x^{m-1}-\dots -x-1=0}.
As m approaches infinity, the positive solution of the equation approaches 2. The method is much more difficult for m > 2 than it is for m = 1 or m = 2, because it is much harder to determine the roots of a polynomial of degree 3 or higher. Another problem is that there seems to be no prescription for which of the roots of pk,m to pick as the next approximation xk for m > 2.
These difficulties are overcome by Sidi's generalized secant method which also employs the polynomial pk,m. Instead of trying to solve pk,m(x)=0, the next approximation xk is calculated with the aid of the derivative of pk,m at xk-1 in this method.
== Computational example ==
Below, Muller's method is implemented in the Python programming language. It takes as parameters the three initial estimates of the root, as well as the desired number of decimal places of accuracy and the maximum number of iterations. The program is then applied to find a root of the function f(x) = x2 − 612.
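Since the program itself is not reproduced here, the following is a plausible sketch of such an implementation (the function and parameter names and the starting guesses are illustrative assumptions), following the derivation above and applied to f(x) = x² − 612:

```python
import cmath  # complex sqrt: iterates may be complex, as noted above


def f(x):
    return x ** 2 - 612


def muller(f, x0, x1, x2, decimals=10, max_iter=100):
    """Fit a parabola through the last three iterates and take the root
    closest to x2 as the next approximation."""
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        rad = cmath.sqrt(b * b - 4 * a * c)
        # pick the sign that maximises |denominator| (matches sign of b)
        denom = b + rad if abs(b + rad) >= abs(b - rad) else b - rad
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < 10 ** (-decimals):
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2


# ≈ sqrt(612) ≈ 24.7386 (printed as a complex number with zero imaginary part)
print(muller(f, 10, 20, 30))
```

Because f is itself quadratic here, the fitted parabola coincides with f and the method lands on the root essentially in one step.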
== See also ==
Halley's method, with cubic convergence
Householder's method, includes Newton's, Halley's and higher-order convergence
== References ==
Atkinson, Kendall E. (1989). An Introduction to Numerical Analysis, 2nd edition, Section 2.4. John Wiley & Sons, New York. ISBN 0-471-50023-2.
Burden, R. L. and Faires, J. D. Numerical Analysis, 4th edition, pages 77ff.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 9.5.2. Muller's Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
== Further reading ==
A bracketing variant with global convergence: Costabile, F.; Gualtieri, M.I.; Luceri, R. (March 2006). "A modification of Muller's method". Calcolo. 43 (1): 39–50. doi:10.1007/s10092-006-0113-9. S2CID 124772103. | Wikipedia/Muller's_method |
In mathematics, particularly in number theory, an indeterminate system has fewer equations than unknowns but an additional set of constraints on the unknowns, such as restrictions that the values be integers. In modern times indeterminate equations are often called Diophantine equations.: iii
== Examples ==
=== Linear indeterminate equations ===
An example linear indeterminate equation arises from imagining two equally rich men, one with 5 rubies, 8 sapphires, 7 pearls and 90 gold coins; the other has 7, 9, 6 and 62 gold coins; find the prices (y, c, n) of the respective gems in gold coins. As they are equally rich:
{\displaystyle 5y+8c+7n+90=7y+9c+6n+62}
Bhāskara II gave a general approach to this kind of problem by assigning a fixed integer to one (or N − 2 in general) of the unknowns, e.g. {\displaystyle n=1}, resulting in a series of possible solutions like (y, c, n) = (14, 1, 1), (13, 3, 1).: 43
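A brute-force check of this example is straightforward (a hypothetical sketch, with an illustrative search range): enumerate small positive integer prices satisfying the equal-wealth equation.

```python
# Equal wealth: 5y + 8c + 7n + 90 == 7y + 9c + 6n + 62
solutions = [(y, c, n)
             for y in range(1, 30)
             for c in range(1, 30)
             for n in range(1, 30)
             if 5 * y + 8 * c + 7 * n + 90 == 7 * y + 9 * c + 6 * n + 62]

# The solutions mentioned in the text appear among many others.
print((14, 1, 1) in solutions, (13, 3, 1) in solutions)  # True True
```

Simplifying the equation gives n = 2y + c − 28, which generates the same family of solutions directly.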
For given integers a, b and n, the general linear indeterminant equation is
{\displaystyle ax+by=n}
with unknowns x and y restricted to integers. The necessary and sufficient condition for solutions to exist is that the greatest common divisor {\displaystyle (a,b)} divides n.: 11
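This condition is constructive via the extended Euclidean algorithm; the following sketch (hypothetical helper names) produces one integer solution when the gcd divides n.

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y


def solve_linear(a, b, n):
    """One integer solution (x, y) of a*x + b*y = n, or None if gcd(a, b)
    does not divide n."""
    g, x, y = ext_gcd(a, b)
    if n % g:
        return None
    k = n // g
    return x * k, y * k


print(solve_linear(5, 3, 1))  # (-1, 2): 5*(-1) + 3*2 = 1
```

All other solutions differ from this one by integer multiples of (b/g, −a/g).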
== History ==
Early mathematicians in both India and China studied indeterminate linear equations with integer solutions. The Indian astronomer Aryabhata developed a recursive algorithm to solve indeterminate equations now known to be related to Euclid's algorithm. The name of the Chinese remainder theorem relates to the view that indeterminate equations arose in these Asian mathematical traditions, but it is likely that the ancient Greeks also worked with indeterminate equations.
The first major work on indeterminate equations appears in Diophantus’ Arithmetica in the 3rd century AD. Diophantus sought solutions constrained to be rational numbers, but Pierre de Fermat's work in the 1600s focused on integer solutions and introduced the idea of characterizing all possible solutions rather than any one solution. In modern times integer solutions to indeterminate equations have come to be called analysis of Diophantine equations.: iii
The original paper by Henry John Stephen Smith that defined the Smith normal form was written for linear indeterminate systems.
== References == | Wikipedia/Indeterminate_equation |
In mathematics, chromatic homotopy theory is a subfield of stable homotopy theory that studies complex-oriented cohomology theories from the "chromatic" point of view, which is based on Quillen's work relating cohomology theories to formal groups. In this picture, theories are classified in terms of their "chromatic levels"; i.e., the heights of the formal groups that define the theories via the Landweber exact functor theorem. Typical theories it studies include: complex K-theory, elliptic cohomology, Morava K-theory and tmf.
== Chromatic convergence theorem ==
In algebraic topology, the chromatic convergence theorem states that the homotopy limit of the chromatic tower (defined below) of a finite p-local spectrum {\displaystyle X} is {\displaystyle X} itself. The theorem was proved by Hopkins and Ravenel.
=== Statement ===
Let {\displaystyle L_{E(n)}} denote the Bousfield localization with respect to the Morava E-theory and let {\displaystyle X} be a finite, {\displaystyle p}-local spectrum. Then there is a tower associated to the localizations
{\displaystyle \cdots \rightarrow L_{E(2)}X\rightarrow L_{E(1)}X\rightarrow L_{E(0)}X}
called the chromatic tower, such that its homotopy limit is homotopic to the original spectrum {\displaystyle X}.
The stages in the tower above are often simplifications of the original spectrum. For example, {\displaystyle L_{E(0)}X} is the rational localization and {\displaystyle L_{E(1)}X} is the localization with respect to p-local K-theory.
==== Stable homotopy groups ====
In particular, if the {\displaystyle p}-local spectrum {\displaystyle X} is the stable {\displaystyle p}-local sphere spectrum {\displaystyle \mathbb {S} _{(p)}}, then the homotopy limit of this sequence is the original {\displaystyle p}-local sphere spectrum. This is a key observation for studying stable homotopy groups of spheres using chromatic homotopy theory.
== See also ==
Elliptic cohomology
Redshift conjecture
Ravenel conjectures
Moduli stack of formal group laws
Chromatic spectral sequence
Adams-Novikov spectral sequence
== References ==
Lurie, J. (2010). "Chromatic Homotopy Theory". 252x (35 lectures). Harvard University.
Lurie, J. (2017–2018). "Unstable Chromatic Homotopy Theory". 19 Lectures. Institute for Advanced Study.
== External links ==
http://ncatlab.org/nlab/show/chromatic+homotopy+theory
Hopkins, M. (1999). "Complex Oriented Cohomology Theory and the Language of Stacks" (PDF). Archived from the original (PDF) on 2020-06-20.
"References, Schedule and Notes". Talbot 2013: Chromatic Homotopy Theory. MIT Talbot Workshop. 2013. | Wikipedia/Chromatic_homotopy_theory |
In mathematics, particularly in homotopy theory, a model category is a category with distinguished classes of morphisms ('arrows') called 'weak equivalences', 'fibrations' and 'cofibrations' satisfying certain axioms relating them. These abstract from the category of topological spaces or of chain complexes (derived category theory). The concept was introduced by Daniel G. Quillen (1967).
In recent decades, the language of model categories has been used in some parts of algebraic K-theory and algebraic geometry, where homotopy-theoretic approaches led to deep results.
== Motivation ==
Model categories can provide a natural setting for homotopy theory: the category of topological spaces is a model category, with the homotopy corresponding to the usual theory. Similarly, objects that are thought of as spaces often admit a model category structure, such as the category of simplicial sets.
Another model category is the category of chain complexes of R-modules for a commutative ring R. Homotopy theory in this context is homological algebra. Homology can then be viewed as a type of homotopy, allowing generalizations of homology to other objects, such as groups and R-algebras, one of the first major applications of the theory. Because of the above example regarding homology, the study of closed model categories is sometimes thought of as homotopical algebra.
== Formal definition ==
The definition given initially by Quillen was that of a closed model category, the assumptions of which seemed strong at the time, motivating others to weaken some of the assumptions to define a model category. In practice the distinction has not proven significant and most recent authors (e.g., Mark Hovey and Philip Hirschhorn) work with closed model categories and simply drop the adjective 'closed'.
The definition has been separated to that of a model structure on a category and then further categorical conditions on that category, the necessity of which may seem unmotivated at first but becomes important later. The following definition follows that given by Hovey.
A model structure on a category C consists of three distinguished classes of morphisms (equivalently subcategories): weak equivalences, fibrations, and cofibrations, and two functorial factorizations {\displaystyle (\alpha ,\beta )} and {\displaystyle (\gamma ,\delta )} subject to the following axioms. A fibration that is also a weak equivalence is called an acyclic (or trivial) fibration and a cofibration that is also a weak equivalence is called an acyclic (or trivial) cofibration (or sometimes called an anodyne morphism).
Axioms
Retracts: if g is a morphism belonging to one of the distinguished classes, and f is a retract of g (as objects in the arrow category {\displaystyle C^{2}}, where 2 is the 2-element ordered set), then f belongs to the same distinguished class. Explicitly, the requirement that f is a retract of g means that there exist i, j, r, and s, such that the following diagram commutes:
2 of 3: if f and g are maps in C such that gf is defined and any two of these are weak equivalences then so is the third.
Lifting: acyclic cofibrations have the left lifting property with respect to fibrations, and cofibrations have the left lifting property with respect to acyclic fibrations. Explicitly, if the outer square of the following diagram commutes, where i is a cofibration and p is a fibration, and i or p is acyclic, then there exists h completing the diagram.
Factorization:
every morphism f in C can be written as {\displaystyle p\circ i} for a fibration p and an acyclic cofibration i;
every morphism f in C can be written as {\displaystyle p\circ i} for an acyclic fibration p and a cofibration i.
A model category is a category that has a model structure and all (small) limits and colimits, i.e., a complete and cocomplete category with a model structure.
=== Definition via weak factorization systems ===
The above definition can be succinctly phrased by the following equivalent definition: a model category is a category C and three classes of (so-called) weak equivalences W, fibrations F and cofibrations C so that
C has all limits and colimits,
{\displaystyle (C\cap W,F)} is a weak factorization system,
{\displaystyle (C,F\cap W)} is a weak factorization system,
{\displaystyle W} satisfies the 2 of 3 property.
=== First consequences of the definition ===
The axioms imply that any two of the three classes of maps determine the third (e.g., cofibrations and weak equivalences determine fibrations).
Also, the definition is self-dual: if C is a model category, then its opposite category {\displaystyle {\mathcal {C}}^{op}} also admits a model structure, in which weak equivalences correspond to their opposites, fibrations to the opposites of cofibrations, and cofibrations to the opposites of fibrations.
== Examples ==
=== Topological spaces ===
The category of topological spaces, Top, admits a standard model category structure with the usual (Serre) fibrations and with weak equivalences as weak homotopy equivalences. The cofibrations are not the usual notion found here, but rather the narrower class of maps that have the left lifting property with respect to the acyclic Serre fibrations.
Equivalently, they are the retracts of the relative cell complexes, as explained for example in Hovey's Model Categories. This structure is not unique; in general there can be many model category structures on a given category. For the category of topological spaces, another such structure is given by Hurewicz fibrations and standard cofibrations, and the weak equivalences are the (strong) homotopy equivalences.
=== Chain complexes ===
The category of (nonnegatively graded) chain complexes of R-modules carries at least two model structures, which both feature prominently in homological algebra:
weak equivalences are maps that induce isomorphisms in homology;
cofibrations are maps that are monomorphisms in each degree with projective cokernel; and
fibrations are maps that are epimorphisms in each nonzero degree
or
weak equivalences are maps that induce isomorphisms in homology;
fibrations are maps that are epimorphisms in each degree with injective kernel; and
cofibrations are maps that are monomorphisms in each nonzero degree.
This explains why Ext-groups of R-modules can be computed by either resolving the source projectively or the target injectively. These are cofibrant or fibrant replacements in the respective model structures.
The category of arbitrary chain-complexes of R-modules has a model structure that is defined by
weak equivalences are chain homotopy equivalences of chain-complexes;
cofibrations are monomorphisms that are split as morphisms of underlying R-modules; and
fibrations are epimorphisms that are split as morphisms of underlying R-modules.
=== Further examples ===
Other examples of categories admitting model structures include the category of all small categories, the category of simplicial sets or simplicial presheaves on any small Grothendieck site, the category of topological spectra, and the categories of simplicial spectra or presheaves of simplicial spectra on a small Grothendieck site.
Simplicial objects in a category are a frequent source of model categories; for instance, simplicial commutative rings or simplicial R-modules admit natural model structures. This follows because there is an adjunction between simplicial sets and simplicial commutative rings (given by the forgetful and free functors), and in nice cases one can lift model structures under an adjunction.
A simplicial model category is a simplicial category with a model structure that is compatible with the simplicial structure.
Given any category C and a model category M, under certain extra hypotheses the category of functors Fun(C, M) (also called C-diagrams in M) is also a model category. In fact, there are always two candidates for distinct model structures: in one, the so-called projective model structure, fibrations and weak equivalences are those maps of functors which are fibrations and weak equivalences when evaluated at each object of C. Dually, the injective model structure is similar with cofibrations and weak equivalences instead. In both cases the third class of morphisms is given by a lifting condition (see below). In some cases, when the category C is a Reedy category, there is a third model structure lying in between the projective and injective ones.
The process of forcing certain maps to become weak equivalences in a new model category structure on the same underlying category is known as Bousfield localization. For example, the category of simplicial sheaves can be obtained as a Bousfield localization of the model category of simplicial presheaves.
Denis-Charles Cisinski has developed a general theory of model structures on presheaf categories (generalizing simplicial sets, which are presheaves on the simplex category).
If C is a model category, then so is the category Pro(C) of pro-objects in C. However, a model structure on Pro(C) can also be constructed by imposing a weaker set of axioms on C.
== Some constructions ==
Every closed model category has a terminal object by completeness and an initial object by cocompleteness, since these objects are the limit and colimit, respectively, of the empty diagram. Given an object X in the model category, if the unique map from the initial object to X is a cofibration, then X is said to be cofibrant. Analogously, if the unique map from X to the terminal object is a fibration then X is said to be fibrant.
If Z and X are objects of a model category such that Z is cofibrant and there is a weak equivalence from Z to X then Z is said to be a cofibrant replacement for X. Similarly, if Z is fibrant and there is a weak equivalence from X to Z then Z is said to be a fibrant replacement for X. In general, not all objects are fibrant or cofibrant, though this is sometimes the case. For example, all objects are cofibrant in the standard model category of simplicial sets and all objects are fibrant for the standard model category structure given above for topological spaces.
Left homotopy is defined with respect to cylinder objects and right homotopy is defined with respect to path space objects. These notions coincide when the domain is cofibrant and the codomain is fibrant. In that case, homotopy defines an equivalence relation on the hom sets in the model category giving rise to homotopy classes.
== Characterizations of fibrations and cofibrations by lifting properties ==
Cofibrations can be characterized as the maps which have the left lifting property with respect to acyclic fibrations, and acyclic cofibrations are characterized as the maps which have the left lifting property with respect to fibrations. Similarly, fibrations can be characterized as the maps which have the right lifting property with respect to acyclic cofibrations, and acyclic fibrations are characterized as the maps which have the right lifting property with respect to cofibrations.
== Homotopy and the homotopy category ==
The homotopy category of a model category C is the localization of C with respect to the class of weak equivalences. This definition of homotopy category does not depend on the choice of fibrations and cofibrations. However, the classes of fibrations and cofibrations are useful in describing the homotopy category in a different way and in particular avoiding set-theoretic issues arising in general localizations of categories. More precisely, the "fundamental theorem of model categories" states that the homotopy category of C is equivalent to the category whose objects are the objects of C which are both fibrant and cofibrant, and whose morphisms are left homotopy classes of maps (equivalently, right homotopy classes of maps) as defined above. (See for instance Model Categories by Hovey, Thm 1.2.10)
Applying this to the category of topological spaces with the model structure given above, the resulting homotopy category is equivalent to the category of CW complexes and homotopy classes of continuous maps, whence the name.
=== Quillen adjunctions ===
A pair of adjoint functors
F : C ⇆ D : G
between two model categories C and D is called a Quillen adjunction if F preserves cofibrations and acyclic cofibrations or, equivalently by the closed model axioms, if G preserves fibrations and acyclic fibrations. In this case F and G induce an adjunction
LF : Ho(C) ⇆ Ho(D) : RG
between the homotopy categories. There is also an explicit criterion for the latter to be an equivalence (F and G are called a Quillen equivalence then).
A typical example is the standard adjunction between simplicial sets and topological spaces:
|−| : sSet ⇆ Top : Sing
involving the geometric realization of a simplicial set and the singular simplicial set of a topological space. The categories sSet and Top are not equivalent, but their homotopy categories are. Therefore, simplicial sets are often used as models for topological spaces because of this equivalence of homotopy categories.
== See also ==
(∞,1)-category
Cocycle category
Stable model category
== Notes ==
== References ==
Denis-Charles Cisinski: Les préfaisceaux comme modèles des types d'homotopie, Astérisque, (308) 2006, xxiv+392 pp.
Dwyer, William G.; Spaliński, Jan (1995), "Homotopy theories and model categories" (PDF), Handbook of algebraic topology, Amsterdam: North-Holland, pp. 73–126, doi:10.1016/B978-044481779-2/50003-1, ISBN 9780444817792, MR 1361887
Philip S. Hirschhorn: Model Categories and Their Localizations, 2003, ISBN 0-8218-3279-4.
Mark Hovey: Model Categories, 1999, ISBN 0-8218-1359-5.
Klaus Heiner Kamps and Timothy Porter: Abstract homotopy and simple homotopy theory, 1997, World Scientific, ISBN 981-02-1602-5.
Georges Maltsiniotis: La théorie de l'homotopie de Grothendieck. Astérisque, (301) 2005, vi+140 pp.
Riehl, Emily (2014), Categorical homotopy theory, Cambridge University Press, doi:10.1017/CBO9781107261457, ISBN 978-1-107-04845-4, MR 3221774
Quillen, Daniel G. (1967), Homotopical algebra, Lecture Notes in Mathematics, No. 43, vol. 43, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0097438, ISBN 978-3-540-03914-3, MR 0223432
Balchin, Scott (2021), A Handbook of Model Categories, Algebra and Applications, vol. 27, Springer, doi:10.1007/978-3-030-75035-0, ISBN 978-3-030-75034-3, MR 4385504, S2CID 240268465
== Further reading ==
"Do we still need model categories?"
"(infinity,1)-categories directly from model categories"
Paul Goerss and Kristen Schemmerhorn, Model Categories and Simplicial Methods
== External links ==
Model category at the nLab
Model category in Joyal's catlab | Wikipedia/Model_category |
In mathematics, a simplicial set is a sequence of sets with internal order structure (abstract simplices) and maps between them. Simplicial sets are higher-dimensional generalizations of directed graphs.
Every simplicial set gives rise to a "nice" topological space, known as its geometric realization. This realization consists of geometric simplices, glued together according to the rules of the simplicial set. Indeed, one may view a simplicial set as a purely combinatorial construction designed to capture the essence of a topological space for the purposes of homotopy theory. Specifically, the category of simplicial sets carries a natural model structure, and the corresponding homotopy category is equivalent to the familiar homotopy category of topological spaces.
Formally, a simplicial set may be defined as a contravariant functor from the simplex category to the category of sets. Simplicial sets were introduced in 1950 by Samuel Eilenberg and Joseph A. Zilber.
Simplicial sets are used to define quasi-categories, a basic notion of higher category theory. A construction analogous to that of simplicial sets can be carried out in any category, not just in the category of sets, yielding the notion of simplicial objects.
== Motivation ==
A simplicial set is a categorical (that is, purely algebraic) model capturing those topological spaces that can be built up (or faithfully represented up to homotopy) from simplices and their incidence relations. This is similar to the approach of CW complexes to modeling topological spaces, with the crucial difference that simplicial sets are purely algebraic and do not carry any actual topology.
To get back to actual topological spaces, there is a geometric realization functor which turns simplicial sets into compactly generated Hausdorff spaces. Most classical results on CW complexes in homotopy theory are generalized by analogous results for simplicial sets. While algebraic topologists largely continue to prefer CW complexes, there is a growing contingent of researchers interested in using simplicial sets for applications in algebraic geometry where CW complexes do not naturally exist.
== Intuition ==
Simplicial sets can be viewed as a higher-dimensional generalization of directed multigraphs. A simplicial set contains vertices (known as "0-simplices" in this context) and arrows ("1-simplices") between some of these vertices. Two vertices may be connected by several arrows, and directed loops that connect a vertex to itself are also allowed. Unlike directed multigraphs, simplicial sets may also contain higher simplices. A 2-simplex, for instance, can be thought of as a two-dimensional "triangular" shape bounded by a list of three vertices A, B, C and three arrows B → C, A → C and A → B. In general, an n-simplex is an object made up from a list of n + 1 vertices (which are 0-simplices) and n + 1 faces (which are (n − 1)-simplices). The vertices of the i-th face are the vertices of the n-simplex minus the i-th vertex. The vertices of a simplex need not be distinct and a simplex is not determined by its vertices and faces: two different simplices may share the same list of faces (and therefore the same list of vertices), just like two different arrows in a multigraph may connect the same two vertices.
Simplicial sets should not be confused with abstract simplicial complexes, which generalize simple undirected graphs rather than directed multigraphs.
Formally, a simplicial set X is a collection of sets Xn, n = 0, 1, 2, ..., together with certain maps between these sets: the face maps dn,i : Xn → Xn−1 (n = 1, 2, 3, ... and 0 ≤ i ≤ n) and degeneracy maps sn,i : Xn→Xn+1 (n = 0, 1, 2, ... and 0 ≤ i ≤ n). We think of the elements of Xn as the n-simplices of X. The map dn,i assigns to each such n-simplex its i-th face, the face "opposite to" (i.e. not containing) the i-th vertex. The map sn,i assigns to each n-simplex the degenerate (n+1)-simplex which arises from the given one by duplicating the i-th vertex. This description implicitly requires certain consistency relations among the maps dn,i and sn,i.
Rather than requiring these simplicial identities explicitly as part of the definition, the short modern definition uses the language of category theory.
== Formal definition ==
Let Δ denote the simplex category. The objects of Δ are nonempty finite totally ordered sets. Each object is uniquely order isomorphic to an object of the form
[n] = {0, 1, ..., n}
with n ≥ 0. The morphisms in Δ are (non-strictly) order-preserving functions between these sets.
A simplicial set X is a contravariant functor
X : Δ → Set
where Set is the category of sets. (Alternatively and equivalently, one may define simplicial sets as covariant functors X : Δop → Set from the opposite category.) Given a simplicial set X, we often write Xn instead of X([n]).
Simplicial sets form a category, usually denoted sSet, whose objects are simplicial sets and whose morphisms are natural transformations between them. This is the category of presheaves on Δ. As such, it is a topos.
=== Face and degeneracy maps and simplicial identities ===
The morphisms (maps) of the simplex category Δ are generated by two particularly important families of morphisms, whose images under a given simplicial set functor are called the face maps and degeneracy maps of that simplicial set.
The face maps of a simplicial set X are the images in that simplicial set of the morphisms
δn,0, …, δn,n : [n−1] → [n], where δn,i is the unique order-preserving injection [n−1] → [n] that "misses" i.
Let us denote these face maps by dn,0, …, dn,n respectively, so that dn,i is a map Xn → Xn−1. If the first index is clear, we write di instead of dn,i.
The degeneracy maps of the simplicial set X are the images in that simplicial set of the morphisms σn,0, …, σn,n : [n+1] → [n], where σn,i is the unique order-preserving surjection [n+1] → [n] that "hits" i twice.
Let us denote these degeneracy maps by sn,0, …, sn,n respectively, so that sn,i is a map Xn → Xn+1. If the first index is clear, we write si instead of sn,i.
The defined maps satisfy the following simplicial identities:
di dj = dj−1 di if i < j. (This is short for dn−1,i dn,j = dn−1,j−1 dn,i if 0 ≤ i < j ≤ n.)
di sj = sj−1 di if i < j.
di sj = id if i = j or i = j + 1.
di sj = sj di−1 if i > j + 1.
si sj = sj+1 si if i ≤ j.
Conversely, given a sequence of sets Xn together with maps dn,i : Xn → Xn−1 and sn,i : Xn → Xn+1 that satisfy the simplicial identities, there is a unique simplicial set X that has these face and degeneracy maps. So the identities provide an alternative way to define simplicial sets.
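The identities can be verified mechanically on a naive model in which an n-simplex is just a tuple of n + 1 vertex labels, di deletes the i-th entry, and si duplicates it. A sketch (this toy representation is ours, chosen only to exercise the identities):

```python
from itertools import product

def d(i, x):   # face map: delete the i-th vertex of the simplex x
    return x[:i] + x[i + 1:]

def s(i, x):   # degeneracy map: duplicate the i-th vertex of x
    return x[:i + 1] + x[i:]

# Check all five identities on every 3-simplex with vertices in {0, 1}.
for x in product(range(2), repeat=4):       # x is a 3-simplex (4 vertices)
    n = len(x) - 1
    for i in range(n + 1):
        for j in range(n + 1):
            if i < j:
                assert d(i, d(j, x)) == d(j - 1, d(i, x))
                assert d(i, s(j, x)) == s(j - 1, d(i, x))
            if i in (j, j + 1):
                assert d(i, s(j, x)) == x
            if i > j + 1:
                assert d(i, s(j, x)) == s(j, d(i - 1, x))
            if i <= j:
                assert s(i, s(j, x)) == s(j + 1, s(i, x))
```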
== Examples ==
Given a partially ordered set (S, ≤), we can define a simplicial set NS, called the nerve of S, as follows: for every object [n] of Δ we set NS([n]) = homposet([n], S), the set of order-preserving maps from [n] to S. Every morphism φ: [n] → [m] in Δ is an order-preserving map, and via composition induces a map NS(φ) : NS([m]) → NS([n]). It is straightforward to check that NS is a contravariant functor from Δ to Set: a simplicial set.
Concretely, the n-simplices of the nerve NS, i.e. the elements of NSn = NS([n]), can be thought of as ordered length-(n+1) sequences of elements from S: (a0 ≤ a1 ≤ ... ≤ an). The face map di drops the i-th element from such a list, and the degeneracy map si duplicates the i-th element.
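As a sketch, the n-simplices of a nerve can be enumerated directly for a small poset; the helper below is an illustrative implementation (not a standard library function) listing nondecreasing (n+1)-tuples:

```python
from itertools import product

def nerve_simplices(elements, leq, n):
    """n-simplices of the nerve of a finite poset: tuples a0 <= a1 <= ... <= an."""
    return [c for c in product(elements, repeat=n + 1)
            if all(leq(c[k], c[k + 1]) for k in range(n))]

# A "V-shaped" poset with a <= c and b <= c (plus reflexivity):
leq = lambda x, y: x == y or (y == 'c' and x in 'ab')
counts = [len(nerve_simplices('abc', leq, n)) for n in range(3)]
print(counts)  # [3, 5, 7]
```

In degree 0 the simplices are the three elements; in degree 1 they are the three identity pairs together with (a, c) and (b, c), and so on.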
A similar construction can be performed for every category C, to obtain the nerve NC of C. Here, NC([n]) is the set of all functors from [n] to C, where we consider [n] as a category with objects 0,1,...,n and a single morphism from i to j whenever i ≤ j.
Concretely, the n-simplices of the nerve NC can be thought of as sequences of n composable morphisms in C: a0 → a1 → ... → an. (In particular, the 0-simplices are the objects of C and the 1-simplices are the morphisms of C.) The face map d0 drops the first morphism from such a list, the face map dn drops the last, and the face map di for 0 < i < n drops ai and composes the i-th and (i + 1)-th morphisms. The degeneracy maps si lengthen the sequence by inserting an identity morphism at position i.
We can recover the poset S from the nerve NS and the category C from the nerve NC; in this sense simplicial sets generalize posets and categories.
Another important class of examples of simplicial sets is given by the singular set SY of a topological space Y. Here SYn consists of all the continuous maps from the standard topological n-simplex to Y. The singular set is further explained below.
== The standard n-simplex and the category of simplices ==
The standard n-simplex, denoted Δn, is a simplicial set defined as the functor homΔ(-, [n]) where [n] denotes the ordered set {0, 1, ... ,n} of the first (n + 1) nonnegative integers. (In many texts, it is written instead as hom([n],-) where the homset is understood to be in the opposite category Δop.)
By the Yoneda lemma, the n-simplices of a simplicial set X stand in 1–1 correspondence with the natural transformations from Δn to X, i.e.
Xn = X([n]) ≅ Nat(homΔ(−, [n]), X) = homsSet(Δn, X).
Furthermore, X gives rise to a category of simplices, denoted by Δ ↓ X, whose objects are maps (i.e. natural transformations) Δn → X and whose morphisms are natural transformations Δn → Δm over X arising from maps [n] → [m] in Δ. That is, Δ ↓ X is a slice category of Δ over X. The following isomorphism shows that a simplicial set X is a colimit of its simplices:
X ≅ limΔn → X Δn
where the colimit is taken over the category of simplices of X.
== Geometric realization ==
There is a functor |•|: sSet → CGHaus called the geometric realization taking a simplicial set X to its corresponding realization in the category CGHaus of compactly-generated Hausdorff topological spaces. Intuitively, the realization of X is the topological space (in fact a CW complex) obtained if every n-simplex of X is replaced by a topological n-simplex (a certain n-dimensional subset of (n + 1)-dimensional Euclidean space defined below) and these topological simplices are glued together in the fashion the simplices of X hang together. In this process the orientation of the simplices of X is lost.
To define the realization functor, we first define it on standard n-simplices Δn as follows: the geometric realization |Δn| is the standard topological n-simplex in general position given by
|Δn| = {(x0, …, xn) ∈ Rn+1 : 0 ≤ xi ≤ 1, ∑ xi = 1}.
The definition then naturally extends to any simplicial set X by setting
|X| = limΔn → X |Δn|
where the colimit is taken over the n-simplex category of X. The geometric realization is functorial on sSet.
It is significant that we use the category CGHaus of compactly-generated Hausdorff spaces, rather than the category Top of topological spaces, as the target category of geometric realization: like sSet and unlike Top, the category CGHaus is cartesian closed; the categorical product is defined differently in the categories Top and CGHaus, and the one in CGHaus corresponds to the one in sSet via geometric realization.
== Singular set for a space ==
The singular set of a topological space Y is the simplicial set SY defined by
(SY)([n]) = homTop(|Δn|, Y) for each object [n] ∈ Δ.
Every order-preserving map φ:[n]→[m] induces a continuous map |Δn|→|Δm| by
(x0, …, xn) ∈ |Δn| ↦ (yj), where yj = ∑φ(i)=j xi.
Composition then yields a map SY(φ) : SY([m]) → SY([n]). This definition is analogous to a standard idea in singular homology of "probing" a target topological space with standard topological n-simplices. Furthermore, the singular functor S is right adjoint to the geometric realization functor described above, i.e.:
homTop(|X|, Y) ≅ homsSet(X, SY)
for any simplicial set X and any topological space Y. Intuitively, this adjunction can be understood as follows: a continuous map from the geometric realization of X to a space Y is uniquely specified if we associate to every simplex of X a continuous map from the corresponding standard topological simplex to Y, in such a fashion that these maps are compatible with the way the simplices in X hang together.
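The coordinate formula for the induced map on topological simplices is easy to make concrete; the sketch below (with our own helper name, using exact rational barycentric coordinates) checks that the image lies in the target simplex and that the construction respects composition:

```python
from fractions import Fraction

def induced(phi, m, x):
    """The map |Delta^n| -> |Delta^m| induced by order-preserving phi: [n] -> [m].

    In barycentric coordinates: y_j is the sum of x_i over all i with phi(i) = j.
    """
    y = [Fraction(0)] * (m + 1)
    for i, xi in enumerate(x):
        y[phi[i]] += xi
    return y

x = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]   # a point of |Delta^2|
phi = [0, 0, 1]                                        # [2] -> [1], hits 0 twice
y = induced(phi, 1, x)
assert y == [Fraction(3, 4), Fraction(1, 4)]
assert sum(y) == 1 and all(yi >= 0 for yi in y)        # the image lies in |Delta^1|

# Functoriality: the map induced by a composite is the composite of the maps.
psi = [1, 1]                                           # [1] -> [1]
assert induced(psi, 1, y) == induced([psi[j] for j in phi], 1, x)
```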
== Homotopy theory of simplicial sets ==
In order to define a model structure on the category of simplicial sets, one has to define fibrations, cofibrations and weak equivalences. One can define fibrations to be Kan fibrations. A map of simplicial sets is defined to be a weak equivalence if its geometric realization is a weak homotopy equivalence of spaces. A map of simplicial sets is defined to be a cofibration if it is a monomorphism of simplicial sets. It is a difficult theorem of Daniel Quillen that the category of simplicial sets with these classes of morphisms becomes a model category, and indeed satisfies the axioms for a proper closed simplicial model category.
A key turning point of the theory is that the geometric realization of a Kan fibration is a Serre fibration of spaces. With the model structure in place, a homotopy theory of simplicial sets can be developed using standard homotopical algebra methods. Furthermore, the geometric realization and singular functors give a Quillen equivalence of closed model categories inducing an equivalence
|•|: Ho(sSet) ↔ Ho(Top)
between the homotopy category for simplicial sets and the usual homotopy category of CW complexes with homotopy classes of continuous maps between them. It is part of the general
definition of a Quillen adjunction that the right adjoint functor (in this case, the singular set functor) carries fibrations (resp. trivial fibrations) to fibrations (resp. trivial fibrations).
== Simplicial objects ==
A simplicial object X in a category C is a contravariant functor
X : Δ → C
or equivalently a covariant functor
X: Δop → C,
where Δ still denotes the simplex category and op the opposite category. When C is the category of sets, we are just talking about the simplicial sets that were defined above. Letting C be the category of groups or category of abelian groups, we obtain the categories sGrp of simplicial groups and sAb of simplicial abelian groups, respectively.
Simplicial groups and simplicial abelian groups also carry closed model structures induced by that of the underlying simplicial sets.
The homotopy groups of simplicial abelian groups can be computed by making use of the Dold–Kan correspondence which yields an equivalence of categories between simplicial abelian groups and bounded chain complexes and is given by functors
N: sAb → Ch+
and
Γ: Ch+ → sAb.
See also: simplicial diagram.
== History and uses of simplicial sets ==
Simplicial sets were originally used to give precise and convenient descriptions of classifying spaces of groups. This idea was vastly extended by Grothendieck's idea of
considering classifying spaces of categories, and in particular by Quillen's work on algebraic K-theory. In this work, which earned him a Fields Medal, Quillen
developed surprisingly efficient methods for manipulating
infinite simplicial sets. These methods were used in other areas on the border between algebraic geometry and topology. For instance, the André–Quillen homology of a ring is a "non-abelian homology", defined and studied in this way.
Both the algebraic K-theory and the André–Quillen homology are defined using algebraic data to write down a simplicial set, and then taking the homotopy groups of this simplicial set.
Simplicial methods are often useful when one wants to prove that a space is a loop space. The basic idea is that if G is a group with classifying space BG, then G is homotopy equivalent to the loop space ΩBG. If BG itself is a group, we can iterate the procedure, and G is homotopy equivalent to the double loop space Ω²B(BG). In case G is an abelian group, we can actually iterate this infinitely many times, and obtain that G is an infinite loop space.
Even if X is not an abelian group, it can happen that it has a composition which is sufficiently commutative, so that one can use the above idea to prove that X is an infinite loop space. In this way, one can prove that the algebraic K-theory of a ring, considered as a topological space, is an infinite loop space.
In recent years, simplicial sets have been used in higher category theory and derived algebraic geometry. Quasi-categories can be thought of as categories in which the composition of morphisms is defined only up to homotopy, and information about the composition of higher homotopies is also retained. Quasi-categories are defined as simplicial sets satisfying one additional condition, the weak Kan condition.
== See also ==
Delta set
Dendroidal set, a generalization of simplicial set
Simplicial presheaf
Quasi-category
Kan complex
Dold–Kan correspondence
Simplicial homotopy
Simplicial sphere
Abstract simplicial complex
Anodyne extension
Weak equivalence between simplicial sets
== Notes ==
== References ==
== Further reading ==
Riehl, Emily. "A leisurely introduction to simplicial sets" (PDF).
May, J. Peter. Simplicial Objects in Algebraic Topology, University of Chicago Press 1967
simplicial set at the nLab | Wikipedia/Simplicial_homotopy_theory |
Shape theory is a branch of topology that provides a more global view of topological spaces than homotopy theory. The two coincide on compacta that are homotopy dominated by finite polyhedra. Shape theory is associated with Čech homology in the way that homotopy theory is associated with singular homology.
== Background ==
Shape theory was invented and published by D. E. Christie in 1944; it was reinvented, further developed and promoted by the Polish mathematician Karol Borsuk in 1968. Actually, the name shape theory was coined by Borsuk.
=== Warsaw circle ===
Borsuk lived and worked in Warsaw, hence the name of one of the fundamental examples of the area, the Warsaw circle. It is a compact subset of the plane produced by "closing up" a topologist's sine curve (also called a Warsaw sine curve) with an arc. The homotopy groups of the Warsaw circle are all trivial, just like those of a point, so the map from the Warsaw circle to a point is a weak homotopy equivalence. However, these two spaces are not homotopy equivalent. So, by the Whitehead theorem, the Warsaw circle does not have the homotopy type of a CW complex.
== Historical development ==
Borsuk's shape theory was generalized to arbitrary (non-metric) compact spaces, and even to general categories, by Włodzimierz Holsztyński in 1968/1969, and published in Fund. Math. 70, 157–168 (1971) (see Jean-Marc Cordier, Tim Porter, (1989) below). This was done in a continuous style, characteristic of the Čech homology rendered by Samuel Eilenberg and Norman Steenrod in their monograph Foundations of Algebraic Topology. Due to these circumstances, Holsztyński's paper was hardly noticed, and instead great popularity in the field was gained by a later paper by Sibe Mardešić and Jack Segal, Fund. Math. 72, 61–68 (1971). Further developments are reflected by the references below, and by their contents.
For some purposes, like dynamical systems, more sophisticated invariants were developed under the name strong shape. Generalizations to noncommutative geometry, e.g. the shape theory for operator algebras have been found.
== See also ==
List of topologies
== References ==
Mardešić, Sibe (1997). "Thirty years of shape theory" (PDF). Mathematical Communications. 2: 1–12.
shape theory at the nLab
Jean-Marc Cordier and Tim Porter, (1989), Shape Theory: Categorical Methods of Approximation, Mathematics and its Applications, Ellis Horwood. Reprinted Dover (2008)
Aristide Deleanu and Peter John Hilton, On the categorical shape of a functor, Fundamenta Mathematicae 97 (1977) 157–176.
Aristide Deleanu and Peter John Hilton, Borsuk's shape and Grothendieck categories of pro-objects, Mathematical Proceedings of the Cambridge Philosophical Society 79 (1976) 473–482.
Sibe Mardešić and Jack Segal, Shapes of compacta and ANR-systems, Fundamenta Mathematicae 72 (1971) 41–59
Karol Borsuk, Concerning homotopy properties of compacta, Fundamenta Mathematicae 62 (1968) 223–254
Karol Borsuk, Theory of Shape, Monografie Matematyczne Tom 59, Warszawa 1975.
D. A. Edwards and H. M. Hastings, Čech Theory: its Past, Present, and Future, Rocky Mountain Journal of Mathematics, Volume 10, Number 3, Summer 1980
D. A. Edwards and H. M. Hastings, (1976), Čech and Steenrod homotopy theories with applications to geometric topology, Lecture Notes in Mathematics 542, Springer-Verlag.
Tim Porter, Čech homotopy I, II, Journal of the London Mathematical Society, 1, 6, 1973, pp. 429–436; 2, 6, 1973, pp. 667–675.
J.T. Lisica and Sibe Mardešić, Coherent prohomotopy and strong shape theory, Glasnik Matematički 19(39) (1984) 335–399.
Michael Batanin, Categorical strong shape theory, Cahiers Topologie Géom. Différentielle Catég. 38 (1997), no. 1, 3–66, numdam
Marius Dădărlat, Shape theory and asymptotic morphisms for C*-algebras, Duke Mathematical Journal, 73(3):687–711, 1994.
Marius Dădărlat and Terry A. Loring, Deformations of topological spaces predicted by E-theory, In Algebraic methods in operator theory, p. 316–327. Birkhäuser 1994. | Wikipedia/Shape_theory_(mathematics) |
In mathematics, simple homotopy theory is a homotopy theory (a branch of algebraic topology) that concerns the simple-homotopy type of a space. It originated with the work of J. H. C. Whitehead, in his 1950 paper "Simple homotopy types".
== See also ==
Whitehead torsion
== References ==
Cohen, M. M. (1973). A Course in Simple-Homotopy Theory. Graduate Texts in Mathematics. Vol. 10. doi:10.1007/978-1-4684-9372-6. ISBN 978-0-387-90055-1.
Hatcher, A. E. (1975). "Higher Simple Homotopy Theory". Annals of Mathematics. 102 (1): 101–137. doi:10.2307/1970977. JSTOR 1970977.
Whitehead, J. H. C. (1950). "Simple Homotopy Types". American Journal of Mathematics. 72 (1): 1–57. doi:10.2307/2372133. JSTOR 2372133.
== Further reading ==
Simple homotopy theory at the nLab
A lecture by J. Lurie. | Wikipedia/Simple_homotopy_theory |
In mathematics, the category of compactly generated weak Hausdorff spaces, CGWH, is a category used in algebraic topology as an alternative to the category of topological spaces, Top, as the latter lacks some properties that are common in practice and often convenient to use in proofs. There is also such a category for the CGWH analog of pointed topological spaces, defined by requiring maps to preserve base points.
The articles compactly generated space and weak Hausdorff space define the respective topological properties. For the historical motivation behind these conditions on spaces, see Compactly generated space#Motivation. This article focuses on the properties of the category.
== Properties ==
CGWH has the following properties:
It is complete and cocomplete.
The forgetful functor to the category of sets preserves small limits.
It contains all the locally compact Hausdorff spaces and all the CW complexes.
An internal Hom exists for any pair of spaces X and Y; it is denoted by Map(X, Y) or Y^X and is called the (free) mapping space from X to Y. Moreover, there is a homeomorphism

Map(X × Y, Z) ≃ Map(X, Map(Y, Z))

that is natural in X, Y, and Z. In short, the category is Cartesian closed in an enriched sense.
A finite product of CW complexes is a CW complex.
If (X, ∗) and (Y, ∘) are pointed spaces, then their smash product exists. The (based) mapping space Map((X, ∗), (Y, ∘)) from (X, ∗) to (Y, ∘) consists of all base-point-preserving maps from (X, ∗) to (Y, ∘) and is a closed subspace of the mapping space between the underlying spaces without base points. It is a based space whose base point is the unique constant map. Moreover, for based spaces (X, ∗), (Y, ∘), and (Z, ⋆), there is a homeomorphism

Map((X, ∗) ∧ (Y, ∘), (Z, ⋆)) ≃ Map((X, ∗), Map((Y, ∘), (Z, ⋆)))

that is natural in (X, ∗), (Y, ∘), and (Z, ⋆).
== Notes ==
== References ==
Frankland, Martin (February 4, 2013). "Math 527 - Homotopy Theory – Compactly generated spaces" (PDF).
Steenrod, N. E. (1 May 1967). "A convenient category of topological spaces". Michigan Mathematical Journal. 14 (2): 133–152. doi:10.1307/mmj/1028999711.
Strickland, Neil (2009). "The category of CGWH spaces" (PDF).
"Appendix". Cellular Structures in Topology. 1990. pp. 241–305. doi:10.1017/CBO9780511983948.007. ISBN 9780521327848.
== Further reading ==
The CGWH category, Dongryul Kim 2017 | Wikipedia/Category_of_compactly_generated_weak_Hausdorff_spaces |
In algebraic geometry and algebraic topology, branches of mathematics, A1 homotopy theory or motivic homotopy theory is a way to apply the techniques of algebraic topology, specifically homotopy, to algebraic varieties and, more generally, to schemes. The theory is due to Fabien Morel and Vladimir Voevodsky. The underlying idea is that it should be possible to develop a purely algebraic approach to homotopy theory by replacing the unit interval [0, 1], which is not an algebraic variety, with the affine line A1, which is. The theory has seen spectacular applications such as Voevodsky's construction of the derived category of mixed motives and the proof of the Milnor and Bloch-Kato conjectures.
== Construction ==
A1 homotopy theory is founded on a category called the A1 homotopy category H(S). Simply put, the A1 homotopy category, or rather the canonical functor Sm_S → H(S), is the universal functor from the category Sm_S of smooth S-schemes to an infinity category satisfying Nisnevich descent, such that the affine line A^1 becomes contractible. Here S is some prechosen base scheme (e.g., the spectrum of the complex numbers, Spec(ℂ)).

This definition in terms of a universal property is not possible without infinity categories. These were not available in the 1990s, and the original definition passes by way of Quillen's theory of model categories. Another way of seeing the situation is that Morel–Voevodsky's original definition produces a concrete model for (the homotopy category of) the infinity category H(S).
This more concrete construction is sketched below.
=== Step 0 ===
Choose a base scheme S. Classically, S is asked to be Noetherian, but many modern authors such as Marc Hoyois work with quasi-compact quasi-separated base schemes. In any event, many important results are only known over a perfect base field, such as the complex numbers, so we consider only this case.
=== Step 1 ===
Step 1a: Nisnevich sheaves. Classically, the construction begins with the category Shv_Nis(Sm_S) of Nisnevich sheaves on the category Sm_S of smooth schemes over S. Heuristically, this should be considered as (and in a precise technical sense is) the universal enlargement of Sm_S obtained by adjoining all colimits and forcing Nisnevich descent to be satisfied.
Step 1b: simplicial sheaves. In order to more easily perform standard homotopy-theoretic procedures such as homotopy colimits and homotopy limits, Shv_Nis(Sm_S) is replaced with the following category of simplicial sheaves.
Let Δ be the simplex category, that is, the category whose objects are the sets
{0}, {0, 1}, {0, 1, 2}, ...,
and whose morphisms are order-preserving functions. We let
Δ^op Shv(Sm_S)_Nis denote the category of functors Δ^op → Shv(Sm_S)_Nis. That is, Δ^op Shv(Sm_S)_Nis is the category of simplicial objects in Shv(Sm_S)_Nis. Such an object is also called a simplicial sheaf on Sm_S.
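The simplex category Δ is concrete enough to enumerate directly: a morphism [m] → [n] is just an order-preserving function between the finite totally ordered sets {0, …, m} and {0, …, n}. A small illustrative sketch in Python (the function name is ours):

```python
from itertools import product

def order_preserving_maps(m, n):
    # enumerate all order-preserving functions [m] -> [n], where
    # [k] = {0, 1, ..., k}; these are exactly the morphisms of the
    # simplex category Δ from [m] to [n]
    maps = []
    for f in product(range(n + 1), repeat=m + 1):
        if all(f[i] <= f[i + 1] for i in range(m)):
            maps.append(f)
    return maps
```

For example, there are exactly three order-preserving maps [1] → [1], matching the count of monotone functions {0,1} → {0,1}.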
Step 1c: fibre functors. For any smooth S-scheme X, any point x ∈ X, and any sheaf F, write x*F for the stalk of the restriction F|_{X_Nis} of F to the small Nisnevich site of X. Explicitly,

x*F = colim_{x→V→X} F(V),

where the colimit runs over factorisations x → V → X of the canonical inclusion x → X through an étale morphism V → X. The collection {x*} is a conservative family of fibre functors for Shv(Sm_S)_Nis.
Step 1d: the closed model structure. We define a closed model structure on Δ^op Shv(Sm_S)_Nis in terms of the fibre functors. Let f : 𝒳 → 𝒴 be a morphism of simplicial sheaves. We say that:

f is a weak equivalence if, for every fibre functor x, the morphism of simplicial sets x*f : x*𝒳 → x*𝒴 is a weak equivalence.
f is a cofibration if it is a monomorphism.
f is a fibration if it has the right lifting property with respect to every cofibration that is a weak equivalence.

The homotopy category of this model structure is denoted H_s(T).
=== Step 2 ===
This model structure has Nisnevich descent, but it does not contract the affine line. A simplicial sheaf 𝒳 is called A^1-local if for every simplicial sheaf 𝒴 the map

Hom_{H_s(T)}(𝒴 × A^1, 𝒳) → Hom_{H_s(T)}(𝒴, 𝒳)

induced by i_0 : {0} → A^1 is a bijection. Here we are considering A^1 as a sheaf via the Yoneda embedding, composed with the constant simplicial object functor Shv(Sm_S)_Nis → Δ^op Shv(Sm_S)_Nis.
A morphism f : 𝒳 → 𝒴 is an A^1-weak equivalence if for every A^1-local 𝒵, the induced map

Hom_{H_s(T)}(𝒴, 𝒵) → Hom_{H_s(T)}(𝒳, 𝒵)

is a bijection. The A^1-local model structure is the localisation of the above model structure with respect to the A^1-weak equivalences.
=== Formal Definition ===
Finally we may define the A1 homotopy category.
Definition. Let S be a finite-dimensional Noetherian scheme (for example S = Spec(ℂ), the spectrum of the complex numbers), and let Sm/S denote the category of smooth schemes over S. Equip Sm/S with the Nisnevich topology to get the site (Sm/S)_Nis. The homotopy category (or infinity category) associated to the A^1-local model structure on Δ^op Shv(Sm/S)_Nis is called the A1-homotopy category; it is denoted H_s. Similarly, for the pointed simplicial sheaves Δ^op Shv_*(Sm/S)_Nis there is an associated pointed A1-homotopy category H_{s,∗}.

Note that by construction, for any X in Sm/S, there is an isomorphism

X ×_S A^1_S ≅ X

in the homotopy category.
== Properties of the theory ==
=== Wedge and smash products of simplicial (pre)sheaves ===
Because we started with a simplicial model category to construct the A^1-homotopy category, a number of structures are inherited from the abstract theory of simplicial model categories. In particular, for pointed simplicial sheaves 𝒳, 𝒴 in Δ^op Sh_*(Sm/S)_Nis we can form the wedge product as the pushout

𝒳 ∨ 𝒴 = colim( 𝒳 ← ∗ → 𝒴 ),

and the smash product is defined as

𝒳 ∧ 𝒴 = (𝒳 × 𝒴)/(𝒳 ∨ 𝒴),

recovering some of the classical constructions in homotopy theory. There is in addition a cone of a simplicial (pre)sheaf and a cone of a morphism, but defining these requires the definition of the simplicial spheres.
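As a toy illustration of these two colimit formulas, wedge and smash can be computed in the discrete setting of pointed finite sets, where the same pushout and quotient descriptions apply. The helpers below are ours and model quotienting by simply collapsing the identified points to a marker element:

```python
def wedge(X, Y, bx, by):
    # X ∨ Y: disjoint union of the pointed sets (X, bx) and (Y, by)
    # with the two base points glued; tag elements to keep copies apart
    pts = {("X", x) for x in X if x != bx} | {("Y", y) for y in Y if y != by}
    return pts | {"*"}  # "*" is the common base point

def smash(X, Y, bx, by):
    # X ∧ Y = (X × Y)/(X ∨ Y): pairs touching a base-point axis
    # are all collapsed to the single base point "*"
    pts = {(x, y) for x in X for y in Y if x != bx and y != by}
    return pts | {"*"}
```

In particular the cardinality of the smash is (|X| − 1)(|Y| − 1) + 1, the discrete shadow of the quotient formula above.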
==== Simplicial spheres ====
Since we start with a simplicial model category, there is a cosimplicial functor

Δ^• : Δ → Δ^op Sh_*(Sm/S)_Nis

defining the simplices in Δ^op Sh_*(Sm/S)_Nis. Recall that the algebraic n-simplex is the S-scheme

Δ^n = Spec( O_S[t_0, t_1, …, t_n] / (t_0 + t_1 + ⋯ + t_n − 1) ).

Embedding these schemes as constant presheaves and sheafifying gives objects in Δ^op Sh_*(Sm/S)_Nis, which we also denote by Δ^n. These are the objects in the image of Δ^•, i.e. Δ^•([n]) = Δ^n. Then, using abstract simplicial homotopy theory, we get the simplicial spheres

S^n = Δ^n/∂Δ^n.
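Note that, unlike its topological counterpart, the algebraic n-simplex is not compact: eliminating t_0 = 1 − t_1 − ⋯ − t_n identifies it with affine n-space,

```latex
\Delta^n \;=\; \operatorname{Spec}\!\left(\frac{\mathcal{O}_S[t_0,t_1,\ldots,t_n]}{(t_0+t_1+\cdots+t_n-1)}\right)
\;\cong\; \operatorname{Spec}\!\big(\mathcal{O}_S[t_1,\ldots,t_n]\big) \;=\; \mathbb{A}^n_S.
```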
We can then form the cone of a simplicial (pre)sheaf as

C(𝒳) = 𝒳 ∧ Δ^1,

and the cone of a morphism f : 𝒳 → 𝒴 as the pushout

C(f) = colim( C(𝒳) ← 𝒳 →f 𝒴 ).

In addition, the cofiber of 𝒴 → C(f) is simply the suspension Σ𝒳 = 𝒳 ∧ S^1. In the pointed homotopy category there is additionally the suspension functor

Σ : H_{s,∗}(Sm/S)_Nis → H_{s,∗}(Sm/S)_Nis,

given by Σ(𝒳) = 𝒳 ∧ S^1, and its right adjoint

Ω : H_{s,∗}(Sm/S)_Nis → H_{s,∗}(Sm/S)_Nis,

called the loop space functor.
=== Remarks ===
The setup, especially the Nisnevich topology, is chosen so as to make algebraic K-theory representable by a spectrum, and in some respects to make a proof of the Bloch–Kato conjecture possible.
After the Morel-Voevodsky construction there have been several different approaches to A1 homotopy theory by using other model category structures or by using other sheaves than Nisnevich sheaves (for example, Zariski sheaves or just all presheaves). Each of these constructions yields the same homotopy category.
There are two kinds of spheres in the theory: those coming from the multiplicative group playing the role of the 1-sphere in topology, and those coming from the simplicial sphere (considered as constant simplicial sheaf). This leads to a theory of motivic spheres S p,q with two indices. To compute the homotopy groups of motivic spheres would also yield the classical stable homotopy groups of the spheres, so in this respect A1 homotopy theory is at least as complicated as classical homotopy theory.
== Motivic analogies ==
=== Eilenberg–MacLane spaces ===
For an abelian group A, the (p, q)-motivic cohomology of a smooth scheme X is given by the sheaf hypercohomology groups

H^{p,q}(X, A) := H^p(X_Nis, A(q))

for A(q) = ℤ(q) ⊗ A. Representing this cohomology is a simplicial abelian sheaf, denoted K(p, q, A), corresponding to A(q)[p], which is considered as an object in the pointed motivic homotopy category H_•(k) over a base field k. Then, for a smooth scheme X, we have the equivalence

Hom_{H_•(k)}(X_+, K(p, q, A)) = H^{p,q}(X, A),

showing that these sheaves represent motivic Eilenberg–MacLane spaces.
== The stable homotopy category ==
A further construction in A1-homotopy theory is the category SH(S), which is obtained from the above unstable category by forcing the smash product with Gm to become invertible. This process can be carried out either using model-categorical constructions using so-called Gm-spectra or alternatively using infinity-categories.
For S = Spec(ℝ), the spectrum of the field of real numbers, there is a functor

SH(ℝ) → SH

to the stable homotopy category of algebraic topology. The functor is characterized by sending a smooth scheme X/ℝ to the real manifold associated to X. This functor has the property that it sends the map

ρ : S^0 → G_m, i.e., {−1, 1} → Spec ℝ[x, x^{−1}],

to an equivalence, since ℝ^× is homotopy equivalent to a two-point set. Bachmann (2018) has shown that the resulting functor

SH(ℝ)[ρ^{−1}] → SH

is an equivalence.
== References ==
=== Survey articles and lectures ===
Morel (2002) An Introduction to A1-homotopy theory
Antieau, Benjamin; Elmanto, Elden (2016), "A primer for unstable motivic homotopy theory", arXiv:1605.00929 [math.AG]
=== Motivic homotopy ===
==== Foundations ====
Isaksen, Daniel C.; Paul Arne Østvær (2018), "Motivic stable homotopy groups", arXiv:1811.05729 [math.AT]
Morel, Fabien; Voevodsky, Vladimir (1999), "A1-homotopy theory of schemes" (PDF), Publications Mathématiques de l'IHÉS, 90 (90): 45–143, doi:10.1007/BF02698831, MR 1813224, S2CID 14420180, retrieved 9 May 2008
Voevodsky, Vladimir (1998), "A1-homotopy theory" (PDF), Documenta Mathematica, Proceedings of the International Congress of Mathematicians, Vol. I (Berlin, 1998): 579–604, ISSN 1431-0635, MR 1648048
Voevodsky, Vladimir (2008), "Unstable motivic homotopy categories in Nisnevich and CDH-topologies", arXiv:0805.4576 [math.AG]
==== Motivic Steenrod algebra ====
Voevodsky, Vladimir (2001), "Reduced power operations in motivic cohomology", arXiv:math/0107109
Voevodsky, Vladimir (2008), "Motivic Eilenberg-Maclane spaces", arXiv:0805.4432 [math.AG]
==== Motivic Adams spectral sequence ====
The motivic Adams spectral sequence
Motivic chromatic homotopy theory
==== Spectra ====
Jardine. (1999) Motivic Symmetric Spectra
=== Bloch-Kato ===
The Gersten conjecture for Milnor K-theory
Tate twists and cohomology of P1
=== Applications ===
Hoyois, Marc; Kelly, Shane; Paul Arne Østvær (2013), "The motivic Steenrod algebra in positive characteristic", arXiv:1305.5690 [math.AG]
Isaksen, Daniel C.; Paul Arne Østvær (2018), "Motivic stable homotopy groups", arXiv:1811.05729 [math.AT]
Morel, Fabien (2004). "On the Motivic π0 of the Sphere Spectrum". Axiomatic, Enriched and Motivic Homotopy Theory. pp. 219–260. doi:10.1007/978-94-007-0948-5_7. ISBN 978-1-4020-1834-3.
Röndigs, Oliver; Spitzweck, Markus; Paul Arne Østvær (2016), "The first stable homotopy groups of motivic spheres", arXiv:1604.00365 [math.AT]
Voevodsky, Vladimir (2003), "On the zero slice of the sphere spectrum", arXiv:math/0301013
Ormsby, Kyle; Röndigs, Oliver; Paul Arne Østvær (2017), "Vanishing in stable motivic homotopy sheaves", arXiv:1704.04744 [math.AT]
=== References ===
Bachmann, Tom (2018), "Motivic and Real Etale Stable Homotopy Theory", Compositio Mathematica, 154 (5): 883–917, arXiv:1608.08855, doi:10.1112/S0010437X17007710, S2CID 119305101 | Wikipedia/A1_homotopy_theory |
In algebraic topology, the cellular approximation theorem states that a map between CW-complexes can always be taken to be of a specific type. Concretely, if X and Y are CW-complexes, and f : X → Y is a continuous map, then f is said to be cellular, if f takes the n-skeleton of X to the n-skeleton of Y for all n, i.e. if
f(X^n) ⊆ Y^n
for all n. The content of the cellular approximation theorem is then that any continuous map f : X → Y between CW-complexes X and Y is homotopic to a cellular map, and if f is already cellular on a subcomplex A of X, then we can furthermore choose the homotopy to be stationary on A. From an algebraic topological viewpoint, any map between CW-complexes can thus be taken to be cellular.
== Idea of proof ==
The proof is given by induction on n, with the statement that f is cellular on the skeleton X^n. For the base case n = 0, notice that every path-component of Y must contain a 0-cell. The image under f of a 0-cell of X can thus be connected to a 0-cell of Y by a path, and this path gives a homotopy from f to a map that is cellular on the 0-skeleton of X.
Assume inductively that f is cellular on the (n − 1)-skeleton of X, and let e^n be an n-cell of X. The closure of e^n is compact in X, being the image of the characteristic map of the cell, and hence the image of the closure of e^n under f is also compact in Y. It is a general result about CW-complexes that any compact subspace of a CW-complex meets (that is, intersects non-trivially) only finitely many cells of the complex. Thus f(e^n) meets at most finitely many cells of Y, so we can take e^k ⊆ Y to be a cell of highest dimension meeting f(e^n). If k ≤ n, the map f is already cellular on e^n, since in this case only cells of the n-skeleton of Y meet f(e^n), so we may assume that k > n. It is then a technical, non-trivial result (see Hatcher) that the restriction of f to X^{n−1} ∪ e^n can be homotoped relative to X^{n−1} to a map missing a point p ∈ e^k. Since Y^k − {p} deformation retracts onto the subspace Y^k − e^k, we can further homotope the restriction of f to X^{n−1} ∪ e^n to a map, say g, with the property that g(e^n) misses the cell e^k of Y, still relative to X^{n−1}. Since f(e^n) met only finitely many cells of Y to begin with, we can repeat this process finitely many times to make f(e^n) miss all cells of Y of dimension larger than n.
We repeat this process for every n-cell of X, fixing cells of the subcomplex A on which f is already cellular, and we thus obtain a homotopy (relative to the (n − 1)-skeleton of X and the n-cells of A) of the restriction of f to Xn to a map cellular on all cells of X of dimension at most n. Using then the homotopy extension property to extend this to a homotopy on all of X, and patching these homotopies together, will finish the proof. For details, consult Hatcher.
== Applications ==
=== Some homotopy groups ===
The cellular approximation theorem can be used to immediately calculate some homotopy groups. In particular, if n < k, then π_n(S^k) = 0. Give S^n and S^k their canonical CW-structures, with one 0-cell each, one n-cell for S^n, and one k-cell for S^k. Any base-point-preserving map f : S^n → S^k is then homotopic to a map whose image lies in the n-skeleton of S^k, which consists of the base point only. That is, any such map is nullhomotopic.
=== Cellular approximation for pairs ===
Let f : (X, A) → (Y, B) be a map of CW-pairs; that is, f is a map from X to Y such that the image of A ⊆ X under f sits inside B. Then f is homotopic to a cellular map (X, A) → (Y, B). To see this, restrict f to A and use cellular approximation to obtain a homotopy of f to a cellular map on A. Use the homotopy extension property to extend this homotopy to all of X, and apply cellular approximation again to obtain a map cellular on X, without violating the cellular property on A.

As a consequence, a CW-pair (X, A) is n-connected if all cells of X − A have dimension strictly greater than n: if i ≤ n, then any map (D^i, ∂D^i) → (X, A) is homotopic to a cellular map of pairs, and since the n-skeleton of X sits inside A, any such map is homotopic to a map whose image lies in A, and hence it is 0 in the relative homotopy group π_i(X, A).
We have in particular that (X, X^n) is n-connected, so it follows from the long exact sequence of homotopy groups for the pair (X, X^n) that we have isomorphisms π_i(X^n) → π_i(X) for all i < n and a surjection π_n(X^n) → π_n(X).
=== CW approximation ===
For every space X one can construct a CW complex Z and a weak homotopy equivalence f : Z → X, called a CW approximation to X. Being a weak homotopy equivalence, a CW approximation induces isomorphisms on the homology and cohomology groups of X. Thus one can often use CW approximation to reduce a general statement to a simpler version that only concerns CW complexes.
The CW approximation is constructed by induction on the skeleta Z_i of Z, so that the maps (f_i)_* : π_k(Z_i) → π_k(X) are isomorphisms for k < i and surjective for k = i (for any basepoint). Then Z_{i+1} is built from Z_i by attaching (i+1)-cells that (for all basepoints)

are attached by mappings S^i → Z_i that generate the kernel of π_i(Z_i) → π_i(X) (and are mapped to X by the contractions of the corresponding spheroids), or
are attached by constant mappings and are mapped to X so as to generate π_{i+1}(X) (or π_{i+1}(X)/(f_i)_*(π_{i+1}(Z_i))).
The cellular approximation theorem then ensures that adding (i+1)-cells does not affect the isomorphisms π_k(Z_i) ≅ π_k(X) for k < i, while π_i(Z_i) gets quotiented by the classes of the attaching maps S^i → Z_i of these cells, giving π_i(Z_{i+1}) ≅ π_i(X). Surjectivity of π_{i+1}(Z_{i+1}) → π_{i+1}(X) is evident from the second step of the construction.
== References ==
Hatcher, Allen (2005), Algebraic topology, Cambridge University Press, ISBN 978-0-521-79540-1 | Wikipedia/CW_approximation |
In mathematics and specifically in topology, rational homotopy theory is a simplified version of homotopy theory for topological spaces, in which all torsion in the homotopy groups is ignored. It was founded by Dennis Sullivan (1977) and Daniel Quillen (1969). This simplification of homotopy theory makes certain calculations much easier.
Rational homotopy types of simply connected spaces can be identified with (isomorphism classes of) certain algebraic objects called Sullivan minimal models, which are commutative differential graded algebras over the rational numbers satisfying certain conditions.
A geometric application was the theorem of Sullivan and Micheline Vigué-Poirrier (1976): every simply connected closed Riemannian manifold X whose rational cohomology ring is not generated by one element has infinitely many geometrically distinct closed geodesics. The proof used rational homotopy theory to show that the Betti numbers of the free loop space of X are unbounded. The theorem then follows from a 1969 result of Detlef Gromoll and Wolfgang Meyer.
== Rational spaces ==
A continuous map f : X → Y of simply connected topological spaces is called a rational homotopy equivalence if it induces an isomorphism on homotopy groups tensored with the rational numbers ℚ. Equivalently, f is a rational homotopy equivalence if and only if it induces an isomorphism on singular homology groups with rational coefficients. The rational homotopy category (of simply connected spaces) is defined to be the localization of the category of simply connected spaces with respect to rational homotopy equivalences. The goal of rational homotopy theory is to understand this category, i.e. to determine the information that can be recovered from rational homotopy equivalences.
One basic result is that the rational homotopy category is equivalent to a full subcategory of the homotopy category of topological spaces, the subcategory of rational spaces. By definition, a rational space is a simply connected CW complex all of whose homotopy groups are vector spaces over the rational numbers. For any simply connected CW complex X, there is a rational space X_ℚ, unique up to homotopy equivalence, with a map X → X_ℚ that induces an isomorphism on homotopy groups tensored with the rational numbers. The space X_ℚ is called the rationalization of X. This is a special case of Sullivan's construction of the localization of a space at a given set of prime numbers.
One obtains equivalent definitions using homology rather than homotopy groups. Namely, a simply connected CW complex X is a rational space if and only if its homology groups H_i(X, ℤ) are rational vector spaces for all i > 0. The rationalization of a simply connected CW complex X is the unique rational space X_ℚ (up to homotopy equivalence) with a map X → X_ℚ that induces an isomorphism on rational homology. Thus, one has

π_i(X_ℚ) ≅ π_i(X) ⊗ ℚ

and

H_i(X_ℚ, ℤ) ≅ H_i(X, ℤ) ⊗ ℚ ≅ H_i(X, ℚ)

for all i > 0.
These results for simply connected spaces extend with little change to nilpotent spaces (spaces whose fundamental group is nilpotent and acts nilpotently on the higher homotopy groups). There are also several non-equivalent extensions of the notions of rational space and rationalization functor to the case of all spaces (Bousfield–Kan's ℚ-completion, Sullivan's rationalization, Bousfield's homology rationalization, Casacuberta–Peschke's Ω-rationalization and Gómez-Tato–Halperin–Tanré's π_1-fiberwise rationalization).
Computing the homotopy groups of spheres is a central open problem in homotopy theory. However, the rational homotopy groups of spheres were computed by Jean-Pierre Serre in 1951:

π_i(S^{2a−1}) ⊗ ℚ ≅ ℚ if i = 2a − 1, and 0 otherwise,

and

π_i(S^{2a}) ⊗ ℚ ≅ ℚ if i = 2a or i = 4a − 1, and 0 otherwise.
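Serre's description is simple enough to encode directly. The following sketch (the function name is ours) returns the dimension of π_i(S^n) ⊗ ℚ as a rational vector space:

```python
def rational_homotopy_rank(i, n):
    # dim_Q of pi_i(S^n) ⊗ Q according to Serre's theorem:
    # odd-dimensional spheres contribute only in degree n;
    # even-dimensional spheres in degrees n and 2n - 1
    if n % 2 == 1:
        return 1 if i == n else 0
    return 1 if i in (n, 2 * n - 1) else 0
```

For instance, the even sphere S^2 has rational homotopy in degrees 2 and 3 only, the degree-3 class corresponding to the Hopf map.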
This suggests the possibility of describing the whole rational homotopy category in a practically computable way. Rational homotopy theory has realized much of that goal.
In homotopy theory, spheres and Eilenberg–MacLane spaces are two very different types of basic spaces from which all spaces can be built. In rational homotopy theory, these two types of spaces become much closer. In particular, Serre's calculation implies that the rationalized sphere S^{2a−1}_Q is the Eilenberg–MacLane space K(Q, 2a − 1). More generally, let X be any space whose rational cohomology ring is a free graded-commutative algebra (a tensor product of a polynomial ring on generators of even degree and an exterior algebra on generators of odd degree). Then the rationalization X_Q is a product of Eilenberg–MacLane spaces. The hypothesis on the cohomology ring applies to any compact Lie group (or, more generally, any loop space). For example, for the unitary group SU(n),
{\displaystyle \operatorname {SU} (n)_{\mathbb {Q} }\simeq S_{\mathbb {Q} }^{3}\times S_{\mathbb {Q} }^{5}\times \cdots \times S_{\mathbb {Q} }^{2n-1}.}
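This decomposition determines the rational homotopy of SU(n): one copy of Q in each odd degree 3, 5, …, 2n − 1, since by Serre's theorem an odd rationalized sphere contributes a single Q. A sketch (the function name is mine):

```python
def rational_homotopy_su(n):
    """Degrees in which pi_*(SU(n)) tensor Q is nonzero (one copy of Q each),
    read off from SU(n)_Q ~ S^3_Q x S^5_Q x ... x S^(2n-1)_Q: the odd
    rationalized sphere S^(2k-1)_Q contributes Q in degree 2k-1 only."""
    return list(range(3, 2 * n, 2))
```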
== Cohomology ring and homotopy Lie algebra ==
There are two basic invariants of a space X in the rational homotopy category: the rational cohomology ring H^*(X, Q) and the homotopy Lie algebra π_*(X) ⊗ Q. The rational cohomology is a graded-commutative algebra over Q, and the homotopy groups form a graded Lie algebra via the Whitehead product. (More precisely, writing ΩX for the loop space of X, we have that π_*(ΩX) ⊗ Q is a graded Lie algebra over Q. In view of the isomorphism π_i(X) ≅ π_{i−1}(ΩX), this just amounts to a shift of the grading by 1.) For example, Serre's theorem above says that π_*(ΩS^n) ⊗ Q is the free graded Lie algebra on one generator of degree n − 1.
Another way to think of the homotopy Lie algebra is that the homology of the loop space of X is the universal enveloping algebra of the homotopy Lie algebra:
{\displaystyle H_{*}(\Omega X,\mathbb {Q} )\cong U(\pi _{*}(\Omega X)\otimes \mathbb {Q} ).}
Conversely, one can reconstruct the rational homotopy Lie algebra from the homology of the loop space as the subspace of primitive elements in the Hopf algebra H_*(ΩX, Q).
A central result of the theory is that the rational homotopy category can be described in a purely algebraic way; in fact, in two different algebraic ways. First, Quillen showed that the rational homotopy category is equivalent to the homotopy category of connected differential graded Lie algebras. (The associated graded Lie algebra ker(d)/im(d) is the homotopy Lie algebra.) Second, Quillen showed that the rational homotopy category is equivalent to the homotopy category of 1-connected differential graded cocommutative coalgebras. (The associated coalgebra is the rational homology of X as a coalgebra; the dual vector space is the rational cohomology ring.) These equivalences were among the first applications of Quillen's theory of model categories.
In particular, the second description implies that for any graded-commutative Q-algebra A of the form
{\displaystyle A=\mathbb {Q} \oplus A^{2}\oplus A^{3}\oplus \cdots ,}
with each vector space A^i of finite dimension, there is a simply connected space X whose rational cohomology ring is isomorphic to A. (By contrast, there are many restrictions, not completely understood, on the integral or mod p cohomology rings of topological spaces, for prime numbers p.) In the same spirit, Sullivan showed that any graded-commutative Q-algebra with A^1 = 0 that satisfies Poincaré duality is the cohomology ring of some simply connected smooth closed manifold, except in dimension 4a; in that case, one also needs to assume that the intersection pairing on A^{2a} is of the form ∑ ±x_i² over Q.
One may ask how to pass between the two algebraic descriptions of the rational homotopy category. In short, a Lie algebra determines a graded-commutative algebra by Lie algebra cohomology, and an augmented commutative algebra determines a graded Lie algebra by reduced André–Quillen cohomology. More generally, there are versions of these constructions for differential graded algebras. This duality between commutative algebras and Lie algebras is a version of Koszul duality.
== Sullivan algebras ==
For spaces whose rational homology in each degree has finite dimension, Sullivan classified all rational homotopy types in terms of simpler algebraic objects, Sullivan algebras. By definition, a Sullivan algebra is a commutative differential graded algebra over the rationals Q whose underlying algebra is the free commutative graded algebra ⋀(V) on a graded vector space
{\displaystyle V=\bigoplus _{n>0}V^{n},}
satisfying the following "nilpotence condition" on its differential d: the space V is the union of an increasing series of graded subspaces V(0) ⊆ V(1) ⊆ ⋯, where d = 0 on V(0) and d(V(k)) is contained in ⋀(V(k − 1)). In the context of differential graded algebras A, "commutative" is used to mean graded-commutative; that is,
{\displaystyle ab=(-1)^{ij}ba}
for a in A^i and b in A^j.
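The sign rule can be made concrete with a small sketch (the helper name and encoding are mine, purely for illustration): reorder a product of generators into a canonical order while tracking the Koszul sign, with an odd-degree generator squaring to zero:

```python
def koszul_sort(word, degree):
    """Sort a product of generators into canonical order, tracking the
    Koszul sign: swapping adjacent factors a, b multiplies by (-1)^(|a||b|),
    i.e. by -1 exactly when both degrees are odd.  Returns (sign, word),
    with sign 0 if an odd-degree generator repeats (since x^2 = 0)."""
    word = list(word)
    sign = 1
    # bubble sort, accumulating the sign of each adjacent transposition
    for i in range(len(word)):
        for j in range(len(word) - 1 - i):
            if word[j] > word[j + 1]:
                if degree[word[j]] % 2 == 1 and degree[word[j + 1]] % 2 == 1:
                    sign = -sign
                word[j], word[j + 1] = word[j + 1], word[j]
    # an odd-degree generator squared vanishes
    for a, b in zip(word, word[1:]):
        if a == b and degree[a] % 2 == 1:
            return 0, tuple(word)
    return sign, tuple(word)
```

For example, two odd-degree generators anticommute, while an even-degree generator commutes with everything.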
The Sullivan algebra is called minimal if the image of d is contained in ⋀⁺(V)², where ⋀⁺(V) is the direct sum of the positive-degree subspaces of ⋀(V).
A Sullivan model for a commutative differential graded algebra A is a Sullivan algebra ⋀(V) with a homomorphism ⋀(V) → A which induces an isomorphism on cohomology. If A^0 = Q, then A has a minimal Sullivan model, which is unique up to isomorphism. (Warning: a minimal Sullivan algebra with the same cohomology algebra as A need not be a minimal Sullivan model for A: it is also necessary that the isomorphism of cohomology be induced by a homomorphism of differential graded algebras. There are examples of non-isomorphic minimal Sullivan models with isomorphic cohomology algebras.)
== The Sullivan minimal model of a topological space ==
For any topological space X, Sullivan defined a commutative differential graded algebra A_{PL}(X), called the algebra of polynomial differential forms on X with rational coefficients. An element of this algebra consists of (roughly) a polynomial form on each singular simplex of X, compatible with face and degeneracy maps. This algebra is usually very large (of uncountable dimension) but can be replaced by a much smaller algebra. More precisely, any differential graded algebra with the same Sullivan minimal model as A_{PL}(X) is called a model for the space X. When X is simply connected, such a model determines the rational homotopy type of X.
To any simply connected CW complex X with all rational homology groups of finite dimension, there is a minimal Sullivan model ⋀V for A_{PL}(X), which has the property that V^1 = 0 and all the V^k have finite dimension. This is called the Sullivan minimal model of X; it is unique up to isomorphism. This gives an equivalence between rational homotopy types of such spaces and such algebras, with the properties:
The rational cohomology of the space is the cohomology of its Sullivan minimal model.
The spaces of indecomposables in V are the duals of the rational homotopy groups of the space X.
The Whitehead product on rational homotopy is the dual of the "quadratic part" of the differential d.
Two spaces have the same rational homotopy type if and only if their minimal Sullivan algebras are isomorphic.
There is a simply connected space X corresponding to each possible Sullivan algebra with V^1 = 0 and all the V^k of finite dimension.
When X is a smooth manifold, the differential algebra of smooth differential forms on X (the de Rham complex) is almost a model for X; more precisely it is the tensor product of a model for X with the reals and therefore determines the real homotopy type. One can go further and define the p-completed homotopy type of X for a prime number p. Sullivan's "arithmetic square" reduces many problems in homotopy theory to the combination of rational and p-completed homotopy theory, for all primes p.
The construction of Sullivan minimal models for simply connected spaces extends to nilpotent spaces. For more general fundamental groups, things get more complicated; for example, the rational homotopy groups of a finite CW complex (such as the wedge S^1 ∨ S^2) can be infinite-dimensional vector spaces.
== Formal spaces ==
A commutative differential graded algebra A, again with A^0 = Q, is called formal if A has a model with vanishing differential. This is equivalent to requiring that the cohomology algebra of A (viewed as a differential algebra with trivial differential) is a model for A (though it does not have to be the minimal model). Thus the rational homotopy type of a formal space is completely determined by its cohomology ring.
Examples of formal spaces include spheres, H-spaces, symmetric spaces, and compact Kähler manifolds. Formality is preserved under products and wedge sums. For manifolds, formality is preserved by connected sums.
On the other hand, closed nilmanifolds are almost never formal: if M is a formal nilmanifold, then M must be the torus of some dimension. The simplest example of a non-formal nilmanifold is the Heisenberg manifold, the quotient of the Heisenberg group of real 3×3 upper triangular matrices with 1's on the diagonal by its subgroup of matrices with integral coefficients. Closed symplectic manifolds need not be formal: the simplest example is the Kodaira–Thurston manifold (the product of the Heisenberg manifold with a circle). There are also examples of non-formal, simply connected symplectic closed manifolds.
Non-formality can often be detected by Massey products. Indeed, if a differential graded algebra A is formal, then all (higher order) Massey products must vanish. The converse is not true: formality means, roughly speaking, the "uniform" vanishing of all Massey products. The complement of the Borromean rings is a non-formal space: it supports a nontrivial triple Massey product.
== Examples ==
If X is a sphere of odd dimension 2n + 1 > 1, its minimal Sullivan model has one generator a of degree 2n + 1 with da = 0, and a basis of elements 1, a.
If X is a sphere of even dimension 2n > 0, its minimal Sullivan model has two generators a and b of degrees 2n and 4n − 1, with db = a², da = 0, and a basis of elements 1, a, b → a², ab → a³, a²b → a⁴, …, where the arrow indicates the action of d.
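A minimal sketch (the function name and monomial encoding are mine) that recomputes the cohomology of this model directly from the rule d(a^i b) = a^{i+2}, recovering H^*(S^{2n}; Q):

```python
def sphere_model_cohomology(n, max_deg):
    """Cohomology of the minimal Sullivan model of S^{2n}: generators
    a (degree 2n, da = 0) and b (degree 4n-1, db = a^2).  Monomials are
    a^i or a^i b, and d(a^i b) = a^{i+2}.  Returns {degree: dimension}
    for the nonzero cohomology groups up to max_deg."""
    deg_a, deg_b = 2 * n, 4 * n - 1
    basis = []                          # monomials (i, e) meaning a^i b^e
    i = 0
    while i * deg_a <= max_deg:
        basis.append((i, 0))
        if i * deg_a + deg_b <= max_deg:
            basis.append((i, 1))        # b is odd-degree, so e is 0 or 1
        i += 1
    cocycles = [m for m in basis if m[1] == 0]                 # d(a^i) = 0
    boundaries = {(m[0] + 2, 0) for m in basis if m[1] == 1}   # d(a^i b) = a^{i+2}
    dims = {}
    for i, e in cocycles:
        if (i, e) not in boundaries:
            d = i * deg_a
            dims[d] = dims.get(d, 0) + 1
    return dims
```

For n = 1 this yields one class in degree 0 and one in degree 2, the cohomology of S².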
If X is the complex projective space CP^n with n > 0, its minimal Sullivan model has two generators u and x of degrees 2 and 2n + 1, with du = 0 and dx = u^{n+1}. It has a basis of elements 1, u, u², …, u^n, x → u^{n+1}, xu → u^{n+2}, ….
Suppose that V has four elements a, b, x, y of degrees 2, 3, 3 and 4, with differentials da = 0, db = 0, dx = a², dy = ab. Then this algebra is a minimal Sullivan algebra that is not formal. The cohomology algebra has nontrivial components only in dimensions 2, 3 and 6, generated respectively by a, b, and xb − ay. Any homomorphism from V to its cohomology algebra would map y to 0 and x to a multiple of b; so it would map xb − ay to 0. So V cannot be a model for its cohomology algebra. The corresponding topological spaces are two spaces with isomorphic rational cohomology rings but different rational homotopy types. Notice that xb − ay is in the Massey product ⟨[a],[a],[b]⟩.
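As a quick check that xb − ay is a cocycle (and hence defines a cohomology class), the graded Leibniz rule d(uv) = (du)v + (−1)^{|u|}u(dv) gives:

```latex
d(xb) = (dx)\,b + (-1)^{|x|}\,x\,(db) = a^2 b + 0 = a^2 b,\qquad
d(ay) = (da)\,y + (-1)^{|a|}\,a\,(dy) = 0 + a(ab) = a^2 b,
```

so d(xb − ay) = a²b − a²b = 0.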
== Elliptic and hyperbolic spaces ==
Rational homotopy theory revealed an unexpected dichotomy among finite CW complexes: either the rational homotopy groups are zero in sufficiently high degrees, or they grow exponentially. Namely, let X be a simply connected space such that H_*(X, Q) is a finite-dimensional Q-vector space (for example, a finite CW complex has this property). Define X to be rationally elliptic if π_*(X) ⊗ Q is also a finite-dimensional Q-vector space, and otherwise rationally hyperbolic. Then Félix and Halperin showed: if X is rationally hyperbolic, then there is a real number C > 1 and an integer N such that
{\displaystyle \sum _{i=1}^{n}\dim _{\mathbb {Q} }\pi _{i}(X)\otimes {\mathbb {Q} }\geq C^{n}}
for all n ≥ N.
For example, spheres, complex projective spaces, and homogeneous spaces for compact Lie groups are elliptic. On the other hand, "most" finite complexes are hyperbolic. Elliptic spaces satisfy strong restrictions, for example:
The rational cohomology ring of an elliptic space satisfies Poincaré duality.
If X is an elliptic space whose top nonzero rational cohomology group is in degree n, then each Betti number b_i(X) is at most the binomial coefficient \binom{n}{i} (with equality for the n-dimensional torus).
The Euler characteristic of an elliptic space X is nonnegative. If the Euler characteristic is positive, then all odd Betti numbers b_{2i+1}(X) are zero, and the rational cohomology ring of X is a complete intersection ring.
There are many other restrictions on the rational cohomology ring of an elliptic space.
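For instance (a sketch; the helper names are mine), the n-torus realizes the binomial bound exactly, its Betti numbers satisfy Poincaré duality, and its Euler characteristic vanishes:

```python
from math import comb

def torus_betti(n):
    """Rational Betti numbers of the n-torus T^n = (S^1)^n.
    H^*(T^n; Q) is an exterior algebra on n degree-1 generators,
    so b_i = C(n, i) -- the extreme case of the elliptic bound."""
    return [comb(n, i) for i in range(n + 1)]

def euler_characteristic(betti):
    """Alternating sum of Betti numbers."""
    return sum((-1) ** i * b for i, b in enumerate(betti))
```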
Bott's conjecture predicts that every simply connected closed Riemannian manifold with nonnegative sectional curvature should be rationally elliptic. Very little is known about the conjecture, although it holds for all known examples of such manifolds.
Halperin's conjecture asserts that the rational Serre spectral sequence of a fibration of simply connected spaces whose fiber is rationally elliptic with nonzero Euler characteristic degenerates at the second page.
A simply connected finite complex X is rationally elliptic if and only if the rational homology of the loop space ΩX grows at most polynomially. More generally, X is called integrally elliptic if the mod p homology of ΩX grows at most polynomially for every prime number p. All known Riemannian manifolds with nonnegative sectional curvature are in fact integrally elliptic.
== See also ==
Mandell's theorem – analogue of rational homotopy theory in p-adic settings
Chromatic homotopy theory
== Notes ==
== References ==
Félix, Yves; Halperin, Stephen; Thomas, Jean-Claude (1993), "Elliptic spaces II", L'Enseignement mathématique, 39 (1–2): 25–32, doi:10.5169/seals-60412, MR 1225255
Félix, Yves; Halperin, Stephen; Thomas, Jean-Claude (2001), Rational Homotopy Theory, New York: Springer Nature, doi:10.1007/978-1-4613-0105-9, ISBN 0-387-95068-0, MR 1802847
Félix, Yves; Halperin, Stephen; Thomas, Jean-Claude (2015), Rational Homotopy Theory II, Singapore: World Scientific, doi:10.1142/9473, ISBN 978-981-4651-42-4, MR 3379890
Félix, Yves; Oprea, John; Tanré, Daniel (2008), Algebraic Models in Geometry, Oxford: Oxford University Press, ISBN 978-0-19-920651-3, MR 2403898
Griffiths, Phillip A.; Morgan, John W. (1981), Rational Homotopy Theory and Differential Forms, Boston: Birkhäuser, ISBN 3-7643-3041-4, MR 0641551
Hess, Kathryn (1999), "A history of rational homotopy theory", in James, Ioan M. (ed.), History of Topology, Amsterdam: North-Holland, pp. 757–796, doi:10.1016/B978-044482375-5/50028-6, ISBN 0-444-82375-1, MR 1721122
Hess, Kathryn (2007), "Rational homotopy theory: a brief introduction" (PDF), Interactions between Homotopy Theory and Algebra, Contemporary Mathematics, vol. 436, American Mathematical Society, pp. 175–202, arXiv:math/0604626, doi:10.1090/conm/436/08409, ISBN 9780821838143, MR 2355774
Ivanov, Sergei O. (2022), "An overview of rationalization theories of non-simply connected spaces and non-nilpotent groups", Acta Mathematica Sinica, English Series, vol. 38, pp. 1705–1721, arXiv:2111.10694, doi:10.1007/s10114-022-2063-9
May, J. Peter; Ponto, Kathleen (2012), More Concise Algebraic Topology. Localization, Completion, and Model Categories (PDF), University of Chicago Press, ISBN 978-0-226-51178-8, MR 2884233
Pavlov, Aleksandr V. (2002), "Estimates for the Betti numbers of rationally elliptic spaces", Siberian Mathematical Journal, 43 (6): 1080–1085, doi:10.1023/A:1021173418920, MR 1946233
Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031
Sullivan, Dennis (1977), "Infinitesimal computations in topology", Publications Mathématiques de l'IHÉS, 47: 269–331, doi:10.1007/bf02684341, hdl:10338.dmlcz/128041, MR 0646078
Sullivan, Dennis (2001) [1994], "Rational homotopy theory", Encyclopedia of Mathematics, EMS Press
Sullivan, Dennis; Vigué-Poirrier, Micheline (1976), "The homology theory of the closed geodesic problem", Journal of Differential Geometry, 11 (4): 633–644, doi:10.4310/jdg/1214433729, MR 0455028
In algebraic topology, a branch of mathematics, a spectrum is an object representing a generalized cohomology theory. Every such cohomology theory is representable, as follows from Brown's representability theorem. This means that, given a cohomology theory
{\displaystyle {\mathcal {E}}^{*}:{\text{CW}}^{op}\to {\text{Ab}},}
there exist spaces E^k such that evaluating the cohomology theory in degree k on a space X is equivalent to computing the homotopy classes of maps to the space E^k, that is,
{\displaystyle {\mathcal {E}}^{k}(X)\cong \left[X,E^{k}\right].}
Note that there are several different categories of spectra leading to many technical difficulties, but they all determine the same homotopy category, known as the stable homotopy category. This is one of the key points for introducing spectra, because they form a natural home for stable homotopy theory.
== The definition of a spectrum ==
There are many variations of the definition: in general, a spectrum is any sequence X_n of pointed topological spaces or pointed simplicial sets together with structure maps
{\displaystyle S^{1}\wedge X_{n}\to X_{n+1},}
where ∧ is the smash product. The smash product of a pointed space X with a circle is homeomorphic to the reduced suspension of X, denoted ΣX.
The following definition is due to Frank Adams (1974): a spectrum (or CW-spectrum) is a sequence E := {E_n}_{n∈N} of CW complexes together with inclusions ΣE_n → E_{n+1} of the suspension ΣE_n as a subcomplex of E_{n+1}.
For other definitions, see symmetric spectrum and simplicial spectrum.
=== Homotopy groups of a spectrum ===
Some of the most important invariants of a spectrum are its homotopy groups. These groups mirror the definition of the stable homotopy groups of spaces, since the structure of the suspension maps is integral to their definition. Given a spectrum E, define the homotopy group π_n(E) as the colimit
{\displaystyle {\begin{aligned}\pi _{n}(E)&=\lim _{\to k}\pi _{n+k}(E_{k})\\&=\lim _{\to }\left(\cdots \to \pi _{n+k}(E_{k})\to \pi _{n+k+1}(E_{k+1})\to \cdots \right)\end{aligned}}}
where the maps are induced from the composition of the map
{\displaystyle \Sigma :\pi _{n+k}(E_{n})\to \pi _{n+k+1}(\Sigma E_{n})}
(that is, the map [S^{n+k}, E_n] → [S^{n+k+1}, ΣE_n] given by the functoriality of Σ) and the structure map ΣE_n → E_{n+1}. A spectrum is said to be connective if its π_k are zero for negative k.
== Examples ==
=== Eilenberg–MacLane spectrum ===
Consider singular cohomology H^n(X; A) with coefficients in an abelian group A. For a CW complex X, the group H^n(X; A) can be identified with the set of homotopy classes of maps from X to K(A, n), the Eilenberg–MacLane space with homotopy concentrated in degree n. We write this as
{\displaystyle [X,K(A,n)]=H^{n}(X;A).}
Then the corresponding spectrum HA has n-th space K(A, n); it is called the Eilenberg–MacLane spectrum of A. Note this construction can be used to embed any ring R into the category of spectra. This embedding forms the basis of spectral geometry, a model for derived algebraic geometry. One of the important properties of this embedding is the isomorphism
{\displaystyle {\begin{aligned}\pi _{i}(H(R/I)\wedge _{R}H(R/J))&\cong H_{i}\left(R/I\otimes ^{\mathbf {L} }R/J\right)\\&\cong \operatorname {Tor} _{i}^{R}(R/I,R/J)\end{aligned}}}
showing that the category of spectra keeps track of the derived information of commutative rings, where the smash product acts as the derived tensor product. Moreover, Eilenberg–MacLane spectra can be used to define theories such as topological Hochschild homology for commutative rings, a more refined theory than classical Hochschild homology.
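As a concrete (and hedged) illustration with R = Z, I = (m), J = (n) — the function name is mine — the homotopy groups above reduce to classical Tor groups, computable from the free resolution 0 → Z → Z → Z/m → 0:

```python
def tor_Z(m, n):
    """Orders of Tor_*^Z(Z/m, Z/n), via the free resolution
    0 -> Z --m--> Z -> Z/m -> 0.  Tensoring with Z/n leaves the
    two-term complex Z/n --m--> Z/n, so Tor_0 is the cokernel and
    Tor_1 the kernel of multiplication by m; both have order gcd(m, n)."""
    mult = [(m * x) % n for x in range(n)]      # multiplication by m on Z/n
    ker = sum(1 for y in mult if y == 0)        # order of the kernel
    coker = n // len(set(mult))                 # order of the cokernel
    return {"Tor_0": coker, "Tor_1": ker}
```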
=== Topological complex K-theory ===
As a second important example, consider topological K-theory. At least for X compact, K^0(X) is defined to be the Grothendieck group of the monoid of complex vector bundles on X. Also, K^1(X) is the group corresponding to vector bundles on the suspension of X. Topological K-theory is a generalized cohomology theory, so it gives a spectrum. The zeroth space is Z × BU, while the first space is U. Here U is the infinite unitary group and BU is its classifying space. By Bott periodicity we get
{\displaystyle K^{2n}(X)\cong K^{0}(X)}
and
{\displaystyle K^{2n+1}(X)\cong K^{1}(X)}
for all n, so all the spaces in the topological K-theory spectrum are given by either Z × BU or U. There is a corresponding construction using real vector bundles instead of complex vector bundles, which gives an 8-periodic spectrum.
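Bott periodicity makes the complex K-theory spectrum KU completely explicit; a small sketch (the function names are mine) of its spaces and its well-known homotopy groups, which are Z in even degrees and 0 in odd degrees:

```python
def ku_space(n):
    """The n-th space of the 2-periodic complex K-theory spectrum KU:
    Z x BU in even degrees, U in odd degrees, by Bott periodicity."""
    return "Z x BU" if n % 2 == 0 else "U"

def ku_homotopy(n):
    """pi_n(KU): Z in every even degree, 0 in every odd degree."""
    return "Z" if n % 2 == 0 else "0"
```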
=== Sphere spectrum ===
One of the quintessential examples of a spectrum is the sphere spectrum S. This is a spectrum whose homotopy groups are given by the stable homotopy groups of spheres, so
{\displaystyle \pi _{n}(\mathbb {S} )=\pi _{n}^{\mathbb {S} }.}
We can write down this spectrum explicitly as S_i = S^i, where S_0 = {0, 1}. Note that the smash product equivalence
{\displaystyle S^{n}\wedge S^{m}\simeq S^{n+m}}
induces a ring structure on S. Moreover, in the category of symmetric spectra, this is the initial object, analogous to Z in the category of commutative rings.
=== Thom spectra ===
Another canonical family of examples comes from the Thom spectra representing various cobordism theories: real cobordism MO, complex cobordism MU, framed cobordism, spin cobordism MSpin, string cobordism MString, and so on. In fact, for any topological group G there is a Thom spectrum MG.
=== Suspension spectrum ===
A spectrum may be constructed out of a space. The suspension spectrum of a space X, denoted Σ^∞X, is the spectrum with X_n = S^n ∧ X (the structure maps are the identity). For example, the suspension spectrum of the 0-sphere is the sphere spectrum discussed above. The homotopy groups of this spectrum are then the stable homotopy groups of X, so
{\displaystyle \pi _{n}(\Sigma ^{\infty }X)=\pi _{n}^{\mathbb {S} }(X).}
The construction of the suspension spectrum implies that every space can be considered as a cohomology theory. In fact, it defines a functor
{\displaystyle \Sigma ^{\infty }:h{\text{CW}}\to h{\text{Spectra}}}
from the homotopy category of CW complexes to the homotopy category of spectra. The morphisms are given by
{\displaystyle [\Sigma ^{\infty }X,\Sigma ^{\infty }Y]={\underset {\to n}{\operatorname {colim} {}}}[\Sigma ^{n}X,\Sigma ^{n}Y],}
which by the Freudenthal suspension theorem eventually stabilizes. By this we mean that
{\displaystyle \left[\Sigma ^{N}X,\Sigma ^{N}Y\right]\simeq \left[\Sigma ^{N+1}X,\Sigma ^{N+1}Y\right]\simeq \cdots }
and
{\displaystyle \left[\Sigma ^{\infty }X,\Sigma ^{\infty }Y\right]\simeq \left[\Sigma ^{N}X,\Sigma ^{N}Y\right]}
for some finite integer N. There is an inverse construction Ω^∞, which takes a spectrum E and forms the space
{\displaystyle \Omega ^{\infty }E={\underset {\to n}{\operatorname {colim} {}}}\Omega ^{n}E_{n},}
called the infinite loop space of the spectrum. For a CW complex X,
{\displaystyle \Omega ^{\infty }\Sigma ^{\infty }X={\underset {\to }{\operatorname {colim} {}}}\Omega ^{n}\Sigma ^{n}X,}
and this construction comes with an inclusion X → Ω^nΣ^nX for every n, hence gives a map
{\displaystyle X\to \Omega ^{\infty }\Sigma ^{\infty }X}
which is injective. Unfortunately, these two structures, together with the smash product, lead to significant complexity in the theory of spectra, because there cannot exist a single category of spectra satisfying a list of five axioms relating these structures. The above adjunction is valid only in the homotopy categories of spaces and spectra, not in any specific category of spectra (as opposed to its homotopy category).
=== Ω-spectrum ===
An Ω-spectrum is a spectrum such that the adjoint of each structure map (that is, the map X_n → ΩX_{n+1}) is a weak equivalence. The K-theory spectrum of a ring is an example of an Ω-spectrum.
=== Ring spectrum ===
A ring spectrum is a spectrum X such that the diagrams that describe the ring axioms in terms of smash products commute "up to homotopy" (the unit S^0 → X corresponding to the identity). For example, the spectrum of topological K-theory is a ring spectrum. A module spectrum may be defined analogously.
For many more examples, see the list of cohomology theories.
== Functions, maps, and homotopies of spectra ==
There are three natural categories whose objects are spectra, whose morphisms are the functions, or maps, or homotopy classes defined below.
A function between two spectra E and F is a sequence of maps from E_n to F_n that commute with the maps ΣE_n → E_{n+1} and ΣF_n → F_{n+1}.
Given a spectrum E_n, a subspectrum F_n is a sequence of subcomplexes that is also a spectrum. As each i-cell in E_j suspends to an (i + 1)-cell in E_{j+1}, a cofinal subspectrum is a subspectrum for which each cell of the parent spectrum is eventually contained in the subspectrum after a finite number of suspensions. Spectra can then be turned into a category by defining a map of spectra f : E → F to be a function from a cofinal subspectrum G of E to F, where two such functions represent the same map if they coincide on some cofinal subspectrum. Intuitively, such a map of spectra does not need to be defined everywhere, just eventually, and two maps that coincide on a cofinal subspectrum are said to be equivalent.
This gives the category of spectra (and maps), which is a major tool. There is a natural embedding of the category of pointed CW complexes into this category: it takes Y to the suspension spectrum in which the n-th complex is Σ^nY.
The smash product of a spectrum E and a pointed complex X is the spectrum given by (E ∧ X)_n = E_n ∧ X (associativity of the smash product yields immediately that this is indeed a spectrum). A homotopy of maps between spectra corresponds to a map (E ∧ I⁺) → F, where I⁺ is the disjoint union [0, 1] ⊔ {∗}, with ∗ taken to be the basepoint.
The stable homotopy category, or homotopy category of (CW) spectra is defined to be the category whose objects are spectra and whose morphisms are homotopy classes of maps between spectra. Many other definitions of spectrum, some appearing very different, lead to equivalent stable homotopy categories.
Finally, we can define the suspension of a spectrum by (ΣE)_n = E_{n+1}. This translation suspension is invertible, as we can desuspend too, by setting (Σ^{−1}E)_n = E_{n−1}.
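This shift can be sketched in a couple of lines (representing a spectrum as a function n ↦ E_n is my own device, purely for illustration):

```python
def shift(spectrum, k=1):
    """Suspension of a spectrum as pure reindexing: (Sigma^k E)_n = E_(n+k).
    Negative k desuspends, so shift(shift(E), -1) recovers E."""
    return lambda n: spectrum(n + k)

# the sphere spectrum, with n-th space S^n
sphere = lambda n: f"S^{n}"
```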
== The triangulated homotopy category of spectra ==
The stable homotopy category is additive: maps can be added by using a variant of the track addition used to define homotopy groups. Thus homotopy classes from one spectrum to another form an abelian group. Furthermore, the stable homotopy category is triangulated (Vogt (1970)), the shift being given by suspension and the distinguished triangles by the mapping cone sequences of spectra
{\displaystyle X\rightarrow Y\rightarrow Y\cup CX\rightarrow (Y\cup CX)\cup CY\cong \Sigma X.}
== Smash products of spectra ==
The smash product of spectra extends the smash product of CW complexes. It makes the stable homotopy category into a monoidal category; in other words it behaves like the (derived) tensor product of abelian groups. A major problem with the smash product is that obvious ways of defining it make it associative and commutative only up to homotopy. Some more recent definitions of spectra, such as symmetric spectra, eliminate this problem, and give a symmetric monoidal structure at the level of maps, before passing to homotopy classes.
The smash product is compatible with the triangulated category structure. In particular the smash product of a distinguished triangle with a spectrum is a distinguished triangle.
== Generalized homology and cohomology of spectra ==
We can define the (stable) homotopy groups of a spectrum to be those given by
{\displaystyle \pi _{n}E=[\Sigma ^{n}\mathbb {S} ,E],}
where {\displaystyle \mathbb {S} } is the sphere spectrum and {\displaystyle [X,Y]} is the set of homotopy classes of maps from {\displaystyle X} to {\displaystyle Y}.
We define the generalized homology theory of a spectrum E by
{\displaystyle E_{n}X=\pi _{n}(E\wedge X)=[\Sigma ^{n}\mathbb {S} ,E\wedge X]}
and define its generalized cohomology theory by
{\displaystyle E^{n}X=[\Sigma ^{-n}X,E].}
Here {\displaystyle X} can be a spectrum or (by using its suspension spectrum) a space.
== Technical complexities with spectra ==
One of the canonical complexities of working with spectra and of defining a category of spectra comes from the fact that no such category can satisfy all five of the following seemingly obvious axioms concerning the infinite loop space functor {\displaystyle Q:{\text{Top}}_{*}\to {\text{Top}}_{*}} sending {\displaystyle QX=\mathop {\text{colim}} _{\to n}\Omega ^{n}\Sigma ^{n}X}, a pair of adjoint functors {\displaystyle \Sigma ^{\infty }:{\text{Top}}_{*}\leftrightarrows {\text{Spectra}}_{*}:\Omega ^{\infty }}, and the smash product {\displaystyle \wedge } in both the category of spaces and the category of spectra. If we let {\displaystyle {\text{Top}}_{*}} denote the category of based, compactly generated, weak Hausdorff spaces, and {\displaystyle {\text{Spectra}}_{*}} denote a category of spectra, then no model of spectra can satisfy all five of the following axioms simultaneously:
{\displaystyle {\text{Spectra}}_{*}} is a symmetric monoidal category with respect to the smash product {\displaystyle \wedge }.
The functor {\displaystyle \Sigma ^{\infty }} is left adjoint to {\displaystyle \Omega ^{\infty }}.
The unit for the smash product {\displaystyle \wedge } is the sphere spectrum {\displaystyle \Sigma ^{\infty }S^{0}=\mathbb {S} }.
Either there is a natural transformation {\displaystyle \phi :\left(\Omega ^{\infty }E\right)\wedge \left(\Omega ^{\infty }E'\right)\to \Omega ^{\infty }\left(E\wedge E'\right)} or a natural transformation {\displaystyle \gamma :\left(\Sigma ^{\infty }E\right)\wedge \left(\Sigma ^{\infty }E'\right)\to \Sigma ^{\infty }\left(E\wedge E'\right)} which commutes with the unit object in both categories, and with the commutative and associative isomorphisms in both categories.
There is a natural weak equivalence {\displaystyle \theta :\Omega ^{\infty }\Sigma ^{\infty }X\to QX} for {\displaystyle X\in \operatorname {Ob} ({\text{Top}}_{*})}, which means that there is a commuting diagram:
{\displaystyle {\begin{matrix}X&\xrightarrow {\eta } &\Omega ^{\infty }\Sigma ^{\infty }X\\{\mathord {=}}\downarrow &&\downarrow \theta \\X&\xrightarrow {i} &QX\end{matrix}}}
where {\displaystyle \eta } is the unit map of the adjunction.
Because of this, the study of spectra is fragmented according to the model being used. For an overview, see the article cited above.
== History ==
A version of the concept of a spectrum was introduced in the 1958 doctoral dissertation of Elon Lages Lima. His advisor Edwin Spanier wrote further on the subject in 1959. Spectra were adopted by Michael Atiyah and George W. Whitehead in their work on generalized homology theories in the early 1960s. The 1964 doctoral thesis of J. Michael Boardman gave a workable definition of a category of spectra and of maps (not just homotopy classes) between them, as useful in stable homotopy theory as the category of CW complexes is in the unstable case. (This is essentially the category described above, and it is still used for many purposes: for other accounts, see Adams (1974) or Rainer Vogt (1970).) Important further theoretical advances have however been made since 1990, improving vastly the formal properties of spectra. Consequently, much recent literature uses modified definitions of spectrum: see Michael Mandell et al. (2001) for a unified treatment of these new approaches.
== See also ==
Ring spectrum
Symmetric spectrum
G-spectrum
Mapping spectrum
Suspension (topology)
Adams spectral sequence
== References ==
=== Introductory ===
Adams, J. Frank (1974). Stable homotopy and generalised homology. University of Chicago Press. ISBN 9780226005249.
Elmendorf, Anthony D.; Kříž, Igor; Mandell, Michael A.; May, J. Peter (1995), "Modern foundations for stable homotopy theory" (PDF), in James., Ioan M. (ed.), Handbook of algebraic topology, Amsterdam: North-Holland, pp. 213–253, CiteSeerX 10.1.1.55.8006, doi:10.1016/B978-044481779-2/50007-9, ISBN 978-0-444-81779-2, MR 1361891
=== Modern articles developing the theory ===
Mandell, Michael A.; May, J. Peter; Schwede, Stefan; Shipley, Brooke (2001), "Model categories of diagram spectra", Proceedings of the London Mathematical Society, Series 3, 82 (2): 441–512, CiteSeerX 10.1.1.22.3815, doi:10.1112/S0024611501012692, MR 1806878, S2CID 551246
=== Historically relevant articles ===
Atiyah, Michael F. (1961). "Bordism and cobordism". Proceedings of the Cambridge Philosophical Society. 57 (2): 200–8. doi:10.1017/s0305004100035064. S2CID 122937421.
Lima, Elon Lages (1959), "The Spanier–Whitehead duality in new homotopy categories", Summa Brasil. Math., 4: 91–148, MR 0116332
Lima, Elon Lages (1960), "Stable Postnikov invariants and their duals", Summa Brasil. Math., 4: 193–251
Vogt, Rainer (1970), Boardman's stable homotopy category, Lecture Notes Series, No. 21, Matematisk Institut, Aarhus Universitet, Aarhus, MR 0275431
Whitehead, George W. (1962), "Generalized homology theories", Transactions of the American Mathematical Society, 102 (2): 227–283, doi:10.1090/S0002-9947-1962-0137117-6
== External links ==
Spectral Sequences - Allen Hatcher - contains excellent introduction to spectra and applications for constructing Adams spectral sequence
An untitled book project about symmetric spectra
"Are spectra really the same as cohomology theories?". | Wikipedia/Spectrum_(algebraic_topology) |
In mathematics, the tensor-hom adjunction is the statement that the tensor product {\displaystyle -\otimes X} and hom-functor {\displaystyle \operatorname {Hom} (X,-)} form an adjoint pair:
{\displaystyle \operatorname {Hom} (Y\otimes X,Z)\cong \operatorname {Hom} (Y,\operatorname {Hom} (X,Z)).}
This is made more precise below. The order of terms in the phrase "tensor-hom adjunction" reflects their relationship: tensor is the left adjoint, while hom is the right adjoint.
== General statement ==
Say R and S are (possibly noncommutative) rings, and consider the right module categories (an analogous statement holds for left modules):
{\displaystyle {\mathcal {C}}=\mathrm {Mod} _{S}\quad {\text{and}}\quad {\mathcal {D}}=\mathrm {Mod} _{R}.}
Fix an {\displaystyle (R,S)}-bimodule {\displaystyle X} and define functors {\displaystyle F\colon {\mathcal {D}}\rightarrow {\mathcal {C}}} and {\displaystyle G\colon {\mathcal {C}}\rightarrow {\mathcal {D}}} as follows:
{\displaystyle F(Y)=Y\otimes _{R}X\quad {\text{for }}Y\in {\mathcal {D}},}
{\displaystyle G(Z)=\operatorname {Hom} _{S}(X,Z)\quad {\text{for }}Z\in {\mathcal {C}}.}
Then {\displaystyle F} is left adjoint to {\displaystyle G}. This means there is a natural isomorphism
{\displaystyle \operatorname {Hom} _{S}(Y\otimes _{R}X,Z)\cong \operatorname {Hom} _{R}(Y,\operatorname {Hom} _{S}(X,Z)).}
This is actually an isomorphism of abelian groups. More precisely, if {\displaystyle Y} is an {\displaystyle (A,R)}-bimodule and {\displaystyle Z} is a {\displaystyle (B,S)}-bimodule, then this is an isomorphism of {\displaystyle (B,A)}-bimodules. This is one of the motivating examples of the structure in a closed bicategory.
== Counit and unit ==
Like all adjunctions, the tensor-hom adjunction can be described by its counit and unit natural transformations. Using the notation from the previous section, the counit {\displaystyle \varepsilon :FG\to 1_{\mathcal {C}}} has components
{\displaystyle \varepsilon _{Z}:\operatorname {Hom} _{S}(X,Z)\otimes _{R}X\to Z}
given by evaluation: for {\displaystyle \phi \in \operatorname {Hom} _{S}(X,Z)} and {\displaystyle x\in X},
{\displaystyle \varepsilon (\phi \otimes x)=\phi (x).}
The components of the unit {\displaystyle \eta :1_{\mathcal {D}}\to GF},
{\displaystyle \eta _{Y}:Y\to \operatorname {Hom} _{S}(X,Y\otimes _{R}X),}
are defined as follows: for {\displaystyle y} in {\displaystyle Y}, {\displaystyle \eta _{Y}(y)\in \operatorname {Hom} _{S}(X,Y\otimes _{R}X)} is the right {\displaystyle S}-module homomorphism given by
{\displaystyle \eta _{Y}(y)(t)=y\otimes t\quad {\text{for }}t\in X.}
The counit and unit equations can now be explicitly verified. For {\displaystyle Y} in {\displaystyle {\mathcal {D}}}, the composite
{\displaystyle \varepsilon _{FY}\circ F(\eta _{Y}):Y\otimes _{R}X\to \operatorname {Hom} _{S}(X,Y\otimes _{R}X)\otimes _{R}X\to Y\otimes _{R}X}
is given on simple tensors of {\displaystyle Y\otimes X} by
{\displaystyle \varepsilon _{FY}\circ F(\eta _{Y})(y\otimes x)=\eta _{Y}(y)(x)=y\otimes x.}
Likewise,
{\displaystyle G(\varepsilon _{Z})\circ \eta _{GZ}:\operatorname {Hom} _{S}(X,Z)\to \operatorname {Hom} _{S}(X,\operatorname {Hom} _{S}(X,Z)\otimes _{R}X)\to \operatorname {Hom} _{S}(X,Z).}
For {\displaystyle \phi } in {\displaystyle \operatorname {Hom} _{S}(X,Z)}, the map {\displaystyle G(\varepsilon _{Z})\circ \eta _{GZ}(\phi )} is a right {\displaystyle S}-module homomorphism defined by
{\displaystyle G(\varepsilon _{Z})\circ \eta _{GZ}(\phi )(x)=\varepsilon _{Z}(\phi \otimes x)=\phi (x),}
and therefore {\displaystyle G(\varepsilon _{Z})\circ \eta _{GZ}(\phi )=\phi .}
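These triangle identities can be checked concretely in the simplest toy instance of the adjunction: in finite sets, the product functor F(Y) = Y × X is left adjoint to G(Z) = Hom(X, Z), with the same evaluation counit and tupling unit. The sketch below (sets and names chosen arbitrarily, not taken from the article) verifies that both composites are identities:

```python
import itertools

# Toy check of the triangle identities for the product/hom adjunction on
# finite sets: F(Y) = Y x X, G(Z) = Hom(X, Z), functions encoded as dicts.
X = ["a", "b"]
Y = [0, 1, 2]
Z = ["p", "q"]

def eta(y):
    """Unit component eta_Y(y): the map X -> Y x X, t |-> (y, t)."""
    return {t: (y, t) for t in X}

def eps(phi, x):
    """Counit component: evaluation Hom(X, W) x X -> W for any codomain W."""
    return phi[x]

# First triangle identity: eps_{FY} o F(eta_Y) = id on F(Y) = Y x X.
for y in Y:
    for x in X:
        assert eps(eta(y), x) == (y, x)

# Second triangle identity: G(eps_Z) o eta_{GZ} = id on G(Z) = Hom(X, Z).
for values in itertools.product(Z, repeat=len(X)):
    phi = dict(zip(X, values))                      # arbitrary phi : X -> Z
    eta_phi = {t: (phi, t) for t in X}              # eta_{GZ}(phi)
    assert {x: eps(*eta_phi[x]) for x in X} == phi  # post-compose with eps
```

Both loops run through every element (or every map), so this is an exhaustive check of the two equations above in this small example.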
== The Ext and Tor functors ==
The Hom functor {\displaystyle \hom(X,-)} commutes with arbitrary limits, while the tensor product functor {\displaystyle -\otimes X} commutes with arbitrary colimits that exist in their domain category. However, in general, {\displaystyle \hom(X,-)} fails to commute with colimits, and {\displaystyle -\otimes X} fails to commute with limits; this failure occurs even among finite limits or colimits. This failure to preserve short exact sequences motivates the definitions of the Ext functor and the Tor functor.
== In arithmetic ==
We can illustrate the tensor-hom adjunction in the category of finite sets and functions. Given a set {\displaystyle N}, its Hom functor takes any set {\displaystyle A} to the set of functions from {\displaystyle N} to {\displaystyle A}; the isomorphism class of this set of functions is the natural number {\displaystyle A^{N}}. Similarly, the tensor product {\displaystyle -\otimes N} takes a set {\displaystyle A} to its cartesian product with {\displaystyle N}; its isomorphism class is thus the natural number {\displaystyle AN}.
This allows us to interpret the isomorphism of hom-sets
{\displaystyle \operatorname {Hom} (Y\otimes X,Z)\cong \operatorname {Hom} (Y,\operatorname {Hom} (X,Z))}
that universally characterizes the tensor-hom adjunction as the categorification of the remarkably basic law of exponents
{\displaystyle Z^{YX}=(Z^{X})^{Y}.}
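A brute-force count in Python makes this concrete (the set sizes below are chosen arbitrarily): encoding a function between finite sets as a tuple of output values, both hom-sets have exactly |Z|^(|Y|·|X|) elements.

```python
import itertools

def functions(dom, cod):
    """All functions dom -> cod, each encoded as a tuple of outputs."""
    return list(itertools.product(cod, repeat=len(dom)))

X, Y, Z = range(2), range(3), range(4)      # arbitrary small finite sets

pairs = [(y, x) for y in Y for x in X]      # Y tensor X here is just Y x X
lhs = len(functions(pairs, Z))              # |Hom(Y x X, Z)|
rhs = len(functions(Y, functions(X, Z)))    # |Hom(Y, Hom(X, Z))|

assert lhs == rhs == len(Z) ** (len(Y) * len(X))   # 4**6 = 4096
```

Passing from the left-hand encoding to the right-hand one is exactly currying, which is the underlying bijection of the adjunction in this category.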
== See also ==
Currying
Eckmann–Hilton duality
Ext functor
Tor functor
Change of rings
== References ==
Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9 | Wikipedia/Tensor-hom_adjunction |
In mathematics, algebraic homotopy is a research program on homotopy theory proposed by J.H.C. Whitehead in his 1950 ICM talk, where he described it as:
The ultimate object of algebraic homotopy is to construct a purely algebraic theory, which is equivalent to homotopy theory in the same sort of way that 'analytic' is equivalent to 'pure' projective geometry.
In spirit, the program is somewhat similar to Grothendieck's homotopy hypothesis. However, according to Ronnie Brown, "Looking again at Esquisse d'un Programme, it seems that programme has currently little relation to Whitehead's."
== References ==
https://ncatlab.org/nlab/show/algebraic+homotopy
Handbook of Algebraic Topology edited by I.M. James
== Further reading ==
https://ncatlab.org/nlab/show/Algebraic+Homotopy, an entry about a book | Wikipedia/Algebraic_homotopy |
In mathematics, stable homotopy theory is the part of homotopy theory (and thus algebraic topology) concerned with all structure and phenomena that remain after sufficiently many applications of the suspension functor. A founding result was the Freudenthal suspension theorem, which states that given any pointed space
{\displaystyle X}, the homotopy groups {\displaystyle \pi _{n+k}(\Sigma ^{n}X)} stabilize for {\displaystyle n} sufficiently large. In particular, the homotopy groups of spheres {\displaystyle \pi _{n+k}(S^{n})} stabilize for {\displaystyle n\geq k+2}. For example,
{\displaystyle \langle {\text{id}}_{S^{1}}\rangle =\mathbb {Z} =\pi _{1}(S^{1})\cong \pi _{2}(S^{2})\cong \pi _{3}(S^{3})\cong \cdots }
{\displaystyle \langle \eta \rangle =\mathbb {Z} =\pi _{3}(S^{2})\to \pi _{4}(S^{3})\cong \pi _{5}(S^{4})\cong \cdots }
In the two examples above all the maps between homotopy groups are applications of the suspension functor. The first example is a standard corollary of the Hurewicz theorem, that {\displaystyle \pi _{n}(S^{n})\cong \mathbb {Z} }. In the second example the Hopf map, {\displaystyle \eta }, is mapped to its suspension {\displaystyle \Sigma \eta }, which generates {\displaystyle \pi _{4}(S^{3})\cong \mathbb {Z} /2}.
One of the most important problems in stable homotopy theory is the computation of stable homotopy groups of spheres. According to Freudenthal's theorem, in the stable range the homotopy groups of spheres depend not on the specific dimensions of the spheres in the domain and target, but on the difference in those dimensions. With this in mind, the k-th stable stem is
{\displaystyle \pi _{k}^{s}:=\lim _{n}\pi _{n+k}(S^{n}).}
This is an abelian group for all k. It is a theorem of Jean-Pierre Serre that these groups are finite for {\displaystyle k\neq 0}. In fact, composition makes {\displaystyle \pi _{*}^{s}} into a graded ring. A theorem of Goro Nishida states that all elements of positive grading in this ring are nilpotent. Thus the only prime ideals are the primes in {\displaystyle \pi _{0}^{s}\cong \mathbb {Z} }. So the structure of {\displaystyle \pi _{*}^{s}} is quite complicated.
In the modern treatment of stable homotopy theory, spaces are typically replaced by spectra. Following this line of thought, an entire stable homotopy category can be created. This category has many nice properties that are not present in the (unstable) homotopy category of spaces, following from the fact that the suspension functor becomes invertible. For example, the notion of cofibration sequence and fibration sequence are equivalent.
== See also ==
Adams filtration
Adams spectral sequence
Chromatic homotopy theory
Equivariant stable homotopy theory
Nilpotence theorem
== References ==
Adams, J. Frank (1966), Stable homotopy theory, Second revised edition. Lectures delivered at the University of California at Berkeley, vol. 1961, Berlin, New York: Springer-Verlag, MR 0196742
May, J. Peter (1999), "Stable Algebraic Topology, 1945–1966" (PDF), Stable algebraic topology, 1945--1966, Amsterdam: North-Holland, pp. 665–723, CiteSeerX 10.1.1.30.6299, doi:10.1016/B978-044482375-5/50025-0, ISBN 9780444823755, MR 1721119
Ravenel, Douglas C. (1992), Nilpotence and periodicity in stable homotopy theory, Annals of Mathematics Studies, vol. 128, Princeton University Press, ISBN 978-0-691-02572-8, MR 1192553 | Wikipedia/Stable_homotopy_theory |
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
Problems that are undecidable using classical computers remain undecidable using quantum computers. What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms, because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see quantum supremacy).
The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the most efficient known classical algorithm for factoring, the general number field sieve. Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task, a linear search.
== Overview ==
Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model.
Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.
== Algorithms based on the quantum Fourier transform ==
The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.
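As a quick illustration (not from the article), the transform itself is just the unitary discrete Fourier transform applied to the 2^n amplitudes of an n-qubit state; a direct O(N²) classical evaluation shows norm preservation, though this brute-force form of course misses the polynomial-size circuit the text refers to.

```python
import cmath
import math
import random

def qft(state):
    """Quantum Fourier transform of an amplitude vector of length N = 2**n,
    computed directly from the defining sum (brute force, O(N^2))."""
    N = len(state)
    return [sum(state[x] * cmath.exp(2j * math.pi * x * y / N)
                for x in range(N)) / math.sqrt(N)
            for y in range(N)]

# A random normalized 3-qubit state (8 amplitudes).
random.seed(0)
amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(8)]
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
state = [a / norm for a in amps]

out = qft(state)
# Unitarity: the transform preserves the total probability.
assert abs(sum(abs(a) ** 2 for a in out) - 1.0) < 1e-12
```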
=== Deutsch–Jozsa algorithm ===
The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. The algorithm determines whether a function f is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half).
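A classical simulation (illustrative only; the function names are ours) makes the single-query structure concrete: after the Hadamard–oracle–Hadamard sandwich, the amplitude of |0…0⟩ is the average of (−1)^f(x), which has absolute value 1 exactly when f is constant and 0 when f is balanced.

```python
def deutsch_jozsa(f, n):
    """Classify a promise function f on n bits as 'constant' or 'balanced'
    by computing the |0...0> amplitude after H^n, the phase oracle, H^n."""
    N = 2 ** n
    amp0 = sum((-1) ** f(x) for x in range(N)) / N  # average of (-1)^f(x)
    return "constant" if abs(amp0) > 0.5 else "balanced"

n = 3
assert deutsch_jozsa(lambda x: 1, n) == "constant"
# Parity of the bits is a balanced function: half the inputs map to 1.
assert deutsch_jozsa(lambda x: bin(x).count("1") % 2, n) == "balanced"
```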
=== Bernstein–Vazirani algorithm ===
The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP.
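The single-query behaviour can be simulated classically (a sketch, with names of our choosing): for the oracle f(x) = s·x (mod 2), the state after the Hadamard–oracle–Hadamard sequence is exactly |s⟩, so reading out the largest amplitude recovers the hidden string in one oracle use.

```python
def dot(a, b):
    """Inner product mod 2 of bit strings encoded as integers."""
    return bin(a & b).count("1") % 2

def bernstein_vazirani(oracle, n):
    """Recover the hidden string s from f(x) = s.x (mod 2): the amplitude
    at y after H^n, phase oracle, H^n is (1/N) sum_x (-1)^(f(x)+x.y),
    which is 1 when y = s and 0 otherwise."""
    N = 2 ** n
    amps = [sum((-1) ** (oracle(x) + dot(x, y)) for x in range(N)) / N
            for y in range(N)]
    return max(range(N), key=lambda y: abs(amps[y]))

s = 0b1011
assert bernstein_vazirani(lambda x: dot(s, x), 4) == s
```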
=== Simon's algorithm ===
Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring.
=== Quantum phase estimation algorithm ===
The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms.
=== Shor's algorithm ===
Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time, whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are in P or NP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time.
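The number-theoretic skeleton of the factoring algorithm can be sketched classically (illustrative; the quantum speedup lies entirely in the period-finding step, which is done here by brute force): find the order r of a random base a modulo N, then extract factors as gcds.

```python
from math import gcd

def find_order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N); this is the step Shor's
    algorithm performs with quantum period finding (brute force here)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical post-processing of Shor's algorithm for coprime a, N."""
    assert gcd(a, N) == 1
    r = find_order(a, N)
    if r % 2 == 1:
        return None                    # odd order: retry with another base
    y = pow(a, r // 2, N)
    f1, f2 = gcd(y - 1, N), gcd(y + 1, N)
    return (f1, f2) if 1 < f1 < N else None

# The textbook instance: a = 7 has order 4 mod 15, giving the factors 3, 5.
assert shor_classical(15, 7) == (3, 5)
```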
=== Hidden subgroup problem ===
The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R and factoring. There are efficient quantum algorithms known for the Abelian hidden subgroup problem. The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism and the dihedral group, which would solve certain lattice problems.
=== Estimating Gauss sums ===
A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.
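For illustration (an aside, not from the article), the classic quadratic Gauss sum g(p) = Σ_x exp(2πi x²/p) over an odd prime p has absolute value √p, which a direct numerical evaluation confirms:

```python
import cmath
import math

def quadratic_gauss_sum(p):
    """g(p) = sum over x in Z/p of exp(2*pi*i*x^2/p)."""
    return sum(cmath.exp(2j * math.pi * x * x / p) for x in range(p))

# |g(p)| = sqrt(p) for every odd prime p.
for p in (5, 7, 13):
    assert abs(abs(quadratic_gauss_sum(p)) - math.sqrt(p)) < 1e-9
```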
=== Fourier fishing and Fourier checking ===
Consider an oracle consisting of n random Boolean functions mapping n-bit strings to a Boolean value, with the goal of finding n n-bit strings z1, ..., zn such that, for the Hadamard-Fourier transform, at least 3/4 of the strings satisfy {\displaystyle |{\tilde {f}}(z_{i})|\geqslant 1} and at least 1/4 satisfy {\displaystyle |{\tilde {f}}(z_{i})|\geqslant 2}. This can be done in bounded-error quantum polynomial time (BQP).
== Algorithms based on amplitude amplification ==
Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm.
=== Grover's algorithm ===
Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only {\displaystyle O({\sqrt {N}})} queries instead of the {\displaystyle O(N)} queries required classically. Classically, {\displaystyle O(N)} queries are required even if we allow bounded-error probabilistic algorithms.
Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would not be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most {\displaystyle O({\sqrt[{3}]{N}})} steps, slightly faster than the {\displaystyle O({\sqrt {N}})} steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time.
=== Quantum counting ===
Quantum counting solves a generalization of the search problem: instead of just detecting whether a marked entry exists, it counts the number of marked entries in an unordered list. Specifically, it counts the number of marked entries in an {\displaystyle N}-element list with relative error at most {\displaystyle \varepsilon } by making only {\displaystyle \Theta \left(\varepsilon ^{-1}{\sqrt {N/k}}\right)} queries, where {\displaystyle k} is the number of marked elements in the list. More precisely, the algorithm outputs an estimate {\displaystyle k'} for {\displaystyle k} with accuracy {\displaystyle |k-k'|\leq \varepsilon k}.
== Algorithms based on quantum walks ==
A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems. They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.
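The ballistic spreading that underlies these speedups can be seen in a few lines (a discrete-time Hadamard-coin walk on the integer line; this toy simulation is ours, not from the article): after t steps the quantum walker's spread grows linearly in t, versus √t for the classical random walk.

```python
import math

def hadamard_walk(t):
    """t steps of the discrete-time quantum walk on Z with a Hadamard coin.
    State: dict (position, coin) -> amplitude; coin 0 moves left, 1 right."""
    h = 1 / math.sqrt(2)
    state = {(0, 0): h, (0, 1): h * 1j}    # symmetric initial coin state
    for _ in range(t):
        new = {}
        for (x, c), a in state.items():
            # Hadamard coin toss: H|0> = (|0>+|1>)/sqrt(2), H|1> = (|0>-|1>)/sqrt(2)
            for c2, coeff in ((0, h), (1, h if c == 0 else -h)):
                x2 = x - 1 if c2 == 0 else x + 1   # coin-conditional shift
                new[(x2, c2)] = new.get((x2, c2), 0) + coeff * a
        state = new
    prob = {}
    for (x, c), a in state.items():
        prob[x] = prob.get(x, 0.0) + abs(a) ** 2
    return prob

p = hadamard_walk(30)
assert abs(sum(p.values()) - 1) < 1e-9               # evolution stays unitary
spread = math.sqrt(sum(x * x * q for x, q in p.items()))
assert spread > math.sqrt(30)                        # faster than classical sqrt(t)
```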
=== Boson sampling problem ===
The Boson Sampling Problem in an experimental configuration assumes an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk. The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity. Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. In 2015, investigation predicted the sampling problem had similar complexity for inputs other than Fock-state photons and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs.
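The classical bottleneck mentioned above is the matrix permanent. Ryser's inclusion–exclusion formula, the best known exact method, still takes time exponential in the matrix size; a small illustration (the matrix is chosen arbitrarily):

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's O(2^n * n^2) inclusion-exclusion formula."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):   # column subsets of size r
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Like the determinant but with all signs positive: perm = 1*4 + 2*3 = 10.
assert permanent([[1, 2], [3, 4]]) == 10
```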
=== Element distinctness problem ===
The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, {\displaystyle \Omega (N)} queries are required for a list of size {\displaystyle N}; however, it can be solved in {\displaystyle \Theta (N^{2/3})} queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis, and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended that work to obtain the lower bound for all functions.
=== Triangle-finding problem ===
The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is {\displaystyle \Omega (N)}, but the best algorithm known requires {\displaystyle O(N^{1.297})} queries, an improvement over the previous best of {\displaystyle O(N^{1.3})} queries.
=== Formula evaluation ===
A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input.
A well-studied formula is the balanced binary tree with only NAND gates. This type of formula requires {\displaystyle \Theta (N^{c})} queries using randomness, where {\displaystyle c=\log _{2}(1+{\sqrt {33}})/4\approx 0.754}. With a quantum algorithm, however, it can be solved in {\displaystyle \Theta (N^{1/2})} queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model. The same result for the standard setting soon followed.
Fast quantum algorithms for more complicated formulas are also known.
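The classical randomized strategy behind the sublinear NAND-tree bound is simple to state (a sketch; the tree encoding as nested lists is ours): evaluate the two children of each gate in random order and short-circuit as soon as one child evaluates to 0, since a NAND with a 0 input is 1 regardless of its other input.

```python
import random

def eval_nand(tree):
    """Evaluate a NAND tree given as nested pairs with 0/1 leaves,
    visiting the two children in random order and short-circuiting."""
    if tree in (0, 1):
        return tree                 # leaf: an input bit
    kids = list(tree)
    random.shuffle(kids)            # random evaluation order
    for kid in kids:
        if eval_nand(kid) == 0:
            return 1                # NAND with a 0 input is 1; skip the rest
    return 0                        # both children evaluated to 1

# Balanced depth-2 tree on 4 leaves: NAND(NAND(0,1), NAND(1,0)) = NAND(1,1) = 0.
assert eval_nand([[0, 1], [1, 0]]) == 0
assert eval_nand([[1, 1], [0, 1]]) == 1
```

On random inputs the short-circuiting saves whole subtrees, which is where the {\displaystyle N^{0.754}} expected query count comes from.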
=== Group commutativity ===
The problem is to determine if a black-box group, given by k generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are {\displaystyle \Theta (k^{2})} and {\displaystyle \Theta (k)}, respectively. A quantum algorithm requires {\displaystyle \Omega (k^{2/3})} queries, while the best known algorithm uses {\displaystyle O(k^{2/3}\log k)} queries.
== BQP-complete problems ==
The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances. It is the quantum analogue to the classical complexity class BPP.
A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the class of BQP-complete problems are those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error).
=== Computing knot invariants ===
Witten had shown that the Chern–Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT and thereby approximate the Jones polynomial, which, as far as is known, is hard to compute classically in the worst case.
=== Quantum simulation ===
The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves." Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both bosonic and fermionic systems, as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits. Quantum computers can also efficiently simulate topological quantum field theories. In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as the Jones and HOMFLY polynomials and the Turaev–Viro invariant of three-dimensional manifolds.
=== Solving a linear system of equations ===
In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector of a given linear system of equations.
Provided that the linear system is sparse and has a low condition number κ, and that the user is interested in the result of a scalar measurement on the solution vector (rather than in the values of the solution vector itself), the algorithm has a runtime of O(log(N)·κ²), where N is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices).
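For scale, the quantity the quantum algorithm estimates, a scalar measurement ⟨x, M x⟩ on the solution of Ax = b, can be computed classically for a tiny system. The sketch below uses plain Gaussian elimination; the matrices and the measurement operator are made-up illustrative values:

```python
def solve(A, b):
    """Classical Gaussian elimination with partial pivoting
    for a small dense linear system Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):                               # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n                                      # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = solve(A, b)
# scalar "measurement": the expectation <x, M x> for a measurement matrix
Mm = [[1.0, 0.0], [0.0, -1.0]]
expval = sum(x[i] * Mm[i][j] * x[j] for i in range(2) for j in range(2))
```

The classical cost here grows with N; the quantum algorithm's appeal is that it estimates the analogous expectation value with runtime only logarithmic in N.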
== Hybrid quantum/classical algorithms ==
Hybrid quantum/classical algorithms combine quantum state preparation and measurement with classical optimization. These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator.
=== QAOA ===
The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory. The algorithm makes use of classical optimization of quantum operations to maximize an "objective function."
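The "objective function" in QAOA's outer classical loop is an ordinary combinatorial cost; for MaxCut it simply counts cut edges. In this minimal classical sketch, the graph is an arbitrary example and the brute-force search stands in for sampling the quantum circuit:

```python
from itertools import product

def maxcut_value(edges, bits):
    """Number of edges whose endpoints get different labels (the cut size)."""
    return sum(1 for u, v in edges if bits[u] != bits[v])

# a small example graph: a 4-cycle plus one chord
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# brute force over all 2^4 bitstrings stands in for the QAOA sampling step;
# the real algorithm tunes circuit angles so good bitstrings dominate
best = max(product([0, 1], repeat=4), key=lambda b: maxcut_value(edges, b))
```

In QAOA proper, a classical optimizer adjusts the circuit parameters to maximize the expected value of this same cost over measured bitstrings.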
=== Variational quantum eigensolver ===
The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian. It can also be extended to find excited energies of molecular Hamiltonians.
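The essential VQE loop can be mimicked classically for a 2×2 Hermitian matrix with a one-parameter ansatz ψ(θ) = (cos θ, sin θ): minimizing the energy expectation over θ recovers the smallest eigenvalue. This is a toy sketch; the matrix and the grid search are illustrative stand-ins for a molecular Hamiltonian and a real optimizer:

```python
import math

# toy "Hamiltonian": a 2x2 real symmetric (hence Hermitian) matrix
H = [[1.0, 0.5], [0.5, -1.0]]

def energy(theta):
    """Energy expectation <psi|H|psi> for the ansatz psi = (cos t, sin t)."""
    psi = [math.cos(theta), math.sin(theta)]
    return sum(psi[i] * H[i][j] * psi[j] for i in range(2) for j in range(2))

# a crude grid search stands in for the classical optimizer
ground = min(energy(k * math.pi / 2000) for k in range(2000))
```

The minimum found matches the exact ground-state eigenvalue, −√1.25 ≈ −1.118, which is what the variational principle guarantees for a sufficiently expressive ansatz.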
=== Contracted quantum eigensolver ===
The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule. It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.
== See also ==
Quantum machine learning
Quantum optimization algorithms
Quantum sort
Primality test
== References ==
== External links ==
The Quantum Algorithm Zoo: A comprehensive list of quantum algorithms that provide a speedup over the fastest known classical algorithms.
Andrew Childs' lecture notes on quantum algorithms
The Quantum search algorithm - brute force Archived 1 September 2018 at the Wayback Machine.
=== Surveys ===
Dalzell, Alexander M.; et al. (2023). "Quantum algorithms: A survey of applications and end-to-end complexities". arXiv:2310.03011 [quant-ph].
Smith, J.; Mosca, M. (2012). "Algorithms for Quantum Computers". Handbook of Natural Computing. pp. 1451–1492. arXiv:1001.0767. doi:10.1007/978-3-540-92910-9_43. ISBN 978-3-540-92909-3. S2CID 16565723.
Childs, A. M.; Van Dam, W. (2010). "Quantum algorithms for algebraic problems". Reviews of Modern Physics. 82 (1): 1–52. arXiv:0812.0380. Bibcode:2010RvMP...82....1C. doi:10.1103/RevModPhys.82.1. S2CID 119261679. | Wikipedia/Quantum_algorithm |
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its secondary and tertiary structure from primary structure. Structure prediction is different from the inverse problem of protein design.
Protein structure prediction is one of the most important goals pursued by computational biology and addresses Levinthal's paradox. Accurate structure prediction has important applications in medicine (for example, in drug design) and biotechnology (for example, in novel enzyme design).
Since 1994, the performance of current methods has been assessed every two years in the Critical Assessment of Structure Prediction (CASP) experiment. A continuous evaluation of protein structure prediction web servers is performed by the community project Continuous Automated Model EvaluatiOn (CAMEO3D).
== Protein structure and terminology ==
Proteins are chains of amino acids joined together by peptide bonds. Many conformations of this chain are possible due to the rotation of the main chain about the two torsion angles φ and ψ at the Cα atom (see figure). This conformational flexibility is responsible for differences in the three-dimensional structure of proteins. The peptide bonds in the chain are polar, i.e. they have separated positive and negative charges (partial charges): in the carbonyl group, which can act as a hydrogen bond acceptor, and in the NH group, which can act as a hydrogen bond donor. These groups can therefore interact in the protein structure. Proteins consist mostly of 20 different types of L-α-amino acids (the proteinogenic amino acids). These can be classified according to the chemistry of the side chain, which also plays an important structural role. Glycine takes on a special position: it has the smallest side chain, only one hydrogen atom, and therefore can increase the local flexibility in the protein structure. Cysteine, in contrast, can react with another cysteine residue to form a cystine, creating a disulfide cross-link that stabilizes the whole structure.
The protein structure can be considered as a sequence of secondary structure elements, such as α-helices and β-sheets. In these secondary structures, regular patterns of H-bonds are formed between the main chain NH and CO groups of spatially neighboring amino acids, and the amino acids have similar φ and ψ angles.
The formation of these secondary structures efficiently satisfies the hydrogen bonding capacities of the peptide bonds. The secondary structures can be tightly packed in the protein core in a hydrophobic environment, but they can also be present at the polar protein surface. Each amino acid side chain has a limited volume to occupy and a limited number of possible interactions with other nearby side chains, a situation that must be taken into account in molecular modeling and alignments.
=== α-helix ===
The α-helix is the most abundant type of secondary structure in proteins. The α-helix has 3.6 amino acids per turn with an H-bond formed between every fourth residue; the average length is 10 amino acids (3 turns) or 10 Å but varies from 5 to 40 (1.5 to 11 turns). The alignment of the H-bonds creates a dipole moment for the helix, with a resulting partial positive charge at the amino end of the helix. Because this region has free NH2 groups, it will interact with negatively charged groups such as phosphates. The most common location of α-helices is at the surface of protein cores, where they provide an interface with the aqueous environment. The inner-facing side of the helix tends to have hydrophobic amino acids and the outer-facing side hydrophilic amino acids. Thus, roughly every third or fourth amino acid along the chain will tend to be hydrophobic, a pattern that can be quite readily detected. In the leucine zipper motif, a repeating pattern of leucines on the facing sides of two adjacent helices is highly predictive of the motif. A helical-wheel plot can be used to show this repeated pattern. Other α-helices buried in the protein core or in cellular membranes have a higher and more regular distribution of hydrophobic amino acids, and such distributions are highly predictive of these structures. Helices exposed on the surface have a lower proportion of hydrophobic amino acids. Amino acid content can be predictive of an α-helical region. Regions richer in alanine (A), glutamic acid (E), leucine (L), and methionine (M) and poorer in proline (P), glycine (G), tyrosine (Y), and serine (S) tend to form an α-helix. Proline destabilizes or breaks an α-helix but can be present in longer helices, forming a bend.
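The periodicity described above can be quantified with a hydrophobic-moment calculation in the spirit of a helical-wheel plot: residues are placed every 100° around the helix axis (3.6 per turn) and their hydrophobicities are summed as vectors. This is a sketch; the hydrophobicity values below are an illustrative subset, not a full published scale:

```python
import math

# Illustrative hydrophobicity values (assumption: a simplified subset in
# the spirit of the Kyte-Doolittle scale, not the full published table).
HYDRO = {"A": 1.8, "L": 3.8, "E": -3.5, "M": 1.9, "K": -3.9, "S": -0.8}

def hydrophobic_moment(seq, deg_per_res=100.0):
    """Magnitude of the helical hydrophobic moment.

    Each residue is placed deg_per_res degrees further around the helix
    axis (100 degrees corresponds to 3.6 residues per turn), and the
    hydrophobicities are summed as 2-D vectors.  An amphipathic helix,
    hydrophobic on one face, gives a large moment; a uniform or
    non-periodic sequence gives a small one.
    """
    x = sum(HYDRO[a] * math.cos(math.radians(i * deg_per_res))
            for i, a in enumerate(seq))
    y = sum(HYDRO[a] * math.sin(math.radians(i * deg_per_res))
            for i, a in enumerate(seq))
    return math.hypot(x, y)
```

A uniform stretch spanning whole turns cancels to a moment near zero, which is why the hydrophobic face of a surface helix stands out against this baseline.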
=== β-sheet ===
β-sheets are formed by H-bonds between an average of 5–10 consecutive amino acids in one portion of the chain with another 5–10 farther down the chain. The interacting regions may be adjacent, with a short loop in between, or far apart, with other structures in between. Every chain may run in the same direction to form a parallel sheet, every other chain may run in the reverse chemical direction to form an antiparallel sheet, or the chains may be parallel and antiparallel to form a mixed sheet. The pattern of H-bonding is different in the parallel and antiparallel configurations. Each amino acid in the interior strands of the sheet forms two H-bonds with neighboring amino acids, whereas each amino acid on the outside strands forms only one bond with an interior strand. Looking across the sheet at right angles to the strands, more distant strands are rotated slightly counterclockwise to form a left-handed twist. The Cα atoms alternate above and below the sheet in a pleated structure, and the R side groups of the amino acids alternate above and below the pleats. The φ and ψ angles of the amino acids in sheets vary considerably in one region of the Ramachandran plot. It is more difficult to predict the location of β-sheets than of α-helices. The situation improves somewhat when the amino acid variation in multiple sequence alignments is taken into account.
=== Loops ===
Some parts of the protein have a fixed three-dimensional structure but do not form any regular secondary structures. They should not be confused with disordered or unfolded segments of proteins or random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. These parts are frequently called "loops" because they connect β-sheets and α-helices. Loops are usually located at the protein surface, and therefore mutations of their residues are more easily tolerated. Having more substitutions, insertions, and deletions in a certain region of a sequence alignment may be an indication of a loop. The positions of introns in genomic DNA may correlate with the locations of loops in the encoded protein. Loops also tend to have charged and polar amino acids and are frequently a component of active sites.
== Protein classification ==
Proteins may be classified according to both structural and sequential similarity. For structural classification, the sizes and spatial arrangements of secondary structures described in the above paragraph are compared in known three-dimensional structures. Classification based on sequence similarity was historically the first to be used. Initially, similarity based on alignments of whole sequences was performed. Later, proteins were classified on the basis of the occurrence of conserved amino acid patterns. Databases that classify proteins by one or more of these schemes are available.
In considering protein classification schemes, it is important to keep several observations in mind. First, two entirely different protein sequences from different evolutionary origins may fold into a similar structure. Conversely, the sequence of an ancient gene for a given structure may have diverged considerably in different species while at the same time maintaining the same basic structural features. Recognizing any remaining sequence similarity in such cases may be a very difficult task. Second, two proteins that share a significant degree of sequence similarity either with each other or with a third sequence also share an evolutionary origin and should share some structural features also. However, gene duplication and genetic rearrangements during evolution may give rise to new gene copies, which can then evolve into proteins with new function and structure.
=== Terms used for classifying protein structures and sequences ===
The more commonly used terms for evolutionary and structural relationships among proteins are listed below. Many additional terms are used for various kinds of structural features found in proteins. Descriptions of such terms may be found at the CATH Web site, the Structural Classification of Proteins (SCOP) Web site, and a Glaxo Wellcome tutorial on the Swiss bioinformatics Expasy Web site.
Active site
a localized combination of amino acid side groups within the tertiary (three-dimensional) or quaternary (protein subunit) structure that can interact with a chemically specific substrate and that provides the protein with biological activity. Proteins of very different amino acid sequences may fold into a structure that produces the same active site.
Architecture
the relative orientations of secondary structures in a three-dimensional structure, without regard to whether or not they share a similar loop structure.
Fold (topology)
a type of architecture that also has a conserved loop structure.
Blocks
a conserved amino acid sequence pattern in a family of proteins. The pattern includes a series of possible matches at each position in the represented sequences, but there are no inserted or deleted positions in the pattern or in the sequences. By way of contrast, sequence profiles are a type of scoring matrix that represents a similar set of patterns that includes insertions and deletions.
Class
a term used to classify protein domains according to their secondary structural content and organization. Four classes were originally recognized by Levitt and Chothia (1976), and several others have been added in the SCOP database. Three classes are given in the CATH database: mainly-α, mainly-β, and α–β, with the α–β class including both alternating α/β and α+β structures.
Core
the portion of a folded protein molecule that comprises the hydrophobic interior of α-helices and β-sheets. The compact structure brings together side groups of amino acids into close enough proximity so that they can interact. When comparing protein structures, as in the SCOP database, core is the region common to most of the structures that share a common fold or that are in the same superfamily. In structure prediction, core is sometimes defined as the arrangement of secondary structures that is likely to be conserved during evolutionary change.
Domain (sequence context)
a segment of a polypeptide chain that can fold into a three-dimensional structure irrespective of the presence of other segments of the chain. The separate domains of a given protein may interact extensively or may be joined only by a length of polypeptide chain. A protein with several domains may use these domains for functional interactions with different molecules.
Family (sequence context)
a group of proteins of similar biochemical function that are more than 50% identical when aligned. This same cutoff is still used by the Protein Information Resource (PIR). A protein family comprises proteins with the same function in different organisms (orthologous sequences) but may also include proteins in the same organism (paralogous sequences) derived from gene duplication and rearrangements. If a multiple sequence alignment of a protein family reveals a common level of similarity throughout the lengths of the proteins, PIR refers to the family as a homeomorphic family. The aligned region is referred to as a homeomorphic domain, and this region may comprise several smaller homology domains that are shared with other families. Families may be further subdivided into subfamilies or grouped into superfamilies based on respective higher or lower levels of sequence similarity. The SCOP database reports 1296 families and the CATH database (version 1.7 beta), reports 1846 families.
When the sequences of proteins with the same function are examined in greater detail, some are found to share high sequence similarity. They are obviously members of the same family by the above criteria. However, others are found that have very little, or even insignificant, sequence similarity with other family members. In such cases, the family relationship between two distant family members A and C can often be demonstrated by finding an additional family member B that shares significant similarity with both A and C. Thus, B provides a connecting link between A and C. Another approach is to examine distant alignments for highly conserved matches.
At a level of identity of 50%, proteins are likely to have the same three-dimensional structure, and the identical atoms in the sequence alignment will also superimpose within approximately 1 Å in the structural model. Thus, if the structure of one member of a family is known, a reliable prediction may be made for a second member of the family, and the higher the identity level, the more reliable the prediction. Protein structural modeling can be performed by examining how well the amino acid substitutions fit into the core of the three-dimensional structure.
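The identity level discussed above is a simple computation over an alignment. A minimal sketch (the helper name and sample alignment are illustrative):

```python
def percent_identity(a, b):
    """Percent identity of two aligned, equal-length sequences.

    Columns where either sequence has a gap ('-') are excluded from
    the denominator, a common (though not universal) convention.
    """
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)
```

For example, `percent_identity("ACDE-G", "ACDF-G")` scores 4 matches over 5 gap-free columns, i.e. 80%.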
Family (structural context)
as used in the FSSP database (Families of structurally similar proteins) and the DALI/FSSP Web site, two structures that have a significant level of structural similarity but not necessarily significant sequence similarity.
Fold
similar to structural motif, includes a larger combination of secondary structural units in the same configuration. Thus, proteins sharing the same fold have the same combination of secondary structures that are connected by similar loops. An example is the Rossmann fold, comprising several alternating α-helices and parallel β-strands. In the SCOP, CATH, and FSSP databases, the known protein structures have been classified into hierarchical levels of structural complexity with the fold as a basic level of classification.
Homologous domain (sequence context)
an extended sequence pattern, generally found by sequence alignment methods, that indicates a common evolutionary origin among the aligned sequences. A homology domain is generally longer than motifs. The domain may include all of a given protein sequence or only a portion of the sequence. Some domains are complex and made up of several smaller homology domains that became joined to form a larger one during evolution. A domain that covers an entire sequence is called the homeomorphic domain by PIR (Protein Information Resource).
Module
a region of conserved amino acid patterns comprising one or more motifs and considered to be a fundamental unit of structure or function. The presence of a module has also been used to classify proteins into families.
Motif (sequence context)
a conserved pattern of amino acids that is found in two or more proteins. In the Prosite catalog, a motif is an amino acid pattern that is found in a group of proteins that have a similar biochemical activity, and that often is near the active site of the protein. Examples of sequence motif databases are the Prosite catalog and the Stanford Motifs Database.
Motif (structural context)
a combination of several secondary structural elements produced by the folding of adjacent sections of the polypeptide chain into a specific three-dimensional configuration. An example is the helix-loop-helix motif. Structural motifs are also referred to as supersecondary structures and folds.
Position-specific scoring matrix (sequence context, also known as weight or scoring matrix)
represents a conserved region in a multiple sequence alignment with no gaps. Each matrix column represents the variation found in one column of the multiple sequence alignment.
Position-specific scoring matrix—3D (structural context)
represents the amino acid variation found in an alignment of proteins that fall into the same structural class. Matrix columns represent the amino acid variation found at one amino acid position in the aligned structures.
Primary structure
the linear amino acid sequence of a protein, which chemically is a polypeptide chain composed of amino acids joined by peptide bonds.
Profile (sequence context)
a scoring matrix that represents a multiple sequence alignment of a protein family. The profile is usually obtained from a well-conserved region in a multiple sequence alignment. The profile is in the form of a matrix with each column representing a position in the alignment and each row one of the amino acids. Matrix values give the likelihood of each amino acid at the corresponding position in the alignment. The profile is moved along the target sequence to locate the best scoring regions by a dynamic programming algorithm. Gaps are allowed during matching and a gap penalty is included in this case as a negative score when no amino acid is matched. A sequence profile may also be represented by a hidden Markov model, referred to as a profile HMM.
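The build-and-scan procedure described above can be sketched without gap handling, for brevity. The function names, pseudocount, and sample alignment are illustrative, and a uniform background frequency is assumed:

```python
import math

def build_profile(seqs, pseudocount=1.0):
    """Build a simple ungapped log-odds profile from aligned sequences.

    Simplified sketch: real profiles handle gaps via dynamic programming
    and use per-residue background frequencies; here the background is
    uniform over the letters seen in the alignment.
    """
    alphabet = sorted(set("".join(seqs)))
    bg = 1.0 / len(alphabet)
    profile = []
    for pos in range(len(seqs[0])):
        col = [s[pos] for s in seqs]
        denom = len(col) + pseudocount * len(alphabet)
        profile.append({a: math.log(((col.count(a) + pseudocount) / denom) / bg)
                        for a in alphabet})
    return profile

def best_window(profile, target, miss=math.log(1e-3)):
    """Slide the profile along target; return (best score, offset)."""
    L = len(profile)
    return max(
        (sum(profile[i].get(target[off + i], miss) for i in range(L)), off)
        for off in range(len(target) - L + 1)
    )
```

Scanning the toy target `"GGACDEGG"` with a profile built from `["ACDE", "ACDE", "ACME"]` locates the conserved region at offset 2, since every other placement pays the heavy penalty for letters absent from the alignment.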
Profile (structural context)
a scoring matrix that represents which amino acids should fit well and which should fit poorly at sequential positions in a known protein structure. Profile columns represent sequential positions in the structure, and profile rows represent the 20 amino acids. As with a sequence profile, the structural profile is moved along a target sequence to find the highest possible alignment score by a dynamic programming algorithm. Gaps may be included and receive a penalty. The resulting score provides an indication as to whether or not the target protein might adopt such a structure.
Quaternary structure
the three-dimensional configuration of a protein molecule comprising several independent polypeptide chains.
Secondary structure
the interactions that occur between the C, O, and NH groups on amino acids in a polypeptide chain to form α-helices, β-sheets, turns, loops, and other forms, and that facilitate the folding into a three-dimensional structure.
Superfamily
a group of protein families of the same or different lengths that are related by distant yet detectable sequence similarity. Members of a given superfamily thus have a common evolutionary origin. Originally, Dayhoff defined the cutoff for superfamily status as a probability of 10⁻⁶ that the sequences are not related, on the basis of an alignment score (Dayhoff et al. 1978). Proteins with few identities in an alignment of the sequences but with a convincingly common number of structural and functional features are placed in the same superfamily. At the level of three-dimensional structure, superfamily proteins will share common structural features such as a common fold, but there may also be differences in the number and arrangement of secondary structures. The PIR resource uses the term homeomorphic superfamilies to refer to superfamilies that are composed of sequences that can be aligned from end to end, representing a sharing of a single sequence homology domain, a region of similarity that extends throughout the alignment. This domain may also comprise smaller homology domains that are shared with other protein families and superfamilies. Although a given protein sequence may contain domains found in several superfamilies, thus indicating a complex evolutionary history, sequences will be assigned to only one homeomorphic superfamily based on the presence of similarity throughout a multiple sequence alignment. The superfamily alignment may also include regions that do not align either within or at the ends of the alignment. In contrast, sequences in the same family align well throughout the alignment.
Supersecondary structure
a term with a meaning similar to that of a structural motif.
Tertiary structure
the three-dimensional or globular structure formed by the packing together or folding of the secondary structures of a polypeptide chain.
== Secondary structure ==
Secondary structure prediction is a set of techniques in bioinformatics that aim to predict the local secondary structures of proteins based only on knowledge of their amino acid sequence. For proteins, a prediction consists of assigning regions of the amino acid sequence as likely alpha helices, beta strands (often termed extended conformations), or turns. The success of a prediction is determined by comparing it to the results of the DSSP algorithm (or similar e.g. STRIDE) applied to the crystal structure of the protein. Specialized algorithms have been developed for the detection of specific well-defined patterns such as transmembrane helices and coiled coils in proteins.
The best modern methods of secondary structure prediction in proteins are claimed to reach about 80% accuracy by using machine learning and sequence alignments; this high accuracy allows the use of the predictions as a feature for improving fold recognition and ab initio protein structure prediction, for classifying structural motifs, and for refining sequence alignments. The accuracy of current protein secondary structure prediction methods is assessed in weekly benchmarks such as LiveBench and EVA.
=== Background ===
Early methods of secondary structure prediction, introduced in the 1960s and early 1970s, focused on identifying likely alpha helices and were based mainly on helix-coil transition models. Significantly more accurate predictions that included beta sheets were introduced in the 1970s and relied on statistical assessments based on probability parameters derived from known solved structures. These methods, applied to a single sequence, are typically at most about 60–65% accurate, and often underpredict beta sheets. Since the 1980s, artificial neural networks have been applied to the prediction of protein structures.
The evolutionary conservation of secondary structures can be exploited by simultaneously assessing many homologous sequences in a multiple sequence alignment, by calculating the net secondary structure propensity of an aligned column of amino acids. In concert with larger databases of known protein structures and modern machine learning methods such as neural nets and support vector machines, these methods can achieve up to 80% overall accuracy in globular proteins. The theoretical upper limit of accuracy is around 90%, partly due to idiosyncrasies in DSSP assignment near the ends of secondary structures, where local conformations vary under native conditions but may be forced to assume a single conformation in crystals due to packing constraints. Moreover, the typical secondary structure prediction methods do not account for the influence of tertiary structure on formation of secondary structure; for example, a sequence predicted as a likely helix may still be able to adopt a beta-strand conformation if it is located within a beta-sheet region of the protein and its side chains pack well with their neighbors. Dramatic conformational changes related to the protein's function or environment can also alter local secondary structure.
=== Historical perspective ===
To date, over 20 different secondary structure prediction methods have been developed. One of the first algorithms was the Chou–Fasman method, which relies predominantly on probability parameters determined from the relative frequencies of each amino acid's appearance in each type of secondary structure. The original Chou–Fasman parameters, determined from the small sample of structures solved in the mid-1970s, produce poor results compared to modern methods, though the parameterization has been updated since it was first published. The Chou–Fasman method is roughly 50–60% accurate in predicting secondary structures.
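The propensity idea behind the Chou–Fasman approach can be sketched as a windowed average: a stretch whose mean helix propensity exceeds a threshold is marked helical. This is a simplified nucleation rule only (the full method also extends regions and resolves helix/sheet overlaps), and the propensity values below are an illustrative toy subset, not the published parameters:

```python
# Illustrative helix propensities (assumption: toy values in the spirit
# of Chou-Fasman parameters, not the published table).
P_HELIX = {"A": 1.42, "E": 1.51, "L": 1.21, "M": 1.45,
           "G": 0.57, "P": 0.57, "S": 0.77, "Y": 0.69}

def predict_helix(seq, window=6, threshold=1.0):
    """Mark residue i as helical ('H') when it lies in any window whose
    average helix propensity exceeds the threshold; otherwise '-'.
    Unknown residues default to a neutral propensity of 1.0."""
    marks = [False] * len(seq)
    for i in range(len(seq) - window + 1):
        avg = sum(P_HELIX.get(a, 1.0) for a in seq[i:i + window]) / window
        if avg > threshold:
            for j in range(i, i + window):
                marks[j] = True
    return "".join("H" if m else "-" for m in marks)
```

On the toy sequence "AAEELLGPGSYS", the helix-favoring prefix (A, E, L) is marked helical while the proline/glycine-rich tail is not, mirroring the propensity logic described above.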
The next notable program was the GOR method, an information theory-based method that uses the more powerful probabilistic technique of Bayesian inference. The GOR method takes into account not only the probability of each amino acid having a particular secondary structure, but also the conditional probability of the amino acid assuming each structure given the contributions of its neighbors (it does not assume that the neighbors have that same structure). The approach is both more sensitive and more accurate than that of Chou and Fasman because amino acid structural propensities are only strong for a small number of amino acids, such as proline and glycine; weak contributions from each of many neighbors can add up to strong effects overall. The original GOR method was roughly 65% accurate and was dramatically more successful in predicting alpha helices than beta sheets, which it frequently mispredicted as loops or disorganized regions.
Another big step forward was the use of machine learning methods; artificial neural networks were applied first. These methods use solved structures as training sets to identify common sequence motifs associated with particular arrangements of secondary structures. They are over 70% accurate in their predictions, although beta strands are still often underpredicted because of the lack of three-dimensional structural information that would allow assessment of the hydrogen bonding patterns that can promote formation of the extended conformation required for a complete beta sheet. PSIPRED and JPred are among the best-known neural-network programs for protein secondary structure prediction. Next, support vector machines proved particularly useful for predicting the locations of turns, which are difficult to identify with statistical methods.
Extensions of machine learning techniques attempt to predict more fine-grained local properties of proteins, such as backbone dihedral angles in unassigned regions. Both SVMs and neural networks have been applied to this problem. More recently, real-valued torsion angles have been accurately predicted by SPINE-X and successfully employed for ab initio structure prediction.
=== Other improvements ===
In addition to the protein sequence, secondary structure formation is reported to depend on other factors. For example, secondary structure tendencies are reported to depend on the local environment, the solvent accessibility of residues, the protein structural class, and even the organism from which the protein is obtained. Based on such observations, some studies have shown that secondary structure prediction can be improved by adding information about protein structural class, residue accessible surface area, and contact number.
== Tertiary structure ==
The practical role of protein structure prediction is now more important than ever. Massive amounts of protein sequence data are produced by modern large-scale DNA sequencing efforts such as the Human Genome Project. Despite community-wide efforts in structural genomics, the output of experimentally determined protein structures—typically by time-consuming and relatively expensive X-ray crystallography or NMR spectroscopy—is lagging far behind the output of protein sequences.
Protein structure prediction remains an extremely difficult and unresolved undertaking. The two main problems are the calculation of protein free energy and finding the global minimum of this energy. A protein structure prediction method must explore the space of possible protein structures, which is astronomically large. These problems can be partially bypassed in "comparative" or homology modeling and fold recognition methods, in which the search space is pruned by the assumption that the protein in question adopts a structure close to the experimentally determined structure of another homologous protein. In contrast, de novo protein structure prediction methods must explicitly resolve these problems. The progress and challenges in protein structure prediction have been reviewed by Zhang.
=== Before modelling ===
Most tertiary structure modelling methods, such as Rosetta, are optimized for modelling the tertiary structure of single protein domains. A step called domain parsing, or domain boundary prediction, is usually done first to split a protein into potential structural domains. As with the rest of tertiary structure prediction, this can be done comparatively from known structures or ab initio with the sequence only (usually by machine learning, assisted by covariation). The structures for individual domains are docked together in a process called domain assembly to form the final tertiary structure.
=== Ab initio protein modelling ===
==== Energy- and fragment-based methods ====
Ab initio (or de novo) protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures. There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins will require better algorithms and larger computational resources like those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing (such as Folding@home, the Human Proteome Folding Project and Rosetta@Home). Although these computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) make ab initio structure prediction an active research field.
As of 2009, a 50-residue protein could be simulated atom-by-atom on a supercomputer for 1 millisecond. As of 2012, comparable stable-state sampling could be done on a standard desktop with a new graphics card and more sophisticated algorithms. Much larger simulation timescales can be achieved using coarse-grained modeling.
==== Evolutionary covariation to predict 3D contacts ====
As sequencing became more commonplace in the 1990s, several groups used protein sequence alignments to predict correlated mutations, and it was hoped that these coevolved residues could be used to predict tertiary structure (using the analogy to distance constraints from experimental procedures such as NMR). The assumption is that when single residue mutations are slightly deleterious, compensatory mutations may occur to restabilize residue–residue interactions.
This early work used what are known as local methods to calculate correlated mutations from protein sequences, but suffered from indirect false correlations which result from treating each pair of residues as independent of all other pairs.
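A minimal sketch of such a local method, using a tiny made-up alignment: mutual information is computed independently for each pair of alignment columns, which is precisely the independence assumption that produces the indirect false correlations mentioned above. The alignment and column indices here are purely illustrative.

```python
import math
from collections import Counter

# Toy multiple sequence alignment (hypothetical sequences, 4 columns each).
msa = ["ALKE", "ALRE", "GVKD", "GVRD", "ALKE"]

def mutual_information(col_i, col_j):
    """Score covariation between two columns, ignoring all other columns."""
    n = len(msa)
    pi = Counter(s[col_i] for s in msa)            # marginal counts, column i
    pj = Counter(s[col_j] for s in msa)            # marginal counts, column j
    pij = Counter((s[col_i], s[col_j]) for s in msa)  # joint counts
    return sum(
        (c / n) * math.log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
        for (a, b), c in pij.items()
    )

print(round(mutual_information(0, 1), 3))   # columns 0 and 1 covary perfectly
```

In this toy alignment columns 0 and 1 always change together (A↔L, G↔V), so their mutual information equals the column entropy; a global statistical model would additionally disentangle which of the high-scoring pairs are direct contacts.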
In 2011, a different, and this time global statistical approach, demonstrated that predicted coevolved residues were sufficient to predict the 3D fold of a protein, providing there are enough sequences available (>1,000 homologous sequences are needed). The method, EVfold, uses no homology modeling, threading or 3D structure fragments and can be run on a standard personal computer even for proteins with hundreds of residues. The accuracy of the contacts predicted using this and related approaches has now been demonstrated on many known structures and contact maps, including the prediction of experimentally unsolved transmembrane proteins.
=== Comparative protein modeling ===
Comparative protein modeling uses previously solved structures as starting points, or templates. This is effective because it appears that although the number of actual proteins is vast, there is a limited set of tertiary structural motifs to which most proteins belong. It has been suggested that there are only around 2,000 distinct protein folds in nature, though there are many millions of different proteins. Comparative protein modeling can be combined with evolutionary covariation in structure prediction.
These methods may also be split into two groups:
Homology modeling is based on the reasonable assumption that two homologous proteins will share very similar structures. Because a protein's fold is more evolutionarily conserved than its amino acid sequence, a target sequence can be modeled with reasonable accuracy on a very distantly related template, provided that the relationship between target and template can be discerned through sequence alignment. It has been suggested that the primary bottleneck in comparative modelling arises from difficulties in alignment rather than from errors in structure prediction given a known-good alignment. Unsurprisingly, homology modelling is most accurate when the target and template have similar sequences.
Protein threading scans the amino acid sequence of an unknown structure against a database of solved structures. In each case, a scoring function is used to assess the compatibility of the sequence to the structure, thus yielding possible three-dimensional models. This type of method is also known as 3D-1D fold recognition due to its compatibility analysis between three-dimensional structures and linear protein sequences. This method has also given rise to methods performing an inverse folding search by evaluating the compatibility of a given structure with a large database of sequences, thus predicting which sequences have the potential to produce a given fold.
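The 3D-1D compatibility scoring described above can be sketched with a toy profile: each template position is labeled with a structural environment class, and a preference table scores how well each residue type fits each environment. The environment labels and log-odds values below are invented for illustration; real threading methods derive them statistically from solved structures.

```python
# Structural environment of each template position (from a hypothetical template).
env_of_pos = ["buried", "exposed", "buried", "exposed"]

# Made-up log-odds preferences: hydrophobic L likes burial, charged K likes solvent.
preference = {
    ("L", "buried"): 1.2, ("L", "exposed"): -0.5,
    ("K", "buried"): -0.8, ("K", "exposed"): 0.9,
}

def threading_score(sequence):
    """Sum per-position residue/environment compatibilities (unknown pairs score 0)."""
    return sum(preference.get((res, env), 0.0)
               for res, env in zip(sequence, env_of_pos))

print(threading_score("LKLK"))   # L in buried slots, K in exposed slots
```

A full threading method would evaluate such a score over all templates in a database (with gaps and pair interactions) and rank the resulting sequence-to-structure alignments.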
=== Modeling of side-chain conformations ===
Accurate packing of the amino acid side chains represents a separate problem in protein structure prediction. Methods that specifically address the problem of predicting side-chain geometry include dead-end elimination and self-consistent mean field methods. Low-energy side-chain conformations are usually determined on a rigid polypeptide backbone, using a set of discrete side-chain conformations known as "rotamers". The methods attempt to identify the set of rotamers that minimizes the model's overall energy.
These methods use rotamer libraries, which are collections of favorable conformations for each residue type in proteins. Rotamer libraries may contain information about the conformation, its frequency, and the standard deviations about mean dihedral angles, which can be used in sampling. Rotamer libraries are derived from structural bioinformatics or other statistical analysis of side-chain conformations in known experimental structures of proteins, such as by clustering the observed conformations for tetrahedral carbons near the staggered (60°, 180°, −60°) values.
Rotamer libraries can be backbone-independent, secondary-structure-dependent, or backbone-dependent. Backbone-independent rotamer libraries make no reference to backbone conformation, and are calculated from all available side chains of a certain type (for instance, the first example of a rotamer library, done by Ponder and Richards at Yale in 1987). Secondary-structure-dependent libraries present different dihedral angles and/or rotamer frequencies for α-helix, β-sheet, or coil secondary structures. Backbone-dependent rotamer libraries present conformations and/or frequencies dependent on the local backbone conformation as defined by the backbone dihedral angles φ and ψ, regardless of secondary structure.
The modern versions of these libraries as used in most software are presented as multidimensional distributions of probability or frequency, where the peaks correspond to the dihedral-angle conformations considered as individual rotamers in the lists. Some versions are based on very carefully curated data and are used primarily for structure validation, while others emphasize relative frequencies in much larger data sets and are the form used primarily for structure prediction, such as the Dunbrack rotamer libraries.
Side-chain packing methods are most useful for analyzing the protein's hydrophobic core, where side chains are more closely packed; they have more difficulty addressing the looser constraints and higher flexibility of surface residues, which often occupy multiple rotamer conformations rather than just one.
== Quaternary structure ==
In the case of complexes of two or more proteins, where the structures of the proteins are known or can be predicted with high accuracy, protein–protein docking methods can be used to predict the structure of the complex. Information about the effect of mutations at specific sites on the affinity of the complex helps to understand the complex structure and to guide docking methods.
== Software ==
A great number of software tools for protein structure prediction exist. Approaches include homology modeling, protein threading, ab initio methods, secondary structure prediction, and transmembrane helix and signal peptide prediction. In particular, deep learning based on long short-term memory has been used for this purpose since 2007, when it was successfully applied to protein homology detection and to predict subcellular localization of proteins.
Some recent successful methods, as assessed by the CASP experiments, include I-TASSER, HHpred and AlphaFold. In 2021, AlphaFold was reported to perform best.
Knowing the structure of a protein often allows functional prediction as well. For instance, collagen is folded into a long extended fiber-like chain, which makes it a fibrous protein. Recently, several techniques have been developed to predict protein folding and thus protein structure, for example I-TASSER and AlphaFold.
=== AI methods ===
AlphaFold was one of the first AIs to predict protein structures. It was introduced by Google's DeepMind in the 13th CASP competition, which was held in 2018. AlphaFold relies on a neural network approach, which directly predicts the 3D coordinates of all non-hydrogen atoms for a given protein using the amino acid sequence and aligned homologous sequences. The AlphaFold network consists of a trunk, which processes the inputs through repeated layers, and a structure module, which introduces an explicit 3D structure. Earlier neural networks for protein structure prediction used LSTM.
Since AlphaFold outputs protein coordinates directly, AlphaFold produces predictions in graphics processing unit (GPU) minutes to GPU hours, depending on the length of protein sequence.
The European Bioinformatics Institute together with DeepMind have constructed the AlphaFold – EBI database for predicted protein structures.
=== Current AI methods and databases of predicted protein structures ===
AlphaFold2 was introduced in CASP14, and is capable of predicting protein structures to near-experimental accuracy. AlphaFold was swiftly followed by RoseTTAFold and later by OmegaFold and the ESM Metagenomic Atlas.
In a 2022 study, Sommer et al. demonstrated the application of protein structure prediction in genome annotation, specifically in identifying functional protein isoforms using computationally predicted structures, available at https://www.isoform.io. This study highlights the promise of protein structure prediction as a genome annotation tool and presents a practical, structure-guided approach that can be used to enhance the annotation of any genome.
In 2024, David Baker and Demis Hassabis were awarded the Nobel Prize in Chemistry for their contributions to computational protein modeling, including the development of AlphaFold2, an AI-based model for protein structure prediction. AlphaFold2's accuracy has been evaluated against experimentally determined protein structures using metrics such as root-mean-square deviation (RMSD). The median RMSD between different experimental structures of the same protein is approximately 0.6 Å, while the median RMSD between AlphaFold2 predictions and experimental structures is around 1 Å. For regions where AlphaFold2 assigns high confidence, the median RMSD is about 0.6 Å, comparable to the variability observed between different experimental structures. However, in low-confidence regions, the RMSD can exceed 2 Å, indicating greater deviations. In proteins with multiple domains connected by flexible linkers, AlphaFold2 predicts individual domain structures accurately but may assign random relative positions to these domains. Additionally, AlphaFold2 does not account for structural constraints such as the membrane plane, sometimes placing protein domains in positions that would physically clash with the membrane.
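The RMSD metric used above can be sketched directly: it is the root of the mean squared distance between corresponding atoms of two conformations. The coordinates below are toy values; real comparisons first superpose the two structures (e.g. with the Kabsch algorithm), a step this sketch omits.

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of (x, y, z)."""
    n = len(coords_a)
    sq = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(sq / n)

# Hypothetical predicted vs. experimental coordinates for three atoms.
pred = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.2, 0.0)]
expt = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (3.0, 0.0, 0.0)]
print(round(rmsd(pred, expt), 3))   # same units as the inputs (here Å)
```

Values around 1 Å, as reported for AlphaFold2 above, mean corresponding atoms sit on average about an atomic radius apart after superposition.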
=== Evaluation of automatic structure prediction servers ===
CASP, which stands for Critical Assessment of Techniques for Protein Structure Prediction, is a community-wide experiment for protein structure prediction that has taken place every two years since 1994. CASP provides an opportunity to assess the quality of available human, non-automated methodology (human category) and of automatic servers for protein structure prediction (server category, introduced in CASP7).
The CAMEO3D Continuous Automated Model EvaluatiOn Server evaluates automated protein structure prediction servers on a weekly basis using blind predictions for newly released protein structures. CAMEO publishes the results on its website.
== See also ==
== References ==
=== Further reading ===
== External links ==
Official website, Protein Structure Prediction Center, CASP experiments
ExPASy Proteomics tools – list of prediction tools and servers
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
== Categories ==
Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.
=== Sequential models ===
Sequential models include:
Finite-state machines
Post machines (Post–Turing machines and tag machines).
Pushdown automata
Register machines
Random-access machines
Turing machines
Decision tree model
External memory model
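As an illustration of the first of the sequential models above, a finite-state machine can be sketched directly: the entire memory of the computation is a single state drawn from a fixed finite set. This example (not from the source) recognizes binary strings whose value is divisible by 3, using states 0, 1, 2 for the running remainder.

```python
def divisible_by_3(bits):
    """Finite-state machine over alphabet {0, 1}; state = value mod 3 so far."""
    state = 0                               # start state: remainder 0
    for b in bits:
        state = (2 * state + int(b)) % 3    # transition function on one symbol
    return state == 0                       # remainder-0 is the accepting state

print([w for w in ["0", "11", "110", "101", "1001"] if divisible_by_3(w)])
# accepts "0", "11", "110", "1001" (values 0, 3, 6, 9)
```

Reading one symbol at a time with bounded memory is exactly what distinguishes this model from the more powerful ones further down the list.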
=== Functional models ===
Functional models include:
Abstract rewriting systems
Combinatory logic
General recursive functions
Lambda calculus
=== Concurrent models ===
Concurrent models include:
Actor model
Cellular automaton
Interaction nets
Kahn process networks
Logic gates and digital circuits
Petri nets
Process calculus
Synchronous Data Flow
Some of these models have both deterministic and nondeterministic variants. Nondeterministic models correspond to limits of certain sequences of finite computers, but do not correspond to any subset of finite computers; they are used in the study of computational complexity of algorithms.
Models differ in their expressive power; for example, each function that can be computed by a finite-state machine can also be computed by a Turing machine, but not vice versa.
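The expressive-power gap just described can be made concrete: a finite-state machine can recognize a regular language such as (ab)*, but the language { aⁿbⁿ : n ≥ 0 } requires unbounded counting, which needs at least a pushdown automaton or Turing machine. In this illustrative sketch an ordinary Python integer plays the role of the unbounded counter.

```python
def is_anbn(s):
    """Accept strings of n a's followed by n b's, for any n >= 0."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

print([w for w in ["", "ab", "aabb", "abab", "aab"] if is_anbn(w)])
# ['', 'ab', 'aabb']
```

No finite-state machine can do this, because for large enough n it would have to distinguish more prefix counts than it has states (the pumping lemma makes this precise).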
== Uses ==
In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of primitive operations allowed which have unit cost, or simply unit-cost operations. A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the above-mentioned Turing machine model.
== See also ==
Stack machine (0-operand machine)
Accumulator machine (1-operand machine)
Register machine (2,3,... operand machine)
Random-access machine
Abstract machine
Cell-probe model
Robertson–Webb query model
Chomsky hierarchy
Turing completeness
== References ==
== Further reading ==
Fernández, Maribel (2009). Models of Computation: An Introduction to Computability Theory. Undergraduate Topics in Computer Science. Springer. ISBN 978-1-84882-433-1.
Savage, John E. (1998). Models Of Computation: Exploring the Power of Computing. Addison-Wesley. ISBN 978-0201895391.
Switching circuit theory is the mathematical study of the properties of networks of idealized switches. Such networks may be strictly combinational logic, in which the output state is only a function of the present state of the inputs; or they may also contain sequential elements, where the output depends on both present inputs and past states; in that sense, sequential circuits are said to include "memory" of past states. An important class of sequential circuits are state machines. Switching circuit theory is applicable to the design of telephone systems, computers, and similar systems. Switching circuit theory provided the mathematical foundations and tools for digital system design in almost all areas of modern technology.
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. During 1880–1881 he showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but this work remained unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates.
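The universality result above can be verified mechanically: every other logic gate can be built from NAND alone (and symmetrically from NOR alone). A short sketch, checking all four input combinations:

```python
def nand(a, b):
    return 1 - (a & b)

# Every other gate expressed purely in terms of NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def nor(a, b):  return not_(or_(a, b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# Exhaustively verify against Python's own Boolean operators.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert nor(a, b) == 1 - (a | b)
        assert xor(a, b) == (a ^ b)
print("all gates reproduced from NAND alone")
```

The same construction works starting from NOR (Peirce's arrow), which is why both are called universal gates.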
In 1898, Martin Boda described a switching theory for signalling block systems.
Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
The theory was independently established through the works of NEC engineer Akira Nakashima in Japan, Claude Shannon in the United States, and Victor Shestakov in the Soviet Union. The three published a series of papers showing that the two-valued Boolean algebra can describe the operation of switching circuits. However, Shannon's work has largely overshadowed the other two, and despite some scholars arguing for the similarity of Nakashima's work to Shannon's, their approaches and theoretical frameworks were markedly different. It is also implausible that Shestakov's work influenced the other two, due to language barriers and the relative obscurity of his work abroad. Furthermore, Shannon and Shestakov both defended their theses in 1938, but Shestakov did not publish until 1941.
Ideal switches are considered as having only two exclusive states, for example, open or closed. In some analysis, the state of a switch can be considered to have no influence on the output of the system and is designated as a "don't care" state. In complex networks it is necessary to also account for the finite switching time of physical switches; where two or more different paths in a network may affect the output, these delays may result in a "logic hazard" or "race condition" where the output state changes due to the different propagation times through the network.
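The hazard mentioned above can be reproduced with a unit-delay simulation (an illustrative sketch, not from the source). For f = A·B + ¬A·C with B = C = 1, both product terms are meant to hold f at 1 across an A transition, but the extra inverter delay on the ¬A path opens a brief window where neither term is active — a static-1 hazard.

```python
# Unit-delay gate model: each gate output at step t is computed from its
# inputs as they were at step t-1.
B = C = 1
A_trace = [1, 0, 0, 0, 0, 0]          # input A switches 1 -> 0 at t = 1

# Steady-state gate outputs for A = 1.
not_a, and1, and2, f = 0, 1, 0, 1
f_trace = [f]
for t in range(1, len(A_trace)):
    # Simultaneous update: every right-hand side uses the previous step's values.
    not_a, and1, and2, f = (
        1 - A_trace[t - 1],           # inverter on A
        A_trace[t - 1] & B,           # AND gate: A*B
        not_a & C,                    # AND gate: (not A)*C, one extra delay deep
        and1 | and2,                  # OR gate
    )
    f_trace.append(f)

print(f_trace)   # [1, 1, 1, 0, 1, 1] -- the momentary 0 is the glitch
```

Adding the redundant consensus term B·C would hold the output at 1 through the transition and remove the hazard.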
== See also ==
Circuit switching
Message switching
Packet switching
Fast packet switching
Network switching subsystem
5ESS Switching System
Number One Electronic Switching System
Boolean circuit
C-element
Circuit complexity
Circuit minimization
Karnaugh map
Logic design
Logic gate
Logic in computer science
Nonblocking minimal spanning switch
Programmable logic controller – computer software mimics relay circuits for industrial applications
Quine–McCluskey algorithm
Relay – an early kind of logic device
Switching lemma
Unate function
== References ==
== Further reading ==
Keister, William; Ritchie, Alistair E.; Washburn, Seth H. (1951). The Design of Switching Circuits. The Bell Telephone Laboratories Series (1 ed.). D. Van Nostrand Company, Inc. p. 147. Archived from the original on 2020-05-09. Retrieved 2020-05-09. [8] (2+xx+556+2 pages)
Caldwell, Samuel Hawks (1958-12-01) [February 1958]. Written at Watertown, Massachusetts, USA. Switching Circuits and Logical Design. 5th printing September 1963 (1st ed.). New York, USA: John Wiley & Sons Inc. ISBN 0-471-12969-0. LCCN 58-7896. (xviii+686 pages)
Perkowski, Marek A.; Grygiel, Stanislaw (1995-11-20). "6. Historical Overview of the Research on Decomposition". A Survey of Literature on Function Decomposition (PDF). Version IV. Functional Decomposition Group, Department of Electrical Engineering, Portland University, Portland, Oregon, USA. CiteSeerX 10.1.1.64.1129. Archived (PDF) from the original on 2021-03-28. Retrieved 2021-03-28. (188 pages)
Stanković, Radomir S. [in German]; Sasao, Tsutomu; Astola, Jaakko Tapio [in Finnish] (August 2001). "Publications in the First Twenty Years of Switching Theory and Logic Design" (PDF). Tampere International Center for Signal Processing (TICSP) Series. Tampere University of Technology / TTKK, Monistamo, Finland. ISSN 1456-2774. S2CID 62319288. #14. Archived from the original (PDF) on 2017-08-09. Retrieved 2021-03-28. (4+60 pages)
Stanković, Radomir S. [in German]; Astola, Jaakko Tapio [in Finnish] (2011). Written at Niš, Serbia & Tampere, Finland. From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.). Berlin & Heidelberg, Germany: Springer-Verlag. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. Retrieved 2022-10-25. (xviii+212 pages)
In computational complexity theory of computer science, structural complexity theory (or simply structural complexity) is the study of complexity classes, rather than the computational complexity of individual problems and algorithms. It involves research into both the internal structures of various complexity classes and the relations between different complexity classes.
== History ==
The theory emerged as a result of (still failing) attempts to resolve the first and still the most important question of this kind, the P = NP problem. Most of the research is done based on the assumption that P is not equal to NP and on the more far-reaching conjecture that the polynomial time hierarchy of complexity classes is infinite.
== Important results ==
=== The compression theorem ===
The compression theorem is an important theorem about the complexity of computable functions.
The theorem states that there exists no largest complexity class, with computable boundary, which contains all computable functions.
=== Space hierarchy theorems ===
The space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.
=== Time hierarchy theorems ===
The time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved in time n² but not in time n.
=== Valiant–Vazirani theorem ===
The Valiant–Vazirani theorem is a theorem in computational complexity theory. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions published in 1986.
The theorem states that if there is a polynomial time algorithm for Unambiguous-SAT, then NP=RP.
The proof is based on the Mulmuley–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science.
=== Sipser–Lautemann theorem ===
The Sipser–Lautemann theorem (or Sipser–Gács–Lautemann theorem) states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically in Σ₂ ∩ Π₂.
=== Savitch's theorem ===
Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f ∈ Ω(log n),
NSPACE(f(n)) ⊆ DSPACE((f(n))²).
=== Toda's theorem ===
Toda's theorem is a result that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy" (1991) and was given the 1998 Gödel Prize. The theorem states that the entire polynomial hierarchy PH is contained in P^PP; this implies a closely related statement, that PH is contained in P^#P.
=== Immerman–Szelepcsényi theorem ===
The Immerman–Szelepcsényi theorem was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.
== Research topics ==
Major directions of research in this area include:
study of implications stemming from various unsolved problems about complexity classes
study of various types of resource-restricted reductions and the corresponding complete languages
study of consequences of various restrictions on and mechanisms of storage and access to data
== References ==
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical (i.e., non-quantum) complexity classes.
Two important quantum complexity classes are BQP and QMA.
== Background ==
A complexity class is a collection of computational problems that can be solved by a computational model under certain resource constraints. For instance, the complexity class P is defined as the set of problems solvable by a Turing machine in polynomial time. Similarly, quantum complexity classes may be defined using quantum models of computation, such as the quantum circuit model or the equivalent quantum Turing machine. One of the main aims of quantum complexity theory is to find out how these classes relate to classical complexity classes such as P, NP, BPP, and PSPACE.
One of the reasons quantum complexity theory is studied is the implications of quantum computing for the modern Church–Turing thesis. In short, the modern Church–Turing thesis states that any computational model can be simulated in polynomial time by a probabilistic Turing machine. However, questions around the Church–Turing thesis arise in the context of quantum computing: it is unclear whether the thesis holds for the quantum computation model, and there is much evidence that it does not. It may not be possible for a probabilistic Turing machine to simulate quantum computation models in polynomial time.
Both quantum computational complexity of functions and classical computational complexity of functions are often expressed with asymptotic notation. Some common forms of asymptotic notation for functions are O(T(n)), Ω(T(n)), and Θ(T(n)). O(T(n)) expresses that something is bounded above by cT(n), where c is a constant such that c > 0 and T(n) is a function of n; Ω(T(n)) expresses that something is bounded below by cT(n), where c is a constant such that c > 0 and T(n) is a function of n; and Θ(T(n)) expresses both O(T(n)) and Ω(T(n)). These notations also have their own names: O(T(n)) is called Big O notation, Ω(T(n)) is called Big Omega notation, and Θ(T(n)) is called Big Theta notation.
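The Θ definition above can be checked numerically for a concrete function (an illustrative sketch with hand-picked constants): T(n) = 3n² + 5n is Θ(n²), witnessed by c = 3 from below and c = 4 from above once n ≥ 5.

```python
# T(n) = 3n^2 + 5n is bracketed by 3n^2 and 4n^2 for all n >= 5,
# since n^2 >= 5n exactly when n >= 5.
def T(n):
    return 3 * n**2 + 5 * n

checked = all(3 * n**2 <= T(n) <= 4 * n**2 for n in range(5, 1000))
print(checked)   # True: T is both Omega(n^2) and O(n^2), hence Theta(n^2)
```

A finite check like this is not a proof, but the algebraic inequality n² ≥ 5n for n ≥ 5 closes the gap for all larger n.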
== Overview of complexity classes ==
The important complexity classes P, BPP, BQP, PP, and PSPACE can be compared based on promise problems. A promise problem is a decision problem whose input is promised (assumed) to belong to a particular set of input strings. A promise problem is a pair A = (A_yes, A_no), where A_yes is the set of yes instances, A_no is the set of no instances, and the intersection of these sets is empty: A_yes ∩ A_no = ∅. All of the previous complexity classes contain promise problems.
== BQP ==
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP ("bounded error, quantum, polynomial time"). More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with error probability of at most 1/3.
As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be efficiently solved by probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP and widely suspected, but not proven, that BQP ⊈ BPP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity. BQP is a subset of PP.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, the class of problems that can be efficiently solved by quantum computers includes all problems that can be efficiently solved by deterministic classical computers, but does not include any problems that cannot be solved by classical computers with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems are in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if any NP-complete problem were in BQP, then it follows from NP-hardness that all problems in NP are in BQP).
The relationship of BQP to the essential classical complexity classes can be summarized as:
P ⊆ BPP ⊆ BQP ⊆ PP ⊆ PSPACE
It is also known that BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P), which is a subset of PSPACE.
== Simulation of quantum circuits ==
There is no known way to efficiently simulate a quantum computational model with a classical computer; that is, no polynomial-time classical simulation is known. However, a quantum circuit of S(n) qubits with T(n) quantum gates can be simulated by a classical circuit with O(2^{S(n)} T(n)^3) classical gates. This number of classical gates is obtained by determining how many bit operations are necessary to simulate the quantum circuit. First, the amplitudes associated with the S(n) qubits must be accounted for. The state of each qubit can be described by a two-dimensional complex vector, or state vector, which can be written as a linear combination of its component (basis) vectors with coefficients called amplitudes. These amplitudes are complex numbers normalized so that the sum of the squares of their absolute values is one, and they form the entries of the state vector; each amplitude sits in the entry corresponding to the nonzero component of the basis vector it multiplies. As an equation this is described as

α [1, 0]^T + β [0, 1]^T = [α, β]^T

or

α|0⟩ + β|1⟩ = [α, β]^T

using Dirac notation. The state of the entire S(n)-qubit system can be described by a single state vector: the tensor product of the state vectors describing the individual qubits in the system. This tensor product is a single state vector of dimension 2^{S(n)}, whose entries are the amplitudes associated with each basis state. Therefore, 2^{S(n)} amplitudes must be accounted for in a 2^{S(n)}-dimensional complex state vector for the S(n)-qubit system. In order to obtain an upper bound for the number of gates required to simulate a quantum circuit, we need a sufficient upper bound for the amount of data used to specify the information about each of the 2^{S(n)} amplitudes. O(T(n)) bits of precision are sufficient for encoding each amplitude, so it takes O(2^{S(n)} T(n)) classical bits to account for the state vector of the S(n)-qubit system. Next, the application of the T(n) quantum gates on the 2^{S(n)} amplitudes must be accounted for. Each quantum gate can be represented as a sparse 2^{S(n)} × 2^{S(n)} matrix, so applying a gate amounts to multiplying the state vector by such a sparse matrix, which requires O(2^{S(n)}) arithmetic operations. Since multiplying two O(T(n))-bit numbers takes O(T(n)^2) bit operations, there are O(2^{S(n)} T(n)^2) bit operations for every quantum gate applied to the state vector; that is, O(2^{S(n)} T(n)^2) classical gates are needed to simulate an S(n)-qubit circuit with just one quantum gate. Therefore, O(2^{S(n)} T(n)^3) classical gates are needed to simulate a quantum circuit of S(n) qubits with T(n) quantum gates. While there is no known way to efficiently simulate a quantum computer with a classical computer, it is possible to efficiently simulate a classical computer with a quantum computer; this is evident from the fact that BPP ⊆ BQP.
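As a concrete illustration of the state-vector bookkeeping described above, the following Python sketch (assuming NumPy is available) simulates a small circuit classically by storing all 2^n amplitudes and applying each single-qubit gate with O(2^n) arithmetic operations. The function and variable names are illustrative, not taken from any particular library.

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit state vector.
    Each gate touches all 2**n amplitudes: O(2**n) arithmetic operations."""
    new = np.zeros_like(state)
    for i in range(2 ** n):
        b = (i >> target) & 1            # value of the target bit in index i
        j = i ^ (1 << target)            # index with the target bit flipped
        new[i] = gate[b, b] * state[i] + gate[b, b ^ 1] * state[j]
    return new

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                           # start in |000>
for q in range(n):                       # one Hadamard per qubit
    state = apply_gate(state, H, q, n)
# the result is the uniform superposition: every amplitude is 1/sqrt(8)
```

Even this tiny example shows the exponential cost: the classical simulator stores 2^n amplitudes and does O(2^n) work per gate, exactly the bookkeeping the gate-count argument above formalizes.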
== Quantum query complexity ==
One major advantage of using a quantum computational system instead of a classical one is that a quantum computer may give a polynomial-time algorithm for some problem for which no classical polynomial-time algorithm exists; just as importantly, a quantum computer may significantly decrease the calculation time for a problem that a classical computer can already solve efficiently. Quantum query complexity measures how many queries to the oracle or graph associated with a particular problem are required to solve that problem. Before delving further into query complexity, we consider some background regarding how problems are modelled as graphs and the queries associated with their solutions.
=== Query models of directed graphs ===
One type of problem that quantum computing can make easier to solve are graph problems. If we are to consider the number of queries to a graph that are required to solve a given problem, let us first consider the most common type of graph, the directed graph, associated with this type of computational modelling. In brief, directed graphs are graphs where all edges between vertices are unidirectional. A directed graph is formally defined as G = (N, E), where N is the set of vertices, or nodes, and E is the set of edges.
==== Adjacency matrix model ====
When considering quantum computation of the solution to directed graph problems, there are two important query models to understand. First, there is the adjacency matrix model, where the graph of the solution is given by the adjacency matrix M ∈ {0, 1}^{n×n}, with M_ij = 1 if and only if (v_i, v_j) ∈ E.
==== Adjacency array model ====
Next, there is the slightly more complicated adjacency array model, built on the idea of adjacency lists, where every vertex u is associated with an array of neighboring vertices such that f_i : [d_i^+] → [n], for the out-degrees of vertices d_1^+, ..., d_n^+, where n is the minimum value of the upper bound of this model, and f_i(j) returns the j-th vertex adjacent to i. Additionally, the adjacency array model satisfies the simple graph condition ∀i ∈ [n], j, j′ ∈ [d_i^+], j ≠ j′ : f_i(j) ≠ f_i(j′), meaning that there is only one edge between any pair of vertices, and the number of edges is minimized throughout the entire model (see Spanning tree model for more background).
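The two query models can be sketched for a small directed graph. The graph and variable names below are hypothetical, chosen only to show what a single query returns in each model.

```python
# A small directed graph G = (N, E) with n = 3 vertices (hypothetical example)
n = 3
E = {(0, 1), (0, 2), (1, 2)}

# Adjacency matrix model: a query (i, j) returns M[i][j]
M = [[1 if (i, j) in E else 0 for j in range(n)] for i in range(n)]

# Adjacency array model: a query (i, j) returns f_i(j),
# the j-th out-neighbour of vertex i (the array for i has length d_i^+)
f = [sorted(v for (u, v) in E if u == i) for i in range(n)]

print(M[0][1])  # 1, since (0, 1) is an edge
print(f[0])     # [1, 2], the out-neighbours of vertex 0
```

The same graph answers queries differently in each model, which is why the query complexity of one problem can differ between the two.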
=== Quantum query complexity of certain types of graph problems ===
Both of the above models can be used to determine the query complexity of particular types of graphing problems, including the connectivity, strong connectivity (a directed graph version of the connectivity model), minimum spanning tree, and single source shortest path models of graphs. An important caveat to consider is that the quantum complexity of a particular type of graphing problem can change based on the query model (namely either matrix or array) used to determine the solution. The following table showing the quantum query complexities of these types of graphing problems illustrates this point well.
Notice the discrepancy between the quantum query complexities associated with a particular type of problem, depending on which query model was used to determine the complexity. For example, when the matrix model is used, the quantum query complexity of the connectivity model is Θ(n^{3/2}), but when the array model is used, the complexity is Θ(n). Additionally, for brevity, we use the shorthand m in certain cases, where m = Θ(n^2). The important implication here is that the efficiency of the algorithm used to solve a graphing problem depends on the type of query model used to model the graph.
=== Other types of quantum computational queries ===
In the query complexity model, the input can also be given as an oracle (black box). The algorithm gets information about the input only by querying the oracle. The algorithm starts in some fixed quantum state and the state evolves as it queries the oracle.
Similar to the case of graphing problems, the quantum query complexity of a black-box problem is the smallest number of queries to the oracle that are required in order to calculate the function. This makes the quantum query complexity a lower bound on the overall time complexity of a function.
==== Grover's algorithm ====
An example depicting the power of quantum computing is Grover's algorithm for searching unstructured databases. The algorithm's quantum query complexity is O(√N), a quadratic improvement over the best possible classical query complexity O(N), which is a linear search. Grover's algorithm is asymptotically optimal; in fact, it uses at most a 1 + o(1) fraction more queries than the best possible algorithm.
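A back-of-the-envelope view of the quadratic speedup: for a single marked item, Grover's algorithm uses about (π/4)√N oracle queries, versus the Θ(N) queries of classical linear search. The helper below is illustrative arithmetic only, not an implementation of the algorithm.

```python
import math

def grover_queries(N):
    """Approximate optimal number of Grover iterations for one marked item,
    floor((pi/4) * sqrt(N))."""
    return math.floor(math.pi / 4 * math.sqrt(N))

print(grover_queries(1_000_000))  # 785 quantum queries vs ~1,000,000 classical
```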
==== Deutsch-Jozsa algorithm ====
The Deutsch-Jozsa algorithm is a quantum algorithm designed to solve a toy problem with a smaller query complexity than is possible with a classical algorithm. The toy problem asks whether a function f : {0,1}^n → {0,1} is constant or balanced, those being the only two possibilities. The only way to evaluate the function f is to consult a black box or oracle. A classical deterministic algorithm will have to check more than half of the possible inputs to be sure whether the function is constant or balanced. With 2^n possible inputs, the query complexity of the most efficient classical deterministic algorithm is 2^{n−1} + 1. The Deutsch-Jozsa algorithm takes advantage of quantum parallelism to check all of the elements of the domain at once and only needs to query the oracle once, making its query complexity 1.
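The classical bound can be made concrete: a deterministic algorithm that has seen 2^{n−1} + 1 equal outputs can safely answer "constant", since a balanced function takes each value on exactly 2^{n−1} inputs. A sketch under that promise (function names are illustrative):

```python
def classify_classically(f, n):
    """Decide constant vs balanced with at most 2**(n-1) + 1 oracle queries,
    assuming f is promised to be one of the two."""
    first = f(0)                              # query 1
    for x in range(1, 2 ** (n - 1) + 1):      # up to 2**(n-1) further queries
        if f(x) != first:
            return "balanced"
    return "constant"                         # 2**(n-1) + 1 equal values seen

print(classify_classically(lambda x: 1, 3))      # a constant oracle
print(classify_classically(lambda x: x & 1, 3))  # a balanced oracle
```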
== Other theories of quantum physics ==
It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most O(∛N) steps, a slight speedup over Grover's algorithm, which runs in O(√N) steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time. Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.
== See also ==
Quantum computing
Quantum Turing machine
Polynomial hierarchy (PH)
== References ==
Nielsen, Michael; Chuang, Isaac (2000). Quantum Computation and Quantum Information. Cambridge: Cambridge University Press. ISBN 978-0-521-63503-5. OCLC 174527496.
Arora, Sanjeev; Barak, Boaz (2016). Computational Complexity: A Modern Approach. Cambridge University Press. pp. 201–236. ISBN 978-0-521-42426-4.
Watrous, John (2008). "Quantum Computational Complexity". arXiv:0804.3401v1 [quant-ph].
Watrous, John (2009). "Quantum Computational Complexity". In Meyers, R. (ed.). Encyclopedia of Complexity and Systems Science. Springer, New York, NY.
== External links ==
MIT lectures by Scott Aaronson
The RSA (Rivest–Shamir–Adleman) cryptosystem is a public-key cryptosystem, one of the oldest widely used for secure data transmission. The initialism "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ), the British signals intelligence agency, by the English mathematician Clifford Cocks. That system was declassified in 1997.
In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private).
An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone via the public key, but can only be decrypted by someone who knows the private key.
The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question. There are no published methods to defeat the system if a large enough key is used.
RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys for symmetric-key cryptography, which are then used for bulk encryption–decryption.
== History ==
The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time. Moreover, like Diffie-Hellman, RSA is based on modular exponentiation.
Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology made several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements. In April 1977, they spent Passover at the house of a student and drank a good deal of wine before returning to their homes at around midnight. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in the same order as their paper.
Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described a similar system in an internal document in 1973. However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 due to its top-secret classification.
Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES.
== Patent ==
A patent describing the RSA algorithm was granted to MIT on 20 September 1983: U.S. patent 4,405,829 "Cryptographic communications system and method". From DWPI's abstract of the patent:
The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver).
A detailed description of the algorithm was published in August 1977, in Scientific American's Mathematical Games column. This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside the United States. Had Cocks' work been publicly known, a patent in the United States would not have been legal either.
When the patent was issued, the term of a patent was 17 years. The patent was about to expire on 21 September 2000, but RSA Security released the algorithm to the public domain on 6 September 2000.
== Operation ==
The RSA algorithm involves four steps: key generation, key distribution, encryption, and decryption.
A basic principle behind RSA is the observation that it is practical to find three very large positive integers e, d, and n, such that for all integers m (0 ≤ m < n), both (m^e)^d and m have the same remainder when divided by n (they are congruent modulo n):

(m^e)^d ≡ m (mod n).
However, when given only e and n, it is extremely difficult to find d.
The integers n and e comprise the public key, d represents the private key, and m represents the message. The modular exponentiation to e and d corresponds to encryption and decryption, respectively.
In addition, because the two exponents can be swapped, the private and public key can also be swapped, allowing for message signing and verification using the same algorithm.
=== Key generation ===
The keys for the RSA algorithm are generated in the following way:
Choose two large prime numbers p and q.
To make factoring harder, p and q should be chosen at random, be both large and have a large difference. For choosing them the standard method is to choose random integers and use a primality test until two primes are found.
p and q are kept secret.
Compute n = pq.
n is used as the modulus for both the public and private keys. Its length, usually expressed in bits, is the key length.
n is released as part of the public key.
Compute λ(n), where λ is Carmichael's totient function. Since n = pq, λ(n) = lcm(λ(p), λ(q)), and since p and q are prime, λ(p) = φ(p) = p − 1, and likewise λ(q) = q − 1. Hence λ(n) = lcm(p − 1, q − 1).
The lcm may be calculated through the Euclidean algorithm, since lcm(a, b) = |ab|/gcd(a, b).
λ(n) is kept secret.
Choose an integer e such that 1 < e < λ(n) and gcd(e, λ(n)) = 1; that is, e and λ(n) are coprime.
e having a short bit-length and small Hamming weight results in more efficient encryption – the most commonly chosen value for e is 216 + 1 = 65537. The smallest (and fastest) possible value for e is 3, but such a small value for e has been shown to be less secure in some settings.
e is released as part of the public key.
Determine d as d ≡ e−1 (mod λ(n)); that is, d is the modular multiplicative inverse of e modulo λ(n).
This means: solve for d the equation de ≡ 1 (mod λ(n)); d can be computed efficiently by using the extended Euclidean algorithm, since, thanks to e and λ(n) being coprime, said equation is a form of Bézout's identity, where d is one of the coefficients.
d is kept secret as the private key exponent.
The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the private (or decryption) exponent d, which must be kept secret. p, q, and λ(n) must also be kept secret because they can be used to calculate d. In fact, they can all be discarded after d has been computed.
In the original RSA paper, the Euler totient function φ(n) = (p − 1)(q − 1) is used instead of λ(n) for calculating the private exponent d. Since φ(n) is always divisible by λ(n), the algorithm works as well. The possibility of using Euler totient function results also from Lagrange's theorem applied to the multiplicative group of integers modulo pq. Thus any d satisfying d⋅e ≡ 1 (mod φ(n)) also satisfies d⋅e ≡ 1 (mod λ(n)). However, computing d modulo φ(n) will sometimes yield a result that is larger than necessary (i.e. d > λ(n)). Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponent d at all, rather than using the optimized decryption method based on the Chinese remainder theorem described below), but some standards such as FIPS 186-4 (Section B.3.1) may require that d < λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced modulo λ(n) to obtain a smaller equivalent exponent.
Since any common factors of (p − 1) and (q − 1) are present in the factorisation of n − 1 = pq − 1 = (p − 1)(q − 1) + (p − 1) + (q − 1), it is recommended that (p − 1) and (q − 1) have only very small common factors, if any, besides the necessary 2.
Note: The authors of the original RSA paper carry out the key generation by choosing d and then computing e as the modular multiplicative inverse of d modulo φ(n), whereas most current implementations of RSA, such as those following PKCS#1, do the reverse (choose e and compute d). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.
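The key-generation steps above can be sketched in Python with the small example primes used later in this article (real keys use primes hundreds of digits long, found with a primality test); here `pow(e, -1, lam)` computes the modular inverse via the extended Euclidean algorithm (Python 3.8+).

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy key generation (insecure sizes, for illustration only)
p, q = 61, 53
n = p * q                       # modulus: part of the public key
lam = lcm(p - 1, q - 1)         # Carmichael's lambda(n) = lcm(p-1, q-1)
e = 17                          # 1 < e < lam and gcd(e, lam) == 1
assert gcd(e, lam) == 1
d = pow(e, -1, lam)             # private exponent: d*e ≡ 1 (mod lam)
print(n, lam, d)                # 3233 780 413
```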
=== Key distribution ===
Suppose that Bob wants to send information to Alice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message.
To enable Bob to send his encrypted messages, Alice transmits her public key (n, e) to Bob via a reliable, but not necessarily secret, route. Alice's private key (d) is never distributed.
=== Encryption ===
After Bob obtains Alice's public key, he can send a message M to Alice.
To do it, he first turns M (strictly speaking, the un-padded plaintext) into an integer m (strictly speaking, the padded plaintext), such that 0 ≤ m < n by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c, using Alice's public key e, corresponding to

c ≡ m^e (mod n).

This can be done reasonably quickly, even for very large numbers, using modular exponentiation. Bob then transmits c to Alice. Note that at least nine values of m will yield a ciphertext c equal to m, but this is very unlikely to occur in practice.
=== Decryption ===
Alice can recover m from c by using her private key exponent d by computing

c^d ≡ (m^e)^d ≡ m (mod n).
Given m, she can recover the original message M by reversing the padding scheme.
=== Example ===
Here is an example of RSA encryption and decryption:
Choose two distinct prime numbers, such as p = 61 and q = 53.
Compute n = pq, giving n = 61 × 53 = 3233.
Compute the Carmichael's totient function of the product as λ(n) = lcm(p − 1, q − 1), giving λ(3233) = lcm(60, 52) = 780.
Choose any number 1 < e < 780 that is coprime to 780. Choosing a prime number for e leaves us only to check that e is not a divisor of 780. Let e = 17.
Compute d, the modular multiplicative inverse of e (mod λ(n)), yielding d = 413, as 1 = (17 × 413) mod 780.
The public key is (n = 3233, e = 17). For a padded plaintext message m, the encryption function is c(m) = m^17 mod 3233.
The private key is (n = 3233, d = 413). For an encrypted ciphertext c, the decryption function is m(c) = c^413 mod 3233.
For instance, in order to encrypt m = 65, one calculates c = 65^17 mod 3233 = 2790.
To decrypt c = 2790, one calculates m = 2790^413 mod 3233 = 65.
Both of these calculations can be computed efficiently using the square-and-multiply algorithm for modular exponentiation. In real-life situations the primes selected would be much larger; in our example it would be trivial to factor n = 3233 (obtained from the freely available public key) back to the primes p and q. e, also from the public key, is then inverted to get d, thus acquiring the private key.
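In Python, square-and-multiply modular exponentiation is built into the three-argument `pow`; with the example keys the whole round trip is:

```python
n, e, d = 3233, 17, 413   # toy keys from the example above

m = 65
c = pow(m, e, n)          # encryption: m^e mod n
assert c == 2790
assert pow(c, d, n) == m  # decryption: c^d mod n recovers m
```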
Practical implementations use the Chinese remainder theorem to speed up the calculation using modulus of factors (mod pq using mod p and mod q).
The values dp, dq and qinv, which are part of the private key are computed as follows:
d_p = d mod (p − 1) = 413 mod (61 − 1) = 53,
d_q = d mod (q − 1) = 413 mod (53 − 1) = 49,
q_inv = q^−1 mod p = 53^−1 mod 61 = 38  (since 38 × 53 mod 61 = 1).
Here is how dp, dq and qinv are used for efficient decryption (encryption is efficient by choice of a suitable d and e pair):
m_1 = c^{d_p} mod p = 2790^53 mod 61 = 4,
m_2 = c^{d_q} mod q = 2790^49 mod 53 = 12,
h = (q_inv × (m_1 − m_2)) mod p = (38 × −8) mod 61 = 1,
m = m_2 + h × q = 12 + 1 × 53 = 65.
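The same CRT decryption in Python, with the toy values from the example (real implementations additionally use constant-time arithmetic and blinding to avoid side channels):

```python
p, q, d, c = 61, 53, 413, 2790

d_p = d % (p - 1)              # 413 mod 60 = 53
d_q = d % (q - 1)              # 413 mod 52 = 49
q_inv = pow(q, -1, p)          # 53^-1 mod 61 = 38

m1 = pow(c, d_p, p)            # 2790^53 mod 61 = 4
m2 = pow(c, d_q, q)            # 2790^49 mod 53 = 12
h = (q_inv * (m1 - m2)) % p    # (38 * -8) mod 61 = 1
m = m2 + h * q                 # 12 + 1 * 53 = 65
print(m)
```

The two exponentiations work modulo the half-size factors p and q, which is why this is faster than a single exponentiation modulo n.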
=== Signing messages ===
Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message.
Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of d (modulo n) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of e (modulo n) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent.
This works because of exponentiation rules:
h = hash(m),
(h^e)^d = h^{ed} = h^{de} = (h^d)^e ≡ h (mod n).
Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to:
Decrypt a message only intended for the recipient, which may be encrypted by anyone having the public key (asymmetric encrypted transport).
Encrypt a message which may be decrypted by anyone, but which can only be encrypted by one person; this provides a digital signature.
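A toy sketch of sign/verify with the example key pair. This is for illustration only: real RSA signatures pad the hash (e.g. RSASSA-PSS) rather than reducing a raw hash modulo n as done here, and the key sizes are far too small for any real use.

```python
import hashlib

n, e, d = 3233, 17, 413                  # toy keys from the example

def sign(message: bytes) -> int:
    # Toy scheme: reduce the hash mod n, then apply the private exponent
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h     # apply the public exponent

sig = sign(b"hello")
assert verify(b"hello", sig)
```

Verification succeeds because (h^d)^e ≡ h (mod n) for any h in [0, n), per the exponentiation rule above.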
== Proofs of correctness ==
=== Proof using Fermat's little theorem ===
The proof of the correctness of RSA is based on Fermat's little theorem, stating that ap − 1 ≡ 1 (mod p) for any integer a and prime p, not dividing a.
We want to show that (m^e)^d ≡ m (mod pq) for every integer m when p and q are distinct prime numbers and e and d are positive integers satisfying ed ≡ 1 (mod λ(pq)).
Since λ(pq) = lcm(p − 1, q − 1) is, by construction, divisible by both p − 1 and q − 1, we can write ed − 1 = h(p − 1) = k(q − 1) for some nonnegative integers h and k.
To check whether two numbers, such as med and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately.
To show med ≡ m (mod p), we consider two cases:
If m ≡ 0 (mod p), m is a multiple of p. Thus med is a multiple of p. So med ≡ 0 ≡ m (mod p).
If m ≢ 0 (mod p), then m^{ed} = m^{ed−1}m = m^{h(p−1)}m = (m^{p−1})^h m ≡ 1^h m ≡ m (mod p), where we used Fermat's little theorem to replace m^{p−1} mod p with 1.
The verification that med ≡ m (mod q) proceeds in a completely analogous way:
If m ≡ 0 (mod q), med is a multiple of q. So med ≡ 0 ≡ m (mod q).
If m ≢ 0 (mod q), then m^{ed} = m^{ed−1}m = m^{k(q−1)}m = (m^{q−1})^k m ≡ 1^k m ≡ m (mod q).
This completes the proof that, for any integer m, and integers e, d such that ed ≡ 1 (mod λ(pq)),
(m^e)^d ≡ m (mod pq).
=== Proof using Euler's theorem ===
Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem.
We want to show that med ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have
m^{ed} = m^{1+hφ(n)} = m(m^{φ(n)})^h ≡ m(1)^h ≡ m (mod n),
where the second-last congruence follows from Euler's theorem.
More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that m^λ(n) ≡ 1 (mod n) for all m relatively prime to n.
When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) numbers have this property), but even in this case, the desired congruence is still true. Either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof.
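The Euler-theorem argument can likewise be exercised on toy numbers. In this sketch the key values (p = 11, q = 19, e = 7) are assumptions for illustration; note that the final check covers even the m that share a factor with n, as discussed above:

```python
# Sketch of the Euler-theorem argument with toy numbers.
from math import gcd

p, q = 11, 19
n = p * q
phi = (p - 1) * (q - 1)
e = 7
d = pow(e, -1, phi)
h = (e * d - 1) // phi         # ed = 1 + h·φ(n)

coprime = [m for m in range(1, n) if gcd(m, n) == 1]
# Euler: m^φ(n) ≡ 1 (mod n) for m coprime to n, hence m^ed ≡ m.
assert all(pow(m, phi, n) == 1 for m in coprime)
# The congruence m^ed ≡ m in fact holds for every m, as argued above.
assert all(pow(m, e * d, n) == m % n for m in range(n))
print("checked; h =", h)
```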
== Padding ==
=== Attacks against plain RSA ===
There are a number of attacks against plain RSA as described below.
When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, ciphertexts can be decrypted easily by taking the eth root of the ciphertext over the integers.
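The low-exponent case above admits a direct demonstration. This is a sketch under toy assumptions: the modulus is a made-up product of two small primes, and the message is deliberately small enough that m^3 < n:

```python
# Small-exponent attack sketch: when m^e < n, the ciphertext equals m^e
# over the integers, so an exact integer e-th root recovers m.
def iroot(k, x):
    """Integer k-th root of x by bisection."""
    lo, hi = 0, 1 << (x.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = 1000003 * 1000033           # toy modulus, never needs to be factored here
m = 1234                        # small message, m**3 < n
c = pow(m, e, n)                # equals m**3 exactly since m**3 < n
assert c == m ** e
recovered = iroot(e, c)
print("recovered:", recovered)
```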
If the same clear-text message is sent to e or more recipients in an encrypted way, and the receivers share the same exponent e, but different p, q, and therefore n, then it is easy to decrypt the original clear-text message via the Chinese remainder theorem. Johan Håstad noticed that this attack is possible even if the clear texts are not equal, but the attacker knows a linear relation between them. This attack was later improved by Don Coppersmith (see Coppersmith's attack).
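The broadcast scenario can be sketched with e = 3 and three toy moduli (all values here are illustrative assumptions): CRT combines the three ciphertexts into m³ over the integers, after which a cube root recovers m.

```python
# Håstad broadcast sketch: the same m encrypted under e = 3 with three
# pairwise-coprime moduli is recovered via CRT plus an integer cube root.
def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    N = 1
    for mod in moduli:
        N *= mod
    x = 0
    for r, mod in zip(residues, moduli):
        Ni = N // mod
        x += r * Ni * pow(Ni, -1, mod)
    return x % N, N

def icbrt(x):
    """Integer cube root by bisection."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
moduli = [101 * 103, 107 * 109, 113 * 127]   # toy, pairwise coprime
m = 2000                                      # smaller than every modulus
cts = [pow(m, e, n) for n in moduli]
x, N = crt(cts, moduli)                       # x = m³ mod N, and m³ < N
assert x == m ** 3
print("recovered:", icbrt(x))
```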
Because RSA encryption is a deterministic encryption algorithm (i.e., has no random component) an attacker can successfully launch a chosen-plaintext attack against the cryptosystem, by encrypting likely plaintexts under the public key and testing whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other, even if the attacker knows (or has chosen) the corresponding plaintexts. RSA without padding is not semantically secure.
RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts. That is, m1^e·m2^e ≡ (m1·m2)^e (mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c ≡ m^e (mod n) may ask the holder of the private key d to decrypt an unsuspicious-looking ciphertext c′ ≡ c·r^e (mod n) for some value r chosen by the attacker. Because of the multiplicative property, c′ is the encryption of mr (mod n). Hence, if the attacker is successful with the attack, they will learn mr (mod n), from which they can derive the message m by multiplying mr with the modular inverse of r modulo n.
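The multiplicative property can be walked through concretely. This sketch uses the small toy key p = 61, q = 53, e = 17 (an assumption for illustration, far too small for real use):

```python
# Chosen-ciphertext sketch: decrypting c' = c·r^e mod n yields m·r,
# from which m follows by multiplying with r⁻¹ mod n.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, 780)                 # 780 = lcm(60, 52)
m = 65
c = pow(m, e, n)

r = 19                              # attacker's blinding value, gcd(r, n) = 1
c_blind = (c * pow(r, e, n)) % n    # looks unrelated to c
mr = pow(c_blind, d, n)             # victim decrypts: obtains m·r mod n
assert mr == (m * r) % n
m_recovered = (mr * pow(r, -1, n)) % n
print("recovered:", m_recovered)
```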
Given the private exponent d, one can efficiently factor the modulus n = pq. And given factorization of the modulus n = pq, one can obtain any private key (d', n) generated against a public key (e', n).
=== Padding schemes ===
To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al. showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS).
Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two USA patents on PSS were granted (U.S. patent 6,266,771 and U.S. patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents. Note that using different RSA key pairs for encryption and signing is potentially more secure.
== Security and practical considerations ==
=== Using the Chinese remainder algorithm ===
For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use for decryption and signing the following optimization based on the Chinese remainder theorem. The following values are precomputed and stored as part of the private key:
p and q – the primes from the key generation,
d_P = d mod (p − 1),
d_Q = d mod (q − 1),
q_inv = q^(−1) mod p.
These values allow the recipient to compute the exponentiation m = c^d (mod pq) more efficiently as follows:
m_1 = c^(d_P) mod p,
m_2 = c^(d_Q) mod q,
h = q_inv·(m_1 − m_2) mod p,
m = m_2 + h·q.
This is more efficient than computing exponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
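The four steps above translate directly into code. This sketch follows the precomputed values described in this section; the key (p = 61, q = 53, e = 17) is a toy assumption for illustration:

```python
# CRT decryption following the precomputed private-key values.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, 780)            # 780 = lcm(p−1, q−1)

# Precomputed as part of the private key
dP = d % (p - 1)
dQ = d % (q - 1)
qInv = pow(q, -1, p)

def decrypt_crt(c):
    m1 = pow(c, dP, p)         # exponentiation with smaller exponent/modulus
    m2 = pow(c, dQ, q)
    h = (qInv * (m1 - m2)) % p
    return m2 + h * q

m = 65
c = pow(m, e, n)
assert decrypt_crt(c) == pow(c, d, n) == m
print("plaintext:", decrypt_crt(c))
```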
=== Integer factorization and the RSA problem ===
The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.
The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key, and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem.
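The factoring-breaks-RSA chain can be shown end to end on a tiny key. This is a sketch; the toy public key (n = 3233, e = 17) is an illustrative assumption, and trial division stands in for the serious factoring algorithms the text discusses:

```python
# Sketch: factor a toy modulus, then derive d from e and lcm(p−1, q−1).
from math import gcd

n, e = 3233, 17                # toy public key

# Factor n by trial division (feasible only because n is tiny).
p = next(f for f in range(2, n) if n % f == 0)
q = n // p
assert p * q == n

lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
d = pow(e, -1, lam)

# The recovered d decrypts correctly.
c = pow(65, e, n)
assert pow(c, d, n) == 65
print("p, q, d =", p, q, d)
```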
The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months. By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just under 5 gigabytes of disk storage and about 2.5 gigabytes of RAM were required for the sieving process.
Rivest, Shamir, and Adleman noted that Miller has shown that – assuming the truth of the extended Riemann hypothesis – finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference). However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.
As of 2020, the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250). Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010. As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits. It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing.
If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. 512-bit keys were shown to be practically breakable in 1999, when RSA-155 was factored using several hundred computers; such keys can now be factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011. A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.
In 1994, Peter Shor showed that a quantum computer – if one could ever be practically created for the purpose – would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.
=== Faulty key generation ===
Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes.
The numbers p and q should not be "too close", lest the Fermat factorization for n be successful. If p − q is less than 2n^(1/4) (n = p·q, which even for "small" 1024-bit values of n is about 3×10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded.
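The "too close" failure mode is easy to demonstrate. In this sketch the deliberately close primes are toy assumptions; Fermat's method writes n = a² − b² with a near √n and succeeds after very few steps:

```python
# Fermat-factorization sketch: close primes p, q make n = a² − b² easy.
from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b   # p = a − b, q = a + b
        a += 1

p, q = 10007, 10009              # deliberately close primes
n = p * q
f1, f2 = fermat_factor(n)        # succeeds on the very first candidate a
assert {f1, f2} == {p, q}
print("factors:", f1, f2)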
It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e.
There is no known attack against small public exponents such as e = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA specifically if the public exponent e is small and if the encrypted message is short and not padded. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction.
In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.
=== Importance of strong random number generation ===
A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.
They exploited a weakness unique to cryptosystems based on integer factorization. If n = pq is one public key, and n′ = p′q′ is another, then if by chance p = p′ (but q ≠ q′), then a simple computation of gcd(n, n′) = p factors both n and n′, totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently.
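The shared-prime weakness can be sketched in a few lines; the primes below are toy assumptions standing in for the 1024-bit primes of a real key:

```python
# Shared-prime sketch: two moduli that accidentally share a factor are
# both broken by a single gcd computation.
from math import gcd

p_shared = 104729              # prime accidentally shared by both keys
n1 = p_shared * 104723
n2 = p_shared * 104717

g = gcd(n1, n2)                # one gcd exposes the common prime
assert g == p_shared
assert n1 // g == 104723 and n2 // g == 104717
print("shared prime:", g)
```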
Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n' they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size.
Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially, and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise or atmospheric noise from a radio receiver tuned between stations should solve the problem.
Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.
=== Timing attacks ===
Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver). This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing cd (mod n), Alice first chooses a secret random value r and computes (rec)d (mod n). The result of this computation, after applying Euler's theorem, is rcd (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
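The blinding scheme described above can be sketched directly. The toy key (p = 61, q = 53, e = 17) is an assumption for illustration; the point is that the exponentiation operates on r^e·c rather than on the attacker-chosen c:

```python
# Blinding sketch: decrypt r^e·c instead of c, then strip r. The modular
# exponentiation no longer depends directly on the input ciphertext.
import secrets
from math import gcd

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, 780)            # 780 = lcm(60, 52)

def blinded_decrypt(c):
    while True:
        r = secrets.randbelow(n - 2) + 2   # fresh r per ciphertext
        if gcd(r, n) == 1:
            break
    blinded = (pow(r, e, n) * c) % n
    m_r = pow(blinded, d, n)               # equals r·m mod n
    return (m_r * pow(r, -1, n)) % n       # remove r by its inverse

m = 123
c = pow(m, e, n)
assert blinded_decrypt(c) == m
print("plaintext:", blinded_decrypt(c))
```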
=== Adaptive chosen-ciphertext attacks ===
In 1998, Daniel Bleichenbacher described the first practical adaptive chosen-ciphertext attack against RSA-encrypted messages using the PKCS #1 v1.5 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Sockets Layer protocol and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks.
A variant of this attack, dubbed "BERserk", came back in 2014. It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome.
=== Side-channel analysis attacks ===
A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implement simultaneous multithreading (SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors.
Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis", the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.
A power-fault attack on RSA implementations was described in 2010. The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.
=== Tricky implementation ===
There are many details to keep in mind in order to implement RSA securely (strong PRNG, acceptable public exponent, etc.). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible.
== Implementations ==
Some cryptography libraries that provide support for RSA include:
Botan
Bouncy Castle
cryptlib
Crypto++
Libgcrypt
Nettle
OpenSSL
wolfCrypt
GnuTLS
mbed TLS
LibreSSL
== See also ==
Acoustic cryptanalysis
Computational complexity theory
Diffie–Hellman key exchange
Digital Signature Algorithm
Elliptic-curve cryptography
Key exchange
Key management
Key size
Public-key cryptography
Rabin cryptosystem
Trapdoor function
== Notes ==
== References ==
== Further reading ==
Menezes, Alfred; van Oorschot, Paul C.; Vanstone, Scott A. (October 1996). Handbook of Applied Cryptography. CRC Press. ISBN 978-0-8493-8523-0.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 881–887. ISBN 978-0-262-03293-3.
== External links ==
The Original RSA Patent as filed with the U.S. Patent Office by Rivest; Ronald L. (Belmont, MA), Shamir; Adi (Cambridge, MA), Adleman; Leonard M. (Arlington, MA), December 14, 1977, U.S. patent 4,405,829.
RFC 8017: PKCS #1: RSA Cryptography Specifications Version 2.2
Explanation of RSA using colored lamps on YouTube
Thorough walk through of RSA
Prime Number Hide-And-Seek: How the RSA Cipher Works
Onur Aciicmez, Cetin Kaya Koc, Jean-Pierre Seifert: On the Power of Simple Branch Prediction Analysis | Wikipedia/RSA_(algorithm) |
Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them.
Specifically, each logical system produces a set of queries expressible in it. The queries – when restricted to finite structures – correspond to the computational problems of traditional complexity theory.
The first main result of descriptive complexity was Fagin's theorem, shown by Ronald Fagin in 1974. It established that NP is precisely the set of languages expressible by sentences of existential second-order logic; that is, second-order logic excluding universal quantification over relations, functions, and subsets. Many other classes were later characterized in such a manner.
== The setting ==
When we use the logic formalism to describe a computational problem, the input is a finite structure, and the elements of that structure are the domain of discourse. Usually the input is either a string (of bits or over an alphabet) and the elements of the logical structure represent positions of the string, or the input is a graph and the elements of the logical structure represent its vertices. The length of the input will be measured by the size of the respective structure.
Whatever the structure is, we can assume that there are relations that can be tested, for example "E(x, y) is true if and only if there is an edge from x to y" (in case of the structure being a graph), or "P(n) is true if and only if the nth letter of the string is 1". These relations are the predicates for the first-order logic system. We also have constants, which are special elements of the respective structure, for example if we want to check reachability in a graph, we will have to choose two constants s (start) and t (terminal).
In descriptive complexity theory we often assume that there is a total order over the elements and that we can check equality between elements. This lets us consider elements as numbers: the element x represents the number n if and only if there are (n − 1) elements y with y < x. Thanks to this we may also have the primitive predicate "bit", where bit(x, k) is true if and only if the kth bit of the binary expansion of x is 1. (We can replace addition and multiplication by ternary relations such that plus(x, y, z) is true if and only if x + y = z, and times(x, y, z) is true if and only if x·y = z.)
== Overview of characterisations of complexity classes ==
If we restrict ourselves to ordered structures with a successor relation and basic arithmetical predicates, then we get the following characterisations:
First-order logic defines the class AC0, the languages recognized by polynomial-size circuits of bounded depth, which equals the languages recognized by a concurrent random access machine in constant time.
First-order logic augmented with symmetric or deterministic transitive closure operators yields L, the problems solvable in logarithmic space.
First-order logic with a transitive closure operator yields NL, the problems solvable in nondeterministic logarithmic space.
First-order logic with a least fixed point operator gives P, the problems solvable in deterministic polynomial time.
Existential second-order logic yields NP.
Universal second-order logic (excluding existential second-order quantification) yields co-NP.
Second-order logic corresponds to the polynomial hierarchy PH.
Second-order logic with a transitive closure (commutative or not) yields PSPACE, the problems solvable in polynomial space.
Second-order logic with a least fixed point operator gives EXPTIME, the problems solvable in exponential time.
HO, the complexity class defined by higher-order logic, is equal to ELEMENTARY.
== Sub-polynomial time ==
=== FO without any operators ===
In circuit complexity, first-order logic with arbitrary predicates can be shown to be equal to AC0, the first class in the AC hierarchy. Indeed, there is a natural translation from FO's symbols to nodes of circuits, with ∀ and ∃ translated to ∧ and ∨ gates of size n. First-order logic in a signature with arithmetical predicates characterises the restriction of the AC0 family of circuits to those constructible in alternating logarithmic time. First-order logic in a signature with only the order relation corresponds to the set of star-free languages.
=== Transitive closure logic ===
First-order logic gains substantially in expressive power when it is augmented with an operator that computes the transitive closure of a binary relation. The resulting transitive closure logic is known to characterise non-deterministic logarithmic space (NL) on ordered structures. This was used by Immerman to show that NL is closed under complement (i. e. that NL = co-NL).
When restricting the transitive closure operator to deterministic transitive closure, the resulting logic exactly characterises logarithmic space on ordered structures.
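The transitive-closure operator that these logics add can be computed directly. A minimal sketch on an assumed toy graph, showing that reachability queries such as "t is reachable from s" (the canonical NL problem) become expressible:

```python
# Compute the transitive closure of a binary edge relation by repeatedly
# joining it with itself until no new pairs appear.
def transitive_closure(edges):
    tc = set(edges)
    while True:
        new = {(a, d) for (a, b) in tc for (c, d) in tc if b == c} - tc
        if not new:
            return tc
        tc |= new

edges = {(0, 1), (1, 2), (2, 3)}
tc = transitive_closure(edges)
assert (0, 3) in tc            # vertex 3 is reachable from vertex 0
assert (3, 0) not in tc
print(sorted(tc))
```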
=== Second-order Krom formulae ===
On structures that have a successor function, NL can also be characterised by second-order Krom formulae.
SO-Krom is the set of Boolean queries definable with second-order formulae in conjunctive normal form such that the first-order quantifiers are universal and the quantifier-free part of the formula is in Krom form, which means that the first-order formula is a conjunction of disjunctions, and in each "disjunction" there are at most two variables. Every second-order Krom formula is equivalent to an existential second-order Krom formula.
SO-Krom characterises NL on structures with a successor function.
== Polynomial time ==
On ordered structures, first-order least fixed-point logic captures PTIME:
=== First-order least fixed-point logic ===
FO[LFP] is the extension of first-order logic by a least fixed-point operator, which expresses the fixed-point of a monotone expression. This augments first-order logic with the ability to express recursion. The Immerman–Vardi theorem, shown independently by Immerman and Vardi, shows that FO[LFP] characterises PTIME on ordered structures.
As of 2022, it is still open whether there is a natural logic characterising PTIME on unordered structures.
The Abiteboul–Vianu theorem states that FO[LFP] = FO[PFP] on all structures if and only if FO[LFP] = FO[PFP] on ordered structures; hence if and only if P = PSPACE. This result has been extended to other fixpoints.
=== Second-order Horn formulae ===
In the presence of a successor function, PTIME can also be characterised by second-order Horn formulae.
SO-Horn is the set of Boolean queries definable with SO formulae in conjunctive normal form such that the first-order quantifiers are all universal and the quantifier-free part of the formula is in Horn form, which means that it is a big AND of ORs, in each of which every variable except possibly one is negated.
This class is equal to P on structures with a successor function.
Those formulae can be transformed to prenex formulas in existential second-order Horn logic.
== Non-deterministic polynomial time ==
=== Fagin's theorem ===
Ronald Fagin's 1974 proof that the complexity class NP was characterised exactly by those classes of structures axiomatizable in existential second-order logic was the starting point of descriptive complexity theory.
Since the complement of an existential formula is a universal formula, it follows immediately that co-NP is characterized by universal second-order logic.
SO, unrestricted second-order logic, is equal to the Polynomial hierarchy PH. More precisely, we have the following generalisation of Fagin's theorem: The set of formulae in prenex normal form where existential and universal quantifiers of second order alternate k times characterise the kth level of the polynomial hierarchy.
Unlike most other characterisations of complexity classes, Fagin's theorem and its generalisation do not presuppose a total ordering on the structures. This is because existential second-order logic is itself sufficiently expressive to refer to the possible total orders on a structure using second-order variables.
== Beyond NP ==
=== Partial fixed point is PSPACE ===
The class of all problems computable in polynomial space, PSPACE, can be characterised by augmenting first-order logic with a more expressive partial fixed-point operator.
Partial fixed-point logic, FO[PFP], is the extension of first-order logic with a partial fixed-point operator, which expresses the fixed-point of a formula if there is one and returns 'false' otherwise.
Partial fixed-point logic characterises PSPACE on ordered structures.
=== Transitive closure is PSPACE ===
Second-order logic can be extended by a transitive closure operator in the same way as first-order logic, resulting in SO[TC]. The TC operator can now also take second-order variables as argument. SO[TC] characterises PSPACE. Since ordering can be referenced in second-order logic, this characterisation does not presuppose ordered structures.
== Elementary functions ==
The time complexity class ELEMENTARY of elementary functions can be characterised by HO, the complexity class of structures that can be recognized by formulas of higher-order logic. Higher-order logic is an extension of first-order logic and second-order logic with higher-order quantifiers. There is a relation between the ith order and non-deterministic algorithms whose time is bounded by i − 1 levels of exponentials.
=== Definition ===
We define higher-order variables. A variable of order i > 1 has an arity k and represents any set of k-tuples of elements of order i − 1. They are usually written in upper-case and with a natural number as exponent to indicate the order. Higher-order logic is the set of first-order formulae where we add quantification over higher-order variables; hence we will use the terms defined in the FO article without defining them again.
HO^i is the set of formulae with variables of order at most i. HO^i_j is the subset of formulae of the form φ = ∃X̄₁^i ∀X̄₂^i … QX̄_j^i ψ, where Q is a quantifier and QX̄^i means that X̄^i is a tuple of variables of order i under the same quantification. So HO^i_j is the set of formulae with j alternations of quantifiers of order i, beginning with ∃, followed by a formula of order i − 1.
Using the standard notation of tetration, exp₂^0(x) = x and exp₂^(i+1)(x) = 2^(exp₂^i(x)); that is, exp₂^(i+1)(x) = 2^2^⋯^2^x is a tower of exponentials containing i + 1 occurrences of 2.
=== Normal form ===
Every formula of order i is equivalent to a formula in prenex normal form, where we first write quantification over variables of order i and then a formula of order i − 1 in normal form.
=== Relation to complexity classes ===
HO is equal to the class ELEMENTARY of elementary functions. To be more precise, HO^i_0 = NTIME(exp₂^(i−2)(n^O(1))), meaning a tower of (i − 2) 2s, ending with n^c, where c is a constant. A special case of this is that ∃SO = HO^2_0 = NTIME(n^O(1)) = NP, which is exactly Fagin's theorem. Using oracle machines in the polynomial hierarchy, HO^i_j = NTIME(exp₂^(i−2)(n^O(1))^(Σ_j^P)).
== Notes ==
== References ==
Immerman, Neil (1999). Descriptive complexity. Springer. ISBN 0-387-98600-6. OCLC 901297152.
== External links ==
Neil Immerman's descriptive complexity page, including a diagram | Wikipedia/Descriptive_complexity_theory |
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.
For example, consider the problem of minimizing the function x^2 + y^4 with respect to the variables x and y, subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x^2 + y^4.
In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices.
Constraint satisfaction is the process of finding a point in the feasible region.
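Membership in the feasible set of the example above reduces to checking each constraint; a minimal sketch (function names are ours):

```python
def is_feasible(x, y):
    # Constraints of the example: 1 <= x <= 10 and 5 <= y <= 12.
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x**2 + y**4

# (1, 5) is feasible and, since the objective is increasing in both
# variables over the feasible set, it is also the minimizer here.
print(is_feasible(1, 5), objective(1, 5))  # True 626
```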
== Convex feasible set ==
A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be minimized, it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum.
== No feasible set ==
If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be infeasible.
== Bounded and unbounded feasible sets ==
Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints.
In linear programming problems with n variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1 (as illustrated by the above example).
If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {x ≥ 0, y ≥ 0}, then the problem of maximizing x + y has no optimum since any candidate solution can be improved upon by increasing x or y; yet if the problem is to minimize x + y, then there is an optimum (specifically at (x, y) = (0, 0)).
== Candidate solution ==
In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem—it is simply in the set that satisfies all constraints; that is, it is in the set of feasible solutions. Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates.
The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space. This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set.
=== Genetic algorithm ===
In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm.
=== Calculus ===
In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions can often be ruled out by the second derivative test, the satisfaction of which is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum.
In taking antiderivatives of monomials of the form x^n, the candidate solution using Cavalieri's quadrature formula would be x^{n+1}/(n+1) + C. This candidate solution is in fact correct except when n = −1.
=== Linear programming ===
In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum.
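Since an optimum of a linear program (when one exists) is attained at a vertex, a tiny instance can be solved by enumerating the vertices of the feasible polytope directly — a brute-force sketch of the idea behind vertex-based methods, not the simplex method itself (the constraint data are ours, taken from the bounded example above):

```python
from itertools import combinations

# Constraints a1*x + a2*y <= b written as ((a1, a2), b):
# the bounded example set {x >= 0, y >= 0, x + 2y <= 4}.
constraints = [((-1, 0), 0), ((0, -1), 0), ((1, 2), 4)]

def vertices(cons, eps=1e-9):
    # A vertex is the intersection of two constraint boundaries
    # that also satisfies all remaining constraints.
    for ((a1, a2), b1), ((c1, c2), b2) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < eps:
            continue  # parallel boundaries: no intersection point
        x = (b1 * c2 - b2 * a2) / det
        y = (a1 * b2 - c1 * b1) / det
        if all(p * x + q * y <= b + eps for (p, q), b in cons):
            yield (x, y)

# Maximize x + y over the polytope: the best vertex wins.
best = max(vertices(constraints), key=lambda v: v[0] + v[1])
print(best)  # (4.0, 0.0)
```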
== References ==
Shor's algorithm is a quantum algorithm for finding the prime factors of an integer. It was developed in 1994 by the American mathematician Peter Shor. It is one of the few known quantum algorithms with compelling potential applications and strong evidence of superpolynomial speedup compared to the best known classical (non-quantum) algorithms. On the other hand, factoring numbers of practical significance requires far more qubits than will be available in the near future. Another concern is that noise in quantum circuits may undermine results, requiring additional qubits for quantum error correction.
Shor proposed multiple similar algorithms for solving the factoring problem, the discrete logarithm problem, and the period-finding problem. "Shor's algorithm" usually refers to the factoring algorithm, but may refer to any of the three algorithms. The discrete logarithm algorithm and the factoring algorithm are instances of the period-finding algorithm, and all three are instances of the hidden subgroup problem.
On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time, meaning the time taken is polynomial in log N. It takes quantum gates of order O((log N)^2 (log log N)(log log log N)) using fast multiplication, or even O((log N)^2 (log log N)) utilizing the asymptotically fastest multiplication algorithm currently known due to Harvey and van der Hoeven, thus demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is consequently in the complexity class BQP. This is significantly faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time: O(exp(1.9 (log N)^{1/3} (log log N)^{2/3})).
== Feasibility and impact ==
If a quantum computer with a sufficient number of qubits could operate without succumbing to quantum noise and other quantum-decoherence phenomena, then Shor's algorithm could be used to break public-key cryptography schemes, such as
The RSA scheme
The finite-field Diffie–Hellman key exchange
The elliptic-curve Diffie–Hellman key exchange
RSA can be broken if factoring large integers is computationally feasible. As far as is known, this is not possible using classical (non-quantum) computers; no classical algorithm is known that can factor integers in polynomial time. However, Shor's algorithm shows that factoring integers is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers, and for the study of new quantum-computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.
=== Physical implementation ===
Given the high error rates of contemporary quantum computers and too few qubits to use quantum error correction, laboratory demonstrations obtain correct results only in a fraction of attempts.
In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with seven qubits. After IBM's implementation, two independent groups implemented Shor's algorithm using photonic qubits, emphasizing that multi-qubit entanglement was observed when running the Shor's algorithm circuits. In 2012, the factorization of 15 was performed with solid-state qubits. Later in 2012, the factorization of 21 was achieved. In 2016, the factorization of 15 was performed again using trapped-ion qubits with a recycling technique. In 2019, an attempt was made to factor the number 35 using Shor's algorithm on an IBM Q System One, but the algorithm failed because of accumulating errors. However, all these demonstrations have compiled the algorithm by making use of prior knowledge of the answer, and some have even oversimplified the algorithm in a way that makes it equivalent to coin flipping. Furthermore, attempts using quantum computers with other algorithms have been made. However, these algorithms are similar to classical brute-force checking of factors, so unlike Shor's algorithm, they are not expected to ever perform better than classical factoring algorithms.
Theoretical analyses of Shor's algorithm assume a quantum computer free of noise and errors. However, near-term practical implementations will have to deal with such undesired phenomena (when more qubits are available, quantum error correction can help). In 2023, Jin-Yi Cai showed that in the presence of noise, Shor's algorithm fails asymptotically almost surely for large semiprimes that are products of two primes in OEIS sequence A073024. These primes p have the property that p − 1 has a prime factor larger than p^{2/3}, and they have a positive density in the set of all primes. Hence error correction will be needed to be able to factor all numbers with Shor's algorithm.
== Algorithm ==
The problem that we are trying to solve is: given an odd composite number N, find its integer factors.
To achieve this, Shor's algorithm consists of two parts:
A classical reduction of the factoring problem to the problem of order-finding. This reduction is similar to that used for other factoring algorithms, such as the quadratic sieve.
A quantum algorithm to solve the order-finding problem.
=== Classical reduction ===
A complete factoring algorithm is possible if we're able to efficiently factor arbitrary N into just two integers p and q greater than 1, since if either p or q are not prime, then the factoring algorithm can in turn be run on those until only primes remain.
A basic observation is that, using Euclid's algorithm, we can always compute the GCD between two integers efficiently. In particular, this means we can check efficiently whether N is even, in which case 2 is trivially a factor. Let us thus assume that N is odd for the remainder of this discussion. Afterwards, we can use efficient classical algorithms to check whether N is a prime power. For prime powers, efficient classical factorization algorithms exist, hence the rest of the quantum algorithm may assume that N is not a prime power.
If those easy cases do not produce a nontrivial factor of N, the algorithm proceeds to handle the remaining case. We pick a random integer 2 ≤ a < N. A possible nontrivial divisor of N can be found by computing gcd(a, N), which can be done classically and efficiently using the Euclidean algorithm. If this produces a nontrivial factor (meaning gcd(a, N) ≠ 1), the algorithm is finished, and the other nontrivial factor is N / gcd(a, N). If a nontrivial factor was not identified, then this means that N and the choice of a are coprime, so a is contained in the multiplicative group of integers modulo N and has a multiplicative inverse modulo N. Thus, a has a multiplicative order r modulo N, meaning a^r ≡ 1 (mod N), and r is the smallest positive integer satisfying this congruence.
The quantum subroutine finds r. It can be seen from the congruence that N divides a^r − 1, written N ∣ a^r − 1. This can be factored using the difference of squares:
N ∣ (a^{r/2} − 1)(a^{r/2} + 1).
Since we have factored the expression in this way, the algorithm doesn't work for odd r (because a^{r/2} must be an integer), meaning that the algorithm would have to restart with a new a. Hereafter we can therefore assume that r is even. It cannot be the case that N ∣ a^{r/2} − 1, since this would imply a^{r/2} ≡ 1 (mod N), which would contradictorily imply that r/2 would be the order of a, which was already r. At this point, it may or may not be the case that N ∣ a^{r/2} + 1. If N does not divide a^{r/2} + 1, then this means that we are able to find a nontrivial factor of N. We compute
d = gcd(N, a^{r/2} − 1).
If d = 1, then N ∣ a^{r/2} + 1 was true, a nontrivial factor of N cannot be obtained from this a, and the algorithm must restart with a new a. Otherwise, d is a nontrivial factor of N, with the other being N/d, and the algorithm is finished. For this step, it is also equivalent to compute gcd(N, a^{r/2} + 1); it will produce a nontrivial factor if gcd(N, a^{r/2} − 1) is nontrivial, and will not if it's trivial (where N ∣ a^{r/2} + 1).
The algorithm restated shortly follows: let N be odd, and not a prime power. We want to output two nontrivial factors of N.
Pick a random number 1 < a < N.
Compute K = gcd(a, N), the greatest common divisor of a and N.
If K ≠ 1, then K is a nontrivial factor of N, with the other factor being N/K, and we are done.
Otherwise, use the quantum subroutine to find the order r of a.
If r is odd, then go back to step 1.
Compute g = gcd(N, a^{r/2} + 1). If g is nontrivial, the other factor is N/g, and we're done. Otherwise, go back to step 1.
It has been shown that this is likely to succeed after a few runs. In practice, a single call to the quantum order-finding subroutine is enough to completely factor N with very high probability of success if one uses a more advanced reduction.
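The classical part of the steps above can be sketched in a few lines, with a brute-force classical stand-in for the quantum order-finding subroutine (the quantum speedup lives entirely in `order`, which here is exponential; the function names are ours):

```python
import math
import random

def order(a, n):
    # Classical stand-in for the quantum order-finding subroutine:
    # smallest r > 0 with a**r == 1 (mod n). Exponential in log n.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n):
    # Assumes n is odd, composite, and not a prime power.
    while True:
        a = random.randrange(2, n)
        k = math.gcd(a, n)
        if k != 1:
            return k, n // k           # lucky: a already shares a factor
        r = order(a, n)
        if r % 2 == 1:
            continue                    # odd order: retry with a new a
        g = math.gcd(n, pow(a, r // 2, n) + 1)
        if 1 < g < n:
            return g, n // g            # nontrivial factor found
        # g trivial (a^(r/2) == -1 mod n): retry with a new a

print(sorted(shor_classical_part(15)))  # [3, 5]
```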
=== Quantum order-finding subroutine ===
The goal of the quantum subroutine of Shor's algorithm is, given coprime integers N and 1 < a < N, to find the order r of a modulo N, which is the smallest positive integer such that a^r ≡ 1 (mod N). To achieve this, Shor's algorithm uses a quantum circuit involving two registers. The second register uses n qubits, where n is the smallest integer such that N ≤ 2^n, i.e., n = ⌈log_2 N⌉. The size of the first register determines how accurate of an approximation the circuit produces. It can be shown that using 2n qubits gives sufficient accuracy to find r. The exact quantum circuit depends on the parameters a and N, which define the problem. The following description of the algorithm uses bra–ket notation to denote quantum states, and ⊗ to denote the tensor product, rather than logical AND.
The algorithm consists of two main steps:
Use quantum phase estimation with the unitary U representing the operation of multiplying by a (modulo N), and input state |0⟩^{⊗2n} ⊗ |1⟩ (where the second register is |1⟩ made from n qubits). The eigenvalues of this U encode information about the period, and |1⟩ can be seen to be writable as a sum of its eigenvectors. Thanks to these properties, the quantum phase estimation stage gives as output a random integer of the form (j/r)2^{2n} for random j = 0, 1, ..., r − 1.
Use the continued fractions algorithm to extract the period r from the measurement outcomes obtained in the previous stage. This is a procedure to post-process (with a classical computer) the measurement data obtained from measuring the output quantum states, and retrieve the period.
The connection with quantum phase estimation was not discussed in the original formulation of Shor's algorithm, but was later proposed by Kitaev.
==== Quantum phase estimation ====
In general, the quantum phase estimation algorithm, for any unitary U and eigenstate |ψ⟩ such that U|ψ⟩ = e^{2πiθ}|ψ⟩, sends input states |0⟩|ψ⟩ to output states close to |φ⟩|ψ⟩, where φ is a superposition of integers close to 2^{2n}θ. In other words, it sends each eigenstate |ψ_j⟩ of U to a state containing information close to the associated eigenvalue. For the purposes of quantum order-finding, we employ this strategy using the unitary defined by the action
U|k⟩ = |ak (mod N)⟩ for 0 ≤ k < N, and U|k⟩ = |k⟩ for N ≤ k < 2^n.
The action of U on states |k⟩ with N ≤ k < 2^n is not crucial to the functioning of the algorithm, but needs to be included to ensure that the overall transformation is a well-defined quantum gate. Implementing the circuit for quantum phase estimation with U requires being able to efficiently implement the gates U^{2^j}. This can be accomplished via modular exponentiation, which is the slowest part of the algorithm.
The gate thus defined satisfies U^r = I, which immediately implies that its eigenvalues are the r-th roots of unity ω_r^k = e^{2πik/r}. Furthermore, each eigenvalue ω_r^j has an eigenvector of the form |ψ_j⟩ = r^{−1/2} Σ_{k=0}^{r−1} ω_r^{−kj} |a^k⟩, and these eigenvectors are such that
(1/√r) Σ_{j=0}^{r−1} |ψ_j⟩ = (1/r) Σ_{j=0}^{r−1} Σ_{k=0}^{r−1} ω_r^{jk} |a^k⟩ = |1⟩ + (1/r) Σ_{k=1}^{r−1} (Σ_{j=0}^{r−1} ω_r^{jk}) |a^k⟩ = |1⟩,
where the last identity follows from the geometric series formula, which implies Σ_{j=0}^{r−1} ω_r^{jk} = 0 for k not divisible by r.
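The geometric-series cancellation used in the last step can be checked numerically; a quick sketch:

```python
import cmath

r = 6
# For k not divisible by r, the r-th roots of unity omega_r^{jk} sum to
# zero, which is why only the k = 0 term (the |1> component) survives.
for k in range(1, r):
    s = sum(cmath.exp(2j * cmath.pi * j * k / r) for j in range(r))
    assert abs(s) < 1e-9
print("all cross terms cancel")
```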
Using quantum phase estimation on an input state |0⟩^{⊗2n}|ψ_j⟩ would then return the integer 2^{2n} j/r with high probability. More precisely, the quantum phase estimation circuit sends |0⟩^{⊗2n}|ψ_j⟩ to |φ_j⟩|ψ_j⟩ such that the resulting probability distribution p_k ≡ |⟨k|φ_j⟩|^2 is peaked around k = 2^{2n} j/r, with p_{2^{2n} j/r} ≥ 4/π^2 ≈ 0.4053. This probability can be made arbitrarily close to 1 using extra qubits.
Applying the above reasoning to the input |0⟩^{⊗2n}|1⟩, quantum phase estimation thus results in the evolution
|0⟩^{⊗2n}|1⟩ = (1/√r) Σ_{j=0}^{r−1} |0⟩^{⊗2n}|ψ_j⟩ → (1/√r) Σ_{j=0}^{r−1} |φ_j⟩|ψ_j⟩.
Measuring the first register, we now have a balanced probability 1/r to find each |φ_j⟩, each one giving an integer approximation to 2^{2n} j/r, which can be divided by 2^{2n} to get a decimal approximation for j/r.
==== Continued-fraction algorithm to retrieve the period ====
Then, we apply the continued-fraction algorithm to find integers b and c, where b/c gives the best fraction approximation for the approximation measured from the circuit, with b, c < N and b and c coprime. The number of qubits in the first register, 2n, which determines the accuracy of the approximation, guarantees that b/c = j/r, given that the best approximation from the superposition of |φ_j⟩ was measured (which can be made arbitrarily likely by using extra bits and truncating the output). However, while b and c are coprime, it may be the case that j and r are not coprime. Because of that, b and c may have lost some factors that were in j and r. This can be remedied by rerunning the quantum order-finding subroutine an arbitrary number of times, to produce a list of fraction approximations b_1/c_1, b_2/c_2, ..., b_s/c_s, where s is the number of times the subroutine was run. Each c_k will have different factors taken out of it because the circuit will (likely) have measured multiple different possible values of j. To recover the actual r value, we can take the least common multiple of each c_k: lcm(c_1, c_2, ..., c_s). The least common multiple will be the order r of the original integer a with high probability. In practice, a single run of the quantum order-finding subroutine is in general enough if more advanced post-processing is used.
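Python's `fractions.Fraction.limit_denominator` implements exactly this best-approximation step, so the classical post-processing can be sketched as follows. The measurement values here are illustrative, chosen near j·2^{2n}/r for N = 21 and a = 2, whose order is r = 6:

```python
from fractions import Fraction
from math import lcm

N, two_n = 21, 10                # N = 21, first register of 2n = 10 qubits
M = 2 ** two_n                   # 1024
# Hypothetical measured integers k, each close to j*M/r for the order r = 6:
measurements = [171, 341, 853]   # near 1/6, 2/6 and 5/6 of 1024

# Best fraction b/c approximating k/M with denominator below N, then take
# the lcm of the denominators to restore factors lost when gcd(j, r) > 1
# (e.g. 2/6 collapses to 1/3).
cs = [Fraction(k, M).limit_denominator(N - 1).denominator for k in measurements]
r = lcm(*cs)
print(r)  # 6: the order is recovered
```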
==== Choosing the size of the first register ====
Phase estimation requires choosing the size of the first register to determine the accuracy of the algorithm, and for the quantum subroutine of Shor's algorithm, 2n qubits is sufficient to guarantee that the optimal bitstring measured from phase estimation (meaning the |k⟩ where k/2^{2n} is the most accurate approximation of the phase from phase estimation) will allow the actual value of r to be recovered.
Each |φ_j⟩ before measurement in Shor's algorithm represents a superposition of integers approximating 2^{2n} j/r. Let |k⟩ represent the most optimal integer in |φ_j⟩. The following theorem guarantees that the continued fractions algorithm will recover j/r from k/2^{2n}:
As k is the optimal bitstring from phase estimation, k/2^{2n} is accurate to j/r to within 2n bits. Thus,
|j/r − k/2^{2n}| ≤ 1/2^{2n+1} ≤ 1/(2N^2) ≤ 1/(2r^2),
which implies that the continued fractions algorithm will recover j and r (or j and r with their greatest common divisor taken out).
=== The bottleneck ===
The runtime bottleneck of Shor's algorithm is quantum modular exponentiation, which is by far slower than the quantum Fourier transform and classical pre-/post-processing. There are several approaches to constructing and optimizing circuits for modular exponentiation. The simplest and (currently) most practical approach is to mimic conventional arithmetic circuits with reversible gates, starting with ripple-carry adders. Knowing the base and the modulus of exponentiation facilitates further optimizations. Reversible circuits typically use on the order of n^3 gates for n qubits. Alternative techniques asymptotically improve gate counts by using quantum Fourier transforms, but are not competitive with fewer than 600 qubits owing to high constants.
== Period finding and discrete logarithms ==
Shor's algorithms for the discrete log and the order finding problems are instances of an algorithm solving the period finding problem. All three are instances of the hidden subgroup problem.
=== Shor's algorithm for discrete logarithms ===
Given a group G with order p and generator g ∈ G, suppose we know that x = g^r ∈ G for some r ∈ Z_p, and we wish to compute r, which is the discrete logarithm: r = log_g(x). Consider the abelian group Z_p × Z_p, where each factor corresponds to modular addition of values. Now, consider the function
f: Z_p × Z_p → G; f(a, b) = g^a x^{−b}.
This gives us an abelian hidden subgroup problem, where f corresponds to a group homomorphism. The kernel corresponds to the multiples of (r, 1). So, if we can find the kernel, we can find r. A quantum algorithm for solving this problem exists. This algorithm is, like the factor-finding algorithm, due to Peter Shor, and both are implemented by creating a superposition through using Hadamard gates, followed by implementing f as a quantum transform, followed finally by a quantum Fourier transform. Due to this, the quantum algorithm for computing the discrete logarithm is also occasionally referred to as "Shor's algorithm".
The order-finding problem can also be viewed as a hidden subgroup problem. To see this, consider the group of integers under addition, and for a given a ∈ Z such that a^r = 1, the function
f: Z → Z; f(x) = a^x, f(x + r) = f(x).
For any finite abelian group G, a quantum algorithm exists for solving the hidden subgroup for G in polynomial time.
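The kernel structure underlying the discrete-log reduction can be verified classically on a toy instance (the group and values here are illustrative, not part of the quantum algorithm): f(a, b) = g^a x^{−b} is invariant under shifts by (r, 1), i.e. constant on cosets of the hidden subgroup generated by (r, 1).

```python
# Toy check in the multiplicative group Z_13^* with generator g = 2
# (group order p = 12); take x = g^r with "secret" r = 5.
q, p = 13, 12
g, r = 2, 5
x = pow(g, r, q)
inv_x = pow(x, -1, q)        # modular inverse of x (Python 3.8+)

def f(a, b):
    # f(a, b) = g^a * x^(-b) mod q
    return (pow(g, a, q) * pow(inv_x, b, q)) % q

# Shifting (a, b) by the kernel generator (r, 1) leaves f unchanged,
# since g^(a+r) x^(-(b+1)) = g^a g^r x^(-b) x^(-1) = g^a x^(-b).
assert all(f(a, b) == f((a + r) % p, (b + 1) % p)
           for a in range(p) for b in range(p))
print("f is constant on cosets of <(r, 1)>")
```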
== See also ==
GEECM, a factorization algorithm said to be "often much faster than Shor's"
Grover's algorithm
== References ==
== Further reading ==
Nielsen, Michael A.; Chuang, Isaac L. (2010). Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press. ISBN 978-1-107-00217-3.
Kaye, Phillip; Laflamme, Raymond; Mosca, Michele (2006). An Introduction to Quantum Computing. doi:10.1093/oso/9780198570004.001.0001. ISBN 978-0-19-857000-4.
"Explanation for the man in the street" by Scott Aaronson, "approved" by Peter Shor. (Shor wrote "Great article, Scott! That's the best job of explaining quantum computing to the man on the street that I've seen."). An alternate metaphor for the QFT was presented in one of the comments. Scott Aaronson suggests the following 12 references as further reading (out of "the 10^5000 quantum algorithm tutorials that are already on the web."):
Shor, Peter W. (1997), "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer", SIAM J. Comput., 26 (5): 1484–1509, arXiv:quant-ph/9508027v2, Bibcode:1999SIAMR..41..303S, doi:10.1137/S0036144598347011. Revised version of the original paper by Peter Shor ("28 pages, LaTeX. This is an expanded version of a paper that appeared in the Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, Nov. 20--22, 1994. Minor revisions made January, 1996").
Quantum Computing and Shor's Algorithm, Matthew Hayward's Quantum Algorithms Page, 2005-02-17, imsa.edu, LaTeX2HTML version of the original LaTeX document, also available as PDF or postscript document.
Quantum Computation and Shor's Factoring Algorithm, Ronald de Wolf, CWI and University of Amsterdam, January 12, 1999, 9 page postscript document.
Shor's Factoring Algorithm, Notes from Lecture 9 of Berkeley CS 294–2, dated 4 Oct 2004, 7 page postscript document.
Chapter 6 Quantum Computation Archived 2020-04-30 at the Wayback Machine, 91 page postscript document, Caltech, Preskill, PH229.
Quantum computation: a tutorial by Samuel L. Braunstein.
The Quantum States of Shor's Algorithm, by Neal Young, Last modified: Tue May 21 11:47:38 1996.
III. Breaking RSA Encryption with a Quantum Computer: Shor's Factoring Algorithm, Lecture notes on Quantum computation, Cornell University, Physics 481–681, CS 483; Spring, 2006 by N. David Mermin. Last revised 2006-03-28, 30 page PDF document.
Lavor, C.; Manssur, L. R. U.; Portugal, R. (2003). "Shor's Algorithm for Factoring Large Integers". arXiv:quant-ph/0303175.
Lomonaco, Jr (2000). "Shor's Quantum Factoring Algorithm". arXiv:quant-ph/0010034. This paper is a written version of a one-hour lecture given on Peter Shor's quantum factoring algorithm. 22 pages.
Chapter 20 Quantum Computation, from Computational Complexity: A Modern Approach, Draft of a book: Dated January 2007, Sanjeev Arora and Boaz Barak, Princeton University. Published as Chapter 10 Quantum Computation of Sanjeev Arora, Boaz Barak, "Computational Complexity: A Modern Approach", Cambridge University Press, 2009, ISBN 978-0-521-42426-4
A Step Toward Quantum Computing: Entangling 10 Billion Particles Archived 2011-01-20 at the Wayback Machine, from "Discover Magazine", Dated January 19, 2011.
Josef Gruska - Quantum Computing Challenges also in Mathematics unlimited: 2001 and beyond, Editors Björn Engquist, Wilfried Schmid, Springer, 2001, ISBN 978-3-540-66913-5
== External links ==
Version 1.0.0 of libquantum: contains a C language implementation of Shor's algorithm with their simulated quantum computer library, but the width variable in shor.c should be set to 1 to improve the runtime complexity.
PBS Infinite Series created two videos explaining the math behind Shor's algorithm, "How to Break Cryptography" and "Hacking at Quantum Speed with Shor's Algorithm".
Complete implementation of Shor's algorithm with Classiq | Wikipedia/Shor's_algorithm |
In computer science and computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
Different models of computation give rise to different reasons that an algorithm may be non-deterministic, and different ways to evaluate its performance or correctness:
A concurrent algorithm can perform differently on different runs due to a race condition. This can happen even with a single-threaded algorithm when it interacts with resources external to it. In general, such an algorithm is considered to perform correctly only when all possible runs produce the desired results.
A probabilistic algorithm's behavior depends on a random number generator called by the algorithm. These are subdivided into Las Vegas algorithms, for which (like concurrent algorithms) all runs must produce correct output, and Monte Carlo algorithms which are allowed to fail or produce incorrect results with low probability. The performance of such an algorithm is often measured probabilistically, for instance using an analysis of its expected time.
In computational complexity theory, nondeterminism is often modeled using an explicit mechanism for making a nondeterministic choice, such as in a nondeterministic Turing machine. For these models, a nondeterministic algorithm is considered to perform correctly when, for each input, there exists a run that produces the desired result, even when other runs produce incorrect results. This existential power makes nondeterministic algorithms of this sort more efficient than known deterministic algorithms for many problems. The P versus NP problem encapsulates this conjectured greater efficiency available to nondeterministic algorithms. Algorithms of this sort are used to define complexity classes based on nondeterministic time and nondeterministic space complexity. They may be simulated using nondeterministic programming, a method for specifying nondeterministic algorithms and searching for the choices that lead to a correct run, often using a backtracking search.
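The explicit choice mechanism described above can be simulated deterministically with a backtracking search; the subset-sum instance below is an illustrative example, not one taken from the article:

```python
# Deterministic simulation of nondeterministic choice by backtracking:
# at each step the nondeterministic machine "guesses" whether to include
# an item; here both branches are searched for an accepting run.
def subset_sum(items, target, chosen=()):
    if target == 0 and chosen:
        return chosen                      # an accepting run exists
    if not items or target < 0:
        return None                        # this run rejects
    head, *rest = items
    # the nondeterministic choice: take `head`, or skip it
    return (subset_sum(rest, target - head, chosen + (head,))
            or subset_sum(rest, target, chosen))

print(subset_sum([3, 9, 8, 4], 12))  # (3, 9)
```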
The notion of nondeterminism was introduced by Robert W. Floyd in 1967.
== References ==
== Further reading ==
Cormen, Thomas H. (2009). Introduction to Algorithms (3rd ed.). MIT Press. ISBN 978-0-262-03384-8.
"Nondeterministic algorithm". National Institute of Standards and Technology. Retrieved July 7, 2013.
"Non-deterministic Algorithms". New York University Computer Science. Retrieved July 7, 2013. | Wikipedia/Non-deterministic_algorithm |
In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.
== Definition ==
A functional problem P is defined by a relation R over strings of an arbitrary alphabet Σ:

R ⊆ Σ* × Σ*.

An algorithm solves P if for every input x such that there exists a y satisfying (x, y) ∈ R, the algorithm produces one such y, and if there are no such y, it rejects.
A promise function problem is allowed to do anything (thus may not terminate) if no such y exists.
== Examples ==
A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows:
Given a boolean formula φ with variables x₁, …, xₙ, find an assignment xᵢ → {TRUE, FALSE} such that φ evaluates to TRUE, or decide that no such assignment exists.
In this case the relation R is given by tuples of suitably encoded boolean formulas and satisfying assignments.
While a SAT algorithm, fed with a formula φ, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case.
Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors.
== Relationship to other complexity classes ==
Consider an arbitrary decision problem L in the class NP. By the definition of NP, each problem instance x that is answered 'yes' has a polynomial-size certificate y which serves as a proof for the 'yes' answer. Thus, the set of these tuples (x, y) forms a relation, representing the function problem "given x in L, find a certificate y for x". This function problem is called the function variant of L; it belongs to the class FNP.
FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of the length of the input) verified, but not necessarily efficiently found. In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time.
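The contrast between efficient verification and efficient search can be sketched for the integer factorization problem mentioned earlier; the numbers are an assumed illustration, and the brute-force search stands in for the (believed hard) finding step:

```python
# FNP-style contrast between verifying and finding: checking a
# certificate (a nontrivial divisor d of n) takes one division, while
# the search below is brute force, exponential in the bit-length of n.
def verify(n: int, d: int) -> bool:
    return 1 < d < n and n % d == 0        # polynomial-time check

def search(n: int):
    return next((d for d in range(2, n) if verify(n, d)), None)

print(verify(391, 17))  # True: 391 = 17 * 23
print(search(391))      # 17
```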
== Self-reducibility ==
Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: An algorithm can first ask whether the formula φ is satisfiable. After that the algorithm can fix variable x₁ to TRUE and ask again. If the resulting formula is still satisfiable the algorithm keeps x₁ fixed to TRUE and continues to fix x₂, otherwise it decides that x₁ has to be FALSE and continues. Thus, FSAT is solvable in polynomial time using an oracle deciding SAT. In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime is in P (easy), while the integer factorization problem is believed to be hard for a classical computer.
There are several (slightly different) notions of self-reducibility.
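The bit-fixing self-reduction described above can be sketched as follows; the exhaustive SAT "oracle" below is a stand-in for a real decision procedure, purely for illustration:

```python
# FSAT solved with polynomially many (here: n + 1) calls to a SAT
# decision oracle, fixing one variable at a time as described above.
from itertools import product

def sat_oracle(formula, n, fixed):
    """Decide: is `formula` satisfiable under the partial assignment `fixed`?"""
    free = [i for i in range(n) if i not in fixed]
    for bits in product([False, True], repeat=len(free)):
        if formula({**fixed, **dict(zip(free, bits))}):
            return True
    return False

def fsat(formula, n):
    """Return a satisfying assignment or None, using only SAT queries."""
    if not sat_oracle(formula, n, {}):
        return None                        # "unsatisfiable"
    fixed = {}
    for i in range(n):
        fixed[i] = True                    # try x_i = TRUE
        if not sat_oracle(formula, n, fixed):
            fixed[i] = False               # x_i has to be FALSE
    return fixed

phi = lambda a: (a[0] or a[1]) and not a[0]   # (x0 OR x1) AND NOT x0
print(fsat(phi, 2))  # {0: False, 1: True}
```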
== Reductions and complete problems ==
Function problems can be reduced much like decision problems: Given function problems Π_R and Π_S, we say that Π_R reduces to Π_S if there exist polynomial-time computable functions f and g such that for all instances x of R and possible solutions y of S, it holds that:

If x has an R-solution, then f(x) has an S-solution.
(f(x), y) ∈ S ⟹ (x, g(x, y)) ∈ R.

It is therefore possible to define FNP-complete problems analogous to the NP-complete problems:
A problem Π_R is FNP-complete if every problem in FNP can be reduced to Π_R. The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence the problem FSAT is also an FNP-complete problem, and it holds that P = NP if and only if FP = FNP.
== Total function problems ==
The relation R(x, y) used to define function problems has the drawback of being incomplete: Not every input x has a counterpart y such that (x, y) ∈ R. Therefore the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations, yielding the class TFNP as a subclass of FNP. This class contains problems such as the computation of pure Nash equilibria in certain strategic games where a solution is guaranteed to exist. In addition, if TFNP contains any FNP-complete problem it follows that NP = co-NP.
== See also ==
Decision problem
Search problem
Counting problem (complexity)
Optimization problem
== References == | Wikipedia/Function_problem |
In computer science, a deterministic algorithm is an algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Deterministic algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they can be run on real machines efficiently.
Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.
== Formal definition ==
Deterministic algorithms can be defined in terms of a state machine: a state describes what a machine is doing at a particular instant in time. State machines pass in a discrete manner from one state to another. Just after we enter the input, the machine is in its initial state or start state. If the machine is deterministic, this means that from this point onwards, its current state determines what its next state will be; its course through the set of states is predetermined. Note that a machine can be deterministic and still never stop or finish, and therefore fail to deliver a result.
Examples of particular abstract machines which are deterministic include the deterministic Turing machine and deterministic finite automaton.
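A minimal sketch of this state-machine view, using a toy deterministic finite automaton (the accepting condition is an assumed illustration):

```python
# Toy deterministic finite automaton: the next state is a function of
# (current state, input symbol) alone, so the same input string always
# traces the same sequence of states. Accepts binary strings with an
# even number of 1s.
TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd", ("odd", "1"): "even"}

def run(s: str) -> bool:
    state = "even"                         # start state
    for ch in s:
        state = TRANSITIONS[(state, ch)]   # uniquely determined step
    return state == "even"                 # accepting state

print(run("1001"))  # True: two 1s
print(run("1101"))  # False: three 1s
```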
== Non-deterministic algorithms ==
A variety of factors can cause an algorithm to behave in a way which is not deterministic, or non-deterministic:
If it uses an external state other than the input, such as user input, a global variable, a hardware timer value, a random value, or stored disk data.
If it operates in a way that is timing-sensitive, for example, if it has multiple processors writing to the same data at the same time. In this case, the precise order in which each processor writes its data will affect the result.
If a hardware error causes its state to change in an unexpected way.
Although real programs are rarely purely deterministic, it is easier for humans as well as other programs to reason about programs that are. For this reason, most programming languages and especially functional programming languages make an effort to prevent the above events from happening except under controlled conditions.
The prevalence of multi-core processors has resulted in a surge of interest in determinism in parallel programming and challenges of non-determinism have been well documented. A number of tools to help deal with the challenges have been proposed to deal with deadlocks and race conditions.
== Disadvantages of determinism ==
It is advantageous, in some cases, for a program to exhibit nondeterministic behavior. The behavior of a card shuffling program used in a game of blackjack, for example, should not be predictable by players — even if the source code of the program is visible. The use of a pseudorandom number generator is often not sufficient to ensure that players are unable to predict the outcome of a shuffle. A clever gambler might guess precisely the numbers the generator will choose and so determine the entire contents of the deck ahead of time, allowing him to cheat; for example, the Software Security Group at Reliable Software Technologies was able to do this for an implementation of Texas Hold 'em Poker that is distributed by ASF Software, Inc, allowing them to consistently predict the outcome of hands ahead of time. These problems can be avoided, in part, through the use of a cryptographically secure pseudo-random number generator, but it is still necessary for an unpredictable random seed to be used to initialize the generator. For this purpose, a source of nondeterminism is required, such as that provided by a hardware random number generator.
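A sketch of the remedy described above, using Python's `secrets` module (which draws from the operating system's entropy source rather than a seedable pseudorandom generator) to drive a Fisher-Yates shuffle:

```python
# Fisher-Yates shuffle driven by a cryptographically secure random
# source, making the resulting permutation unpredictable to players
# even if the source code is visible.
import secrets

def secure_shuffle(deck):
    deck = list(deck)
    for i in range(len(deck) - 1, 0, -1):
        j = secrets.randbelow(i + 1)        # uniform in [0, i]
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = [rank + suit for suit in "SHDC" for rank in "A23456789TJQK"]
shuffled = secure_shuffle(deck)
print(len(shuffled), sorted(shuffled) == sorted(deck))  # 52 True
```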
Note that a negative answer to the P=NP problem would not imply that programs with nondeterministic output are theoretically more powerful than those with deterministic output. The complexity class NP (complexity) can be defined without any reference to nondeterminism using the verifier-based definition.
== Determinism categories in languages ==
=== Mercury ===
The Mercury logic-functional programming language establishes different determinism categories for predicate modes as explained in the reference.
=== Haskell ===
Haskell provides several mechanisms:
Non-determinism or notion of Fail
the Maybe and Either types include the notion of success in the result.
the fail method of the Monad class may be used to signal failure as an exception.
the Maybe monad and MaybeT monad transformer provide for failed computations (stop the computation sequence and return Nothing)
Nondeterminism/non-det with multiple solutions
you may retrieve all possible outcomes of a multiple-result computation by wrapping its result type in a MonadPlus monad (its method mzero makes an outcome fail, and mplus collects the successful results).
=== ML family and derived languages ===
As seen in Standard ML, OCaml and Scala
The option type includes the notion of success.
=== Java ===
In Java, the null reference value may represent an unsuccessful (out-of-domain) result.
== See also ==
Randomized algorithm
== References == | Wikipedia/Deterministic_algorithm |
Introduction to Automata Theory, Languages, and Computation is an influential computer science textbook by John Hopcroft and Jeffrey Ullman on formal languages and the theory of computation. Rajeev Motwani contributed to later editions beginning in 2000.
== Nickname ==
The Jargon File records the book's nickname, Cinderella Book, thusly: "So called because the cover depicts a girl (putatively Cinderella) sitting in front of a Rube Goldberg device and holding a rope coming out of it. On the back cover, the device is in shambles after she has (inevitably) pulled on the rope."
== Edition history and reception ==
The forerunner of this book appeared under the title Formal Languages and Their Relation to Automata in 1968. Forming a basis both for the creation of courses on the topic, as well as for further research, that book shaped the field of automata theory for over a decade, cf. (Hopcroft 1989).
Hopcroft, John E.; Ullman, Jeffrey D. (1968). Formal Languages and Their Relation to Automata. Addison-Wesley. ISBN 9780201029833.
Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Addison-Wesley. ISBN 0-201-02988-X.
Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2000). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Addison-Wesley. ISBN 81-7808-347-7.
Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2006) [1979]. Introduction to Automata Theory, Languages, and Computation (3rd ed.). Addison-Wesley. ISBN 0-321-45536-3.
Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2013). Introduction to Automata Theory, Languages, and Computation (New International ed.). Pearson. ISBN 978-1292039053.
The first edition of Introduction to Automata Theory, Languages, and Computation was published in 1979, the second edition in November 2000, and the third edition appeared in February 2006. Since the second edition, Rajeev Motwani has joined Hopcroft and Ullman as the third author. Starting with the second edition, the book features extended coverage of examples where automata theory is applied, whereas large parts of more advanced theory were taken out. While this makes the second and third editions more accessible to beginners, it makes it less suited for more advanced courses. The new bias away from theory is not seen positively by all: As Shallit quotes one professor, "they have removed all good parts." (Shallit 2008).
The first edition in turn constituted a major revision of a previous textbook also written by Hopcroft and Ullman, entitled Formal Languages and Their Relation to Automata. It was published in 1968 and is referred to in the introduction of the 1979 edition.
In a personal historical note regarding the 1968 book, Hopcroft states: "Perhaps the success of the book came from our efforts to present the essence of each proof before actually giving the proof" (Hopcroft 1989). Compared with the forerunner book, the 1979 edition was expanded, and the material was reworked to make it more accessible to students, cf. (Hopcroft 1989).
This gearing towards understandability at the price of succinctness was not seen positively by all. As Hopcroft reports on feedback to the overhauled 1979 edition: "It seems that our attempts to lower the level of our presentation for the benefit of students by including more detail and explanations had an adverse effect on the faculty, who then had to sift through the added material to outline and prepare their lectures" (Hopcroft 1989).
Still, the most cited edition of the book is apparently the 1979 edition: according to the website CiteSeerX, over 3000 scientific papers freely available online cite this edition of the book.
== See also ==
Introduction to the Theory of Computation by Michael Sipser, another standard textbook in the field
Solutions to Selected Exercises, Stanford University
== References ==
== External links ==
Entry "Cinderella book". In: The Jargon file (version 4.4.7, December 29, 2003).
Hopcroft, John E. (1989). "The emergence of computer science - A citation classic commentary on 'Formal Languages and Their Relation to Automata'". Current Contents Engineering, Technology, and Applied Sciences. 31: 12. available online (pdf)
Shallit, Jeffrey O. (2008). A Second Course in Formal Languages and Automata Theory. Cambridge University Press. p. ix. ISBN 978-0-521-86572-2.
"Introduction to Automata Theory, Languages, and Computation - Home page". Stanford University. Archived from the original on 7 June 2023.
"Introduction to Automata Theory, Languages, and Computation; 1st edition". — accessible only to Internet Archive patrons with print disabilities | Wikipedia/Introduction_to_Automata_Theory,_Languages,_and_Computation |
Jerk (also known as jolt) is the rate of change of an object's acceleration over time. It is a vector quantity (having both magnitude and direction). Jerk is most commonly denoted by the symbol j and expressed in m/s³ (SI units) or standard gravities per second (g₀/s).
== Expressions ==
As a vector, jerk j can be expressed as the first time derivative of acceleration, second time derivative of velocity, and third time derivative of position:
j(t) = da(t)/dt = d²v(t)/dt² = d³r(t)/dt³

where:
a is acceleration
v is velocity
r is position
t is time.
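Given sampled positions at uniform time steps, the three derivatives above can be estimated by repeated finite differences; the cubic trajectory below is an assumed illustrative choice, for which the third difference recovers the constant jerk exactly:

```python
# Estimating velocity, acceleration, and jerk from sampled positions by
# repeated finite differences. For r(t) = t**3 / 2 the true jerk is the
# constant d^3r/dt^3 = 3, which the third difference reproduces exactly.
def diff(samples, dt):
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 0.5
r = [0.5 * t**3 for t in (0.0, 0.5, 1.0, 1.5, 2.0)]  # positions
v = diff(r, dt)   # ~ dr/dt
a = diff(v, dt)   # ~ d^2 r/dt^2
j = diff(a, dt)   # ~ d^3 r/dt^3
print(j)  # [3.0, 3.0]
```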
Third-order differential equations of the form
J(d³x/dt³, d²x/dt², dx/dt, x) = 0
are sometimes called jerk equations. When converted to an equivalent system of three ordinary first-order non-linear differential equations, jerk equations are the minimal setting for solutions showing chaotic behaviour. This condition generates mathematical interest in jerk systems. Systems involving fourth-order derivatives or higher are accordingly called hyperjerk systems.
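One such jerk equation, reported in the chaos literature as a minimal chaotic system (the constant A and the initial conditions here are illustrative assumptions), can be reduced to three first-order equations and stepped with a simple Euler integrator:

```python
# The jerk equation x''' = -A x'' + (x')^2 - x reduced to the
# first-order system x' = v, v' = a, a' = j and advanced with explicit
# Euler steps; a minimal sketch, not a production integrator.
def integrate(x, v, a, A=2.017, dt=1e-3, steps=1000):
    for _ in range(steps):
        j = -A * a + v * v - x          # jerk from the equation
        x, v, a = x + v * dt, v + a * dt, a + j * dt
    return x, v, a

state = integrate(0.0, 0.5, 0.0)        # one time unit of evolution
print(state)
```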
== Physiological effects and human perception ==
Human body position is controlled by balancing the forces of antagonistic muscles. In balancing a given force, such as holding up a weight, the postcentral gyrus establishes a control loop to achieve the desired equilibrium. If the force changes too quickly, the muscles cannot relax or tense fast enough and overshoot in either direction, causing a temporary loss of control. The reaction time for responding to changes in force depends on physiological limitations and the attention level of the brain: an expected change will be stabilized faster than a sudden decrease or increase of load.
To avoid vehicle passengers losing control over body motion and getting injured, it is necessary to limit the exposure to both the maximum force (acceleration) and maximum jerk, since time is needed to adjust muscle tension and adapt to even limited stress changes. Sudden changes in acceleration can cause injuries such as whiplash. Excessive jerk may also result in an uncomfortable ride, even at levels that do not cause injury. Engineers expend considerable design effort minimizing "jerky motion" on elevators, trams, and other conveyances.
For example, consider the effects of acceleration and jerk when riding in a car:
Skilled and experienced drivers can accelerate smoothly, but beginners often provide a jerky ride. When changing gears in a car with a foot-operated clutch, the accelerating force is limited by engine power, but an inexperienced driver can cause severe jerk because of intermittent force closure over the clutch.
The feeling of being pressed into the seats in a high-powered sports car is due to the acceleration. As the car launches from rest, there is a large positive jerk as its acceleration rapidly increases. After the launch, there is a small, sustained negative jerk as the force of air resistance increases with the car's velocity, gradually decreasing acceleration and reducing the force pressing the passenger into the seat. When the car reaches its top speed, the acceleration has reached 0 and remains constant, after which there is no jerk until the driver decelerates or changes direction.
When braking suddenly or during collisions, passengers whip forward with an initial acceleration that is larger than during the rest of the braking process because muscle tension regains control of the body quickly after the onset of braking or impact. These effects are not modeled in vehicle testing because cadavers and crash test dummies do not have active muscle control.
To minimize the jerk, curves along roads are designed to be clothoids as are railroad curves and roller coaster loops.
== Force, acceleration, and jerk ==
For a constant mass m, acceleration a is directly proportional to force F according to Newton's second law of motion:
F
=
m
a
{\displaystyle \mathbf {F} =m\mathbf {a} }
In classical mechanics of rigid bodies, there are no forces associated with the derivatives of acceleration; however, physical systems experience oscillations and deformations as a result of jerk. In designing the Hubble Space Telescope, NASA set limits on both jerk and jounce.
The Abraham–Lorentz force is the recoil force on an accelerating charged particle emitting radiation. This force is proportional to the particle's jerk and to the square of its charge. The Wheeler–Feynman absorber theory is a more advanced theory, applicable in a relativistic and quantum environment, and accounting for self-energy.
== In an idealized setting ==
Discontinuities in acceleration do not occur in real-world environments because of deformation, quantum mechanics effects, and other causes. However, a jump-discontinuity in acceleration and, accordingly, unbounded jerk are feasible in an idealized setting, such as an idealized point mass moving along a piecewise smooth, whole continuous path. The jump-discontinuity occurs at points where the path is not smooth. Extrapolating from these idealized settings, one can qualitatively describe, explain and predict the effects of jerk in real situations.
Jump-discontinuity in acceleration can be modeled using a Dirac delta function in jerk, scaled to the height of the jump. Integrating jerk over time across the Dirac delta yields the jump-discontinuity.
For example, consider a path along an arc of radius r, which tangentially connects to a straight line. The whole path is continuous, and its pieces are smooth. Now assume a point particle moves with constant speed along this path, so its tangential acceleration is zero. The centripetal acceleration given by v²/r is normal to the arc and inward. When the particle passes the connection of pieces, it experiences a jump-discontinuity in acceleration given by v²/r, and it undergoes a jerk that can be modeled by a Dirac delta, scaled to the jump-discontinuity.
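With illustrative numbers (v = 20 m/s and r = 50 m, both assumed), the size of the jump is simply v²/r:

```python
# Size of the acceleration jump at the arc-to-line connection: on the
# arc the acceleration magnitude is v**2 / r, on the straight segment
# it is zero.
v, r = 20.0, 50.0            # speed in m/s, arc radius in m
a_arc = v**2 / r             # centripetal acceleration on the arc
a_line = 0.0                 # acceleration on the straight line
jump = a_arc - a_line
print(jump)  # 8.0 (m/s^2)
```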
For a more tangible example of discontinuous acceleration, consider an ideal spring–mass system with the mass oscillating on an idealized surface with friction. The force on the mass is equal to the vector sum of the spring force and the kinetic frictional force. When the velocity changes sign (at the maximum and minimum displacements), the magnitude of the force on the mass changes by twice the magnitude of the frictional force, because the spring force is continuous and the frictional force reverses direction with velocity. The jump in acceleration equals the force on the mass divided by the mass. That is, each time the mass passes through a minimum or maximum displacement, the mass experiences a discontinuous acceleration, and the jerk contains a Dirac delta until the mass stops. The static friction force adapts to the residual spring force, establishing equilibrium with zero net force and zero velocity.
Consider the example of a braking and decelerating car. The brake pads generate kinetic frictional forces and constant braking torques on the disks (or drums) of the wheels. Rotational velocity decreases linearly to zero with constant angular deceleration. The frictional force, torque, and car deceleration suddenly reach zero, which indicates a Dirac delta in physical jerk. The Dirac delta is smoothed down by the real environment, the cumulative effects of which are analogous to damping of the physiologically perceived jerk. This example neglects the effects of tire sliding, suspension dipping, real deflection of all ideally rigid mechanisms, etc.
Another example of significant jerk, analogous to the first example, is the cutting of a rope with a particle on its end. Assume the particle is oscillating in a circular path with non-zero centripetal acceleration. When the rope is cut, the particle's path changes abruptly to a straight path, and the force in the inward direction changes suddenly to zero. Imagine a monomolecular fiber cut by a laser; the particle would experience very high rates of jerk because of the extremely short cutting time.
== In rotation ==
Consider a rigid body rotating about a fixed axis in an inertial reference frame. If its angular position as a function of time is θ(t), the angular velocity, acceleration, and jerk can be expressed as follows:
Angular velocity, ω(t) = dθ(t)/dt, is the time derivative of θ(t).
Angular acceleration, α(t) = dω(t)/dt, is the time derivative of ω(t).
Angular jerk, ζ(t) = dα(t)/dt = d²ω(t)/dt² = d³θ(t)/dt³, is the time derivative of α(t).
Angular acceleration equals the torque acting on the body, divided by the body's moment of inertia with respect to the momentary axis of rotation. A change in torque results in angular jerk.
The general case of a rotating rigid body can be modeled using kinematic screw theory, which includes one axial vector, the angular velocity ω(t), and one polar vector, the linear velocity v(t). From this, the angular acceleration is defined as α(t) = dω(t)/dt, and the angular jerk is given by ζ(t) = dα(t)/dt = d²ω(t)/dt².
Taking the angular acceleration from Angular acceleration § Particle in three dimensions as

α = dω/dt = (r × a)/r² − (2/r)(dr/dt) ω,

we obtain
ζ = dα/dt = (1/r²)(r × da/dt + dr/dt × a) − (2/r³)(dr/dt)(r × a) + (2/r²)(dr/dt)² ω − (2/r)(d²r/dt²) ω − (2/r)(dr/dt)(dω/dt)
Replacing {\displaystyle {\frac {d{\boldsymbol {\omega }}}{dt}}} by the expression for {\displaystyle {\boldsymbol {\alpha }}} above, the last term can be rewritten as
{\displaystyle {\begin{aligned}-{\frac {2}{r}}{\frac {dr}{dt}}{\frac {d{\boldsymbol {\omega }}}{dt}}&=-{\frac {2}{r}}{\frac {dr}{dt}}\left({\frac {\mathbf {r} \times \mathbf {a} }{r^{2}}}-{\frac {2}{r}}{\frac {dr}{dt}}{\boldsymbol {\omega }}\right)\\\\&=-{\frac {2}{r^{3}}}{\frac {dr}{dt}}\left(\mathbf {r} \times \mathbf {a} \right)+{\frac {4}{r^{2}}}\left({\frac {dr}{dt}}\right)^{2}{\boldsymbol {\omega }}\end{aligned}}}
and we finally get
{\displaystyle {\begin{aligned}{\boldsymbol {\zeta }}={\frac {\mathbf {r} \times \mathbf {j} }{r^{2}}}+{\frac {\mathbf {v} \times \mathbf {a} }{r^{2}}}-{\frac {4}{r^{3}}}{\frac {dr}{dt}}\left(\mathbf {r} \times \mathbf {a} \right)+{\frac {6}{r^{2}}}\left({\frac {dr}{dt}}\right)^{2}{\boldsymbol {\omega }}-{\frac {2}{r}}{\frac {d^{2}r}{dt^{2}}}{\boldsymbol {\omega }}\end{aligned}}}
or, vice versa, replacing {\displaystyle \left(\mathbf {r} \times \mathbf {a} \right)} with an expression in {\displaystyle {\boldsymbol {\alpha }}}:
{\displaystyle {\begin{aligned}{\boldsymbol {\zeta }}={\frac {\mathbf {r} \times \mathbf {j} }{r^{2}}}+{\frac {\mathbf {v} \times \mathbf {a} }{r^{2}}}-{\frac {4}{r}}{\frac {dr}{dt}}{\boldsymbol {\alpha }}-{\frac {2}{r^{2}}}\left({\frac {dr}{dt}}\right)^{2}{\boldsymbol {\omega }}-{\frac {2}{r}}{\frac {d^{2}r}{dt^{2}}}{\boldsymbol {\omega }}\end{aligned}}}
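As a sanity check on these vector formulas, consider uniform circular motion: the radius is constant, so every dr/dt and d²r/dt² term vanishes, and the remaining terms (r × j)/r² + (v × a)/r² must cancel, since a constant angular velocity has zero angular jerk. The following sketch (illustrative only, not part of the article) verifies this numerically:

```python
# Sanity check: for uniform circular motion the angular jerk must be zero.
import math

def cross(u, w):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1]*w[2] - u[2]*w[1],
            u[2]*w[0] - u[0]*w[2],
            u[0]*w[1] - u[1]*w[0])

def angular_jerk_uniform_circle(R=2.0, w=3.0, t=0.4):
    c, s = math.cos(w*t), math.sin(w*t)
    r = (R*c, R*s, 0.0)                # position
    v = (-R*w*s, R*w*c, 0.0)           # velocity
    a = (-R*w*w*c, -R*w*w*s, 0.0)      # acceleration
    j = (R*w**3*s, -R*w**3*c, 0.0)     # jerk (third time derivative)
    rxj = cross(r, j)
    vxa = cross(v, a)
    # |r| is constant, so all dr/dt and d^2r/dt^2 terms of the formula vanish
    return tuple((rxj[i] + vxa[i]) / (R*R) for i in range(3))

zeta = angular_jerk_uniform_circle()
print(zeta)  # each component is ~0
```

The two surviving terms are equal and opposite ((r × j)/r² = (0, 0, −ω³) and (v × a)/r² = (0, 0, ω³) for this motion), which is why the result vanishes.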
For example, consider a Geneva drive, a device used for creating intermittent rotation of a driven wheel (the blue wheel in the animation) by continuous rotation of a driving wheel (the red wheel in the animation). During one cycle of the driving wheel, the driven wheel's angular position θ changes by 90 degrees and then remains constant. Because of the finite thickness of the driving wheel's fork (the slot for the driving pin), this device generates a discontinuity in the angular acceleration α, and an unbounded angular jerk ζ in the driven wheel.
Jerk does not preclude the Geneva drive from being used in applications such as movie projectors and cams. In movie projectors, the film advances frame-by-frame, but the projector operation has low noise and is highly reliable because of the low film load (only a small section of film weighing a few grams is driven), the moderate speed (2.4 m/s), and the low friction.
With cam drive systems, use of a dual cam can avoid the jerk of a single cam; however, the dual cam is bulkier and more expensive. The dual-cam system has two cams on one axle that shifts a second axle by a fraction of a revolution. The graphic shows step drives of one-sixth and one-third rotation per one revolution of the driving axle. There is no radial clearance because two arms of the stepped wheel are always in contact with the double cam. Generally, combined contacts may be used to avoid the jerk (and wear and noise) associated with a single follower: for example, the jerk caused by a single follower gliding along a slot and changing its contact point from one side of the slot to the other can be avoided by using two followers sliding along the same slot, one on each side.
== In elastically deformable matter ==
An elastically deformable mass deforms under an applied force (or acceleration); the deformation is a function of its stiffness and the magnitude of the force. If the change in force is slow, the jerk is small, and the propagation of deformation is considered instantaneous as compared to the change in acceleration. The distorted body acts as if it were in a quasistatic regime, and only a changing force (nonzero jerk) can cause propagation of mechanical waves (or electromagnetic waves for a charged particle); therefore, for nonzero to high jerk, a shock wave and its propagation through the body should be considered.
The propagation of deformation is shown in the graphic "Compression wave patterns" as a compressional plane wave through an elastically deformable material. Also shown, for angular jerk, are the deformation waves propagating in a circular pattern, which causes shear stress and possibly other modes of vibration. The reflection of waves along the boundaries causes constructive interference patterns (not pictured), producing stresses that may exceed the material's limits. The deformation waves may cause vibrations, which can lead to noise, wear, and failure, especially in cases of resonance.
The graphic captioned "Pole with massive top" shows a block connected to an elastic pole and a massive top. The pole bends when the block accelerates, and when the acceleration stops, the top will oscillate (damped) under the regime of pole stiffness. One could argue that a greater (periodic) jerk might excite a larger amplitude of oscillation because small oscillations are damped before reinforcement by a shock wave. One can also argue that a larger jerk might increase the probability of exciting a resonant mode because the larger wave components of the shock wave have higher frequencies and Fourier coefficients.
To reduce the amplitude of excited stress waves and vibrations, one can limit jerk by shaping motion and making the acceleration continuous with slopes as flat as possible. Due to limitations of abstract models, algorithms for reducing vibrations include higher derivatives, such as jounce, or suggest continuous regimes for both acceleration and jerk. One concept for limiting jerk is to shape acceleration and deceleration sinusoidally with zero acceleration in between (see graphic captioned "Sinusoidal acceleration profile"), making the speed appear sinusoidal with constant maximum speed. The jerk, however, will remain discontinuous at the points where acceleration enters and leaves the zero phases.
== In the geometric design of roads and tracks ==
Roads and tracks are designed to limit the jerk caused by changes in their curvature. Design standards for high-speed rail vary from 0.2 m/s³ to 0.6 m/s³. Track transition curves limit the jerk when transitioning from a straight line to a curve, or vice versa. Recall that in constant-speed motion along an arc, acceleration is zero in the tangential direction and nonzero in the inward normal direction. Transition curves gradually increase the curvature and, consequently, the centripetal acceleration.
An Euler spiral, the theoretically optimum transition curve, linearly increases centripetal acceleration and results in constant jerk (see graphic). In real-world applications, the plane of the track is inclined (cant) along the curved sections. The incline causes vertical acceleration, which is a design consideration for wear on the track and embankment. The Wiener Kurve (Viennese Curve) is a patented curve designed to minimize this wear.
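The constancy of the jerk along an Euler spiral can be illustrated with a short numerical sketch (the speed and spiral parameter below are assumed example values): with curvature κ(s) = s/A² growing linearly in arc length and constant speed v, the centripetal acceleration is v²·κ(vt), whose time derivative, the lateral jerk, is the constant v³/A².

```python
# Along an Euler spiral the curvature is kappa(s) = s / A**2, so at
# constant speed v the centripetal acceleration is v**2 * kappa(v*t)
# and the lateral jerk is constant in time.
def lateral_jerk(v, A, t, dt=1e-6):
    accel = lambda tau: v**2 * (v * tau) / A**2   # centripetal acceleration
    return (accel(t + dt) - accel(t)) / dt        # finite-difference jerk

v, A = 50.0, 300.0      # hypothetical speed (m/s) and spiral parameter (m)
j1 = lateral_jerk(v, A, t=1.0)
j2 = lateral_jerk(v, A, t=5.0)
print(j1, j2)           # both equal v**3 / A**2, independent of t
```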
Rollercoasters are also designed with track transitions to limit jerk. When entering a loop, acceleration values can reach around 4 g (40 m/s²), and riding in this high acceleration environment is only possible with track transitions. S-shaped curves, such as figure eights, also use track transitions for smooth rides.
== In motion control ==
In motion control, the design focus is on straight, linear motion, with the need to move a system from one steady position to another (point-to-point motion). The design concern from a jerk perspective is vertical jerk; the jerk from tangential acceleration is effectively zero since linear motion is non-rotational.
Motion control applications include passenger elevators and machining tools. Limiting vertical jerk is considered essential for elevator riding convenience. ISO 8100-34 specifies measurement methods for elevator ride quality with respect to jerk, acceleration, vibration, and noise; however, the standard does not specify levels for acceptable or unacceptable ride quality. It is reported that most passengers rate a vertical jerk of 2 m/s³ as acceptable and 6 m/s³ as intolerable. For hospitals, 0.7 m/s³ is the recommended limit.
A primary design goal for motion control is to minimize the transition time without exceeding speed, acceleration, or jerk limits. Consider a third-order motion-control profile with quadratic ramping and deramping phases in velocity (see figure).
This motion profile consists of the following seven segments:
Acceleration build up — positive jerk limit; linear increase in acceleration to the positive acceleration limit; quadratic increase in velocity
Upper acceleration limit — zero jerk; linear increase in velocity
Acceleration ramp down — negative jerk limit; linear decrease in acceleration; (negative) quadratic increase in velocity, approaching the desired velocity limit
Velocity limit — zero jerk; zero acceleration
Deceleration build up — negative jerk limit; linear decrease in acceleration to the negative acceleration limit; (negative) quadratic decrease in velocity
Lower deceleration limit — zero jerk; linear decrease in velocity
Deceleration ramp down — positive jerk limit; linear increase in acceleration to zero; quadratic decrease in velocity; approaching the desired position at zero speed and zero acceleration
The duration of segment four (constant velocity) varies with the distance between the two positions. If this distance is so small that omitting segment four does not suffice, then segments two and six (constant acceleration) can be shortened equally, and the constant velocity limit is not reached. If this modification still does not sufficiently reduce the traversed distance, then segments one, three, five, and seven can be shortened by an equal amount, and the constant acceleration limits are not reached.
Other motion profile strategies are used, such as minimizing the square of jerk for a given transition time and, as discussed above, sinusoidal-shaped acceleration profiles. Motion profiles are tailored for specific applications including machines, people movers, chain hoists, automobiles, and robotics.
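The seven-segment profile described above can be sketched numerically. In this minimal illustration the jerk limit and the segment durations are hypothetical parameters, and plain Euler integration is used; a real motion controller would compute the durations from the distance and the speed/acceleration/jerk limits.

```python
# Minimal sketch of a seven-segment, jerk-limited motion profile.
def seven_segment(j_max, t_j, t_a, t_v, dt=1e-4):
    # jerk value and duration for segments 1..7, in order
    segments = [(+j_max, t_j), (0.0, t_a), (-j_max, t_j),
                (0.0, t_v),
                (-j_max, t_j), (0.0, t_a), (+j_max, t_j)]
    a = v = x = 0.0
    a_hist, v_hist = [], []
    for jerk, duration in segments:
        for _ in range(int(round(duration / dt))):
            a += jerk * dt       # integrate jerk -> acceleration
            v += a * dt          # integrate acceleration -> velocity
            x += v * dt          # integrate velocity -> position
            a_hist.append(a)
            v_hist.append(v)
    return a_hist, v_hist, x

a_hist, v_hist, x = seven_segment(j_max=2.0, t_j=0.5, t_a=1.0, t_v=2.0)
# acceleration peaks at j_max * t_j; both a and v return to ~0 at the end
print(max(a_hist), a_hist[-1], v_hist[-1], x)
```

By symmetry, the deceleration phase mirrors the acceleration phase, so the profile ends at rest with zero acceleration, as required for point-to-point motion.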
=== In manufacturing ===
Jerk is an important consideration in manufacturing processes. Rapid changes in acceleration of a cutting tool can lead to premature tool wear and result in uneven cuts; consequently, modern motion controllers include jerk limitation features. In mechanical engineering, jerk, in addition to velocity and acceleration, is considered in the development of cam profiles because of tribological implications and the ability of the actuated body to follow the cam profile without chatter.
Jerk is often considered when vibration is a concern. A device that measures jerk is called a "jerkmeter".
== Further derivatives ==
Further time derivatives have also been named: snap or jounce (fourth derivative), crackle (fifth derivative), and pop (sixth derivative). The seventh and eighth derivatives have occasionally been called "lock" and "drop", but these names are not officially recognized and no reliable source uses them. Time derivatives of position of order higher than four rarely appear.
The terms snap, crackle, and pop—for the fourth, fifth, and sixth derivatives of position—were inspired by the advertising mascots Snap, Crackle, and Pop.
== See also ==
Geomagnetic jerk
Shock (mechanics)
Yank
== References ==
Sprott JC (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 0-19-850839-5.
Sprott JC (1997). "Some simple chaotic jerk functions" (PDF). Am J Phys. 65 (6): 537–43. Bibcode:1997AmJPh..65..537S. doi:10.1119/1.18585. Archived from the original (PDF) on 2010-06-13. Retrieved 2009-09-28.
Blair G (2005). "Making the Cam" (PDF). Race Engine Technology (10). Archived (PDF) from the original on 2008-05-15. Retrieved 2009-09-29.
== External links ==
What is the term used for the third derivative of position? Archived 2016-11-30 at the Wayback Machine, description of jerk in the Usenet Physics FAQ Archived 2011-06-23 at the Wayback Machine
Mathematics of Motion Control Profiles Archived 2020-10-02 at the Wayback Machine
Elevator-Ride-Quality Archived 2022-03-28 at the Wayback Machine
Elevator manufacturer brochure
Patent of Wiener Kurve
(in German) Description of Wiener Kurve | Wikipedia/Jerk_(physics) |
The Charlot equation, named after Gaston Charlot, is used in analytical chemistry to relate the hydrogen ion concentration, and therefore the pH, with the formal analytical concentration of an acid and its conjugate base. It can be used for computing the pH of buffer solutions when the approximations of the Henderson–Hasselbalch equation break down. The Henderson–Hasselbalch equation assumes that the autoionization of water is negligible and that the dissociation or hydrolysis of the acid and the base in solution are negligible (in other words, that the formal concentration is the same as the equilibrium concentration).
For an acid-base equilibrium such as HA ⇌ H+ + A−, the Charlot equation may be written as
{\displaystyle \mathrm {[H^{+}]} =K_{a}{\frac {C_{a}-\Delta }{C_{b}+\Delta }}}
where [H+] is the equilibrium concentration of H+, Ka is the acid dissociation constant, Ca and Cb are the analytical concentrations of the acid and its conjugate base, respectively, and Δ = [H+] − [OH−]. The equation can be solved for [H+] by using the autoionization constant for water, Kw, to introduce [OH−] = Kw/[H+]. This results in the following cubic equation for [H+], which can be solved either numerically or analytically:
{\displaystyle \mathrm {[H^{+}]^{3}} +(K_{a}+C_{b})\mathrm {[H^{+}]^{2}} -(K_{w}+K_{a}C_{a})\mathrm {[H^{+}]} -K_{a}K_{w}=0}
The solution to this equation may also be given in explicit form, although it is inconvenient to use:
{\displaystyle {\begin{alignedat}{2}\mathrm {[H^{+}]} ={}&{\frac {2}{3}}{\sqrt {(K_{a}-C_{b})^{2}+(3C_{a}+4C_{b})K_{a}+3K_{w}}}\cdot {}\\&\cos \left({\frac {1}{3}}\arccos {\frac {{\bigl (}K_{a}+C_{b}{\bigr )}{\bigl (}18K_{w}-2(K_{a}+C_{b})^{2}-9C_{a}K_{a}{\bigr )}-27C_{b}K_{w}}{2{\bigl (}(K_{a}-C_{b})^{2}+(3C_{a}+4C_{b})K_{a}+3K_{w}{\bigr )}^{\frac {3}{2}}}}\right)-{\frac {K_{a}+C_{b}}{3}}\end{alignedat}}}
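In practice the cubic is usually solved numerically rather than with the explicit form. The following sketch (the buffer values are an assumed example, not taken from the text) applies Newton's method, using the Henderson–Hasselbalch value Ka·Ca/Cb as the starting guess:

```python
# Numeric solution of the Charlot cubic for [H+], then pH = -log10([H+]).
import math

def charlot_pH(Ka, Ca, Cb, Kw=1e-14):
    # cubic from above: h^3 + (Ka+Cb) h^2 - (Kw + Ka*Ca) h - Ka*Kw = 0
    f  = lambda h: h**3 + (Ka + Cb)*h**2 - (Kw + Ka*Ca)*h - Ka*Kw
    df = lambda h: 3*h**2 + 2*(Ka + Cb)*h - (Kw + Ka*Ca)
    h = Ka * Ca / Cb              # Henderson-Hasselbalch starting guess
    for _ in range(50):           # Newton's method
        h -= f(h) / df(h)
    return -math.log10(h)

# assumed example: equimolar acetic acid / acetate buffer, Ka = 1.8e-5
print(charlot_pH(1.8e-5, 0.1, 0.1))  # close to pKa, as expected for Ca = Cb
```

For an equimolar buffer the correction Δ is tiny compared with the analytical concentrations, so the result stays very close to the Henderson–Hasselbalch value pKa ≈ 4.74; the Charlot equation matters when Ca or Cb is small or the acid is very weak or very strong.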
== Derivation ==
Considering the dissociation of the weak acid HA (e.g., acetic acid):
HA ⇌ H+ + A−
Starting from the definition of the equilibrium constant
{\displaystyle K_{a}=\mathrm {\frac {[H^{+}][A^{-}]}{[HA]}} }
one can solve for [H+] as follows:
{\displaystyle \mathrm {[H^{+}]={\mathit {K_{a}}}{\frac {[HA]}{[A^{-}]}}} }
The main issue is how to determine the equilibrium concentrations [HA] and [A−] from the initial, or analytical concentrations Ca and Cb. This can be achieved by considering the electroneutrality and mass balance constraints on the system. The first constraint is that the total concentration of cations needs to equal the total concentration of anions, because the system has to be electrically neutral:
{\displaystyle \mathrm {[M^{+}]+[H^{+}]=[A^{-}]+[OH^{-}]} }
Here M+ is the counterion that comes with the conjugate base, [A−], that is added to the solution. For example, if HA is acetic acid, A− would be acetate, which could be added to the solution in the form of sodium acetate. In this case, M+ would be the sodium cation. The equilibrium concentration [M+] is constant and equal to the analytical concentration of the base, Cb. Therefore,
{\displaystyle \mathrm {[A^{-}]={\mathit {C_{b}}}+[H^{+}]-[OH^{-}]} =C_{b}+\Delta }
Because of mass balance, the sum of the equilibrium concentrations of the acid and its conjugate base has to remain equal to the sum of their analytical concentrations. (HA may convert into A− and vice versa, but what is lost of HA is gained of A−, keeping the sum constant.)
{\displaystyle \mathrm {[HA]+[A^{-}]} =C_{a}+C_{b}}
Substituting [A−] and solving for [HA]:
{\displaystyle \mathrm {[HA]} =C_{a}-\Delta }
Introducing the equations for [HA] and [A−] into the equation for [H+] yields the Charlot equation.
== See also ==
Bjerrum plot
== References ==
Charlot, Gaston (1947). "Utilité de la définition de Brönsted des acides et des bases en chimie analytique". Analytica Chimica Acta. 1: 59–68. doi:10.1016/S0003-2670(00)89721-4.
de Levie, Robert (2002). "The Henderson approximation and the Mass Action law of Guldberg and Waage". The Chemical Educator. 7 (3): 132–135. doi:10.1007/s00897020562a. | Wikipedia/Charlot_equation |
In algebra, Gauss's lemma, named after Carl Friedrich Gauss, is a theorem about polynomials over the integers, or, more generally, over a unique factorization domain (that is, a ring that has a unique factorization property similar to the fundamental theorem of arithmetic). Gauss's lemma underlies all the theory of factorization and greatest common divisors of such polynomials.
Gauss's lemma asserts that the product of two primitive polynomials is primitive. (A polynomial with integer coefficients is primitive if it has 1 as a greatest common divisor of its coefficients.)
A corollary of Gauss's lemma, sometimes also called Gauss's lemma, is that a primitive polynomial is irreducible over the integers if and only if it is irreducible over the rational numbers. More generally, a primitive polynomial has the same complete factorization over the integers and over the rational numbers. In the case of coefficients in a unique factorization domain R, "rational numbers" must be replaced by "field of fractions of R". This implies that, if R is either a field, the ring of integers, or a unique factorization domain, then every polynomial ring (in one or several indeterminates) over R is a unique factorization domain. Another consequence is that factorization and greatest common divisor computation of polynomials with integers or rational coefficients may be reduced to similar computations on integers and primitive polynomials. This is systematically used (explicitly or implicitly) in all implemented algorithms (see Polynomial greatest common divisor and Factorization of polynomials).
Gauss's lemma, as well as its consequences that do not involve the existence of a complete factorization, remain true over any GCD domain (an integral domain over which greatest common divisors exist). In particular, a polynomial ring over a GCD domain is also a GCD domain. If one calls primitive a polynomial such that the coefficients generate the unit ideal, Gauss's lemma is true over every commutative ring. However, some care must be taken when using this definition of primitive, as, over a unique factorization domain that is not a principal ideal domain, there are polynomials that are primitive in the above sense and not primitive in this new sense.
== The lemma over the integers ==
If {\displaystyle F(X)=a_{0}+a_{1}X+\dots +a_{n}X^{n}} is a polynomial with integer coefficients, then {\displaystyle F} is called primitive if the greatest common divisor of all the coefficients {\displaystyle a_{0},a_{1},\dots ,a_{n}} is 1; in other words, no prime number divides all the coefficients. Gauss's lemma asserts that the product of two primitive polynomials is itself primitive.
Proof: Clearly the product f(x)g(x) of two primitive polynomials has integer coefficients. Therefore, if it is not primitive, there must be a prime p which is a common divisor of all its coefficients. But p cannot divide all the coefficients of either f(x) or g(x) (otherwise they would not be primitive). Let arxr be the first term of f(x) not divisible by p and let bsxs be the first term of g(x) not divisible by p. Now consider the term xr+s in the product, whose coefficient is
{\displaystyle \cdots +a_{r+2}b_{s-2}+a_{r+1}b_{s-1}+a_{r}b_{s}+a_{r-1}b_{s+1}+a_{r-2}b_{s+2}+\cdots .}
The term arbs is not divisible by p (because p is prime), yet all the remaining ones are, so the entire sum cannot be divisible by p. By assumption all coefficients in the product are divisible by p, leading to a contradiction. Therefore, the coefficients of the product can have no common divisor and are thus primitive.
{\displaystyle \square }
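Over the integers, the lemma is easy to check on examples. The following illustrative sketch (the helper names `poly_mul` and `is_primitive` are ours, not from the article) multiplies two primitive polynomials, represented as coefficient lists, and verifies that the product is primitive:

```python
# Check Gauss's lemma on a concrete pair of primitive integer polynomials.
from math import gcd
from functools import reduce

def poly_mul(f, g):
    """Multiply polynomials given as coefficient lists [a0, a1, ...]."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for k, b in enumerate(g):
            out[i + k] += a * b
    return out

def is_primitive(f):
    """A polynomial is primitive iff the gcd of its coefficients is 1."""
    return reduce(gcd, f) == 1

f = [2, 3, 1]   # 2 + 3x + x^2, primitive
g = [6, 5, 4]   # 6 + 5x + 4x^2, primitive
h = poly_mul(f, g)
print(h, is_primitive(h))  # the product is primitive, as the lemma asserts
```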
The proof is given below for the more general case. Note that an irreducible element of Z (a prime number) is still irreducible when viewed as constant polynomial in Z[X]; this explains the need for "non-constant" in the statement.
== Statements for unique factorization domains ==
Gauss's lemma holds more generally over arbitrary unique factorization domains. There the content c(P) of a polynomial P can be defined as the greatest common divisor of the coefficients of P (like the gcd, the content is actually a set of associate elements). A polynomial P with coefficients in a UFD R is then said to be primitive if the only elements of R that divide all coefficients of P at once are the invertible elements of R; i.e., the gcd of the coefficients is one.
Primitivity statement: If R is a UFD, then the set of primitive polynomials in R[X] is closed under multiplication. More generally, the content of a product {\displaystyle fg} of polynomials is the product {\displaystyle c(f)c(g)} of their individual contents.
Irreducibility statement: Let R be a unique factorization domain and F its field of fractions. A non-constant polynomial {\displaystyle f} in {\displaystyle R[x]} is irreducible in {\displaystyle R[x]} if and only if it is both irreducible in {\displaystyle F[x]} and primitive in {\displaystyle R[x]}.
(For the proofs, see #General version below.)
Let {\displaystyle R} be a unique factorization domain with field of fractions {\displaystyle F}. If {\displaystyle f\in F[x]} is a polynomial over {\displaystyle F}, then for some {\displaystyle d} in {\displaystyle R}, {\displaystyle df} has coefficients in {\displaystyle R}, and so (factoring out the gcd {\displaystyle q} of the coefficients) we can write {\displaystyle df=qf'} for some primitive polynomial {\displaystyle f'\in R[x]}. As one can check, this polynomial {\displaystyle f'} is unique up to multiplication by a unit; it is called the primitive part (or primitive representative) of {\displaystyle f} and is denoted by {\displaystyle \operatorname {pp} (f)}. The procedure is compatible with products: {\displaystyle \operatorname {pp} (fg)=\operatorname {pp} (f)\operatorname {pp} (g)}.
The construct can be used to show the statement:
A polynomial ring over a UFD is a UFD.
Indeed, by induction, it is enough to show {\displaystyle R[x]} is a UFD when {\displaystyle R} is a UFD. Let {\displaystyle f\in R[x]} be a non-zero polynomial. Now, {\displaystyle F[x]} is a unique factorization domain (since it is a principal ideal domain) and so, as a polynomial in {\displaystyle F[x]}, {\displaystyle f} can be factorized as:

{\displaystyle f=g_{1}g_{2}\dots g_{r}}

where {\displaystyle g_{i}} are irreducible polynomials of {\displaystyle F[x]}. Now, we write {\displaystyle f=cf'} for the gcd {\displaystyle c} of the coefficients of {\displaystyle f} (and {\displaystyle f'} is the primitive part) and then:

{\displaystyle f=cf'=c\operatorname {pp} (g_{1})\operatorname {pp} (g_{2})\cdots \operatorname {pp} (g_{r}).}

Now, {\displaystyle c} is a product of prime elements of {\displaystyle R} (since {\displaystyle R} is a UFD), and a prime element of {\displaystyle R} is a prime element of {\displaystyle R[x]}, as {\displaystyle R[x]/(p)\cong R/(p)[x]} is an integral domain. Hence, {\displaystyle c} admits a prime factorization (or a unique factorization into irreducibles). Next, observe that {\displaystyle f'=\operatorname {pp} (g_{1})\cdots \operatorname {pp} (g_{r})} is a unique factorization into irreducible elements of {\displaystyle R[x]}, as (1) each {\displaystyle \operatorname {pp} (g_{i})} is irreducible by the irreducibility statement and (2) it is unique since the factorization of {\displaystyle f'} can also be viewed as a factorization in {\displaystyle F[x]} and factorization there is unique. Since {\displaystyle c} and {\displaystyle f'} are uniquely determined by {\displaystyle f} up to unit elements, the above factorization of {\displaystyle f} is a unique factorization into irreducible elements. {\displaystyle \square }
The condition that "R is a unique factorization domain" is not superfluous because it implies that every irreducible element of this ring is also a prime element, which in turn implies that every non-zero element of R has at most one factorization into a product of irreducible elements and a unit up to order and associate relationship. In a ring where factorization is not unique, say pa = qb with p and q irreducible elements that do not divide any of the factors on the other side, the product (p + qX)(a + qX) = pa + (p+a)qX + q²X² = q(b + (p+a)X + qX²) shows the failure of the primitivity statement. For a concrete example one can take R = Z[i√5], p = 1 + i√5, a = 1 − i√5, q = 2, b = 3. In this example the polynomial 3 + 2X + 2X² (obtained by dividing the right hand side by q = 2) provides an example of the failure of the irreducibility statement (it is irreducible over R, but reducible over its field of fractions Q[i√5]). Another well-known example is the polynomial X² − X − 1, whose roots are the golden ratio φ = (1 + √5)/2 and its conjugate (1 − √5)/2, showing that it is reducible over the field Q[√5], although it is irreducible over the non-UFD Z[√5], which has Q[√5] as field of fractions. In the latter example the ring can be made into a UFD by taking its integral closure Z[φ] in Q[√5] (the ring of Dirichlet integers), over which X² − X − 1 becomes reducible, but in the former example R is already integrally closed.
== General version ==
Let {\displaystyle R} be a commutative ring. If {\displaystyle f} is a polynomial in {\displaystyle R[x_{1},\dots ,x_{n}]}, then we write {\displaystyle \operatorname {cont} (f)} for the ideal of {\displaystyle R} generated by all the coefficients of {\displaystyle f}; it is called the content of {\displaystyle f}. Note that {\displaystyle \operatorname {cont} (af)=a\operatorname {cont} (f)} for each {\displaystyle a} in {\displaystyle R}. The next proposition states a more substantial property.

Proposition: For two polynomials {\displaystyle f,g} in {\displaystyle R[x_{1},\dots ,x_{n}]},

{\displaystyle \operatorname {cont} (fg)\subset \operatorname {cont} (f)\operatorname {cont} (g)\subset {\sqrt {\operatorname {cont} (fg)}}.}

Moreover, if {\displaystyle R} is a GCD domain (e.g., a unique factorization domain), then the gcd of the coefficients of {\displaystyle fg} equals, up to units, the product of the gcds of the coefficients of {\displaystyle f} and of {\displaystyle g}.

A polynomial {\displaystyle f} is said to be primitive if {\displaystyle \operatorname {cont} (f)} is the unit ideal {\displaystyle (1)}. When {\displaystyle R=\mathbb {Z} } (or more generally when {\displaystyle R} is a Bézout domain), this agrees with the usual definition of a primitive polynomial. (But if {\displaystyle R} is only a UFD, this definition is inconsistent with the definition of primitivity in #Statements for unique factorization domains.)

Gauss's lemma (general version): The product of two primitive polynomials is primitive.

Proof: This is easy using the fact that {\displaystyle {\sqrt {I}}=(1)} implies {\displaystyle I=(1).} {\displaystyle \square }
Proof: ({\displaystyle \Rightarrow }) First note that the gcd of the coefficients of {\displaystyle f} is 1 since, otherwise, we could factor out some element {\displaystyle c\in R} from the coefficients of {\displaystyle f} to write {\displaystyle f=cf'}, contradicting the irreducibility of {\displaystyle f}. Next, suppose {\displaystyle f=gh} for some non-constant polynomials {\displaystyle g,h} in {\displaystyle F[x]}. Then, for some {\displaystyle d\in R}, the polynomial {\displaystyle dg} has coefficients in {\displaystyle R} and so, by factoring out the gcd {\displaystyle q} of the coefficients, we write {\displaystyle dg=qg'}. Doing the same for {\displaystyle h}, we can write {\displaystyle f=cg'h'} for some {\displaystyle c\in F}. Now, let {\displaystyle c=a/b} for some {\displaystyle a,b\in R}. Then {\displaystyle bf=ag'h'}. From this, using the proposition, we get:

{\displaystyle (b)\supset \operatorname {gcd} (\operatorname {cont} (bf))=(a)}.

That is, {\displaystyle b} divides {\displaystyle a}. Thus, {\displaystyle c\in R}, and then the factorization {\displaystyle f=cg'h'} contradicts the irreducibility of {\displaystyle f}.

({\displaystyle \Leftarrow }) If {\displaystyle f} is irreducible over {\displaystyle F}, then either it is irreducible over {\displaystyle R} or it contains a constant polynomial as a factor; the second possibility is ruled out by the assumption that {\displaystyle f} is primitive. {\displaystyle \square }
Proof of the proposition: Clearly, {\displaystyle \operatorname {cont} (fg)\subset \operatorname {cont} (f)\operatorname {cont} (g)}. If {\displaystyle {\mathfrak {p}}} is a prime ideal containing {\displaystyle \operatorname {cont} (fg)}, then {\displaystyle fg\equiv 0} modulo {\displaystyle {\mathfrak {p}}}. Since {\displaystyle R/{\mathfrak {p}}[x_{1},\dots ,x_{n}]} is a polynomial ring over an integral domain and thus is an integral domain, this implies either {\displaystyle f\equiv 0} or {\displaystyle g\equiv 0} modulo {\displaystyle {\mathfrak {p}}}. Hence, either {\displaystyle \operatorname {cont} (f)} or {\displaystyle \operatorname {cont} (g)} is contained in {\displaystyle {\mathfrak {p}}}. Since {\displaystyle {\sqrt {\operatorname {cont} (fg)}}} is the intersection of all prime ideals that contain {\displaystyle \operatorname {cont} (fg)} and the choice of {\displaystyle {\mathfrak {p}}} was arbitrary, {\displaystyle \operatorname {cont} (f)\operatorname {cont} (g)\subset {\sqrt {\operatorname {cont} (fg)}}}.
We now prove the "moreover" part. Factoring out the gcds from the coefficients, we can write {\displaystyle f=af'} and {\displaystyle g=bg'} where the gcds of the coefficients of {\displaystyle f',g'} are both 1. Clearly, it is enough to prove the assertion when {\displaystyle f,g} are replaced by {\displaystyle f',g'}; thus, we assume the gcds of the coefficients of {\displaystyle f,g} are both 1. The rest of the proof is easy and transparent if {\displaystyle R} is a unique factorization domain; thus we give the proof in that case here (for the GCD case, see the references). If {\displaystyle \gcd(\operatorname {cont} (fg))=(1)}, then there is nothing to prove. So, assume otherwise; then there is a non-unit element dividing the coefficients of {\displaystyle fg}. Factorizing that element into a product of prime elements, we can take that element to be a prime element {\displaystyle \pi }. Now, we have:

{\displaystyle (\pi )={\sqrt {(\pi )}}\supset {\sqrt {\operatorname {cont} (fg)}}\supset \operatorname {cont} (f)\operatorname {cont} (g)}.

Thus, {\displaystyle (\pi )} contains either {\displaystyle \operatorname {cont} (f)} or {\displaystyle \operatorname {cont} (g)}, contradicting the assumption that the gcds of the coefficients of {\displaystyle f} and {\displaystyle g} are both 1. {\displaystyle \square }
Remark: Over a GCD domain (e.g., a unique factorization domain), the gcd of all the coefficients of a polynomial $f$, unique up to unit elements, is also called the content of $f$.
== Applications ==
It follows from Gauss's lemma that for each unique factorization domain $R$, the polynomial ring $R[X_1, X_2, \ldots, X_n]$ is also a unique factorization domain (see § Statements for unique factorization domains). Gauss's lemma can also be used to show Eisenstein's irreducibility criterion. Finally, it can be used to show that cyclotomic polynomials (monic polynomials with integer coefficients) are irreducible.
Gauss's lemma implies the following statement:

If $f(x)$ is a monic polynomial in one variable with coefficients in a unique factorization domain $R$ (or more generally a GCD domain), then a root of $f$ that is in the field of fractions $F$ of $R$ is in $R$.
If $R = \mathbb{Z}$, then it says a rational root of a monic polynomial over integers is an integer (cf. the rational root theorem). To see the statement, let $a/b$ be a root of $f$ in $F$ and assume $a, b$ are relatively prime. In $F[x]$ we can write $f = (x - a/b)g$ with $cg \in R[x]$ for some $c \in R$. Then $cbf = (bx - a)cg$ is a factorization in $R[x]$. But $bx - a$ is primitive (in the UFD sense) and thus $cb$ divides the coefficients of $cg$ by Gauss's lemma; and so $f = (bx - a)h$ with $h$ in $R[x]$. Since $f$ is monic, this is possible only when $b$ is a unit.
A similar argument shows:

Let $R$ be a GCD domain with the field of fractions $F$ and $f \in R[x]$. If $f = gh$ for some polynomial $g \in R[x]$ that is primitive in the UFD sense and $h \in F[x]$, then $h \in R[x]$.
The irreducibility statement also implies that the minimal polynomial over the rational numbers of an algebraic integer has integer coefficients.
== Notes ==
== References ==
Atiyah, Michael Francis; Macdonald, I.G. (1969), Introduction to Commutative Algebra, Westview Press, ISBN 978-0-201-40751-8
Eisenbud, David (1995), Commutative algebra, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-5350-1, ISBN 978-0-387-94268-1, MR 1322960
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
The DFT is used in the Fourier analysis of many practical applications. In digital signal processing, the function is any quantity or signal that varies over time, such as the pressure of a sound wave, a radio signal, or daily temperature readings, sampled over a finite time interval (often defined by a window function). In image processing, the samples can be the values of pixels along a row or column of a raster image. The DFT is also used to efficiently solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers.
Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms; so much so that the terms "FFT" and "DFT" are often used interchangeably. Prior to its current usage, the "FFT" initialism may have also been used for the ambiguous term "finite Fourier transform".
== Definition ==
The discrete Fourier transform transforms a sequence of N complex numbers $\{x_n\} := x_0, x_1, \ldots, x_{N-1}$ into another sequence of complex numbers, $\{X_k\} := X_0, X_1, \ldots, X_{N-1}$, which is defined by:

$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-\frac{i2\pi}{N}kn}$ (Eq.1)

The transform is sometimes denoted by the symbol $\mathcal{F}$, as in $\mathbf{X} = \mathcal{F}\{\mathbf{x}\}$ or $\mathcal{F}(\mathbf{x})$ or $\mathcal{F}\mathbf{x}$.
Eq.1 can be interpreted or derived in various ways. For example, Eq.1 can also be evaluated outside the domain $k \in [0, N-1]$, and that extended sequence is $N$-periodic. Accordingly, other sequences of $N$ indices are sometimes used, such as $\left[-\frac{N}{2}, \frac{N}{2}-1\right]$ (if $N$ is even) and $\left[-\frac{N-1}{2}, \frac{N-1}{2}\right]$ (if $N$ is odd), which amounts to swapping the left and right halves of the result of the transform.
The inverse transform is given by:

$x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k \cdot e^{\frac{i2\pi}{N}kn}$ (Eq.2)

Eq.2 is also $N$-periodic (in index n). In Eq.2, each $X_k$ is a complex number whose polar coordinates are the amplitude and phase of a complex sinusoidal component $\left(e^{i2\pi \frac{k}{N}n}\right)$ of function $x_n$ (see Discrete Fourier series). The sinusoid's frequency is $k$ cycles per $N$ samples.

The normalization factor multiplying the DFT and IDFT (here 1 and $\frac{1}{N}$) and the signs of the exponents are the most common conventions. The only actual requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be $\frac{1}{N}$. An uncommon normalization of $\sqrt{\frac{1}{N}}$ for both the DFT and IDFT makes the transform-pair unitary.
== Example ==
This example demonstrates how to apply the DFT to a sequence of length $N = 4$ and the input vector

$\mathbf{x} = \begin{pmatrix}x_0\\x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}1\\2-i\\-i\\-1+2i\end{pmatrix}.$
Calculating the DFT of $\mathbf{x}$ using Eq.1

$\begin{aligned}
X_0 &= e^{-i2\pi 0\cdot 0/4}\cdot 1 + e^{-i2\pi 0\cdot 1/4}\cdot(2-i) + e^{-i2\pi 0\cdot 2/4}\cdot(-i) + e^{-i2\pi 0\cdot 3/4}\cdot(-1+2i) = 2 \\
X_1 &= e^{-i2\pi 1\cdot 0/4}\cdot 1 + e^{-i2\pi 1\cdot 1/4}\cdot(2-i) + e^{-i2\pi 1\cdot 2/4}\cdot(-i) + e^{-i2\pi 1\cdot 3/4}\cdot(-1+2i) = -2-2i \\
X_2 &= e^{-i2\pi 2\cdot 0/4}\cdot 1 + e^{-i2\pi 2\cdot 1/4}\cdot(2-i) + e^{-i2\pi 2\cdot 2/4}\cdot(-i) + e^{-i2\pi 2\cdot 3/4}\cdot(-1+2i) = -2i \\
X_3 &= e^{-i2\pi 3\cdot 0/4}\cdot 1 + e^{-i2\pi 3\cdot 1/4}\cdot(2-i) + e^{-i2\pi 3\cdot 2/4}\cdot(-i) + e^{-i2\pi 3\cdot 3/4}\cdot(-1+2i) = 4+4i
\end{aligned}$
results in

$\mathbf{X} = \begin{pmatrix}X_0\\X_1\\X_2\\X_3\end{pmatrix} = \begin{pmatrix}2\\-2-2i\\-2i\\4+4i\end{pmatrix}.$
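The worked example above can be checked numerically. The following sketch (using NumPy, which is not part of the original article but happens to use the same sign and normalization convention as Eq.1) recomputes the four coefficients both from the definition and with the library FFT:

```python
import numpy as np

# Input vector from the example above
x = np.array([1, 2 - 1j, -1j, -1 + 2j])
N = len(x)
n = np.arange(N)

# Eq.1 evaluated directly: X_k = sum_n x_n * exp(-i 2 pi k n / N)
X_def = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# NumPy's FFT uses the same convention, so the results agree
X_fft = np.fft.fft(x)

assert np.allclose(X_def, [2, -2 - 2j, -2j, 4 + 4j])
assert np.allclose(X_fft, X_def)
```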
== Properties ==
=== Linearity ===
The DFT is a linear transform, i.e. if $\mathcal{F}(\{x_n\})_k = X_k$ and $\mathcal{F}(\{y_n\})_k = Y_k$, then for any complex numbers $a, b$:

$\mathcal{F}(\{ax_n + by_n\})_k = aX_k + bY_k$
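As a quick numerical illustration (a sketch using NumPy, not part of the original article), linearity can be verified on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
y = rng.standard_normal(8) + 1j * rng.standard_normal(8)
a, b = 2 - 1j, 0.5 + 3j

# DFT of a linear combination equals the same combination of the DFTs
lhs = np.fft.fft(a * x + b * y)
rhs = a * np.fft.fft(x) + b * np.fft.fft(y)
assert np.allclose(lhs, rhs)
```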
=== Time and frequency reversal ===
Reversing the time (i.e. replacing $n$ by $N-n$) in $x_n$ corresponds to reversing the frequency (i.e. $k$ by $N-k$) (p. 421). Mathematically, if $\{x_n\}$ represents the vector x, then

if $\mathcal{F}(\{x_n\})_k = X_k$
then $\mathcal{F}(\{x_{N-n}\})_k = X_{N-k}$
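The reversal property can be illustrated as follows (a NumPy sketch, not from the article; note that the sequence $x_{N-n}$, with indices taken modulo N, keeps element 0 fixed and reverses the rest):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
X = np.fft.fft(x)

# x_{N-n} with indices mod N: element 0 stays in place, the rest reverse
x_rev = np.roll(x[::-1], 1)
# X_{N-k} follows the same index pattern
X_rev = np.roll(X[::-1], 1)
assert np.allclose(np.fft.fft(x_rev), X_rev)
```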
=== Conjugation in time ===
If $\mathcal{F}(\{x_n\})_k = X_k$ then $\mathcal{F}(\{x_n^*\})_k = X_{N-k}^*$ (p. 423).
=== Real and imaginary part ===
This table shows some mathematical operations on $x_n$ in the time domain and the corresponding effects on its DFT $X_k$ in the frequency domain.
=== Orthogonality ===
The vectors $u_k = \left[\left. e^{\frac{i2\pi}{N}kn} \;\right|\; n = 0, 1, \ldots, N-1\right]^{\mathsf{T}}$, for $k = 0, 1, \ldots, N-1$, form an orthogonal basis over the set of N-dimensional complex vectors:

$u_k^{\mathsf{T}} u_{k'}^* = \sum_{n=0}^{N-1}\left(e^{\frac{i2\pi}{N}kn}\right)\left(e^{\frac{i2\pi}{N}(-k')n}\right) = \sum_{n=0}^{N-1} e^{\frac{i2\pi}{N}(k-k')n} = N\,\delta_{kk'}$

where $\delta_{kk'}$ is the Kronecker delta. (In the last step, the summation is trivial if $k = k'$, where it is 1 + 1 + ⋯ = N, and otherwise is a geometric series that can be explicitly summed to obtain zero.) This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below.
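The orthogonality relation can be checked by forming all N basis vectors at once and computing their Gram matrix (a NumPy sketch, not part of the article):

```python
import numpy as np

N = 6
n = np.arange(N)
# Row k of U is the basis vector u_k with entries exp(i 2 pi k n / N)
U = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)

# Gram matrix u_k^T u_{k'}^* should be N times the identity
G = U @ U.conj().T
assert np.allclose(G, N * np.eye(N))
```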
=== The Plancherel theorem and Parseval's theorem ===
If $X_k$ and $Y_k$ are the DFTs of $x_n$ and $y_n$ respectively, then Parseval's theorem states:

$\sum_{n=0}^{N-1} x_n y_n^* = \frac{1}{N}\sum_{k=0}^{N-1} X_k Y_k^*$

where the star denotes complex conjugation. The Plancherel theorem is a special case of Parseval's theorem and states:

$\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N}\sum_{k=0}^{N-1} |X_k|^2.$

These theorems are also equivalent to the unitary condition below.
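Both identities are easy to confirm on random data (a NumPy sketch, not part of the article; the factor $1/N$ reflects the unnormalized-DFT convention used here):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
y = rng.standard_normal(16) + 1j * rng.standard_normal(16)
X, Y = np.fft.fft(x), np.fft.fft(y)
N = len(x)

# Parseval: inner products agree up to the 1/N normalization
assert np.isclose(np.sum(x * y.conj()), np.sum(X * Y.conj()) / N)
# Plancherel is the special case y = x
assert np.isclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2) / N)
```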
=== Periodicity ===
The periodicity can be shown directly from the definition:

$X_{k+N} \triangleq \sum_{n=0}^{N-1} x_n e^{-\frac{i2\pi}{N}(k+N)n} = \sum_{n=0}^{N-1} x_n e^{-\frac{i2\pi}{N}kn} \underbrace{e^{-i2\pi n}}_{1} = \sum_{n=0}^{N-1} x_n e^{-\frac{i2\pi}{N}kn} = X_k.$

Similarly, it can be shown that the IDFT formula leads to a periodic extension of $x_n$.
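Evaluating Eq.1 at out-of-range indices makes the periodicity concrete (a NumPy sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(8)
N = len(x)
n = np.arange(N)

def dft_at(k):
    # Eq.1 evaluated at an arbitrary (possibly out-of-range) integer k
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

# X_{k+N} = X_k for every k
assert all(np.isclose(dft_at(k + N), dft_at(k)) for k in range(N))
```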
=== Shift theorem ===
Multiplying $x_n$ by a linear phase $e^{\frac{i2\pi}{N}nm}$ for some integer m corresponds to a circular shift of the output $X_k$: $X_k$ is replaced by $X_{k-m}$, where the subscript is interpreted modulo N (i.e., periodically). Similarly, a circular shift of the input $x_n$ corresponds to multiplying the output $X_k$ by a linear phase. Mathematically, if $\{x_n\}$ represents the vector x, then

if $\mathcal{F}(\{x_n\})_k = X_k$
then $\mathcal{F}\left(\left\{x_n \cdot e^{\frac{i2\pi}{N}nm}\right\}\right)_k = X_{k-m}$
and $\mathcal{F}\left(\left\{x_{n-m}\right\}\right)_k = X_k \cdot e^{-\frac{i2\pi}{N}km}$
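Both directions of the shift theorem can be demonstrated with `np.roll`, which performs exactly the circular (modulo-N) shift used here (a NumPy sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
N, m = len(x), 3
n = np.arange(N)
k = np.arange(N)
X = np.fft.fft(x)

# Modulation in time <-> circular shift in frequency: F{x_n e^(i2pi nm/N)}_k = X_{k-m}
assert np.allclose(np.fft.fft(x * np.exp(2j * np.pi * n * m / N)), np.roll(X, m))
# Circular shift in time <-> linear phase in frequency
assert np.allclose(np.fft.fft(np.roll(x, m)), X * np.exp(-2j * np.pi * k * m / N))
```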
=== Circular convolution theorem and cross-correlation theorem ===
The convolution theorem for the discrete-time Fourier transform (DTFT) indicates that a convolution of two sequences can be obtained as the inverse transform of the product of the individual transforms. An important simplification occurs when one of the sequences is N-periodic, denoted here by $y_N$, because $\text{DTFT}\{y_N\}$ is non-zero at only discrete frequencies (see DTFT § Periodic data), and therefore so is its product with the continuous function $\text{DTFT}\{x\}$. That leads to a considerable simplification of the inverse transform.

$x * y_N = \text{DTFT}^{-1}\left[\text{DTFT}\{x\}\cdot\text{DTFT}\{y_N\}\right] = \text{DFT}^{-1}\left[\text{DFT}\{x_N\}\cdot\text{DFT}\{y_N\}\right],$

where $x_N$ is a periodic summation of the $x$ sequence:

$(x_N)_n \triangleq \sum_{m=-\infty}^{\infty} x_{(n-mN)}.$

Customarily, the DFT and inverse DFT summations are taken over the domain $[0, N-1]$. Defining those DFTs as $X$ and $Y$, the result is:

$(x * y_N)_n \triangleq \sum_{\ell=-\infty}^{\infty} x_\ell \cdot (y_N)_{n-\ell} = \underbrace{\mathcal{F}^{-1}}_{\text{DFT}^{-1}}\left\{X \cdot Y\right\}_n.$

In practice, the $x$ sequence is usually length N or less, and $y_N$ is a periodic extension of an N-length $y$-sequence, which can also be expressed as a circular function:

$(y_N)_n = \sum_{p=-\infty}^{\infty} y_{(n-pN)} = y_{(n \bmod N)}, \quad n \in \mathbb{Z}.$

Then the convolution can be written as:

$(x * y_N)_n = \sum_{\ell=0}^{N-1} x_\ell \cdot y_{(n-\ell) \bmod N},$

which gives rise to the interpretation as a circular convolution of $x$ and $y$. It is often used to efficiently compute their linear convolution. (See Circular convolution, Fast convolution algorithms, and Overlap-save.)

Similarly, the cross-correlation of $x$ and $y_N$ is given by:

$(x \star y_N)_n \triangleq \sum_{\ell=-\infty}^{\infty} x_\ell^* \cdot (y_N)_{n+\ell} = \mathcal{F}^{-1}\left\{X^* \cdot Y\right\}_n.$
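The circular convolution theorem can be demonstrated by comparing the direct modulo-N sum against the DFT route (a NumPy sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(8)
y = rng.standard_normal(8)
N = len(x)

# Direct circular convolution: sum_l x_l * y_{(n-l) mod N}
direct = np.array([sum(x[l] * y[(n - l) % N] for l in range(N))
                   for n in range(N)])

# Same result via the DFT: elementwise product in the frequency domain
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
assert np.allclose(direct, via_dft)
```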
=== Uniqueness of the Discrete Fourier Transform ===
As seen above, the discrete Fourier transform has the fundamental property of carrying convolution into componentwise product. A natural question is whether it is the only one with this ability. It has been shown that any linear transform that turns convolution into pointwise product is the DFT up to a permutation of coefficients. Since the number of permutations of n elements equals n!, there exist exactly n! linear and invertible maps with the same fundamental property as the DFT with respect to convolution.
=== Convolution theorem duality ===
It can also be shown that:

$\mathcal{F}\left\{\mathbf{x \cdot y}\right\}_k \triangleq \sum_{n=0}^{N-1} x_n \cdot y_n \cdot e^{-i\frac{2\pi}{N}kn} = \frac{1}{N}(\mathbf{X * Y_N})_k,$

which is the circular convolution of $\mathbf{X}$ and $\mathbf{Y}$.
=== Trigonometric interpolation polynomial ===
The trigonometric interpolation polynomial

$p(t) = \begin{cases}
\dfrac{1}{N}\left[X_0 + X_1 e^{i2\pi t} + \cdots + X_{\frac{N}{2}-1} e^{i2\pi\left(\frac{N}{2}-1\right)t} + X_{\frac{N}{2}}\cos(N\pi t) + X_{\frac{N}{2}+1} e^{-i2\pi\left(\frac{N}{2}-1\right)t} + \cdots + X_{N-1} e^{-i2\pi t}\right] & N\text{ even} \\[1ex]
\dfrac{1}{N}\left[X_0 + X_1 e^{i2\pi t} + \cdots + X_{\frac{N-1}{2}} e^{i2\pi\frac{N-1}{2}t} + X_{\frac{N+1}{2}} e^{-i2\pi\frac{N-1}{2}t} + \cdots + X_{N-1} e^{-i2\pi t}\right] & N\text{ odd}
\end{cases}$
where the coefficients $X_k$ are given by the DFT of $x_n$ above, satisfies the interpolation property $p(n/N) = x_n$ for $n = 0, \ldots, N-1$.
For even N, notice that the Nyquist component $\frac{X_{N/2}}{N}\cos(N\pi t)$ is handled specially.
This interpolation is not unique: aliasing implies that one could add N to any of the complex-sinusoid frequencies (e.g. changing $e^{-it}$ to $e^{i(N-1)t}$) without changing the interpolation property, but giving different values in between the $x_n$ points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the $x_n$ are real numbers, then $p(t)$ is real as well.
In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to $N-1$ (instead of roughly $-N/2$ to $+N/2$ as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real $x_n$; its use is a common mistake.
=== The unitary DFT ===
Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as the DFT matrix, a Vandermonde matrix, introduced by Sylvester in 1867,

$\mathbf{F} = \begin{bmatrix}
\omega_N^{0\cdot 0} & \omega_N^{0\cdot 1} & \cdots & \omega_N^{0\cdot(N-1)} \\
\omega_N^{1\cdot 0} & \omega_N^{1\cdot 1} & \cdots & \omega_N^{1\cdot(N-1)} \\
\vdots & \vdots & \ddots & \vdots \\
\omega_N^{(N-1)\cdot 0} & \omega_N^{(N-1)\cdot 1} & \cdots & \omega_N^{(N-1)\cdot(N-1)}
\end{bmatrix}$

where $\omega_N = e^{-i2\pi/N}$ is a primitive Nth root of unity.
For example, in the case when $N = 2$, $\omega_N = e^{-i\pi} = -1$, and

$\mathbf{F} = \begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix},$

(which is a Hadamard matrix), or when $N = 4$ as in the example above, $\omega_N = e^{-i\pi/2} = -i$, and

$\mathbf{F} = \begin{bmatrix}1 & 1 & 1 & 1\\ 1 & -i & -1 & i\\ 1 & -1 & 1 & -1\\ 1 & i & -1 & -i\end{bmatrix}.$
The inverse transform is then given by the inverse of the above matrix,

$\mathbf{F}^{-1} = \frac{1}{N}\mathbf{F}^*$
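The matrix form is easy to construct and check (a NumPy sketch, not part of the article; `np.outer` builds the exponent table $jk$):

```python
import numpy as np

N = 4
omega = np.exp(-2j * np.pi / N)
# Vandermonde DFT matrix with entries omega^(j*k)
F = omega ** np.outer(np.arange(N), np.arange(N))

# Matches the 4x4 matrix shown above
assert np.allclose(F, [[1,   1,  1,   1],
                       [1, -1j, -1,  1j],
                       [1,  -1,  1,  -1],
                       [1,  1j, -1, -1j]])
# The inverse is the conjugate matrix scaled by 1/N
assert np.allclose(np.linalg.inv(F), F.conj() / N)
```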
With unitary normalization constants $1/\sqrt{N}$, the DFT becomes a unitary transformation, defined by a unitary matrix:

$\mathbf{U} = \frac{1}{\sqrt{N}}\mathbf{F}, \qquad \mathbf{U}^{-1} = \mathbf{U}^*, \qquad \left|\det(\mathbf{U})\right| = 1$

where $\det()$ is the determinant function. The determinant is the product of the eigenvalues, which are always $\pm 1$ or $\pm i$ as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.
The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):

$\sum_{m=0}^{N-1} U_{km} U_{mn}^* = \delta_{kn}$
If X is defined as the unitary DFT of the vector x, then

$X_k = \sum_{n=0}^{N-1} U_{kn} x_n$

and Parseval's theorem is expressed as

$\sum_{n=0}^{N-1} x_n y_n^* = \sum_{k=0}^{N-1} X_k Y_k^*$
If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case $\mathbf{x} = \mathbf{y}$, this implies that the length of a vector is preserved as well; this is just the Plancherel theorem,

$\sum_{n=0}^{N-1} |x_n|^2 = \sum_{k=0}^{N-1} |X_k|^2$
A consequence of the circular convolution theorem is that the DFT matrix F diagonalizes any circulant matrix.
=== Expressing the inverse DFT in terms of the DFT ===
A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)
First, we can compute the inverse DFT by reversing all but one of the inputs (Duhamel et al., 1988):
$\mathcal{F}^{-1}(\{x_n\}) = \frac{1}{N}\mathcal{F}(\{x_{N-n}\})$

(As usual, the subscripts are interpreted modulo N; thus, for $n = 0$, we have $x_{N-0} = x_0$.)
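The reversal trick is straightforward to confirm (a NumPy sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
N = len(x)

# x_{N-n} with indices mod N: element 0 stays fixed, the rest reverse
x_rev = np.roll(x[::-1], 1)

# Inverse DFT obtained from the forward DFT of the reversed inputs
assert np.allclose(np.fft.ifft(x), np.fft.fft(x_rev) / N)
```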
Second, one can also conjugate the inputs and outputs:
$\mathcal{F}^{-1}(\mathbf{x}) = \frac{1}{N}\mathcal{F}\left(\mathbf{x}^*\right)^*$
Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define $\operatorname{swap}(x_n)$ as $x_n$ with its real and imaginary parts swapped; that is, if $x_n = a + bi$ then $\operatorname{swap}(x_n)$ is $b + ai$. Equivalently, $\operatorname{swap}(x_n)$ equals $ix_n^*$. Then

$\mathcal{F}^{-1}(\mathbf{x}) = \frac{1}{N}\operatorname{swap}(\mathcal{F}(\operatorname{swap}(\mathbf{x})))$
That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988).
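Both the conjugation trick and the swap variant can be checked numerically (a NumPy sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
N = len(x)

def swap(z):
    # Exchange real and imaginary parts: a+bi -> b+ai (equals i * conj(z))
    return z.imag + 1j * z.real

# Second trick: conjugate the inputs and outputs
assert np.allclose(np.fft.ifft(x), np.fft.fft(x.conj()).conj() / N)
# Third trick: swap real/imaginary parts on input and output
assert np.allclose(np.fft.ifft(x), swap(np.fft.fft(swap(x))) / N)
```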
The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory, that is, which is its own inverse. In particular, $T(\mathbf{x}) = \mathcal{F}\left(\mathbf{x}^*\right)/\sqrt{N}$ is clearly its own inverse: $T(T(\mathbf{x})) = \mathbf{x}$. A closely related involutory transformation (by a factor of $\frac{1+i}{\sqrt{2}}$) is $H(\mathbf{x}) = \mathcal{F}\left((1+i)\mathbf{x}^*\right)/\sqrt{2N}$, since the $(1+i)$ factors in $H(H(\mathbf{x}))$ cancel the 2. For real inputs $\mathbf{x}$, the real part of $H(\mathbf{x})$ is none other than the discrete Hartley transform, which is also involutory.
=== Eigenvalues and eigenvectors ===
The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research. Explicit formulas are given with a significant amount of number theory.
Consider the unitary form $\mathbf{U}$ defined above for the DFT of length N, where

$\mathbf{U}_{m,n} = \frac{1}{\sqrt{N}}\omega_N^{(m-1)(n-1)} = \frac{1}{\sqrt{N}} e^{-\frac{i2\pi}{N}(m-1)(n-1)}.$
This matrix satisfies the matrix polynomial equation:

$\mathbf{U}^4 = \mathbf{I}.$
This can be seen from the inverse properties above: operating $\mathbf{U}$ twice gives the original data in reverse order, so operating $\mathbf{U}$ four times gives back the original data and is thus the identity matrix. This means that the eigenvalues $\lambda$ satisfy the equation:

$\lambda^4 = 1.$

Therefore, the eigenvalues of $\mathbf{U}$ are the fourth roots of unity: $\lambda$ is +1, −1, +i, or −i.
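The identity $\mathbf{U}^4 = \mathbf{I}$ and the resulting eigenvalue constraint can be verified directly (a NumPy sketch, not part of the article):

```python
import numpy as np

N = 5
idx = np.arange(N)
# Unitary DFT matrix U
U = np.exp(-2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)

# Applying U four times is the identity
assert np.allclose(np.linalg.matrix_power(U, 4), np.eye(N))

# Hence every eigenvalue is a fourth root of unity
eigvals = np.linalg.eigvals(U)
assert np.allclose(eigvals**4, 1)
```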
Since there are only four distinct eigenvalues for this $N \times N$ matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. (There are N independent eigenvectors; a unitary matrix is never defective.)
The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by Gauss (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of N modulo 4, and is given by the following table:
Otherwise stated, the characteristic polynomial of $\mathbf{U}$ is:

$\det(\lambda I - \mathbf{U}) = (\lambda-1)^{\left\lfloor \frac{N+4}{4}\right\rfloor}(\lambda+1)^{\left\lfloor \frac{N+2}{4}\right\rfloor}(\lambda+i)^{\left\lfloor \frac{N+1}{4}\right\rfloor}(\lambda-i)^{\left\lfloor \frac{N-1}{4}\right\rfloor}.$
No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like orthogonality and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan et al., 2000; Hanna et al., 2004; Gurevich and Hadani, 2008).
One method to construct DFT eigenvectors to an eigenvalue $\lambda$ is based on the linear combination of operators:

$\mathcal{P}_\lambda = \frac{1}{4}\left(\mathbf{I} + \lambda^{-1}\mathbf{U} + \lambda^{-2}\mathbf{U}^2 + \lambda^{-3}\mathbf{U}^3\right)$
For an arbitrary vector $\mathbf{v}$, the vector $\mathbf{u}(\lambda) = \mathcal{P}_\lambda \mathbf{v}$ satisfies:

$\mathbf{U}\mathbf{u}(\lambda) = \lambda\mathbf{u}(\lambda)$
hence, the vector $\mathbf{u}(\lambda)$ is, indeed, an eigenvector of the DFT matrix $\mathbf{U}$. The operators $\mathcal{P}_\lambda$ project vectors onto subspaces which are orthogonal for each value of $\lambda$. That is, for two eigenvectors, $\mathbf{u}(\lambda) = \mathcal{P}_\lambda\mathbf{v}$ and $\mathbf{u}'(\lambda') = \mathcal{P}_{\lambda'}\mathbf{v}'$, we have:

$\mathbf{u}^\dagger(\lambda)\,\mathbf{u}'(\lambda') = \delta_{\lambda\lambda'}\,\mathbf{u}^\dagger(\lambda)\,\mathbf{v}'$
However, in general, the projection operator method does not produce orthogonal eigenvectors within one subspace. The operator $\mathcal{P}_\lambda$ can be seen as a matrix whose columns are eigenvectors of $\mathbf{U}$, but they are not orthogonal. When a set of vectors $\{\mathbf{v}_n\}_{n=1,\dots,N_\lambda}$, spanning an $N_\lambda$-dimensional space (where $N_\lambda$ is the multiplicity of eigenvalue $\lambda$), is chosen to generate the set of eigenvectors $\{\mathbf{u}_n(\lambda) = \mathcal{P}_\lambda\mathbf{v}_n\}_{n=1,\dots,N_\lambda}$ to eigenvalue $\lambda$, the mutual orthogonality of the $\mathbf{u}_n(\lambda)$ is not guaranteed. However, an orthogonal set can be obtained by further applying an orthogonalization algorithm to the set $\{\mathbf{u}_n(\lambda)\}_{n=1,\dots,N_\lambda}$, e.g. the Gram–Schmidt process.
A straightforward approach to obtain DFT eigenvectors is to discretize an eigenfunction of the continuous Fourier transform,
of which the most famous is the Gaussian function.
Since periodic summation of the function means discretizing its frequency spectrum
and discretization means periodic summation of the spectrum,
the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform:
{\displaystyle F(m)=\sum _{k\in \mathbb {Z} }\exp \left(-{\frac {\pi \cdot (m+N\cdot k)^{2}}{N}}\right).}
The closed-form expression for the series can be written in terms of Jacobi theta functions as
{\displaystyle F(m)={\frac {1}{\sqrt {N}}}\vartheta _{3}\left({\frac {\pi m}{N}},\exp \left(-{\frac {\pi }{N}}\right)\right).}
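This eigenvector property is easy to check numerically. The sketch below (using NumPy; the choice N = 16 and the truncation range of the periodic sum are illustrative assumptions) builds the periodically summed Gaussian and confirms that the unitary (1/√N-scaled) DFT matrix maps it to itself, i.e. it is an eigenvector with eigenvalue 1:

```python
import numpy as np

N = 16
m = np.arange(N)
# Periodically summed Gaussian; the terms decay so fast that a
# truncated sum over k is numerically exact
F = sum(np.exp(-np.pi * (m + N * k) ** 2 / N) for k in range(-10, 11))

# Unitary DFT matrix: U[j, l] = exp(-2*pi*i*j*l/N) / sqrt(N)
U = np.exp(-2j * np.pi * np.outer(m, m) / N) / np.sqrt(N)

# F is an eigenvector of the unitary DFT with eigenvalue 1
assert np.allclose(U @ F, F)
```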
Several other simple closed-form analytical eigenvectors have been found for special DFT periods N (Kong, 2008 and Casper-Yakimov, 2024):
For DFT period N = 2L + 1 = 4K + 1, where K is an integer, the following is an eigenvector of DFT:
{\displaystyle F(m)=\prod _{s=K+1}^{L}\left[\cos \left({\frac {2\pi }{N}}m\right)-\cos \left({\frac {2\pi }{N}}s\right)\right]}
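As a numerical spot-check (with NumPy; the choice N = 5, i.e. K = 1 and L = 2, is an illustrative assumption), the product above can be evaluated and verified to be an eigenvector of the unnormalized DFT matrix, whose eigenvalues all have modulus √N:

```python
import numpy as np

N, K, L = 5, 1, 2  # N = 2L + 1 = 4K + 1
m = np.arange(N)
# The closed-form eigenvector: product of cosine differences
F = np.prod([np.cos(2 * np.pi * m / N) - np.cos(2 * np.pi * s / N)
             for s in range(K + 1, L + 1)], axis=0)

W = np.exp(-2j * np.pi * np.outer(m, m) / N)  # unnormalized DFT matrix
WF = W @ F
lam = WF[0] / F[0]  # candidate eigenvalue (F[0] is non-zero here)
assert np.allclose(WF, lam * F)               # F is an eigenvector
assert np.isclose(abs(lam), np.sqrt(N))       # eigenvalue has modulus sqrt(N)
```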
For DFT period N = 2L = 4K, where K is an integer, the following are eigenvectors of DFT:
{\displaystyle F(m)=\sin \left({\frac {2\pi }{N}}m\right)\prod _{s=K+1}^{L-1}\left[\cos \left({\frac {2\pi }{N}}m\right)-\cos \left({\frac {2\pi }{N}}s\right)\right]}
{\displaystyle F(m)=\cos \left({\frac {\pi }{N}}m\right)\prod _{s=K+1}^{3K-1}\sin \left({\frac {\pi (s-m)}{N}}\right)}
For DFT period N = 4K - 1, where K is an integer, the following are eigenvectors of DFT:
{\displaystyle F(m)=\sin \left({\frac {2\pi }{N}}m\right)\prod _{s=K+1}^{3K-2}\sin \left({\frac {\pi (s-m)}{N}}\right)}
{\displaystyle F(m)=\left(\cos \left({\frac {2\pi }{N}}m\right)-\cos \left({\frac {2\pi }{N}}K\right)\pm \sin \left({\frac {2\pi }{N}}K\right)\right)\prod _{s=K+1}^{3K-2}\sin \left({\frac {\pi (s-m)}{N}}\right)}
The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the fractional Fourier transform—the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). For the continuous Fourier transform, the natural orthogonal eigenfunctions are the Hermite functions, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the Kravchuk polynomials (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however.
=== Uncertainty principles ===
==== Probabilistic uncertainty principle ====
If the random variable X_k is constrained by
{\displaystyle \sum _{n=0}^{N-1}|X_{n}|^{2}=1,}
then
{\displaystyle P_{n}=|X_{n}|^{2}}
may be considered to represent a discrete probability mass function of n, with an associated probability mass function constructed from the transformed variable,
{\displaystyle Q_{m}=N|x_{m}|^{2}.}
For the case of continuous functions P(x) and Q(k), the Heisenberg uncertainty principle states that
{\displaystyle D_{0}(X)D_{0}(x)\geq {\frac {1}{16\pi ^{2}}}}
where D_0(X) and D_0(x) are the variances of |X|² and |x|² respectively, with equality attained in the case of a suitably normalized Gaussian distribution. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. Still, a meaningful uncertainty principle has been introduced by Massar and Spindel.
However, the Hirschman entropic uncertainty has a useful analog for the case of the DFT. The Hirschman uncertainty principle is expressed in terms of the Shannon entropy of the two probability functions. In the discrete case, the Shannon entropies are defined as
{\displaystyle H(X)=-\sum _{n=0}^{N-1}P_{n}\ln P_{n}}
and
{\displaystyle H(x)=-\sum _{m=0}^{N-1}Q_{m}\ln Q_{m},}
and the entropic uncertainty principle becomes
{\displaystyle H(X)+H(x)\geq \ln(N).}
The equality is obtained for P_n equal to translations and modulations of a suitably normalized Kronecker comb of period A, where A is any exact integer divisor of N. The probability mass function Q_m will then be proportional to a suitably translated Kronecker comb of period B = N/A.
==== Deterministic uncertainty principle ====
There is also a well-known deterministic uncertainty principle that uses signal sparsity (the number of non-zero coefficients). Let ‖x‖₀ and ‖X‖₀ be the number of non-zero elements of the time and frequency sequences x_0, x_1, …, x_{N−1} and X_0, X_1, …, X_{N−1}, respectively. Then
{\displaystyle N\leq \left\|x\right\|_{0}\cdot \left\|X\right\|_{0}.}
As an immediate consequence of the inequality of arithmetic and geometric means, one also has
{\displaystyle 2{\sqrt {N}}\leq \left\|x\right\|_{0}+\left\|X\right\|_{0}.}
Both uncertainty principles were shown to be tight for specifically chosen "picket-fence" sequences (discrete impulse trains), and they find practical use in signal recovery applications.
=== DFT of real and purely imaginary signals ===
If x_0, …, x_{N−1} are real numbers, as they often are in practical applications, then the DFT X_0, …, X_{N−1} obeys the conjugate (Hermitian) symmetry
{\displaystyle x_{n}\in \mathbb {R} \quad \forall n\in \{0,\ldots ,N-1\}\implies X_{k}=X_{-k\mod N}^{*}\quad \forall k\in \{0,\ldots ,N-1\},}
where X* denotes complex conjugation. It follows that for even N, X_0 and X_{N/2} are real-valued, and the remainder of the DFT is completely specified by just N/2 − 1 complex numbers.
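A quick numerical illustration of this symmetry (NumPy assumed; the length N = 8 and the random input are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)     # real input
X = np.fft.fft(x)

# Conjugate symmetry: X_k = conj(X_{-k mod N})
assert all(np.isclose(X[k], np.conj(X[-k % N])) for k in range(N))

# X_0 and X_{N/2} are real-valued for even N
assert np.isclose(X[0].imag, 0) and np.isclose(X[N // 2].imag, 0)
```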
If x_0, …, x_{N−1} are purely imaginary numbers, then the DFT X_0, …, X_{N−1} obeys the odd symmetry
{\displaystyle x_{n}\in i\mathbb {R} \quad \forall n\in \{0,\ldots ,N-1\}\implies X_{k}=-X_{-k\mod N}^{*}\quad \forall k\in \{0,\ldots ,N-1\},}
where X* denotes complex conjugation.
== Generalized DFT (shifted and non-linear phase) ==
It is possible to shift the transform sampling in time and/or frequency domain by some real shifts a and b, respectively. This is sometimes known as a generalized DFT (or GDFT), also called the shifted DFT or offset DFT, and has analogous properties to the ordinary DFT:
{\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {i2\pi }{N}}(k+b)(n+a)}\quad \quad k=0,\dots ,N-1.}
Most often, shifts of 1/2 (half a sample) are used. While the ordinary DFT corresponds to a periodic signal in both time and frequency domains, a = 1/2 produces a signal that is anti-periodic in the frequency domain (X_{k+N} = −X_k), and vice versa for b = 1/2. Thus, the specific case of a = b = 1/2 is known as an odd-time odd-frequency discrete Fourier transform (or O2 DFT).
Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms.
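A direct implementation of the shifted transform makes the anti-periodicity concrete. The sketch below (NumPy assumed; the function name `gdft`, the signal length, and the random input are illustrative) checks that a = b = 0 recovers the ordinary DFT and that a = 1/2 makes the spectrum anti-periodic:

```python
import numpy as np

def gdft(x, a=0.0, b=0.0, k=None):
    """Shifted (generalized) DFT, evaluated at the integer indices k."""
    N = len(x)
    n = np.arange(N)
    if k is None:
        k = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * (kk + b) * (n + a) / N))
                     for kk in np.atleast_1d(k)])

rng = np.random.default_rng(1)
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
N = len(x)

# a = 0, b = 0 recovers the ordinary DFT
assert np.allclose(gdft(x), np.fft.fft(x))

# a = 1/2 makes the spectrum anti-periodic: X_{k+N} = -X_k
X0 = gdft(x, a=0.5, k=np.arange(N))
X1 = gdft(x, a=0.5, k=np.arange(N) + N)
assert np.allclose(X1, -X0)
```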
Another interesting choice is a = b = −(N − 1)/2, which is called the centered DFT (or CDFT). The centered DFT has the useful property that, when N is a multiple of four, all four of its eigenvalues (see above) have equal multiplicities (Rubio and Santhanam, 2005).
The term GDFT is also used for non-linear phase extensions of the DFT. The GDFT method thus provides a generalization for constant-amplitude orthogonal block transforms, including linear and non-linear phase types. The GDFT is a framework for improving the time and frequency domain properties of the traditional DFT, e.g. auto/cross-correlations, by adding a properly designed phase shaping function (non-linear, in general) to the original linear phase functions (Akansu and Agirman-Tosun, 2010).
The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane; more general z-transforms correspond to complex shifts a and b above.
== Multidimensional DFT ==
The ordinary DFT transforms a one-dimensional sequence or array
x
n
{\displaystyle x_{n}}
that is a function of exactly one discrete variable n. The multidimensional DFT of a multidimensional array
x
n
1
,
n
2
,
…
,
n
d
{\displaystyle x_{n_{1},n_{2},\dots ,n_{d}}}
that is a function of d discrete variables
n
ℓ
=
0
,
1
,
…
,
N
ℓ
−
1
{\displaystyle n_{\ell }=0,1,\dots ,N_{\ell }-1}
for
ℓ
{\displaystyle \ell }
in
1
,
2
,
…
,
d
{\displaystyle 1,2,\dots ,d}
is defined by:
{\displaystyle X_{k_{1},k_{2},\dots ,k_{d}}=\sum _{n_{1}=0}^{N_{1}-1}\left(\omega _{N_{1}}^{~k_{1}n_{1}}\sum _{n_{2}=0}^{N_{2}-1}\left(\omega _{N_{2}}^{~k_{2}n_{2}}\cdots \sum _{n_{d}=0}^{N_{d}-1}\omega _{N_{d}}^{~k_{d}n_{d}}\cdot x_{n_{1},n_{2},\dots ,n_{d}}\right)\right),}
where ω_{N_ℓ} = exp(−i2π/N_ℓ) as above and the d output indices run from k_ℓ = 0, 1, …, N_ℓ − 1. This is more compactly expressed in vector notation, where we define n = (n_1, n_2, …, n_d) and k = (k_1, k_2, …, k_d) as d-dimensional vectors of indices from 0 to N − 1, which we define as N − 1 = (N_1 − 1, N_2 − 1, …, N_d − 1):
{\displaystyle X_{\mathbf {k} }=\sum _{\mathbf {n} =\mathbf {0} }^{\mathbf {N} -1}e^{-i2\pi \mathbf {k} \cdot (\mathbf {n} /\mathbf {N} )}x_{\mathbf {n} }\,,}
where the division n/N is defined as n/N = (n_1/N_1, …, n_d/N_d), to be performed element-wise, and the sum denotes the set of nested summations above.
The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by:
{\displaystyle x_{\mathbf {n} }={\frac {1}{\prod _{\ell =1}^{d}N_{\ell }}}\sum _{\mathbf {k} =\mathbf {0} }^{\mathbf {N} -1}e^{i2\pi \mathbf {n} \cdot (\mathbf {k} /\mathbf {N} )}X_{\mathbf {k} }\,.}
As the one-dimensional DFT expresses the input x_n as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or multidimensional sinusoids, with direction of oscillation in space k/N and amplitudes X_k. This decomposition is of great importance for everything from digital image processing (two-dimensional) to solving partial differential equations, where the solution is broken up into plane waves.
The multidimensional DFT can be computed by composing a sequence of one-dimensional DFTs along each dimension. In the two-dimensional case x_{n_1, n_2}, the N_1 independent DFTs of the rows (i.e., along n_2) are computed first to form a new array y_{n_1, k_2}. Then the N_2 independent DFTs of y along the columns (along n_1) are computed to form the final result X_{k_1, k_2}. Alternatively the columns can be computed first and then the rows. The order is immaterial because the nested summations above commute.
An algorithm to compute a one-dimensional DFT is thus sufficient to efficiently compute a multidimensional DFT. This approach is known as the row-column algorithm. There are also intrinsically multidimensional FFT algorithms.
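The row-column algorithm can be sketched in a few lines (NumPy assumed; the 4×6 random array is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))

# Row-column algorithm: 1-D DFTs along one axis, then along the other
y = np.fft.fft(x, axis=1)   # independent DFTs of the rows
X = np.fft.fft(y, axis=0)   # independent DFTs of the columns

assert np.allclose(X, np.fft.fft2(x))  # matches the 2-D DFT
# The order is immaterial: columns first, then rows, gives the same result
assert np.allclose(np.fft.fft(np.fft.fft(x, axis=0), axis=1), X)
```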
=== The real-input multidimensional DFT ===
For input data x_{n_1, n_2, …, n_d} consisting of real numbers, the DFT outputs have a conjugate symmetry similar to the one-dimensional case above:
{\displaystyle X_{k_{1},k_{2},\dots ,k_{d}}=X_{N_{1}-k_{1},N_{2}-k_{2},\dots ,N_{d}-k_{d}}^{*},}
where the star again denotes complex conjugation and the ℓ-th subscript is again interpreted modulo N_ℓ (for ℓ = 1, 2, …, d).
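This multidimensional symmetry can be checked numerically (NumPy assumed; the 4×5 real random array is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal((4, 5))  # real 2-D input
X = np.fft.fft2(x)
N1, N2 = x.shape

# X_{k1,k2} = conj(X_{N1-k1, N2-k2}), indices interpreted modulo N1, N2
assert all(np.isclose(X[k1, k2], np.conj(X[-k1 % N1, -k2 % N2]))
           for k1 in range(N1) for k2 in range(N2))
```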
== Applications ==
The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform.
=== Spectral analysis ===
When the DFT is used for signal spectral analysis, the {x_n} sequence usually represents a finite set of uniformly spaced time-samples of some signal x(t), where t represents time. The conversion from continuous time to samples (discrete-time) changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see Nyquist rate) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (a.k.a. resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation.
A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. That procedure is illustrated at § Sampling the DTFT.
The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT.
As already stated, leakage imposes a limit on the inherent resolution of the DTFT, so there is a practical limit to the benefit that can be obtained from a fine-grained DFT.
Steps to Perform Spectral Analysis of Audio Signal
1. Recording and Pre-Processing the Audio Signal
Begin by recording the audio signal, which could be a spoken password, music, or any other sound. Once recorded, the audio signal is denoted as x[n], where n represents the discrete time index. To enhance the accuracy of spectral analysis, any unwanted noise should be reduced using appropriate filtering techniques.
2. Plotting the Original Time-Domain Signal
After noise reduction, the audio signal is plotted in the time domain to visualize its characteristics over time. This helps in understanding the amplitude variations of the signal as a function of time, which provides an initial insight into the signal's behavior.
3. Transforming the Signal from Time Domain to Frequency Domain
The next step is to transform the audio signal from the time domain to the frequency domain using the Discrete Fourier Transform (DFT). The DFT is defined as:
{\displaystyle X[k]=\sum _{n=0}^{N-1}x[n]\cdot e^{-j{\frac {2\pi }{N}}kn}}
where N is the total number of samples, k represents the frequency index, and X[k] is the complex-valued frequency spectrum of the signal. The DFT allows for decomposing the signal into its constituent frequency components, providing a representation that indicates which frequencies are present and their respective magnitudes.
4. Plotting the Magnitude Spectrum
The magnitude of the frequency-domain representation X[k] is plotted to analyze the spectral content. The magnitude spectrum shows how the energy of the signal is distributed across different frequencies, which is useful for identifying prominent frequency components. It is calculated as:
{\displaystyle |X[k]|={\sqrt {{\text{Re}}(X[k])^{2}+{\text{Im}}(X[k])^{2}}}}
=== Example ===
Analyze a discrete-time audio signal in the frequency domain using the DFT to identify its frequency components.
==== Given Data ====
Let's consider a simple discrete-time audio signal represented as:
{\displaystyle x[n]=\{1,0.5,-0.5,-1\}}
where n represents discrete time samples of the signal.
1. Time-Domain Signal Representation
The given time-domain signal is:
{\textstyle x[0]=1,\quad x[1]=0.5,\quad x[2]=-0.5,\quad x[3]=-1}
===== 2. DFT Calculation =====
The DFT is calculated using the formula:
{\displaystyle X[k]=\sum _{n=0}^{N-1}x[n]\cdot e^{-j{\frac {2\pi }{N}}kn}}
where N is the number of samples (in this case, N=4).
Let's compute X[k] for k = 0, 1, 2, 3.
For k = 0:
{\displaystyle X[0]=1\cdot e^{-j{\frac {2\pi }{4}}\cdot 0\cdot 0}+0.5\cdot e^{-j{\frac {2\pi }{4}}\cdot 0\cdot 1}+(-0.5)\cdot e^{-j{\frac {2\pi }{4}}\cdot 0\cdot 2}+(-1)\cdot e^{-j{\frac {2\pi }{4}}\cdot 0\cdot 3}}
{\displaystyle X[0]=1+0.5-0.5-1=0}
For k = 1:
{\displaystyle X[1]=1\cdot e^{-j{\frac {2\pi }{4}}\cdot 1\cdot 0}+0.5\cdot e^{-j{\frac {2\pi }{4}}\cdot 1\cdot 1}+(-0.5)\cdot e^{-j{\frac {2\pi }{4}}\cdot 1\cdot 2}+(-1)\cdot e^{-j{\frac {2\pi }{4}}\cdot 1\cdot 3}}
{\displaystyle X[1]=1+0.5(-j)+(-0.5)(-1)+(-1)(j)}
{\displaystyle X[1]=1-0.5j+0.5-j=1.5-1.5j}
For k = 2:
{\displaystyle X[2]=1\cdot e^{-j{\frac {2\pi }{4}}\cdot 2\cdot 0}+0.5\cdot e^{-j{\frac {2\pi }{4}}\cdot 2\cdot 1}+(-0.5)\cdot e^{-j{\frac {2\pi }{4}}\cdot 2\cdot 2}+(-1)\cdot e^{-j{\frac {2\pi }{4}}\cdot 2\cdot 3}}
{\displaystyle X[2]=1+0.5(-1)+(-0.5)(1)+(-1)(-1)}
{\displaystyle X[2]=1-0.5-0.5+1=1}
For k = 3:
{\displaystyle X[3]=1\cdot e^{-j{\frac {2\pi }{4}}\cdot 3\cdot 0}+0.5\cdot e^{-j{\frac {2\pi }{4}}\cdot 3\cdot 1}+(-0.5)\cdot e^{-j{\frac {2\pi }{4}}\cdot 3\cdot 2}+(-1)\cdot e^{-j{\frac {2\pi }{4}}\cdot 3\cdot 3}}
{\displaystyle X[3]=1+0.5j+(-0.5)(-1)+(-1)(-j)}
{\displaystyle X[3]=1+0.5j+0.5+j=1.5+1.5j}
===== 3. Magnitude Spectrum =====
The magnitude of X[k] represents the strength of each frequency component:
{\displaystyle |X[0]|=0,\quad |X[1]|={\sqrt {(1.5)^{2}+(-1.5)^{2}}}={\sqrt {4.5}}\approx 2.12}
{\displaystyle |X[2]|=1,\quad |X[3]|={\sqrt {(1.5)^{2}+(1.5)^{2}}}={\sqrt {4.5}}\approx 2.12}
The resulting frequency components indicate the distribution of signal energy at different frequencies. The peaks in the magnitude spectrum correspond to dominant frequencies in the original signal.
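The by-hand computations above can be cross-checked with an FFT routine (NumPy assumed):

```python
import numpy as np

x = np.array([1.0, 0.5, -0.5, -1.0])
X = np.fft.fft(x)

# Matches the hand-computed spectrum X = {0, 1.5 - 1.5j, 1, 1.5 + 1.5j}
assert np.allclose(X, [0, 1.5 - 1.5j, 1, 1.5 + 1.5j])
# ... and the magnitude spectrum {0, sqrt(4.5), 1, sqrt(4.5)}
assert np.allclose(np.abs(X), [0, np.sqrt(4.5), 1, np.sqrt(4.5)])
```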
=== Optics, diffraction, and tomography ===
The discrete Fourier transform is widely used with spatial frequencies in modeling the way that light, electrons, and other probes travel through optical systems and scatter from objects in two and three dimensions. The dual (direct/reciprocal) vector space of three dimensional objects further makes available a three dimensional reciprocal lattice, whose construction from translucent object shadows (via the Fourier slice theorem) allows tomographic reconstruction of three dimensional objects with a wide range of applications e.g. in modern medicine.
=== Filter bank ===
See § FFT filter banks and § Sampling the DTFT.
=== Data compression ===
The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients. (Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform.)
Some relatively recent compression algorithms, however, use wavelet transforms, which give a more uniform compromise between time and frequency domain than obtained by chopping data into segments and transforming each segment. In the case of JPEG2000, this avoids the spurious image features that appear when images are highly compressed with the original JPEG.
=== Partial differential equations ===
Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite N). The advantage of this approach is that it expands the signal in complex exponentials e^{inx}, which are eigenfunctions of differentiation: d(e^{inx})/dx = in e^{inx}. Thus, in the Fourier representation, differentiation is simple: we just multiply by in. (However, the choice of n is not unique due to aliasing; for the method to be convergent, a choice similar to that in the trigonometric interpolation section above should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.
=== Polynomial multiplication ===
Suppose we wish to compute the polynomial product c(x) = a(x) · b(x). The ordinary product expression for the coefficients of c involves a linear (acyclic) convolution, where indices do not "wrap around." This can be rewritten as a cyclic convolution by taking the coefficient vectors for a(x) and b(x) with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension d > deg(a(x)) + deg(b(x)). Then,
{\displaystyle \mathbf {c} =\mathbf {a} *\mathbf {b} ,}
where c is the vector of coefficients for c(x), and the convolution operator * is defined by
{\displaystyle c_{n}=\sum _{m=0}^{d-1}a_{m}b_{n-m\ \mathrm {mod} \ d}\qquad \qquad \qquad n=0,1\dots ,d-1.}
But convolution becomes multiplication under the DFT:
{\displaystyle {\mathcal {F}}(\mathbf {c} )={\mathcal {F}}(\mathbf {a} ){\mathcal {F}}(\mathbf {b} ).}
Here the vector product is taken elementwise. Thus the coefficients of the product polynomial c(x) are just the terms 0, …, deg(a(x)) + deg(b(x)) of the coefficient vector
{\displaystyle \mathbf {c} ={\mathcal {F}}^{-1}({\mathcal {F}}(\mathbf {a} ){\mathcal {F}}(\mathbf {b} )).}
With a fast Fourier transform, the resulting algorithm takes O(N log N) arithmetic operations. Due to its simplicity and speed, the Cooley–Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, d should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation).
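The whole procedure fits in a few lines (NumPy assumed; the example polynomials are illustrative, and the zero-padded length d is the minimal one satisfying the dimension condition above):

```python
import numpy as np

# a(x) = 1 + 2x + 3x^2,  b(x) = 4 + 5x  ->  c(x) = 4 + 13x + 22x^2 + 15x^3
a = [1, 2, 3]
b = [4, 5]
d = len(a) + len(b) - 1        # minimal d with d > deg(a) + deg(b)
A = np.fft.fft(a, n=d)         # zero-padded DFTs of the coefficient vectors
B = np.fft.fft(b, n=d)
# Elementwise product, inverse DFT, and rounding back to integer coefficients
c = np.fft.ifft(A * B).real.round().astype(int)

assert list(c) == [4, 13, 22, 15]
assert list(c) == list(np.convolve(a, b))  # matches direct linear convolution
```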
==== Multiplication of large integers ====
The fastest known algorithms for the multiplication of very large integers use the polynomial multiplication method outlined above. Integers can be treated as the value of a polynomial evaluated specifically at the number base, with the coefficients of the polynomial corresponding to the digits in that base (e.g. 123 = 1·10² + 2·10¹ + 3·10⁰).
). After polynomial multiplication, a relatively low-complexity carry-propagation step completes the multiplication.
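A toy version of this scheme (NumPy assumed; the helper name `bigmul` is illustrative, and the floating-point FFT limits the safe operand size, so production code uses exact number-theoretic transforms instead) multiplies the digit polynomials via FFT and then propagates carries:

```python
import numpy as np

def bigmul(u, v, base=10):
    """Multiply non-negative integers via FFT polynomial multiplication.
    Illustrative sketch only: float round-off bounds the usable size."""
    a = [int(ch) for ch in str(u)][::-1]   # least-significant digit first
    b = [int(ch) for ch in str(v)][::-1]
    d = len(a) + len(b) - 1
    c = np.fft.ifft(np.fft.fft(a, d) * np.fft.fft(b, d)).real.round().astype(int)
    # Carry propagation turns the coefficient vector back into an integer
    result, carry = 0, 0
    for i, coeff in enumerate(c):
        total = int(coeff) + carry
        result += (total % base) * base ** i
        carry = total // base
    return result + carry * base ** d

assert bigmul(123, 456) == 123 * 456
```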
==== Convolution ====
When data is convolved with a function with wide support, such as for downsampling by a large sampling ratio, it may be faster, because of the convolution theorem and the FFT algorithm, to transform the data, multiply it pointwise by the transform of the filter, and then inverse transform it. Alternatively, a good filter is obtained by simply truncating the transformed data and re-transforming the shortened data set.
== Some discrete Fourier transform pairs ==
== Generalizations ==
=== Representation theory ===
The DFT can be interpreted as a complex-valued representation of the finite cyclic group. In other words, a sequence of n complex numbers can be thought of as an element of n-dimensional complex space C^n, or equivalently a function f from the finite cyclic group of order n to the complex numbers, Z_n → C. So f is a class function on the finite cyclic group, and thus can be expressed as a linear combination of the irreducible characters of this group, which are the roots of unity.
From this point of view, one may generalize the DFT to representation theory generally, or more narrowly to the representation theory of finite groups.
More narrowly still, one may generalize the DFT by either changing the target (taking values in a field other than the complex numbers) or the domain (a group other than a finite cyclic group), as detailed in the sequel.
=== Other fields ===
Many of the properties of the DFT only depend on the fact that e^{−i2π/N} is a primitive root of unity, sometimes denoted ω_N or W_N (so that ω_N^N = 1). Such properties include the completeness, orthogonality, Plancherel/Parseval, periodicity, shift, convolution, and unitarity properties above, as well as many FFT algorithms. For this reason, the discrete Fourier transform can be defined by using roots of unity in fields other than the complex numbers, and such generalizations are commonly called number-theoretic transforms (NTTs) in the case of finite fields. For more information, see number-theoretic transform and discrete Fourier transform (general).
=== Other finite groups ===
The standard DFT acts on a sequence x0, x1, ..., xN−1 of complex numbers, which can be viewed as a function {0, 1, ..., N − 1} → C. The multidimensional DFT acts on multidimensional sequences, which can be viewed as functions
{\displaystyle \{0,1,\ldots ,N_{1}-1\}\times \cdots \times \{0,1,\ldots ,N_{d}-1\}\to \mathbb {C} .}
This suggests the generalization to Fourier transforms on arbitrary finite groups, which act on functions G → C where G is a finite group. In this framework, the standard DFT is seen as the Fourier transform on a cyclic group, while the multidimensional DFT is a Fourier transform on a direct sum of cyclic groups.
Further, the Fourier transform can be defined on cosets of a group.
== Alternatives ==
There are various alternatives to the DFT for various applications, prominent among which are wavelets. The analog of the DFT is the discrete wavelet transform (DWT). From the point of view of time–frequency analysis, a key limitation of the Fourier transform is that it does not include location information, only frequency information, and thus has difficulty in representing transients. As wavelets have location as well as frequency, they are better able to represent location, at the expense of greater difficulty representing frequency. For details, see comparison of the discrete wavelet transform with the discrete Fourier transform.
== See also ==
Companion matrix
DFT matrix
Fast Fourier transform
FFTPACK
FFTW
Generalizations of Pauli matrices
Least-squares spectral analysis
List of Fourier-related transforms
Multidimensional transform
Zak transform
Quantum Fourier transform
== Notes ==
== References ==
== Further reading ==
Brigham, E. Oran (1988). The fast Fourier transform and its applications. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-307505-2.
Smith, Steven W. (1999). "Chapter 8: The Discrete Fourier Transform". The Scientist and Engineer's Guide to Digital Signal Processing (Second ed.). San Diego, Calif.: California Technical Publishing. ISBN 978-0-9660176-3-2.
Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). "Chapter 30: Polynomials and the FFT". Introduction to Algorithms (Second ed.). MIT Press and McGraw-Hill. pp. 822–848. ISBN 978-0-262-03293-3. esp. section 30.2: The DFT and FFT, pp. 830–838.
P. Duhamel; B. Piron; J. M. Etcheto (1988). "On computing the inverse DFT". IEEE Transactions on Acoustics, Speech, and Signal Processing. 36 (2): 285–286. doi:10.1109/29.1519.
J. H. McClellan; T. W. Parks (1972). "Eigenvalues and eigenvectors of the discrete Fourier transformation". IEEE Transactions on Audio and Electroacoustics. 20 (1): 66–74. doi:10.1109/TAU.1972.1162342.
Bradley W. Dickinson; Kenneth Steiglitz (1982). "Eigenvectors and functions of the discrete Fourier transform" (PDF). IEEE Transactions on Acoustics, Speech, and Signal Processing. 30 (1): 25–31. CiteSeerX 10.1.1.434.5279. doi:10.1109/TASSP.1982.1163843. (Note that this paper has an apparent typo in its table of the eigenvalue multiplicities: the +i/−i columns are interchanged. The correct table can be found in McClellan and Parks, 1972, and is easily confirmed numerically.)
F. A. Grünbaum (1982). "The eigenvectors of the discrete Fourier transform". Journal of Mathematical Analysis and Applications. 88 (2): 355–363. doi:10.1016/0022-247X(82)90199-8.
Natig M. Atakishiyev; Kurt Bernardo Wolf (1997). "Fractional Fourier-Kravchuk transform". Journal of the Optical Society of America A. 14 (7): 1467–1477. Bibcode:1997JOSAA..14.1467A. doi:10.1364/JOSAA.14.001467.
C. Candan; M. A. Kutay; H. M. Ozaktas (2000). "The discrete fractional Fourier transform" (PDF). IEEE Transactions on Signal Processing. 48 (5): 1329–1337. Bibcode:2000ITSP...48.1329C. doi:10.1109/78.839980. hdl:11693/11130. Archived (PDF) from the original on 2017-09-21.
Magdy Tawfik Hanna; Nabila Philip Attalla Seif; Waleed Abd El Maguid Ahmed (2004). "Hermite-Gaussian-like eigenvectors of the discrete Fourier transform matrix based on the singular-value decomposition of its orthogonal projection matrices". IEEE Transactions on Circuits and Systems I: Regular Papers. 51 (11): 2245–2254. doi:10.1109/TCSI.2004.836850. S2CID 14468134.
Shamgar Gurevich; Ronny Hadani (2009). "On the diagonalization of the discrete Fourier transform". Applied and Computational Harmonic Analysis. 27 (1): 87–99. arXiv:0808.3281. doi:10.1016/j.acha.2008.11.003. S2CID 14833478.
Shamgar Gurevich; Ronny Hadani; Nir Sochen (2008). "The finite harmonic oscillator and its applications to sequences, communication and radar". IEEE Transactions on Information Theory. 54 (9): 4239–4253. arXiv:0808.1495. Bibcode:2008arXiv0808.1495G. doi:10.1109/TIT.2008.926440. S2CID 6037080.
Juan G. Vargas-Rubio; Balu Santhanam (2005). "On the multiangle centered discrete fractional Fourier transform". IEEE Signal Processing Letters. 12 (4): 273–276. Bibcode:2005ISPL...12..273V. doi:10.1109/LSP.2005.843762. S2CID 1499353.
F.N. Kong (2008). "Analytic Expressions of Two Discrete Hermite-Gaussian Signals". IEEE Transactions on Circuits and Systems II: Express Briefs. 55 (1): 56–60. doi:10.1109/TCSII.2007.909865. S2CID 5154718.
Casper, William; Yakimov, Milen (2024). "The restricted discrete Fourier transform". arXiv:2407.20379 [math.CA].
"Digital Signal Processing" by Thomas Holton.
== External links ==
Interactive explanation of the DFT
Matlab tutorial on the Discrete Fourier Transformation Archived 2016-03-04 at the Wayback Machine
Interactive flash tutorial on the DFT
Mathematics of the Discrete Fourier Transform by Julius O. Smith III
FFTW: Fast implementation of the DFT - coded in C and under General Public License (GPL)
General Purpose FFT Package: Yet another fast DFT implementation in C & FORTRAN, permissive license
Explained: The Discrete Fourier Transform
Discrete Fourier Transform
Indexing and shifting of Discrete Fourier Transform
Discrete Fourier Transform Properties
Generalized Discrete Fourier Transform (GDFT) with Nonlinear Phase
Hippocrates of Chios (Ancient Greek: Ἱπποκράτης ὁ Χῖος; c. 470 – c. 421 BC) was an ancient Greek mathematician, geometer, and astronomer.
He was born on the isle of Chios, where he was originally a merchant. After some misadventures (he was robbed by either pirates or fraudulent customs officials) he went to Athens, possibly for litigation, where he became a leading mathematician.
On Chios, Hippocrates may have been a pupil of the mathematician and astronomer Oenopides of Chios. In his mathematical work there probably was some Pythagorean influence too, perhaps via contacts between Chios and the neighboring island of Samos, a center of Pythagorean thinking: Hippocrates has been described as a 'para-Pythagorean', a philosophical 'fellow traveler'. "Reduction" arguments such as reductio ad absurdum argument (or proof by contradiction) have been traced to him, as has the use of power to denote the square of a line.
== Mathematics ==
The major accomplishment of Hippocrates is that he was the first to write a systematically organized geometry textbook, called Elements (Στοιχεῖα, Stoicheia), that is, basic theorems, or building blocks of mathematical theory. From then on, mathematicians from all over the ancient world could, at least in principle, build on a common framework of basic concepts, methods, and theorems, which stimulated the scientific progress of mathematics.
Only a single, famous fragment of Hippocrates' Elements is extant, embedded in the work of Simplicius. In this fragment the areas of some so-called Hippocratic lunes are calculated. This was part of a research program to square the circle, that is, to construct a square with the same area as a circle. Although Hippocrates failed to square the circle, he was the first to prove an equality of area between a curved shape and a polygonal shape. Only much later was it proven (by Ferdinand von Lindemann, in 1882) that this approach had no chance of success, because the side length of the square would have a transcendental ratio
{\displaystyle {\sqrt {\pi }}}
to the radius of the circle, impossible to construct using compass and straightedge.
In the century after Hippocrates, at least four other mathematicians wrote their own Elements, steadily improving terminology and logical structure. In this way, Hippocrates' pioneering work laid the foundation for Euclid's Elements (c. 325 BC), which was to remain the standard geometry textbook for many centuries. Hippocrates is believed to have originated the use of letters to refer to the geometric points and figures in a proposition, e.g., "triangle ABC" for a triangle with vertices at points A, B, and C.
Two other contributions by Hippocrates in the field of mathematics are noteworthy. He found a way to tackle the problem of 'duplication of the cube', that is, the problem of how to construct a cube root. Like the quadrature of the circle, this was another of the so-called three great mathematical problems of antiquity. Hippocrates also invented the technique of 'reduction', that is, to transform specific mathematical problems into a more general problem that is easier to solve. The solution to the more general problem then automatically gives a solution to the original problem.
== Astronomy ==
In the field of astronomy, Hippocrates tried to explain the phenomena of comets and the Milky Way. His ideas have not been handed down very clearly, but he probably thought both were optical illusions, the result of refraction of solar light by moisture that was exhaled by, respectively, a putative planet near the Sun, and the stars. The fact that Hippocrates thought that light rays originated in our eyes instead of in the object that is seen, adds to the unfamiliar character of his ideas.
== Notes ==
== References ==
Ivor Bulmer-Thomas, 'Hippocrates of Chios', in: Dictionary of Scientific Biography, Charles Coulston Gillispie, ed. (18 Volumes, New York 1970–1990) pp. 410–418.
[Axel Anthon] Björnbo, 'Hippokrates', in: Paulys Realencyclopädie der Classischen Altertumswissenschaft, G. Wissowa, ed. (51 Volumes; 1894–1980) Vol. 8 (1913) col. 1780–1801.
== External links ==
O'Connor, John J.; Robertson, Edmund F., "Hippocrates of Chios", MacTutor History of Mathematics Archive, University of St Andrews
The Quadrature of the Circle and Hippocrates' Lunes at Convergence
Mesolabe Compass and Square Roots - Numberphile video explaining Hippocrates' mesolabe compass
A buffer solution is a solution whose pH changes very little on dilution, or when a small amount of strong acid or base is added to it at constant temperature. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many living systems that use buffering for pH regulation. For example, the bicarbonate buffering system is used to regulate the pH of blood, and bicarbonate also acts as a buffer in the ocean.
== Principles of buffering ==
Buffer solutions resist pH change because of a chemical equilibrium between the weak acid HA and its conjugate base A−:
When some strong acid is added to an equilibrium mixture of the weak acid and its conjugate base, hydrogen ions (H+) are added, and the equilibrium is shifted to the left, in accordance with Le Chatelier's principle. Because of this, the hydrogen ion concentration increases by less than the amount expected for the quantity of strong acid added.
Similarly, if strong alkali is added to the mixture, the hydrogen ion concentration decreases by less than the amount expected for the quantity of alkali added. In Figure 1, the effect is illustrated by the simulated titration of a weak acid with pKa = 4.7. The relative concentration of undissociated acid is shown in blue, and of its conjugate base in red. The pH changes relatively slowly in the buffer region, pH = pKa ± 1, centered at pH = 4.7, where [HA] = [A−]. The hydrogen ion concentration decreases by less than the amount expected because most of the added hydroxide ion is consumed in the reaction
and only a little is consumed in the neutralization reaction (which is the reaction that results in an increase in pH)
Once the acid is more than 95% deprotonated, the pH rises rapidly because most of the added alkali is consumed in the neutralization reaction.
=== Buffer capacity ===
Buffer capacity is a quantitative measure of the resistance to change of pH of a solution containing a buffering agent with respect to a change of acid or alkali concentration. It can be defined as follows:
{\displaystyle \beta ={\frac {dC_{b}}{d(\mathrm {pH} )}},}
where {\displaystyle dC_{b}} is an infinitesimal amount of added base, or
{\displaystyle \beta =-{\frac {dC_{a}}{d(\mathrm {pH} )}},}
where {\displaystyle dC_{a}} is an infinitesimal amount of added acid. pH is defined as −log10[H+], and d(pH) is an infinitesimal change in pH.
With either definition the buffer capacity for a weak acid HA with dissociation constant Ka can be expressed as
{\displaystyle \beta =2.303\left([{\ce {H+}}]+{\frac {T_{{\ce {HA}}}K_{a}[{\ce {H+}}]}{(K_{a}+[{\ce {H+}}])^{2}}}+{\frac {K_{\text{w}}}{[{\ce {H+}}]}}\right),}
where [H+] is the concentration of hydrogen ions, and {\displaystyle T_{\text{HA}}} is the total concentration of added acid. Kw is the equilibrium constant for self-ionization of water, equal to 1.0×10−14. Note that in solution H+ exists as the hydronium ion H3O+, and further aquation of the hydronium ion has negligible effect on the dissociation equilibrium, except at very high acid concentration.
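As an illustration, the buffer-capacity formula above can be evaluated numerically. The sketch below assumes a hypothetical weak-acid buffer with an acetic-acid-like pKa of about 4.76; the function and parameter names are illustrative, not from any standard library:

```python
def buffer_capacity(pH, T_HA, pKa, pKw=14.0):
    """Buffer capacity beta = 2.303*([H+] + T_HA*Ka*[H+]/(Ka+[H+])^2 + Kw/[H+]).

    T_HA is the total concentration of the weak acid in mol/L."""
    h = 10.0 ** (-pH)       # hydrogen-ion concentration [H+]
    Ka = 10.0 ** (-pKa)
    Kw = 10.0 ** (-pKw)
    return 2.303 * (h + T_HA * Ka * h / (Ka + h) ** 2 + Kw / h)

# 0.1 M buffer with pKa = 4.76 (assumed value): capacity peaks at pH = pKa.
peak = buffer_capacity(4.76, 0.1, 4.76)
shoulder = buffer_capacity(5.76, 0.1, 4.76)   # one pH unit away
```

Evaluating at pH = pKa and at pH = pKa ± 1 reproduces the fall of the buffering term to roughly 33% of its maximum.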
This equation shows that there are three regions of raised buffer capacity (see figure 2).
In the central region of the curve (colored green on the plot), the second term is dominant, and
{\displaystyle \beta \approx 2.303{\frac {T_{{\ce {HA}}}K_{a}[{\ce {H+}}]}{(K_{a}+[{\ce {H+}}])^{2}}}.}
Buffer capacity rises to a local maximum at pH = pKa. The height of this peak depends on the value of pKa. Buffer capacity is negligible when the concentration [HA] of buffering agent is very small and increases with increasing concentration of the buffering agent. Some authors show only this region in graphs of buffer capacity. Buffer capacity falls to 33% of the maximum value at pH = pKa ± 1, to 10% at pH = pKa ± 1.5 and to 1% at pH = pKa ± 2. For this reason the most useful range is approximately pKa ± 1. When choosing a buffer for use at a specific pH, it should have a pKa value as close as possible to that pH.
With strongly acidic solutions, pH less than about 2 (coloured red on the plot), the first term in the equation dominates, and buffer capacity rises exponentially with decreasing pH:
{\displaystyle \beta \approx 10^{-\mathrm {pH} }.}
This results from the fact that the second and third terms become negligible at very low pH. This term is independent of the presence or absence of a buffering agent.
With strongly alkaline solutions, pH more than about 12 (coloured blue on the plot), the third term in the equation dominates, and buffer capacity rises exponentially with increasing pH:
{\displaystyle \beta \approx 10^{\mathrm {pH} -\mathrm {p} K_{\text{w}}}.}
This results from the fact that the first and second terms become negligible at very high pH. This term is also independent of the presence or absence of a buffering agent.
== Applications of buffers ==
The pH of a solution containing a buffering agent can only vary within a narrow range, regardless of what else may be present in the solution. In biological systems this is an essential condition for enzymes to function correctly. For example, in human blood a mixture of carbonic acid (H2CO3) and bicarbonate (HCO−3) is present in the plasma fraction; this constitutes the major mechanism for maintaining the pH of blood between 7.35 and 7.45. Outside this narrow range (7.40 ± 0.05 pH unit), acidosis and alkalosis metabolic conditions rapidly develop, ultimately leading to death if the correct buffering capacity is not rapidly restored.
If the pH value of a solution rises or falls too much, the effectiveness of an enzyme decreases in a process known as denaturation, which is usually irreversible. The majority of biological samples used in research are kept in a buffer solution, often phosphate-buffered saline (PBS) at pH 7.4.
In industry, buffering agents are used in fermentation processes and in setting the correct conditions for dyes used in colouring fabrics. They are also used in chemical analysis and calibration of pH meters.
=== Simple buffering agents ===
For buffers in acid regions, the pH may be adjusted to a desired value by adding a strong acid such as hydrochloric acid to the particular buffering agent. For alkaline buffers, a strong base such as sodium hydroxide may be added. Alternatively, a buffer mixture can be made from a mixture of an acid and its conjugate base. For example, an acetate buffer can be made from a mixture of acetic acid and sodium acetate. Similarly, an alkaline buffer can be made from a mixture of the base and its conjugate acid.
=== "Universal" buffer mixtures ===
By combining substances with pKa values differing by only two or less and adjusting the pH, a wide range of buffers can be obtained. Citric acid is a useful component of a buffer mixture because it has three pKa values, separated by less than two. The buffer range can be extended by adding other buffering agents. The following mixtures (McIlvaine's buffer solutions) have a buffer range of pH 3 to 8.
A mixture containing citric acid, monopotassium phosphate, boric acid, and diethyl barbituric acid can be made to cover the pH range 2.6 to 12.
Other universal buffers are the Carmody buffer and the Britton–Robinson buffer, developed in 1931.
=== Common buffer compounds used in biology ===
For effective range see Buffer capacity, above. Also see Good's buffers for the historic design principles and favourable properties of these buffer substances in biochemical applications.
== Calculating buffer pH ==
=== Monoprotic acids ===
First write down the equilibrium expression
This shows that when the acid dissociates, equal amounts of hydrogen ion and anion are produced. The equilibrium concentrations of these three components can be calculated in an ICE table (ICE standing for "initial, change, equilibrium").
The first row, labelled I, lists the initial conditions: the concentration of acid is C0, initially undissociated, so the concentrations of A− and H+ would be zero; y is the initial concentration of added strong acid, such as hydrochloric acid. If strong alkali, such as sodium hydroxide, is added, then y will have a negative sign because alkali removes hydrogen ions from the solution. The second row, labelled C for "change", specifies the changes that occur when the acid dissociates. The acid concentration decreases by an amount −x, and the concentrations of A− and H+ both increase by an amount +x. This follows from the equilibrium expression. The third row, labelled E for "equilibrium", adds together the first two rows and shows the concentrations at equilibrium.
To find x, use the formula for the equilibrium constant in terms of concentrations:
{\displaystyle K_{\text{a}}={\frac {[{\ce {H+}}][{\ce {A-}}]}{[{\ce {HA}}]}}.}
Substitute the concentrations with the values found in the last row of the ICE table:
{\displaystyle K_{\text{a}}={\frac {x(x+y)}{C_{0}-x}}.}
Simplify to
{\displaystyle x^{2}+(K_{\text{a}}+y)x-K_{\text{a}}C_{0}=0.}
With specific values for C0, Ka and y, this equation can be solved for x. Assuming that pH = −log10[H+], the pH can be calculated as pH = −log10(x + y).
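The quadratic above can be solved directly for x. A small sketch, assuming a weak acid with an illustrative pKa of 4.76 (roughly that of acetic acid):

```python
import math

def buffer_pH(C0, pKa, y=0.0):
    """pH from the quadratic x^2 + (Ka + y)x - Ka*C0 = 0 derived above.

    C0 is the initial acid concentration; y is the concentration of
    added strong acid (a negative y models added strong base)."""
    Ka = 10.0 ** (-pKa)
    # take the positive root of the quadratic
    x = (-(Ka + y) + math.sqrt((Ka + y) ** 2 + 4 * Ka * C0)) / 2
    return -math.log10(x + y)

# 0.1 M weak acid (pKa = 4.76, assumed) with no added strong acid:
pH = buffer_pH(0.1, 4.76)
# Half-neutralized with strong base (y = -0.05): pH should equal pKa.
pH_half = buffer_pH(0.1, 4.76, y=-0.05)
```

The half-neutralization result pH ≈ pKa is exactly the midpoint of the buffer region described earlier.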
=== Polyprotic acids ===
Polyprotic acids are acids that can lose more than one proton. The constant for dissociation of the first proton may be denoted as Ka1, and the constants for dissociation of successive protons as Ka2, etc. Citric acid is an example of a polyprotic acid H3A, as it can lose three protons.
When the difference between successive pKa values is less than about 3, there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. In the case of citric acid, the overlap is extensive and solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5.
Calculation of the pH with a polyprotic acid requires a speciation calculation to be performed. In the case of citric acid, this entails the solution of the two equations of mass balance:
{\displaystyle {\begin{aligned}C_{{\ce {A}}}&=[{\ce {A^3-}}]+\beta _{1}[{\ce {A^3-}}][{\ce {H+}}]+\beta _{2}[{\ce {A^3-}}][{\ce {H+}}]^{2}+\beta _{3}[{\ce {A^3-}}][{\ce {H+}}]^{3},\\C_{{\ce {H}}}&=[{\ce {H+}}]+\beta _{1}[{\ce {A^3-}}][{\ce {H+}}]+2\beta _{2}[{\ce {A^3-}}][{\ce {H+}}]^{2}+3\beta _{3}[{\ce {A^3-}}][{\ce {H+}}]^{3}-K_{\text{w}}[{\ce {H+}}]^{-1}.\end{aligned}}}
CA is the analytical concentration of the acid, CH is the analytical concentration of added hydrogen ions, βq are the cumulative association constants. Kw is the constant for self-ionization of water. There are two non-linear simultaneous equations in two unknown quantities [A3−] and [H+]. Many computer programs are available to do this calculation. The speciation diagram for citric acid was produced with the program HySS.
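The two mass-balance equations can be solved numerically in many ways; one simple approach is bisection on pH, since the analytical hydrogen concentration implied by a given [H+] decreases monotonically as pH rises. A sketch using approximate citric acid pKa values (3.13, 4.76, 6.40, assumed here for illustration) in place of a dedicated speciation program:

```python
# Approximate stepwise pKa values for citric acid (assumed for illustration).
Ka1, Ka2, Ka3 = (10.0 ** -p for p in (3.13, 4.76, 6.40))
# Cumulative association constants (note the reversed numbering, as above):
b1 = 1 / Ka3
b2 = 1 / (Ka3 * Ka2)
b3 = 1 / (Ka3 * Ka2 * Ka1)
Kw = 1.0e-14

def CH_of_h(h, CA):
    """Analytical hydrogen concentration implied by free [H+] = h."""
    A = CA / (1 + b1 * h + b2 * h ** 2 + b3 * h ** 3)   # [A^3-]
    return h + b1 * A * h + 2 * b2 * A * h ** 2 + 3 * b3 * A * h ** 3 - Kw / h

def solve_pH(CA, CH, lo=0.0, hi=14.0):
    """Bisect on pH until the hydrogen mass balance is satisfied."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if CH_of_h(10.0 ** -mid, CA) > CH:
            lo = mid      # too much hydrogen at this pH: raise the pH
        else:
            hi = mid
    return (lo + hi) / 2

# 0.05 M citric acid with all three protons supplied (CH = 3*CA):
pH = solve_pH(0.05, 0.15)
```

General-purpose speciation programs such as HySS solve the same equations but for arbitrary numbers of components and equilibria.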
N.B. The numbering of cumulative, overall constants is the reverse of the numbering of the stepwise, dissociation constants.
Cumulative association constants are used in general-purpose computer programs such as the one used to obtain the speciation diagram above.
== See also ==
Henderson–Hasselbalch equation
Good's buffers
Common-ion effect
Metal ion buffer
Mineral redox buffer
== References ==
== External links ==
"Biological buffers". REACH Devices.
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.
The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
Numerical analysis continues this long tradition: rather than giving exact symbolic answers translated into digits and applicable only to real-world measurements, approximate solutions within specified error bounds are used.
== Applications ==
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
Insurance companies use numerical programs for actuarial analysis.
== History ==
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications.
== Key concepts ==
=== Direct and iterative methods ===
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
As an example, consider the problem of solving
3x³ + 4 = 28
for the unknown quantity x.
For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.
From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
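The iteration in the table can be sketched in a few lines of code; after four halvings of [0, 3] the bracket is exactly [1.875, 2.0625]:

```python
def bisect(f, a, b, iterations):
    """Bisection: repeatedly halve [a, b], keeping the sign change inside."""
    fa = f(a)
    for _ in range(iterations):
        mid = (a + b) / 2
        if fa * f(mid) <= 0:
            b = mid                 # sign change in the left half
        else:
            a, fa = mid, f(mid)     # sign change in the right half
    return a, b

f = lambda x: 3 * x**3 - 24   # roots of f solve 3x^3 + 4 = 28
a, b = bisect(f, 0, 3, 4)     # four halvings of the interval [0, 3]
```

Each halving cuts the error bound in two, so the number of correct digits grows linearly with the iteration count.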
=== Conditioning ===
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, evaluating the same function f(x) = 1/(x − 1) near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
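A quick numerical check of the two cases (the difference quotients below are only rough sensitivity estimates, not formal condition numbers):

```python
def f(x):
    return 1 / (x - 1)

# Near x = 1 the problem is ill-conditioned: a small input change
# produces an enormous output change.
ill = abs(f(1.001) - f(1.1)) / abs(1.001 - 1.1)
# Near x = 10 it is well-conditioned: a unit input change barely moves f.
well = abs(f(11) - f(10)) / abs(11 - 10)
```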
=== Discretization ===
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points at its domain, even though this domain is a continuum.
== Generation and propagation of errors ==
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.
=== Round-off ===
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
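A classic one-line illustration: neither 0.1 nor 0.2 has an exact binary representation, so even their sum differs from 0.3 by a small round-off error.

```python
# On paper 0.1 + 0.2 equals 0.3 exactly, but in binary floating point
# both operands are already rounded, so the sum is slightly off.
result = 0.1 + 0.2
error = abs(result - 0.3)   # small but nonzero round-off error
```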
=== Truncation and discretization error ===
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above to compute the solution of
{\displaystyle 3x^{3}+4=28}
, after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01.
Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type
{\displaystyle a+b+c+d+e}
is even more inexact.
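This accumulation can be observed directly by summing many inexact terms; compensated summation, such as Python's math.fsum, recovers the correctly rounded result:

```python
import math

# Each floating-point "+" can be slightly inexact, and a long chain of
# additions lets the individual errors accumulate.
terms = [0.1] * 1000
naive = 0.0
for t in terms:
    naive += t                 # error can grow with each addition
accurate = math.fsum(terms)    # compensated summation tracks the lost bits
drift = abs(naive - 100.0)
```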
A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen.
=== Numerical stability and well-posed problems ===
An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
== Areas of study ==
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:
=== Computing values of functions ===
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
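A sketch of the Horner scheme, which evaluates a degree-n polynomial with only n multiplications and n additions:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's scheme.

    coeffs are ordered from the highest power down:
    [a_n, ..., a_1, a_0] represents a_n*x^n + ... + a_1*x + a_0."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# 3x^3 + 4 at x = 2 (the example equation used earlier in this article):
value = horner([3, 0, 0, 4], 2)
```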
=== Interpolation, extrapolation, and regression ===
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The least-squares method is one way to achieve this.
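A minimal least-squares sketch, fitting a straight line through noisy points via the normal equations (the data values are made up for illustration):

```python
def least_squares_line(xs, ys):
    """Fit y = m*x + c by ordinary least squares (normal equations)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Noisy measurements scattered around the line y = 2x + 1:
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
m, c = least_squares_line(xs, ys)
```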
=== Solving equations and systems of equations ===
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation
{\displaystyle 2x+5=3}
is linear while
{\displaystyle 2x^{2}+5=3}
is not.
Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
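For example, Newton's method iterates x ← x − f(x)/f′(x); a minimal Python sketch, assuming the derivative is supplied by the caller:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0, starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:      # close enough to a root
            break
        x -= fx / fprime(x)    # Newton step
    return x
```

Applied to f(x) = x² − 2 with x₀ = 1 it converges quadratically to √2.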
=== Solving eigenvalue or singular value problems ===
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
=== Optimization ===
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
=== Evaluating integrals ===
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
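As an example of a Newton–Cotes formula, composite Simpson's rule splits [a, b] into an even number of subintervals and weights the samples 1, 4, 2, 4, …, 4, 1. A short Python sketch (names are this example's own):

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule with n subintervals (n is forced even)."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3
```

Simpson's rule is exact for polynomials up to degree three, so for instance ∫₀¹ x² dx = 1/3 is reproduced to rounding error.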
=== Differential equations ===
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
== Software ==
Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.
Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics series (the "AS" functions);
ACM similarly, in its Transactions on Mathematical Software (the "TOMS" code).
The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines.
There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built in "solver".
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
=== Journals ===
Numerische Mathematik, volumes 1–..., Springer, 1959–
volumes 1–66, 1959–1994 (searchable; pages are images). (in English and German)
Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964–
=== Online texts ===
"Numerical analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Numerical Recipes, William H. Press (free, downloadable previous editions)
First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner
CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01)
Numerical Methods, ch 3. in the Digital Library of Mathematical Functions
Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun)
Tobin A. Driscoll and Richard J. Braun: Fundamentals of Numerical Computation (free online version)
=== Online course material ===
Numerical Methods (Archived 28 July 2009 at the Wayback Machine), Stuart Dalziel University of Cambridge
Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf University of Pennsylvania
Numerical methods, John D. Fenton University of Karlsruhe
Numerical Methods for Physicists, Anthony O’Hare Oxford University
Lectures in Numerical Analysis (archived), R. Radok Mahidol University
Introduction to Numerical Analysis for Engineering, Henrik Schmidt Massachusetts Institute of Technology
Numerical Analysis for Engineering, D. W. Harder University of Waterloo
Introduction to Numerical Analysis, Doron Levy University of Maryland
Numerical Analysis - Numerical Methods (archived), John H. Mathews California State University Fullerton | Wikipedia/Numerical_approximation |
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.
The algorithm is based on Horner's rule, in which a polynomial is written in nested form:
{\displaystyle {\begin{aligned}&a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\={}&a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}.\end{aligned}}}
This allows the evaluation of a polynomial of degree n with only n multiplications and n additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.
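The nested form translates directly into a single loop; in this Python sketch (coefficient order lowest degree first is a convention of this example), each pass performs exactly one multiplication and one addition:

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n with n multiplications
    and n additions; coeffs = [a_0, a_1, ..., a_n]."""
    result = 0.0
    for a in reversed(coeffs):   # work from a_n inwards
        result = result * x + a
    return result
```

For instance, horner([1, 2, 3], 2) evaluates 1 + 2x + 3x² at x = 2, giving 17.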
Alternatively, the names Horner's method and Horner–Ruffini method also refer to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.
== Polynomial evaluation and long division ==
Given the polynomial
p
(
x
)
=
∑
i
=
0
n
a
i
x
i
=
a
0
+
a
1
x
+
a
2
x
2
+
a
3
x
3
+
⋯
+
a
n
x
n
,
{\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},}
where a_0, …, a_n are constant coefficients, the problem is to evaluate the polynomial at a specific value x_0 of x.
For this, a new sequence of constants is defined recursively as follows:
Then b_0 is the value of p(x_0).
To see why this works, the polynomial can be written in the form
{\displaystyle p(x)=a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}\ .}
Thus, by iteratively substituting the b_i into the expression,
{\displaystyle {\begin{aligned}p(x_{0})&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}(a_{n-1}+b_{n}x_{0})\cdots {\big )}{\Big )}\\&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}b_{n-1}{\big )}{\Big )}\\&~~\vdots \\&=a_{0}+x_{0}b_{1}\\&=b_{0}.\end{aligned}}}
Now, it can be proven that
{\displaystyle p(x)=(x-x_{0})(b_{n}x^{n-1}+b_{n-1}x^{n-2}+\cdots +b_{2}x+b_{1})+b_{0}.}
This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of
p(x)/(x − x_0), with b_0 (which is equal to p(x_0)) being the division's remainder, as is demonstrated by the examples below. If x_0 is a root of p(x), then b_0 = 0 (meaning the remainder is 0), which means that x − x_0 is a factor of p(x).
To find the consecutive b-values, start by determining b_n, which is simply equal to a_n. Then work recursively using the formula
{\displaystyle b_{n-1}=a_{n-1}+b_{n}x_{0}}
until you arrive at b_0.
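The b-recurrence is exactly the synthetic division carried out in the examples below; a short Python sketch (the highest-degree-first coefficient order and the function name are this example's own conventions):

```python
def horner_divide(coeffs, x0):
    """Divide p(x) by (x - x0) using the b-recurrence.
    coeffs = [a_n, ..., a_0]; returns the quotient coefficients
    [b_n, ..., b_1] and the remainder b_0 = p(x0)."""
    b = [coeffs[0]]               # b_n = a_n
    for a in coeffs[1:]:
        b.append(a + b[-1] * x0)  # b_k = a_k + b_{k+1} * x0
    return b[:-1], b[-1]
```

For f(x) = 2x³ − 6x² + 2x − 1 at x₀ = 3 this yields quotient coefficients [2, 0, 2] and remainder 5, matching the worked example.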
=== Examples ===
Evaluate f(x) = 2x^3 − 6x^2 + 2x − 1 for x = 3.
We use synthetic division as follows:
x0│ x3 x2 x1 x0
3 │ 2 −6 2 −1
│ 6 0 6
└────────────────────────
2 0 2 5
The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of
f(x) on division by x − 3 is 5.
But by the polynomial remainder theorem, we know that the remainder is f(3). Thus, f(3) = 5.
In this example, if a_3 = 2, a_2 = −6, a_1 = 2, a_0 = −1, we can see that b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.
As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of f(x) on division by x − 3. The remainder is 5. This makes Horner's method useful for polynomial long division.
Divide x^3 − 6x^2 + 11x − 6 by x − 2:
2 │ 1 −6 11 −6
│ 2 −8 6
└────────────────────────
1 −4 3 0
The quotient is x^2 − 4x + 3.
Let f_1(x) = 4x^4 − 6x^3 + 3x − 5 and f_2(x) = 2x − 1. Divide f_1(x) by f_2(x) using Horner's method.
0.5 │ 4 −6 0 3 −5
│ 2 −2 −1 1
└───────────────────────
2 −2 −1 1 −4
The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is
{\displaystyle {\frac {f_{1}(x)}{f_{2}(x)}}=2x^{3}-2x^{2}-x+1-{\frac {4}{2x-1}}.}
=== Efficiency ===
Evaluation using the monomial form of a degree-n polynomial requires at most n additions and (n^2 + n)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n additions and 2n − 1 multiplications by evaluating the powers of x by iteration.
If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2n times the number of bits of x: the evaluated polynomial has approximate magnitude x^n, and one must also store x^n itself. By contrast, Horner's method requires only n additions and n multiplications, and its storage requirements are only n times the number of bits of x. Alternatively, Horner's method can be computed with n fused multiply–adds. Horner's method can also be extended to evaluate the first k derivatives of the polynomial with kn additions and multiplications.
Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when
x
is a matrix, Horner's method is not optimal.
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-n polynomial can be evaluated using only ⌊n/2⌋ + 2 multiplications and n additions.
==== Parallel evaluation ====
A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.
If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:
{\displaystyle {\begin{aligned}p(x)&=\sum _{i=0}^{n}a_{i}x^{i}\\[1ex]&=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+\left(a_{1}x+a_{3}x^{3}+a_{5}x^{5}+\cdots \right)\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+x\left(a_{1}+a_{3}x^{2}+a_{5}x^{4}+\cdots \right)\\[1ex]&=\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i}x^{2i}+x\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i+1}x^{2i}\\[1ex]&=p_{0}(x^{2})+xp_{1}(x^{2}).\end{aligned}}}
More generally, the summation can be broken into k parts:
{\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=\sum _{j=0}^{k-1}x^{j}\sum _{i=0}^{\lfloor n/k\rfloor }a_{ki+j}x^{ki}=\sum _{j=0}^{k-1}x^{j}p_{j}(x^{k})}
where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism.
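The two-way split can be checked numerically. In the Python sketch below (names are this example's own), the even- and odd-indexed halves are evaluated by ordinary Horner and are independent of each other, which is exactly what SIMD lanes or parallel instances exploit:

```python
def horner(coeffs, x):
    """Plain Horner evaluation; coeffs = [a_0, ..., a_n]."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def split_eval(coeffs, x):
    """Evaluate p(x) as p0(x^2) + x * p1(x^2); the two Horner
    evaluations are independent and could run in parallel."""
    x2 = x * x
    return horner(coeffs[0::2], x2) + x * horner(coeffs[1::2], x2)
```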
=== Application to floating-point multiplication and division ===
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation)
a_i = 1, and x = 2. Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.
==== Example ====
For example, to find the product of two numbers (0.15625) and m:
{\displaystyle {\begin{aligned}(0.15625)m&=(0.00101_{b})m=\left(2^{-3}+2^{-5}\right)m=\left(2^{-3}\right)m+\left(2^{-5}\right)m\\&=2^{-3}\left(m+\left(2^{-2}\right)m\right)=2^{-3}\left(m+2^{-2}(m)\right).\end{aligned}}}
==== Method ====
To find the product of two binary numbers d and m:
A register holding the intermediate result is initialized to d.
Begin with the least significant (rightmost) non-zero bit in m.
Count (to the left) the number of bit positions to the next most significant non-zero bit, and shift the intermediate result left by that many bits.
If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m.
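The steps above can be sketched in Python. This version is not a transcription of the register recipe but the same shift-and-add idea, walking the multiplier's bits from least to most significant (Horner's rule in base 2):

```python
def shift_add_multiply(d, m):
    """Multiply non-negative integers using only shifts and adds."""
    result = 0
    shift = 0
    while m:
        if m & 1:                 # non-zero bit: add the shifted d
            result += d << shift
        m >>= 1                   # move to the next bit of m
        shift += 1
    return result
```

On hardware without a multiplier, the shifts and adds here map directly onto cheap register operations.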
==== Derivation ====
In general, for a binary number with bit values (d_3 d_2 d_1 d_0) the product is
{\displaystyle (d_{3}2^{3}+d_{2}2^{2}+d_{1}2^{1}+d_{0}2^{0})m=d_{3}2^{3}m+d_{2}2^{2}m+d_{1}2^{1}m+d_{0}2^{0}m.}
At this stage in the algorithm, terms with zero-valued coefficients must be dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite the implication in the factored equation:
{\displaystyle =d_{0}\left(m+2{\frac {d_{1}}{d_{0}}}\left(m+2{\frac {d_{2}}{d_{1}}}\left(m+2{\frac {d_{3}}{d_{2}}}(m)\right)\right)\right).}
The denominators all equal one (or the term is absent), so this reduces to
{\displaystyle =d_{0}(m+2{d_{1}}(m+2{d_{2}}(m+2{d_{3}}(m)))),}
or equivalently (as consistent with the "method" described above)
{\displaystyle =d_{3}(m+2^{-1}{d_{2}}(m+2^{-1}{d_{1}}(m+{d_{0}}(m)))).}
In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor 2^(−1) is a right arithmetic shift, 2^0 results in no operation (since 2^0 = 1 is the multiplicative identity element), and 2^1 results in a left arithmetic shift.
The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.
The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy, however it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.
=== Other applications ===
Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the ai coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.
== Polynomial root finding ==
Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial
p_n(x) of degree n with zeros z_n < z_{n−1} < ⋯ < z_1, make some initial guess x_0 such that z_1 < x_0. Now iterate the following two steps:
Using Newton's method, find the largest zero z_1 of p_n(x) using the guess x_0.
Using Horner's method, divide out (x − z_1) to obtain p_{n−1}. Return to step 1 but use the polynomial p_{n−1} and the initial guess z_1.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.
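A compact Python sketch of the whole loop (all names are this example's own; it assumes, as in the example below, that every root is real and simple, and it uses a crude coefficient bound as the starting guess):

```python
def newton_largest(p, x0, tol=1e-12, max_iter=200):
    """Newton's method on p = [a_n, ..., a_0], evaluating the
    polynomial and its derivative together with Horner's scheme."""
    x = x0
    for _ in range(max_iter):
        val, der = p[0], 0.0
        for a in p[1:]:
            der = der * x + val
            val = val * x + a
        if der == 0.0:
            break
        step = val / der
        x -= step
        if abs(step) < tol:
            break
    return x

def real_roots(p):
    """Repeated Newton iteration plus synthetic-division deflation."""
    p = list(p)
    roots = []
    guess = 1.0 + max(abs(a) for a in p)    # starts above the largest root
    while len(p) > 2:
        r = newton_largest(p, guess)
        roots.append(r)
        q = [p[0]]                          # deflate by (x - r)
        for a in p[1:-1]:
            q.append(a + q[-1] * r)
        p, guess = q, r
    roots.append(-p[1] / p[0])              # remaining linear factor
    return roots
```

On the expanded sextic of the example below this recovers the roots −8, −5, −3, 2, 3, and 7.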
=== Example ===
Consider the polynomial
{\displaystyle p_{6}(x)=(x+8)(x+5)(x+3)(x-2)(x-3)(x-7)}
which can be expanded to
{\displaystyle p_{6}(x)=x^{6}+4x^{5}-72x^{4}-214x^{3}+1127x^{2}+1602x-5040.}
From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p(x) is divided by (x − 7) to obtain
{\displaystyle p_{5}(x)=x^{5}+11x^{4}+5x^{3}-179x^{2}-126x+720}
which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial which corresponds to the second largest zero of the original polynomial is found at 3 and is circled in red. The degree 5 polynomial is now divided by
(x − 3) to obtain
{\displaystyle p_{4}(x)=x^{4}+14x^{3}+47x^{2}-38x-240}
which is shown in yellow. The zero for this polynomial is found at 2 again using Newton's method and is circled in yellow. Horner's method is now used to obtain
{\displaystyle p_{3}(x)=x^{3}+16x^{2}+79x+120}
which is shown in green and found to have a zero at −3. This polynomial is further reduced to
{\displaystyle p_{2}(x)=x^{2}+13x+40}
which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing
p_2(x) and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
== Divided difference of a polynomial ==
Horner's method can be modified to compute the divided difference
{\displaystyle (p(y)-p(x))/(y-x).}
Given the polynomial (as before)
{\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},}
proceed as follows
{\displaystyle {\begin{aligned}b_{n}&=a_{n},&\quad d_{n}&=b_{n},\\b_{n-1}&=a_{n-1}+b_{n}x,&\quad d_{n-1}&=b_{n-1}+d_{n}y,\\&{}\ \ \vdots &\quad &{}\ \ \vdots \\b_{1}&=a_{1}+b_{2}x,&\quad d_{1}&=b_{1}+d_{2}y,\\b_{0}&=a_{0}+b_{1}x.\end{aligned}}}
At completion, we have
{\displaystyle {\begin{aligned}p(x)&=b_{0},\\{\frac {p(y)-p(x)}{y-x}}&=d_{1},\\p(y)&=b_{0}+(y-x)d_{1}.\end{aligned}}}
This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x ≈ y. Substituting y = x in this method gives d_1 = p′(x), the derivative of p(x).
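The paired recurrences translate directly into one loop; a short Python sketch (highest-degree-first coefficients and the function name are this example's conventions):

```python
def divided_difference(coeffs, x, y):
    """Return (p(x), (p(y) - p(x)) / (y - x)) via the b/d recurrences;
    with y == x the second value is the derivative p'(x)."""
    b = coeffs[0]            # b_n = a_n
    d = b                    # d_n = b_n
    for a in coeffs[1:-1]:
        b = a + b * x        # b_k = a_k + b_{k+1} * x
        d = b + d * y        # d_k = b_k + d_{k+1} * y
    b = coeffs[-1] + b * x   # b_0 = p(x)
    return b, d
```

For p(x) = x³, divided_difference([1, 0, 0, 0], 2, 3) gives (8, 19), since (27 − 8)/(3 − 2) = 19; with y = x = 2 the second value is p′(2) = 12.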
== History ==
Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).
Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini.
Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:
Paolo Ruffini in 1809 (see Ruffini's rule)
Isaac Newton in 1669
the Chinese mathematician Zhu Shijie in the 14th century
the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century
the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in a general case of cubic equation)
the Chinese mathematician Jia Xian in the 11th century (Song dynasty)
The Nine Chapters on the Mathematical Art, a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century).
Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote:"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."
Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India. He said that Fibonacci probably learned of it from Arabs, who perhaps borrowed it from the Chinese. The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
== See also ==
Clenshaw algorithm to evaluate polynomials in Chebyshev form
De Boor's algorithm to evaluate splines in B-spline form
De Casteljau's algorithm to evaluate polynomials in Bézier form
Estrin's scheme to facilitate parallelization on modern computer architectures
Lill's method to approximate roots graphically
Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r
== Notes ==
== References ==
== External links ==
"Horner scheme", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Qiu Jin-Shao, Shu Shu Jiu Zhang (Cong Shu Ji Cheng ed.)
For more on the root-finding application see [1] Archived 2018-09-28 at the Wayback Machine | Wikipedia/Horner_method |
In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars. Though there are many equations of state, none accurately predicts properties of substances under all conditions. The quest for a universal equation of state has spanned three centuries.
== Overview ==
At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. One example of an equation of state is the ideal gas law, which correlates the densities of gases and liquids to temperatures and pressures and is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.
The general form of an equation of state may be written as
{\displaystyle f(p,V,T)=0}
where p is the pressure, V the volume, and T the temperature of the system. Yet other variables may also be used in that form. It is directly related to the Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system.
An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.
Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry.
Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.
n, number of moles of a substance
V_m = V/n, molar volume, the volume of 1 mole of gas or liquid
R, ideal gas constant ≈ 8.3144621 J/(mol·K)
p_c, pressure at the critical point
V_c, molar volume at the critical point
T_c, absolute temperature at the critical point
== Historical background ==
Equations of state essentially begin three centuries ago with the history of the ideal gas law:
{\displaystyle pV=nRT}
Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:
{\displaystyle pV=\mathrm {constant} .}
The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676.
In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:
{\displaystyle {\frac {V_{1}}{T_{1}}}={\frac {V_{2}}{T_{2}}}.}
Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone.
Mathematically, this can be represented for {\displaystyle n} species as:
{\displaystyle p_{\text{total}}=p_{1}+p_{2}+\cdots +p_{n}=\sum _{i=1}^{n}p_{i}.}
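As a minimal numerical sketch of Dalton's law (the partial-pressure values below are arbitrary illustrative numbers, not measured data):

```python
# Dalton's law: the total pressure of a gas mixture is the sum of the
# partial pressures of its constituents (illustrative values in kPa).
partial_pressures = [21.2, 79.1, 0.9, 0.04]  # e.g. O2, N2, Ar, CO2

p_total = sum(partial_pressures)
print(p_total)  # ~101.24 kPa, roughly atmospheric pressure
```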
In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with
{\displaystyle 0~^{\circ }\mathrm {C} =273.15~\mathrm {K} }, giving:
{\displaystyle pV_{m}=R\left(T_{C}+273.15\ {}^{\circ }{\text{C}}\right).}
In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong.
The van der Waals equation of state can be written as
{\displaystyle \left(P+a{\frac {1}{V_{m}^{2}}}\right)(V_{m}-b)=RT}
where {\displaystyle a} is a parameter describing the attractive energy between particles and {\displaystyle b} is a parameter describing the volume of the particles.
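Solved explicitly for pressure, the van der Waals equation is easy to compare against the ideal gas law. The sketch below uses approximate values of a and b of the magnitude commonly tabulated for carbon dioxide; treat them as illustrative assumptions rather than reference data:

```python
# Pressure from the van der Waals equation, solved explicitly for P:
#   P = R*T/(Vm - b) - a/Vm**2
# The a, b values are rough, illustrative CO2-like constants.
R = 8.314462  # J/(mol K)

def vdw_pressure(Vm, T, a=0.3640, b=4.267e-5):
    return R * T / (Vm - b) - a / Vm**2

p = vdw_pressure(Vm=1e-3, T=300.0)   # 1 L/mol at 300 K
p_ideal = R * 300.0 / 1e-3           # ideal gas at the same state
print(p, p_ideal)  # the attraction term pulls p below the ideal value
```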
== Ideal gas law ==
=== Classical ideal gas law ===
The classical ideal gas law may be written
{\displaystyle pV=nRT.}
In the form shown above, the equation of state is thus
{\displaystyle f(p,V,T)=pV-nRT=0.}
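A minimal sketch of the ideal gas law in this implicit form f(p, V, T) = 0 (the molar volume at STP used below is a rounded illustrative value):

```python
# The ideal gas law as an implicit equation of state f(p, V, T) = 0.
R = 8.314462  # J/(mol K)

def f(p, V, T, n=1.0):
    return p * V - n * R * T

T = 273.15          # K
V = 0.0224          # m^3, roughly the molar volume of a gas at STP
p = R * T / V       # pressure consistent with the law
print(f(p, V, T))   # essentially zero, up to floating-point rounding
```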
If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows
{\displaystyle p=\rho (\gamma -1)e}
where {\displaystyle \rho } is the mass density of the gas, {\displaystyle \gamma =C_{p}/C_{v}} is the (constant) adiabatic index (ratio of specific heats), {\displaystyle e=C_{v}T} is the internal energy per unit mass (the "specific internal energy"), {\displaystyle C_{v}} is the specific heat capacity at constant volume, and {\displaystyle C_{p}} is the specific heat capacity at constant pressure.
=== Quantum ideal gas law ===
Since the classical ideal gas law is well suited to atomic and molecular gases in most cases, let us describe the equation of state for elementary particles with mass {\displaystyle m} and spin {\displaystyle s} that takes quantum effects into account. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with {\displaystyle N} particles occupying a volume {\displaystyle V} with temperature {\displaystyle T} and pressure {\displaystyle p} is given by
{\displaystyle p={\frac {(2s+1){\sqrt {2m^{3}k_{\text{B}}^{5}T^{5}}}}{3\pi ^{2}\hbar ^{3}}}\int _{0}^{\infty }{\frac {z^{3/2}\,\mathrm {d} z}{e^{z-\mu /(k_{\text{B}}T)}\pm 1}}}
where {\displaystyle k_{\text{B}}} is the Boltzmann constant and the chemical potential {\displaystyle \mu (T,N/V)} is given by the following implicit function
{\displaystyle {\frac {N}{V}}={\frac {(2s+1)(mk_{\text{B}}T)^{3/2}}{{\sqrt {2}}\pi ^{2}\hbar ^{3}}}\int _{0}^{\infty }{\frac {z^{1/2}\,\mathrm {d} z}{e^{z-\mu /(k_{\text{B}}T)}\pm 1}}.}
In the limiting case where {\displaystyle e^{\mu /(k_{\text{B}}T)}\ll 1}, this equation of state reduces to that of the classical ideal gas. It can be shown that in this limit the above equation of state reduces to
{\displaystyle pV=Nk_{\text{B}}T\left[1\pm {\frac {\pi ^{3/2}}{2(2s+1)}}{\frac {N\hbar ^{3}}{V(mk_{\text{B}}T)^{3/2}}}+\cdots \right]}
At a fixed number density {\displaystyle N/V}, decreasing the temperature causes, in a Fermi gas, an increase in the pressure above its classical value, implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects, not actual interactions, since interactional forces are neglected in an ideal gas), and, in a Bose gas, a decrease in the pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on s and ħ.
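The size of the leading quantum correction can be estimated directly from the bracketed expansion above. The sketch below evaluates it for a dilute spin-0 (Bose) gas with a helium-4-like mass; the chosen density and temperature are illustrative assumptions:

```python
import math

# First-order quantum correction to pV = N*kB*T from the expansion
# above, for a Bose gas (lower sign, so the correction is negative).
hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K
m = 6.6464731e-27       # kg, roughly the helium-4 atomic mass
s = 0                   # spin (bosons)

def correction(n_density, T):
    """pV/(N*kB*T) - 1 to first order."""
    prefactor = math.pi**1.5 / (2 * (2 * s + 1))
    return -prefactor * n_density * hbar**3 / (m * kB * T)**1.5

c = correction(n_density=2.7e25, T=4.2)  # illustrative dilute, cold gas
print(c)  # small negative number: quantum statistics lower the pressure
```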
== Cubic equations of state ==
Cubic equations of state are so called because they can be rewritten as a cubic function of {\displaystyle V_{m}}. Cubic equations of state originated from the van der Waals equation of state; hence, all cubic equations of state can be considered modified van der Waals equations of state. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state are still highly relevant today, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state.
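As a sketch of why these equations are called cubic, the van der Waals equation can be multiplied out into a cubic polynomial in the molar volume and solved numerically; the a and b values below are illustrative CO2-like constants, not fitted reference data:

```python
import numpy as np

# The van der Waals equation rearranged as a cubic in the molar volume:
#   P*Vm**3 - (P*b + R*T)*Vm**2 + a*Vm - a*b = 0
R = 8.314462
a, b = 0.3640, 4.267e-5   # illustrative CO2-like parameters
P, T = 1.0e5, 300.0       # 1 bar, 300 K

coeffs = [P, -(P * b + R * T), a, -a * b]
roots = np.roots(coeffs)
# Keep the physically meaningful real roots (Vm > b).
real_Vm = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > b]
print(sorted(real_Vm))  # far from saturation, only one (gas) root is real
```

Near the two-phase region the same cubic would instead yield three real roots: the smallest is the liquid molar volume, the largest the gas molar volume.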
== Virial equations of state ==
=== Virial equation of state ===
{\displaystyle {\frac {pV_{m}}{RT}}=A+{\frac {B}{V_{m}}}+{\frac {C}{V_{m}^{2}}}+{\frac {D}{V_{m}^{3}}}+\cdots }
Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has the constant value 1 and expresses the fact that, when the volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher-order terms. The coefficients B, C, D, etc. are functions of temperature only.
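A sketch of the virial series truncated after the second coefficient; the value of B below is only an assumed order of magnitude for a real gas near room temperature, not a fitted constant:

```python
# Compressibility factor Z = p*Vm/(R*T) from the virial series
# truncated after the second virial coefficient B (with A = 1):
#   Z = 1 + B/Vm
B = -4.2e-6  # m^3/mol, illustrative order of magnitude only

def Z(Vm):
    return 1.0 + B / Vm

print(Z(0.024))   # very close to 1: nearly ideal at ~1 atm
print(Z(0.00024)) # noticeably below 1 at ~100x the density
```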
=== The BWR equation of state ===
{\displaystyle {\begin{aligned}p=\rho RT&+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}+\left(bRT-a-{\frac {d}{T}}\right)\rho ^{3}\\[2pt]&+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}+{\frac {c\rho ^{3}}{T^{2}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right)\end{aligned}}}
where {\displaystyle p} is the pressure and {\displaystyle \rho } is the molar density.
Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.
The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as
{\displaystyle {\begin{aligned}p=\rho RT&+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}\\[2pt]&+\left(bRT-a-{\frac {d}{T}}+{\frac {c}{T^{2}}}\right)\rho ^{3}+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}\end{aligned}}}
Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.
The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.
{\displaystyle p={\frac {RT}{V}}\left(1+{\frac {B}{V_{r}}}+{\frac {C}{V_{r}^{2}}}+{\frac {D}{V_{r}^{5}}}+{\frac {c_{4}}{T_{r}^{3}V_{r}^{2}}}\left(\beta +{\frac {\gamma }{V_{r}^{2}}}\right)\exp \left(-{\frac {\gamma }{V_{r}^{2}}}\right)\right)}
== Physically based equations of state ==
There is a large number of physically based equations of state available today. Most of them are formulated in terms of the Helmholtz free energy as a function of temperature and density (and, for mixtures, additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid.
=== Perturbation theory-based models ===
Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.
=== Statistical associating fluid theory (SAFT) ===
An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.
== Multiparameter equations of state ==
Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can be usually applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:
{\displaystyle {\frac {a(T,\rho )}{RT}}={\frac {a^{\mathrm {ideal\,gas} }(\tau ,\delta )+a^{\textrm {residual}}(\tau ,\delta )}{RT}}}
with
{\displaystyle \tau ={\frac {T_{r}}{T}},\delta ={\frac {\rho }{\rho _{r}}}}
The reduced density {\displaystyle \rho _{r}} and reduced temperature {\displaystyle T_{r}} are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist as well; yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.
One example of such an equation of state is the form proposed by Span and Wagner.
{\displaystyle {\begin{aligned}a^{\mathrm {residual} }={}&\sum _{i=1}^{8}\sum _{j=-8}^{12}n_{i,j}\delta ^{i}\tau ^{j/8}+\sum _{i=1}^{5}\sum _{j=-8}^{24}n_{i,j}\delta ^{i}\tau ^{j/8}\exp \left(-\delta \right)\\&+\sum _{i=1}^{5}\sum _{j=16}^{56}n_{i,j}\delta ^{i}\tau ^{j/8}\exp \left(-\delta ^{2}\right)+\sum _{i=2}^{4}\sum _{j=24}^{38}n_{i,j}\delta ^{i}\tau ^{j/2}\exp \left(-\delta ^{3}\right)\end{aligned}}}
This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.
== List of further equations of state ==
=== Stiffened equation of state ===
When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:
{\displaystyle p=\rho (\gamma -1)e-\gamma p^{0}\,}
where {\displaystyle e} is the internal energy per unit mass, {\displaystyle \gamma } is an empirically determined constant typically taken to be about 6.1, and {\displaystyle p^{0}} is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres).
The equation is stated in this form because the speed of sound in water is given by
{\displaystyle c^{2}=\gamma \left(p+p^{0}\right)/\rho }.
Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).
This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
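A small sketch of the stiffened equation of state and the associated sound-speed relation, using the constants quoted above (γ ≈ 6.1, p⁰ ≈ 2 GPa) as assumptions:

```python
import math

# Stiffened equation of state, p = rho*(gamma - 1)*e - gamma*p0, and
# the associated sound speed c**2 = gamma*(p + p0)/rho, using the
# constants quoted in the text above; purely illustrative numbers.
gamma = 6.1
p0 = 2.0e9      # Pa
rho = 1000.0    # kg/m^3, water

def pressure(e):
    return rho * (gamma - 1.0) * e - gamma * p0

def sound_speed(p):
    return math.sqrt(gamma * (p + p0) / rho)

# Invert the equation to find the specific internal energy at 1 atm,
# then check that the pressure is recovered.
e_atm = (101325.0 + gamma * p0) / (rho * (gamma - 1.0))
p = pressure(e_atm)
print(p, sound_speed(p))  # ~1 atm; c is of km/s order
```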
=== Morse oscillator equation of state ===
An equation of state for the Morse oscillator has been derived, and it has the following form:
{\displaystyle p=\Gamma _{1}\nu +\Gamma _{2}\nu ^{2}}
where {\displaystyle \Gamma _{1}} is the first-order virial parameter, which depends on the temperature; {\displaystyle \Gamma _{2}} is the second-order virial parameter of the Morse oscillator, which depends on the parameters of the Morse oscillator in addition to the absolute temperature; and {\displaystyle \nu } is the fractional volume of the system.
=== Ultrarelativistic equation of state ===
An ultrarelativistic fluid has equation of state
{\displaystyle p=\rho _{m}c_{s}^{2}}
where {\displaystyle p} is the pressure, {\displaystyle \rho _{m}} is the mass density, and {\displaystyle c_{s}} is the speed of sound.
=== Ideal Bose equation of state ===
The equation of state for an ideal Bose gas is
{\displaystyle pV_{m}=RT~{\frac {\operatorname {Li} _{\alpha +1}(z)}{\zeta (\alpha )}}\left({\frac {T}{T_{c}}}\right)^{\alpha }}
where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form.
=== Jones–Wilkins–Lee equation of state for explosives (JWL equation) ===
The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives.
{\displaystyle p=A\left(1-{\frac {\omega }{R_{1}V}}\right)\exp(-R_{1}V)+B\left(1-{\frac {\omega }{R_{2}V}}\right)\exp \left(-R_{2}V\right)+{\frac {\omega e_{0}}{V}}}
The ratio {\displaystyle V=\rho _{e}/\rho } is defined using {\displaystyle \rho _{e}}, the density of the explosive (solid part), and {\displaystyle \rho }, the density of the detonation products. The parameters {\displaystyle A}, {\displaystyle B}, {\displaystyle R_{1}}, {\displaystyle R_{2}} and {\displaystyle \omega } are given by several references. In addition, such references give the initial density (solid part) {\displaystyle \rho _{0}}, the speed of detonation {\displaystyle V_{D}}, the Chapman–Jouguet pressure {\displaystyle P_{CJ}} and the chemical energy per unit volume of the explosive {\displaystyle e_{0}}. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below.
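A sketch evaluating the JWL pressure as a function of the expansion ratio V. The parameter values below are only of the magnitude typically tabulated for TNT and should be treated as illustrative assumptions, not as a reference fit:

```python
import math

# The JWL pressure as a function of the expansion ratio V = rho_e/rho.
# Parameter magnitudes loosely follow tabulated TNT values; they are
# used here purely as an illustration.
A, B = 3.7e11, 3.2e9    # Pa
R1, R2 = 4.15, 0.95
omega = 0.30
e0 = 7.0e9              # J/m^3

def jwl_pressure(V):
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * e0 / V)

for V in (1.0, 2.0, 5.0):
    print(V, jwl_pressure(V))  # pressure falls as the products expand
```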
=== Others ===
Tait equation for water and other liquids. Several equations are referred to as the Tait equation.
Murnaghan equation of state
Birch–Murnaghan equation of state
Stacey–Brennan–Irvine equation of state
Modified Rydberg equation of state
Adapted polynomial equation of state
Johnson–Holmquist equation of state
Mie–Grüneisen equation of state
Anton-Schmidt equation of state
State-transition equation
== See also ==
Gas laws
Departure function
Table of thermodynamic equations
Real gas
Cluster expansion
Polytrope
== References ==
== External links == | Wikipedia/Equation_of_state |
In mathematics, a Tschirnhaus transformation, also known as Tschirnhausen transformation, is a type of mapping on polynomials developed by Ehrenfried Walther von Tschirnhaus in 1683.
Simply, it is a method for transforming a polynomial equation of degree {\displaystyle n\geq 2} with some nonzero intermediate coefficients, {\displaystyle a_{1},...,a_{n-1}}, such that some or all of the transformed intermediate coefficients, {\displaystyle a'_{1},...,a'_{n-1}}, are exactly zero.
For example, finding a substitution {\displaystyle y(x)=k_{1}x^{2}+k_{2}x+k_{3}} for a cubic equation of degree {\displaystyle n=3},
{\displaystyle f(x)=x^{3}+a_{2}x^{2}+a_{1}x+a_{0}}
such that substituting {\displaystyle x=x(y)} yields a new equation
{\displaystyle f'(y)=y^{3}+a'_{2}y^{2}+a'_{1}y+a'_{0}}
such that {\displaystyle a'_{1}=0}, {\displaystyle a'_{2}=0}, or both.
More generally, it may be defined conveniently by means of field theory, as the transformation on minimal polynomials implied by a different choice of primitive element. This is the most general transformation of an irreducible polynomial that takes a root to some rational function applied to that root.
== Definition ==
For a generic {\displaystyle n^{th}} degree reducible monic polynomial equation {\displaystyle f(x)=0} of the form {\displaystyle f(x)=g(x)/h(x)}, where {\displaystyle g(x)} and {\displaystyle h(x)} are polynomials and {\displaystyle h(x)} does not vanish at {\displaystyle f(x)=0},
{\displaystyle f(x)=x^{n}+a_{1}x^{n-1}+a_{2}x^{n-2}+...+a_{n-1}x+a_{n}=0}
the Tschirnhaus transformation is the function:
{\displaystyle y=k_{1}x^{n-1}+k_{2}x^{n-2}+...+k_{n-1}x+k_{n}}
such that the new equation in {\displaystyle y}, {\displaystyle f'(y)}, has certain special properties, most commonly such that some coefficients, {\displaystyle a'_{1},...,a'_{n-1}}, are identically zero.
== Example: Tschirnhaus' method for cubic equations ==
In Tschirnhaus' 1683 paper, he solved the equation
{\displaystyle f(x)=x^{3}-px^{2}+qx-r=0}
using the Tschirnhaus transformation
{\displaystyle y(x;a)=x-a\longleftrightarrow x(y;a)=x=y+a.}
Substituting yields the transformed equation
{\displaystyle f'(y;a)=y^{3}+(3a-p)y^{2}+(3a^{2}-2pa+q)y+(a^{3}-pa^{2}+qa-r)=0}
or
{\displaystyle {\begin{cases}a'_{1}=3a-p\\a'_{2}=3a^{2}-2pa+q\\a'_{3}=a^{3}-pa^{2}+qa-r\end{cases}}.}
Setting {\displaystyle a'_{1}=0} yields
{\displaystyle 3a-p=0\rightarrow a={\frac {p}{3}}}
and finally the Tschirnhaus transformation
{\displaystyle y=x-{\frac {p}{3}},}
which may be substituted into {\displaystyle f'(y;a)} to yield an equation of the form:
{\displaystyle f'(y)=y^{3}-q'y-r'.}
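The depression of the quadratic term can be checked numerically. The sketch below takes a cubic with known roots 1, 2 and 4, applies the shift a = p/3 derived above, and confirms that the shifted roots satisfy the transformed cubic:

```python
# Numerical check of Tschirnhaus' depression of the cubic: for
# f(x) = x**3 - p*x**2 + q*x - r with roots 1, 2, 4 we have
# p = 7, q = 14, r = 8, and the substitution y = x - p/3 must give
# transformed coefficients a'_1 = 3a - p = 0, a'_2, a'_3 as above.
p, q, r = 7.0, 14.0, 8.0
a = p / 3.0

a1 = 3 * a - p                       # coefficient of y**2, zero up to rounding
a2 = 3 * a**2 - 2 * p * a + q        # coefficient of y
a3 = a**3 - p * a**2 + q * a - r     # constant term

def f_prime(y):
    return y**3 + a1 * y**2 + a2 * y + a3

# The roots of f'(y) are the original roots shifted by a = p/3.
for root in (1.0, 2.0, 4.0):
    print(round(f_prime(root - a), 10))  # each value is ~0
```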
Tschirnhaus went on to describe how a Tschirnhaus transformation of the form:
{\displaystyle x^{2}(y;a,b)=x^{2}=bx+y+a}
may be used to eliminate two coefficients in a similar way.
== Generalization ==
In detail, let {\displaystyle K} be a field, and {\displaystyle P(t)} a polynomial over {\displaystyle K}. If {\displaystyle P} is irreducible, then the quotient ring of the polynomial ring {\displaystyle K[t]} by the principal ideal generated by {\displaystyle P},
{\displaystyle K[t]/(P(t))=L},
is a field extension of {\displaystyle K}. We have {\displaystyle L=K(\alpha )} where {\displaystyle \alpha } is {\displaystyle t} modulo {\displaystyle (P)}. That is, any element of {\displaystyle L} is a polynomial in {\displaystyle \alpha }, which is thus a primitive element of {\displaystyle L}. There will be other choices {\displaystyle \beta } of primitive element in {\displaystyle L}: for any such choice of {\displaystyle \beta } we will have by definition:
{\displaystyle \beta =F(\alpha ),\alpha =G(\beta )},
with polynomials {\displaystyle F} and {\displaystyle G} over {\displaystyle K}. Now if {\displaystyle Q} is the minimal polynomial for {\displaystyle \beta } over {\displaystyle K}, we can call {\displaystyle Q} a Tschirnhaus transformation of {\displaystyle P}.
Therefore the set of all Tschirnhaus transformations of an irreducible polynomial is to be described as running over all ways of changing {\displaystyle P}, but leaving {\displaystyle L} the same. This concept is used in reducing quintics to Bring–Jerrard form, for example. There is a connection with Galois theory, when {\displaystyle L} is a Galois extension of {\displaystyle K}. The Galois group may then be considered as all the Tschirnhaus transformations of {\displaystyle P} to itself.
== History ==
In 1683, Ehrenfried Walther von Tschirnhaus published a method for rewriting a polynomial of degree {\displaystyle n>2} such that the {\displaystyle x^{n-1}} and {\displaystyle x^{n-2}} terms have zero coefficients. In his paper, Tschirnhaus referenced a method by René Descartes to reduce a quadratic polynomial {\displaystyle (n=2)} such that the {\displaystyle x} term has zero coefficient.
In 1786, this work was expanded by Erland Samuel Bring who showed that any generic quintic polynomial could be similarly reduced.
In 1834, George Jerrard further expanded Tschirnhaus' work by showing that a Tschirnhaus transformation may be used to eliminate the {\displaystyle x^{n-1}}, {\displaystyle x^{n-2}}, and {\displaystyle x^{n-3}} terms for a general polynomial of degree {\displaystyle n>3}.
== See also ==
Polynomial transformations
Monic polynomial
Reducible polynomial
Quintic function
Galois theory
Abel–Ruffini theorem
Principal equation form
== References == | Wikipedia/Tschirnhaus_transformation |
In algebra, a quartic function is a function of the form
{\displaystyle f(x)=ax^{4}+bx^{3}+cx^{2}+dx+e,}
where a is nonzero,
which is defined by a polynomial of degree four, called a quartic polynomial.
A quartic equation, or equation of the fourth degree, is an equation that equates a quartic polynomial to zero, of the form
{\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0,}
where a ≠ 0.
The derivative of a quartic function is a cubic function.
Sometimes the term biquadratic is used instead of quartic, but, usually, biquadratic function refers to a quadratic function of a square (or, equivalently, to the function defined by a quartic polynomial without terms of odd degree), having the form
{\displaystyle f(x)=ax^{4}+cx^{2}+e.}
Since a quartic function is defined by a polynomial of even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If a is positive, then the function increases to positive infinity at both ends; and thus the function has a global minimum. Likewise, if a is negative, it decreases to negative infinity and has a global maximum. In both cases it may or may not have another local maximum and another local minimum.
The degree four (quartic case) is the highest degree such that every polynomial equation can be solved by radicals, according to the Abel–Ruffini theorem.
== History ==
Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna.
The proof that four is the highest degree of a general polynomial for which such solutions can be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois prior to dying in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.
== Applications ==
Each coordinate of the intersection points of two conic sections is a solution of a quartic equation. The same is true for the intersection of a line and a torus. It follows that quartic equations often arise in computational geometry and all related fields such as computer graphics, computer-aided design, computer-aided manufacturing and optics. Here are examples of other geometric problems whose solution involves solving a quartic equation.
In computer-aided manufacturing, the torus is a shape that is commonly associated with the endmill cutter. To calculate its location relative to a triangulated surface, the position of a horizontal torus on the z-axis must be found where it is tangent to a fixed line, and this requires the solution of a general quartic equation to be calculated.
A quartic equation arises also in the process of solving the crossed ladders problem, in which the lengths of two crossed ladders, each based against one wall and leaning against another, are given along with the height at which they cross, and the distance between the walls is to be found.
In optics, Alhazen's problem is "Given a light source and a spherical mirror, find the point on the mirror where the light will be reflected to the eye of an observer." This leads to a quartic equation.
Finding the distance of closest approach of two ellipses involves solving a quartic equation.
The eigenvalues of a 4×4 matrix are the roots of a quartic polynomial which is the characteristic polynomial of the matrix.
The characteristic equation of a fourth-order linear difference equation or differential equation is a quartic equation. An example arises in the Timoshenko-Rayleigh theory of beam bending.
Intersections between spheres, cylinders, or other quadrics can be found using quartic equations.
== Inflection points and golden ratio ==
Let F and G be the distinct inflection points of the graph of a quartic function, and let H be the intersection of the inflection secant line FG and the quartic, nearer to G than to F. Then G divides FH into the golden section:
{\displaystyle {\frac {FG}{GH}}={\frac {1+{\sqrt {5}}}{2}}=\varphi \;({\text{the golden ratio}}).}
Moreover, the area of the region between the secant line and the quartic below the secant line equals the area of the region between the secant line and the quartic above the secant line. One of those regions is disjointed into sub-regions of equal area.
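This golden-section property can be verified on a concrete example such as f(x) = x⁴ − 6x²:

```python
import math

# Checking the golden-ratio property on a concrete quartic,
# f(x) = x**4 - 6*x**2.  Its inflection points (f''(x) = 12x^2 - 12 = 0)
# are F = (-1, -5) and G = (1, -5); the secant line y = -5 meets the
# quartic again where x**4 - 6*x**2 + 5 = 0, i.e. x = ±1 and ±sqrt(5),
# so H = (sqrt(5), -5).
F, G, H = -1.0, 1.0, math.sqrt(5.0)

ratio = (G - F) / (H - G)          # FG / GH
phi = (1 + math.sqrt(5.0)) / 2     # golden ratio
print(ratio, phi)                  # the two values agree
```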
== Solution ==
=== Nature of the roots ===
Given the general quartic equation
{\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0}
with real coefficients and a ≠ 0, the nature of its roots is mainly determined by the sign of its discriminant
{\displaystyle {\begin{aligned}\Delta ={}&256a^{3}e^{3}-192a^{2}bde^{2}-128a^{2}c^{2}e^{2}+144a^{2}cd^{2}e-27a^{2}d^{4}\\&+144ab^{2}ce^{2}-6ab^{2}d^{2}e-80abc^{2}de+18abcd^{3}+16ac^{4}e\\&-4ac^{3}d^{2}-27b^{4}e^{2}+18b^{3}cde-4b^{3}d^{3}-4b^{2}c^{3}e+b^{2}c^{2}d^{2}\end{aligned}}}
This may be refined by considering the signs of four other polynomials:
{\displaystyle P=8ac-3b^{2}}
such that P/8a² is the second degree coefficient of the associated depressed quartic (see below);
{\displaystyle R=b^{3}+8da^{2}-4abc,}
such that R/8a³ is the first degree coefficient of the associated depressed quartic;
{\displaystyle \Delta _{0}=c^{2}-3bd+12ae,}
which is 0 if the quartic has a triple root; and
{\displaystyle D=64a^{3}e-16a^{2}c^{2}+16ab^{2}c-16a^{2}bd-3b^{4}}
which is 0 if the quartic has two double roots.
The possible cases for the nature of the roots are as follows:
If ∆ < 0 then the equation has two distinct real roots and two complex conjugate non-real roots.
If ∆ > 0 then either the equation's four roots are all real or none is.
If P < 0 and D < 0 then all four roots are real and distinct.
If P > 0 or D > 0 then there are two pairs of non-real complex conjugate roots.
If ∆ = 0 then (and only then) the polynomial has a multiple root. Here are the different cases that can occur:
If P < 0 and D < 0 and ∆0 ≠ 0, there are a real double root and two real simple roots.
If D > 0 or (P > 0 and (D ≠ 0 or R ≠ 0)), there are a real double root and two complex conjugate roots.
If ∆0 = 0 and D ≠ 0, there are a triple root and a simple root, all real.
If D = 0, then:
If P < 0, there are two real double roots.
If P > 0 and R = 0, there are two complex conjugate double roots.
If ∆0 = 0, all four roots are equal to −b/4a
There are some cases that do not seem to be covered, but in fact they cannot occur. For example, ∆0 > 0, P = 0 and D ≤ 0 is not a possible case. In fact, if ∆0 > 0 and P = 0 then D > 0, since
{\displaystyle 16a^{2}\Delta _{0}=3D+P^{2};}
so this combination is not possible.
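The classification above is mechanical enough to be checked directly. The following sketch (illustrative function names, not a library routine) computes Δ, P, R, Δ0 and D with exact integer arithmetic and applies the main cases:

```python
def quartic_invariants(a, b, c, d, e):
    """Discriminant and auxiliary quantities of a*x^4 + b*x^3 + c*x^2 + d*x + e."""
    disc = (256*a**3*e**3 - 192*a**2*b*d*e**2 - 128*a**2*c**2*e**2
            + 144*a**2*c*d**2*e - 27*a**2*d**4 + 144*a*b**2*c*e**2
            - 6*a*b**2*d**2*e - 80*a*b*c**2*d*e + 18*a*b*c*d**3
            + 16*a*c**4*e - 4*a*c**3*d**2 - 27*b**4*e**2 + 18*b**3*c*d*e
            - 4*b**3*d**3 - 4*b**2*c**3*e + b**2*c**2*d**2)
    P = 8*a*c - 3*b**2
    R = b**3 + 8*d*a**2 - 4*a*b*c
    D0 = c**2 - 3*b*d + 12*a*e
    D = 64*a**3*e - 16*a**2*c**2 + 16*a*b**2*c - 16*a**2*b*d - 3*b**4
    return disc, P, R, D0, D

def classify_roots(a, b, c, d, e):
    """Apply the main cases of the classification; the disc == 0 sub-cases
    (which further distinguish P, R, D0 and D) are not expanded here."""
    disc, P, R, D0, D = quartic_invariants(a, b, c, d, e)
    if disc < 0:
        return "two real, two complex conjugate"
    if disc > 0:
        if P < 0 and D < 0:
            return "four distinct real"
        return "two pairs of complex conjugates"
    return "multiple root"
```

For instance, x⁴ − 10x³ + 35x² − 50x + 24 = (x−1)(x−2)(x−3)(x−4) falls in the all-real case.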
=== General formula for roots ===
The four roots x1, x2, x3, and x4 for the general quartic equation
{\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0\,}
with a ≠ 0 are given in the following formula, which is deduced from the one in the section on Ferrari's method by back changing the variables (see § Converting to a depressed quartic) and using the formulas for the quadratic and cubic equations.
{\displaystyle {\begin{aligned}x_{1,2}\ &=-{\frac {b}{4a}}-S\pm {\frac {1}{2}}{\sqrt {-4S^{2}-2p+{\frac {q}{S}}}}\\x_{3,4}\ &=-{\frac {b}{4a}}+S\pm {\frac {1}{2}}{\sqrt {-4S^{2}-2p-{\frac {q}{S}}}}\end{aligned}}}
where p and q are the coefficients of the second and of the first degree respectively in the associated depressed quartic
{\displaystyle {\begin{aligned}p&={\frac {8ac-3b^{2}}{8a^{2}}}\\q&={\frac {b^{3}-4abc+8a^{2}d}{8a^{3}}}\end{aligned}}}
and where
{\displaystyle {\begin{aligned}S&={\frac {1}{2}}{\sqrt {-{\frac {2}{3}}\ p+{\frac {1}{3a}}\left(Q+{\frac {\Delta _{0}}{Q}}\right)}}\\Q&={\sqrt[{3}]{\frac {\Delta _{1}+{\sqrt {\Delta _{1}^{2}-4\Delta _{0}^{3}}}}{2}}}\end{aligned}}}
(if S = 0 or Q = 0, see § Special cases of the formula, below)
with
{\displaystyle {\begin{aligned}\Delta _{0}&=c^{2}-3bd+12ae\\\Delta _{1}&=2c^{3}-9bcd+27b^{2}e+27ad^{2}-72ace\end{aligned}}}
and
{\displaystyle \Delta _{1}^{2}-4\Delta _{0}^{3}=-27\Delta \ ,}
where Δ is the aforementioned discriminant. For the cube root expression for Q, any of the three cube roots in the complex plane can be used, although if one of them is real that is the natural and simplest one to choose. The mathematical expressions of these last four terms are very similar to those of their cubic counterparts.
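As a sanity check, the formula can be evaluated numerically with complex arithmetic, which keeps all square and cube roots defined. This is a sketch only: it takes the principal cube root for Q and does not handle the special cases S = 0 or Q = 0 discussed below.

```python
import cmath

def quartic_roots(a, b, c, d, e):
    """Evaluate the closed-form root formula numerically (principal branches)."""
    p = (8*a*c - 3*b**2) / (8*a**2)
    q = (b**3 - 4*a*b*c + 8*a**2*d) / (8*a**3)
    D0 = c**2 - 3*b*d + 12*a*e
    D1 = 2*c**3 - 9*b*c*d + 27*b**2*e + 27*a*d**2 - 72*a*c*e
    Q = ((D1 + cmath.sqrt(D1**2 - 4*D0**3)) / 2) ** (1/3)
    S = cmath.sqrt(-2*p/3 + (Q + D0/Q) / (3*a)) / 2
    u = cmath.sqrt(-4*S**2 - 2*p + q/S)   # radical for x1, x2
    v = cmath.sqrt(-4*S**2 - 2*p - q/S)   # radical for x3, x4
    return [-b/(4*a) - S + u/2, -b/(4*a) - S - u/2,
            -b/(4*a) + S + v/2, -b/(4*a) + S - v/2]
```

For x⁴ − 10x³ + 35x² − 50x + 24 this returns the roots 1, 2, 3, 4 up to floating-point error.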
==== Special cases of the formula ====
If Δ > 0, the value of Q is a non-real complex number. In this case, either all roots are non-real or they are all real. In the latter case, the value of S is also real, despite being expressed in terms of Q; this is casus irreducibilis of the cubic function extended to the present context of the quartic. One may prefer to express it in a purely real way, by using trigonometric functions, as follows:
{\displaystyle S={\frac {1}{2}}{\sqrt {-{\frac {2}{3}}\ p+{\frac {2}{3a}}{\sqrt {\Delta _{0}}}\cos {\frac {\varphi }{3}}}}}
where
{\displaystyle \varphi =\arccos \left({\frac {\Delta _{1}}{2{\sqrt {\Delta _{0}^{3}}}}}\right).}
If Δ ≠ 0 and Δ0 = 0, the sign of √(Δ1² − 4Δ0³) = √(Δ1²) has to be chosen to have Q ≠ 0; that is, one should define √(Δ1²) as Δ1, maintaining the sign of Δ1.
If S = 0, then one must change the choice of the cube root in Q in order to have S ≠ 0. This is always possible except if the quartic may be factored into
{\displaystyle \left(x+{\tfrac {b}{4a}}\right)^{4}.}
The result is then correct, but misleading because it hides the fact that no cube root is needed in this case. In fact this case may occur only if the numerator of q is zero, in which case the associated depressed quartic is biquadratic; it may thus be solved by the method described below.
If Δ = 0 and Δ0 = 0, and thus also Δ1 = 0, at least three roots are equal to each other, and the roots are rational functions of the coefficients. The triple root x0 is a common root of the quartic and its second derivative 2(6ax² + 3bx + c); it is thus also the unique root of the remainder of the Euclidean division of the quartic by its second derivative, which is a linear polynomial. The simple root x1 can be deduced from x1 + 3x0 = −b/a.
If Δ = 0 and Δ0 ≠ 0, the above expression for the roots is correct but misleading, hiding the fact that the polynomial is reducible and no cube root is needed to represent the roots.
=== Simpler cases ===
==== Reducible quartics ====
Consider the general quartic
{\displaystyle Q(x)=a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}.}
It is reducible if Q(x) = R(x)×S(x), where R(x) and S(x) are non-constant polynomials with rational coefficients (or more generally with coefficients in the same field as the coefficients of Q(x)). Such a factorization will take one of two forms:
{\displaystyle Q(x)=(x-x_{1})(b_{3}x^{3}+b_{2}x^{2}+b_{1}x+b_{0})}
or
{\displaystyle Q(x)=(c_{2}x^{2}+c_{1}x+c_{0})(d_{2}x^{2}+d_{1}x+d_{0}).}
In either case, the roots of Q(x) are the roots of the factors, which may be computed using the formulas for the roots of a quadratic function or cubic function.
Detecting the existence of such factorizations can be done using the resolvent cubic of Q(x). It turns out that:
if we are working over R (that is, if coefficients are restricted to be real numbers) (or, more generally, over some real closed field) then there is always such a factorization;
if we are working over Q (that is, if coefficients are restricted to be rational numbers) then there is an algorithm to determine whether or not Q(x) is reducible and, if it is, how to express it as a product of polynomials of smaller degree.
In fact, several methods of solving quartic equations (Ferrari's method, Descartes' method, and, to a lesser extent, Euler's method) are based upon finding such factorizations.
==== Biquadratic equation ====
If a3 = a1 = 0 then the function
{\displaystyle Q(x)=a_{4}x^{4}+a_{2}x^{2}+a_{0}}
is called a biquadratic function; equating it to zero defines a biquadratic equation, which is easy to solve as follows
Let the auxiliary variable z = x2.
Then Q(x) becomes a quadratic q in z: q(z) = a4z2 + a2z + a0. Let z+ and z− be the roots of q(z). Then the roots of the quartic Q(x) are
{\displaystyle {\begin{aligned}x_{1}&=+{\sqrt {z_{+}}},\\x_{2}&=-{\sqrt {z_{+}}},\\x_{3}&=+{\sqrt {z_{-}}},\\x_{4}&=-{\sqrt {z_{-}}}.\end{aligned}}}
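The substitution z = x2 is straightforward to carry out numerically; a minimal sketch using complex square roots so that negative z± poses no problem:

```python
import cmath

def biquadratic_roots(a4, a2, a0):
    """Roots of a4*x^4 + a2*x^2 + a0 = 0 via the substitution z = x^2."""
    disc = cmath.sqrt(a2*a2 - 4*a4*a0)
    z_plus = (-a2 + disc) / (2*a4)
    z_minus = (-a2 - disc) / (2*a4)
    return [cmath.sqrt(z_plus), -cmath.sqrt(z_plus),
            cmath.sqrt(z_minus), -cmath.sqrt(z_minus)]
```

For example, x⁴ − 5x² + 4 = (x² − 1)(x² − 4) yields the roots ±1 and ±2.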
==== Quasi-palindromic equation ====
The polynomial
{\displaystyle P(x)=a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{1}mx+a_{0}m^{2}}
is almost palindromic, as P(mx) = x4/m2P(m/x) (it is palindromic if m = 1). The change of variables z = x + m/x in P(x)/x2 = 0 produces the quadratic equation a0z2 + a1z + a2 − 2ma0 = 0. Since x2 − xz + m = 0, the quartic equation P(x) = 0 may be solved by applying the quadratic formula twice.
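A sketch of this two-step reduction (the function name is illustrative): solve the quadratic in z = x + m/x, then recover x from x² − zx + m = 0.

```python
import cmath

def quasi_palindromic_roots(a0, a1, a2, m):
    """Roots of a0*x^4 + a1*x^3 + a2*x^2 + a1*m*x + a0*m^2 = 0."""
    # first quadratic: a0*z^2 + a1*z + (a2 - 2*m*a0) = 0
    disc = cmath.sqrt(a1**2 - 4*a0*(a2 - 2*m*a0))
    roots = []
    for sign in (1, -1):
        z = (-a1 + sign*disc) / (2*a0)
        d = cmath.sqrt(z*z - 4*m)        # second quadratic: x^2 - z*x + m = 0
        roots += [(z + d)/2, (z - d)/2]
    return roots
```

For instance, x⁴ − 9x³ + 28x² − 36x + 16 (here m = 4) has roots 1, 2, 2, 4, and each root x0 is paired with m/x0.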
=== Solution methods ===
==== Converting to a depressed quartic ====
For solving purposes, it is generally better to convert the quartic into a depressed quartic by the following simple change of variable. All formulas are simpler and some methods work only in this case. The roots of the original quartic are easily recovered from that of the depressed quartic by the reverse change of variable.
Let
{\displaystyle a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0}
be the general quartic equation we want to solve.
Dividing by a4 provides the equivalent equation x4 + bx3 + cx2 + dx + e = 0, with b = a3/a4, c = a2/a4, d = a1/a4, and e = a0/a4.
Substituting y − b/4 for x gives, after regrouping the terms, the equation y4 + py2 + qy + r = 0,
where
{\displaystyle {\begin{aligned}p&={\frac {8c-3b^{2}}{8}}={\frac {8a_{2}a_{4}-3{a_{3}}^{2}}{8{a_{4}}^{2}}}\\q&={\frac {b^{3}-4bc+8d}{8}}={\frac {{a_{3}}^{3}-4a_{2}a_{3}a_{4}+8a_{1}{a_{4}}^{2}}{8{a_{4}}^{3}}}\\r&={\frac {-3b^{4}+256e-64bd+16b^{2}c}{256}}={\frac {-3{a_{3}}^{4}+256a_{0}{a_{4}}^{3}-64a_{1}a_{3}{a_{4}}^{2}+16a_{2}{a_{3}}^{2}a_{4}}{256{a_{4}}^{4}}}.\end{aligned}}}
If y0 is a root of this depressed quartic, then y0 − b/4 (that is y0 − a3/4a4) is a root of the original quartic and every root of the original quartic can be obtained by this process.
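The depression step can be sketched as follows; the coefficient formulas are the ones displayed above, after dividing through by a4:

```python
def depressed_coeffs(a4, a3, a2, a1, a0):
    """Coefficients (p, q, r) of the depressed quartic y^4 + p*y^2 + q*y + r
    obtained from a4*x^4 + ... + a0 by the substitution x = y - b/4."""
    b, c, d, e = a3/a4, a2/a4, a1/a4, a0/a4
    p = c - 3*b**2/8
    q = b**3/8 - b*c/2 + d
    r = -3*b**4/256 + e - b*d/4 + b**2*c/16
    return p, q, r
```

For x⁴ − 10x³ + 35x² − 50x + 24, whose roots are 1, 2, 3, 4, the shift y = x − 2.5 gives the depressed quartic y⁴ − 2.5y² + 0.5625 with roots ±0.5 and ±1.5.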
==== Ferrari's solution ====
As explained in the preceding section, we may start with the depressed quartic equation
{\displaystyle y^{4}+py^{2}+qy+r=0.}
This depressed quartic can be solved by means of a method discovered by Lodovico Ferrari. The depressed equation may be rewritten (this is easily verified by expanding the square and regrouping all terms in the left-hand side) as
{\displaystyle \left(y^{2}+{\frac {p}{2}}\right)^{2}=-qy-r+{\frac {p^{2}}{4}}.}
Then, we introduce a variable m into the factor on the left-hand side by adding 2y²m + pm + m² to both sides. After regrouping the coefficients of the power of y on the right-hand side, this gives the equation
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m\right)^{2}=2my^{2}-qy+m^{2}+mp+{\frac {p^{2}}{4}}-r,\qquad (1)}
which is equivalent to the original equation, whichever value is given to m.
As the value of m may be arbitrarily chosen, we will choose it in order to complete the square on the right-hand side. This implies that the discriminant in y of this quadratic equation is zero, that is m is a root of the equation
{\displaystyle (-q)^{2}-4(2m)\left(m^{2}+pm+{\frac {p^{2}}{4}}-r\right)=0,}
which may be rewritten as
{\displaystyle 8m^{3}+8pm^{2}+(2p^{2}-8r)m-q^{2}=0.\qquad (1a)}
This is the resolvent cubic of the quartic equation. The value of m may thus be obtained from Cardano's formula. When m is a root of this equation, the right-hand side of equation (1) is the square
{\displaystyle \left({\sqrt {2m}}y-{\frac {q}{2{\sqrt {2m}}}}\right)^{2}.}
However, this induces a division by zero if m = 0. This implies q = 0, and thus that the depressed equation is bi-quadratic, and may be solved by an easier method (see above). This was not a problem at the time of Ferrari, when one solved only explicitly given equations with numeric coefficients. For a general formula that is always true, one thus needs to choose a root of the cubic equation such that m ≠ 0. This is always possible except for the depressed equation y4 = 0.
Now, if m is a root of the cubic equation such that m ≠ 0, equation (1) becomes
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m\right)^{2}=\left(y{\sqrt {2m}}-{\frac {q}{2{\sqrt {2m}}}}\right)^{2}.}
This equation is of the form M2 = N2, which can be rearranged as M2 − N2 = 0 or (M + N)(M − N) = 0. Therefore, equation (1) may be rewritten as
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m+{\sqrt {2m}}y-{\frac {q}{2{\sqrt {2m}}}}\right)\left(y^{2}+{\frac {p}{2}}+m-{\sqrt {2m}}y+{\frac {q}{2{\sqrt {2m}}}}\right)=0.}
This equation is easily solved by applying to each factor the quadratic formula. Solving them we may write the four roots as
{\displaystyle y={\pm _{1}{\sqrt {2m}}\pm _{2}{\sqrt {-\left(2p+2m\pm _{1}{{\sqrt {2}}q \over {\sqrt {m}}}\right)}} \over 2},}
where ±1 and ±2 denote either + or −. As the two occurrences of ±1 must denote the same sign, this leaves four possibilities, one for each root.
Therefore, the solutions of the original quartic equation are
{\displaystyle x=-{a_{3} \over 4a_{4}}+{\pm _{1}{\sqrt {2m}}\pm _{2}{\sqrt {-\left(2p+2m\pm _{1}{{\sqrt {2}}q \over {\sqrt {m}}}\right)}} \over 2}.}
A comparison with the general formula above shows that √2m = 2S.
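A numerical sketch of Ferrari's method: it finds one root of the resolvent cubic (1a) with a small Cardano helper (the helper is an assumption of this sketch, any cubic solver would do) and then splits the depressed quartic into the two quadratic factors above. It assumes q ≠ 0, so that a nonzero m exists.

```python
import cmath

def one_cubic_root(A, B, C, D):
    """One complex root of A*m^3 + B*m^2 + C*m + D = 0 via Cardano's formula."""
    a, b, c = B/A, C/A, D/A
    p = b - a*a/3
    q = 2*a**3/27 - a*b/3 + c
    s = cmath.sqrt((q/2)**2 + (p/3)**3)
    u = (-q/2 + s) ** (1/3)
    if abs(u) < 1e-12:
        u = (-q/2 - s) ** (1/3)   # other Cardano branch
    if abs(u) < 1e-12:            # degenerate case p == q == 0
        return -a/3
    return u - p/(3*u) - a/3

def ferrari_roots(p, q, r):
    """Roots of y^4 + p*y^2 + q*y + r = 0, assuming q != 0 so m != 0."""
    m = one_cubic_root(8, 8*p, 2*p*p - 8*r, -q*q)   # resolvent cubic (1a)
    s = cmath.sqrt(2*m)
    roots = []
    for sign in (1, -1):
        # quadratic factors y^2 +/- s*y + (p/2 + m -/+ q/(2s)) from above
        B, C = sign*s, p/2 + m - sign*q/(2*s)
        d = cmath.sqrt(B*B - 4*C)
        roots += [(-B + d)/2, (-B - d)/2]
    return roots
```

For y⁴ − 7y² − 6y = y(y + 1)(y + 2)(y − 3) the resolvent root m = 4.5 gives the factors (y² + 3y + 2)(y² − 3y) and hence the roots −2, −1, 0, 3.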
==== Descartes' solution ====
Descartes introduced in 1637 the method of finding the roots of a quartic polynomial by factoring it into two quadratic ones. Let
{\displaystyle {\begin{aligned}x^{4}+bx^{3}+cx^{2}+dx+e&=(x^{2}+sx+t)(x^{2}+ux+v)\\&=x^{4}+(s+u)x^{3}+(t+v+su)x^{2}+(sv+tu)x+tv\end{aligned}}}
By equating coefficients, this results in the following system of equations:
{\displaystyle \left\{{\begin{array}{l}b=s+u\\c=t+v+su\\d=sv+tu\\e=tv\end{array}}\right.}
This can be simplified by starting again with the depressed quartic y4 + py2 + qy + r, which can be obtained by substituting y − b/4 for x. Since the coefficient of y3 is 0, we get s = −u, and:
{\displaystyle \left\{{\begin{array}{l}p+u^{2}=t+v\\q=u(t-v)\\r=tv\end{array}}\right.}
One can now eliminate both t and v by doing the following:
{\displaystyle {\begin{aligned}u^{2}(p+u^{2})^{2}-q^{2}&=u^{2}(t+v)^{2}-u^{2}(t-v)^{2}\\&=u^{2}[(t+v+(t-v))(t+v-(t-v))]\\&=u^{2}(2t)(2v)\\&=4u^{2}tv\\&=4u^{2}r\end{aligned}}}
If we set U = u², then solving this equation becomes finding the roots of the resolvent cubic
{\displaystyle U^{3}+2pU^{2}+(p^{2}-4r)U-q^{2}=0,\qquad (2)}
which is done elsewhere. This resolvent cubic is equivalent to the resolvent cubic given above (equation (1a)), as can be seen by substituting U = 2m.
If u is a square root of a non-zero root of this resolvent (such a non-zero root exists except for the quartic x4, which is trivially factored),
{\displaystyle \left\{{\begin{array}{l}s=-u\\2t=p+u^{2}+q/u\\2v=p+u^{2}-q/u\end{array}}\right.}
The symmetries in this solution are as follows. There are three roots of the cubic, corresponding to the three ways that a quartic can be factored into two quadratics, and choosing positive or negative values of u for the square root of U merely exchanges the two quadratics with one another.
The above solution shows that a quartic polynomial with rational coefficients and a zero coefficient on the cubic term is factorable into quadratics with rational coefficients if and only if either the resolvent cubic (2) has a non-zero root which is the square of a rational, or p² − 4r is the square of a rational and q = 0; this can readily be checked using the rational root test.
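Given a nonzero root U of the resolvent cubic (2), the factorization can be written down directly from the formulas above; a minimal sketch (the function name is illustrative):

```python
import cmath

def descartes_factors(p, q, r, U):
    """Given a nonzero root U of resolvent (2), return the coefficient pairs
    (s, t) and (u, v) with y^4 + p*y^2 + q*y + r = (y^2 + s*y + t)(y^2 + u*y + v)."""
    u = cmath.sqrt(U)
    t = (p + U + q/u) / 2   # from 2t = p + u^2 + q/u
    v = (p + U - q/u) / 2   # from 2v = p + u^2 - q/u
    return (-u, t), (u, v)
```

For y⁴ − 7y² − 6y, the resolvent root U = 9 gives u = 3 and the factors (y² − 3y)(y² + 3y + 2).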
==== Euler's solution ====
A variant of the previous method is due to Euler. Unlike the previous methods, both of which use some root of the resolvent cubic, Euler's method uses all of them. Consider a depressed quartic x4 + px2 + qx + r. Observe that, if
x4 + px2 + qx + r = (x2 + sx + t)(x2 − sx + v),
r1 and r2 are the roots of x2 + sx + t,
r3 and r4 are the roots of x2 − sx + v,
then
the roots of x4 + px2 + qx + r are r1, r2, r3, and r4,
r1 + r2 = −s,
r3 + r4 = s.
Therefore, (r1 + r2)(r3 + r4) = −s2. In other words, −(r1 + r2)(r3 + r4) is one of the roots of the resolvent cubic (2) and this suggests that the roots of that cubic are equal to −(r1 + r2)(r3 + r4), −(r1 + r3)(r2 + r4), and −(r1 + r4)(r2 + r3). This is indeed true and it follows from Vieta's formulas. It also follows from Vieta's formulas, together with the fact that we are working with a depressed quartic, that r1 + r2 + r3 + r4 = 0. (Of course, this also follows from the fact that r1 + r2 + r3 + r4 = −s + s.) Therefore, if α, β, and γ are the roots of the resolvent cubic, then the numbers r1, r2, r3, and r4 are such that
{\displaystyle \left\{{\begin{array}{l}r_{1}+r_{2}+r_{3}+r_{4}=0\\(r_{1}+r_{2})(r_{3}+r_{4})=-\alpha \\(r_{1}+r_{3})(r_{2}+r_{4})=-\beta \\(r_{1}+r_{4})(r_{2}+r_{3})=-\gamma {\text{.}}\end{array}}\right.}
It is a consequence of the first two equations that r1 + r2 is a square root of α and that r3 + r4 is the other square root of α. For the same reason,
r1 + r3 is a square root of β,
r2 + r4 is the other square root of β,
r1 + r4 is a square root of γ,
r2 + r3 is the other square root of γ.
Therefore, the numbers r1, r2, r3, and r4 are such that
{\displaystyle \left\{{\begin{array}{l}r_{1}+r_{2}+r_{3}+r_{4}=0\\r_{1}+r_{2}={\sqrt {\alpha }}\\r_{1}+r_{3}={\sqrt {\beta }}\\r_{1}+r_{4}={\sqrt {\gamma }}{\text{;}}\end{array}}\right.}
the sign of the square roots will be dealt with below. The only solution of this system is:
{\displaystyle \left\{{\begin{array}{l}r_{1}={\frac {{\sqrt {\alpha }}+{\sqrt {\beta }}+{\sqrt {\gamma }}}{2}}\\[2mm]r_{2}={\frac {{\sqrt {\alpha }}-{\sqrt {\beta }}-{\sqrt {\gamma }}}{2}}\\[2mm]r_{3}={\frac {-{\sqrt {\alpha }}+{\sqrt {\beta }}-{\sqrt {\gamma }}}{2}}\\[2mm]r_{4}={\frac {-{\sqrt {\alpha }}-{\sqrt {\beta }}+{\sqrt {\gamma }}}{2}}{\text{.}}\end{array}}\right.}
Since, in general, there are two choices for each square root, it might look as if this provides 8 (= 23) choices for the set {r1, r2, r3, r4}, but, in fact, it provides no more than 2 such choices, because the consequence of replacing one of the square roots by the symmetric one is that the set {r1, r2, r3, r4} becomes the set {−r1, −r2, −r3, −r4}.
In order to determine the right sign of the square roots, one simply chooses some square root for each of the numbers α, β, and γ and uses them to compute the numbers r1, r2, r3, and r4 from the previous equalities. Then, one computes the number √α√β√γ. Since α, β, and γ are the roots of (2), it is a consequence of Vieta's formulas that their product is equal to q² and therefore that √α√β√γ = ±q. But a straightforward computation shows that
√α√β√γ = r1r2r3 + r1r2r4 + r1r3r4 + r2r3r4.
If this number is −q, then the choice of the square roots was a good one (again, by Vieta's formulas); otherwise, the roots of the polynomial will be −r1, −r2, −r3, and −r4, which are the numbers obtained if one of the square roots is replaced by the symmetric one (or, what amounts to the same thing, if each of the three square roots is replaced by the symmetric one).
This argument suggests another way of choosing the square roots:
pick any square root √α of α and any square root √β of β;
define √γ as −q/(√α√β).
Of course, this will make no sense if α or β is equal to 0, but 0 is a root of (2) only when q = 0, that is, only when we are dealing with a biquadratic equation, in which case there is a much simpler approach.
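The sign rule can be checked numerically: starting from known roots of a depressed quartic, build α and β as above, force √γ = −q/(√α√β), and verify that the half-sum formulas reproduce the original roots. The function below is purely a consistency check, not a solver.

```python
import cmath

def euler_reconstruct(r1, r2, r3, r4, q):
    """Rebuild the roots of a depressed quartic from the resolvent roots,
    using the sign rule sqrt(alpha)*sqrt(beta)*sqrt(gamma) = -q."""
    alpha = -(r1 + r2) * (r3 + r4)
    beta = -(r1 + r3) * (r2 + r4)
    sa, sb = cmath.sqrt(alpha), cmath.sqrt(beta)
    sg = -q / (sa * sb)           # fixes the sign of sqrt(gamma)
    return [( sa + sb + sg) / 2,
            ( sa - sb - sg) / 2,
            (-sa + sb - sg) / 2,
            (-sa - sb + sg) / 2]
```

For y⁴ − 7y² − 6y (roots −2, −1, 0, 3 and q = −6) the reconstruction returns the same set of roots, as the sign rule guarantees; choosing the other square roots would return the negated set instead.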
==== Solving by Lagrange resolvent ====
The symmetric group S4 on four elements has the Klein four-group as a normal subgroup. This suggests using a resolvent cubic whose roots may be variously described as a discrete Fourier transform or a Hadamard matrix transform of the roots; see Lagrange resolvents for the general method. Denote by xi, for i from 0 to 3, the four roots of x4 + bx3 + cx2 + dx + e. If we set
{\displaystyle {\begin{aligned}s_{0}&={\tfrac {1}{2}}(x_{0}+x_{1}+x_{2}+x_{3}),\\[4pt]s_{1}&={\tfrac {1}{2}}(x_{0}-x_{1}+x_{2}-x_{3}),\\[4pt]s_{2}&={\tfrac {1}{2}}(x_{0}+x_{1}-x_{2}-x_{3}),\\[4pt]s_{3}&={\tfrac {1}{2}}(x_{0}-x_{1}-x_{2}+x_{3}),\end{aligned}}}
then since the transformation is an involution we may express the roots in terms of the four si in exactly the same way. Since we know the value s0 = −b/2, we only need the values for s1, s2 and s3. These are the roots of the polynomial
{\displaystyle (s^{2}-{s_{1}}^{2})(s^{2}-{s_{2}}^{2})(s^{2}-{s_{3}}^{2}).}
Substituting the si by their values in terms of the xi, this polynomial may be expanded in a polynomial in s whose coefficients are symmetric polynomials in the xi. By the fundamental theorem of symmetric polynomials, these coefficients may be expressed as polynomials in the coefficients of the monic quartic. If, for simplification, we suppose that the quartic is depressed, that is b = 0, this results in the polynomial
{\displaystyle s^{6}+2cs^{4}+(c^{2}-4e)s^{2}-d^{2}.\qquad (3)}
This polynomial is of degree six, but only of degree three in s², and so the corresponding equation is solvable by the method described in the article about cubic function. By substituting the roots in the expression of the xi in terms of the si, we obtain expressions for the roots. In fact we obtain, apparently, several expressions, depending on the numbering of the roots of the cubic polynomial and of the signs given to their square roots. All these different expressions may be deduced from one of them by simply changing the numbering of the xi.
These expressions are unnecessarily complicated, involving the cubic roots of unity, which can be avoided as follows. If s is any non-zero root of (3), and if we set
{\displaystyle {\begin{aligned}F_{1}(x)&=x^{2}+sx+{\frac {c}{2}}+{\frac {s^{2}}{2}}-{\frac {d}{2s}}\\F_{2}(x)&=x^{2}-sx+{\frac {c}{2}}+{\frac {s^{2}}{2}}+{\frac {d}{2s}}\end{aligned}}}
then
{\displaystyle F_{1}(x)\times F_{2}(x)=x^{4}+cx^{2}+dx+e.}
We therefore can solve the quartic by solving for s and then solving for the roots of the two factors using the quadratic formula.
This gives exactly the same formula for the roots as the one provided by Descartes' method.
==== Solving with algebraic geometry ====
There is an alternative solution using algebraic geometry. In brief, one interprets the roots as the intersection of two quadratic curves, then finds the three reducible quadratic curves (pairs of lines) that pass through these points (this corresponds to the resolvent cubic, the pairs of lines being the Lagrange resolvents), and then uses these linear equations to solve the quadratic.
The four roots of the depressed quartic x4 + px2 + qx + r = 0 may also be expressed as the x coordinates of the intersections of the two quadratic equations y2 + py + qx + r = 0 and y − x2 = 0, i.e., using the substitution y = x2. That two quadratics intersect in four points is an instance of Bézout's theorem. Explicitly, the four points are Pi ≔ (xi, xi2) for the four roots xi of the quartic.
These four points are not collinear because they lie on the irreducible quadratic y = x2 and thus there is a 1-parameter family of quadratics (a pencil of curves) passing through these points. Writing the projectivization of the two quadratics as quadratic forms in three variables:
{\displaystyle {\begin{aligned}F_{1}(X,Y,Z)&:=Y^{2}+pYZ+qXZ+rZ^{2},\\F_{2}(X,Y,Z)&:=YZ-X^{2}\end{aligned}}}
the pencil is given by the forms λF1 + μF2 for any point [λ, μ] in the projective line — in other words, where λ and μ are not both zero, and multiplying a quadratic form by a constant does not change its quadratic curve of zeros.
This pencil contains three reducible quadratics, each corresponding to a pair of lines, each passing through two of the four points, which can be done
{\displaystyle \textstyle {\binom {4}{2}}}
= 6 different ways. Denote these Q1 = L12 + L34, Q2 = L13 + L24, and Q3 = L14 + L23. Given any two of these, their intersection has exactly the four points.
The reducible quadratics, in turn, may be determined by expressing the quadratic form λF1 + μF2 as a 3×3 matrix: reducible quadratics correspond to this matrix being singular, which is equivalent to its determinant being zero, and the determinant is a homogeneous degree three polynomial in λ and μ and corresponds to the resolvent cubic.
== See also ==
Linear function – Linear map or polynomial function of degree one
Quadratic function – Polynomial function of degree two
Cubic function – Polynomial function of degree 3
Quintic function – Polynomial function of degree 5
== Notes ==
^α For the purposes of this article, e is used as a variable as opposed to its conventional use as Euler's number (except when otherwise specified).
== References ==
== Further reading ==
Carpenter, W. (1966). "On the solution of the real quartic". Mathematics Magazine. 39 (1): 28–30. doi:10.2307/2688990. JSTOR 2688990.
Yacoub, M.D.; Fraidenraich, G. (July 2012). "A solution to the quartic equation". Mathematical Gazette. 96: 271–275. doi:10.1017/s002555720000454x. S2CID 124512391.
== External links ==
Quartic formula as four single equations at PlanetMath.
Ferrari's achievement
In mathematics, an Euler–Cauchy equation, or Cauchy–Euler equation, or simply Euler's equation, is a linear homogeneous ordinary differential equation with variable coefficients. It is sometimes referred to as an equidimensional equation. Because of its particularly simple equidimensional structure, the differential equation can be solved explicitly.
== The equation ==
Let y(n)(x) be the nth derivative of the unknown function y(x). Then a Cauchy–Euler equation of order n has the form
{\displaystyle a_{n}x^{n}y^{(n)}(x)+a_{n-1}x^{n-1}y^{(n-1)}(x)+\dots +a_{0}y(x)=0.}
The substitution x = e^u (that is, u = ln(x); for x < 0 one might replace all instances of x by |x|, extending the solution's domain to ℝ ∖ {0}) can be used to reduce this equation to a linear differential equation with constant coefficients. Alternatively, the trial solution y = x^m can be used to solve the equation directly, yielding the basic solutions.
=== Second order – solving through trial solution ===
The most common Cauchy–Euler equation is the second-order equation, which appears in a number of physics and engineering applications, such as when solving Laplace's equation in polar coordinates. The second order Cauchy–Euler equation is
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+ax{\frac {dy}{dx}}+by=0.}
We assume a trial solution y = x^m.
Differentiating gives
{\displaystyle {\frac {dy}{dx}}=mx^{m-1}}
and
{\displaystyle {\frac {d^{2}y}{dx^{2}}}=m\left(m-1\right)x^{m-2}.}
Substituting into the original equation leads to requiring that
{\displaystyle x^{2}\left(m\left(m-1\right)x^{m-2}\right)+ax\left(mx^{m-1}\right)+b\left(x^{m}\right)=0}
Rearranging and factoring gives the indicial equation
{\displaystyle m^{2}+\left(a-1\right)m+b=0.}
We then solve for m. There are three cases of interest:
Case 1 of two distinct roots, m1 and m2;
Case 2 of one real repeated root, m;
Case 3 of complex roots, α ± βi.
In case 1, the solution is
{\displaystyle y=c_{1}x^{m_{1}}+c_{2}x^{m_{2}}}
In case 2, the solution is
{\displaystyle y=c_{1}x^{m}\ln(x)+c_{2}x^{m}}
To get to this solution, the method of reduction of order must be applied, after having found one solution y = xm.
In case 3, the solution is
{\displaystyle y=c_{1}x^{\alpha }\cos(\beta \ln(x))+c_{2}x^{\alpha }\sin(\beta \ln(x))}
where α = Re(m) and β = Im(m), for c1, c2 ∈ ℝ.
This form of the solution is derived by setting x = e^t and using Euler's formula.
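The three cases can be dispatched by solving the indicial equation; a minimal sketch (function name and return values are illustrative):

```python
import cmath

def indicial_cases(a, b):
    """Classify x^2 y'' + a x y' + b y = 0 by the roots of m^2 + (a-1)m + b = 0."""
    disc = (a - 1)**2 - 4*b
    m1 = (-(a - 1) + cmath.sqrt(disc)) / 2
    m2 = (-(a - 1) - cmath.sqrt(disc)) / 2
    if disc > 0:
        return "distinct real", (m1.real, m2.real)       # y = c1 x^m1 + c2 x^m2
    if disc == 0:
        return "repeated", (m1.real,)                    # y = (c1 ln x + c2) x^m
    return "complex", (m1.real, abs(m1.imag))            # (alpha, beta)
```

For example, x²y'' − 2y = 0 gives m² − m − 2 = 0 with roots 2 and −1, so y = c1x² + c2x⁻¹; x²y'' + xy' + y = 0 gives m = ±i, so y = c1cos(ln x) + c2sin(ln x).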
=== Second order – solution through change of variables ===
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+ax{\frac {dy}{dx}}+by=0}
We operate the variable substitution defined by
{\displaystyle t=\ln(x).}
{\displaystyle y(x)=\varphi (\ln(x))=\varphi (t).}
Differentiating gives
{\displaystyle {\frac {dy}{dx}}={\frac {1}{x}}{\frac {d\varphi }{dt}}}
{\displaystyle {\frac {d^{2}y}{dx^{2}}}={\frac {1}{x^{2}}}\left({\frac {d^{2}\varphi }{dt^{2}}}-{\frac {d\varphi }{dt}}\right).}
Substituting φ(t) for y(x), the differential equation becomes
{\displaystyle {\frac {d^{2}\varphi }{dt^{2}}}+(a-1){\frac {d\varphi }{dt}}+b\varphi =0.}
This equation in φ(t) is solved via its characteristic polynomial
{\displaystyle \lambda ^{2}+(a-1)\lambda +b=0.}
Now let λ1 and λ2 denote the two roots of this polynomial. We analyze the case in which there are distinct roots and the case in which there is a repeated root:
If the roots are distinct, the general solution is
{\displaystyle \varphi (t)=c_{1}e^{\lambda _{1}t}+c_{2}e^{\lambda _{2}t},}
where the exponentials may be complex.
If the roots are equal, the general solution is
{\displaystyle \varphi (t)=c_{1}e^{\lambda _{1}t}+c_{2}te^{\lambda _{1}t}.}
In both cases, the solution y(x) can be found by setting t = ln(x).
Hence, in the first case,
$$y(x)=c_{1}x^{\lambda _{1}}+c_{2}x^{\lambda _{2}},$$
and in the second case,
$$y(x)=c_{1}x^{\lambda _{1}}+c_{2}\ln(x)x^{\lambda _{1}}.$$
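The root cases above can be sketched in code. This is a minimal illustration (not from the article) that classifies the roots of the characteristic polynomial λ² + (a − 1)λ + b = 0 and prints the corresponding general-solution form; the function name is hypothetical.

```python
import cmath

def cauchy_euler_general_solution(a, b):
    """General solution of x^2 y'' + a x y' + b y = 0 on x > 0,
    via the characteristic polynomial lambda^2 + (a-1) lambda + b = 0."""
    disc = (a - 1) ** 2 - 4 * b
    l1 = (-(a - 1) + cmath.sqrt(disc)) / 2
    l2 = (-(a - 1) - cmath.sqrt(disc)) / 2
    if abs(l1 - l2) < 1e-12:                      # repeated root
        return f"y = c1*x**({l1.real:g}) + c2*ln(x)*x**({l1.real:g})"
    if abs(l1.imag) < 1e-12:                      # distinct real roots
        return f"y = c1*x**({l1.real:g}) + c2*x**({l2.real:g})"
    alpha, beta = l1.real, abs(l1.imag)           # complex conjugate roots
    return (f"y = c1*x**({alpha:g})*cos({beta:g}*ln(x))"
            f" + c2*x**({alpha:g})*sin({beta:g}*ln(x))")

print(cauchy_euler_general_solution(-3, 3))  # roots 1 and 3, as in the example below
```

For a = −3, b = 3 the characteristic polynomial is λ² − 4λ + 3, giving roots 1 and 3 and the power-law solution of the worked example later in this section.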
=== Second order – solution using differential operators ===
Observe that we can write the second-order Cauchy–Euler equation in terms of a linear differential operator $L$ as
$$Ly=(x^{2}D^{2}+axD+bI)y=0,$$
where $D={\frac {d}{dx}}$ and $I$ is the identity operator.
We express the above operator as a polynomial in $xD$, rather than $D$. By the product rule,
$$(xD)^{2}=xD(xD)=x(D+xD^{2})=x^{2}D^{2}+xD.$$
So,
$$L=(xD)^{2}+(a-1)(xD)+bI.$$
We can then use the quadratic formula to factor this operator into linear terms. More specifically, let $\lambda _{1},\lambda _{2}$ denote the (possibly equal) values of
$$-{\frac {a-1}{2}}\pm {\frac {1}{2}}{\sqrt {(a-1)^{2}-4b}}.$$
Then,
$$L=(xD-\lambda _{1}I)(xD-\lambda _{2}I).$$
It can be seen that these factors commute, that is
$$(xD-\lambda _{1}I)(xD-\lambda _{2}I)=(xD-\lambda _{2}I)(xD-\lambda _{1}I).$$
Hence, if $\lambda _{1}\neq \lambda _{2}$, the solution to $Ly=0$ is a linear combination of the solutions to each of $(xD-\lambda _{1}I)y=0$ and $(xD-\lambda _{2}I)y=0$, which can be solved by separation of variables.
Indeed, with $i\in \{1,2\}$, we have
$$(xD-\lambda _{i}I)y=x{\frac {dy}{dx}}-\lambda _{i}y=0.$$
So,
$$\begin{aligned}x{\frac {dy}{dx}}&=\lambda _{i}y\\\int {\frac {1}{y}}\,dy&=\lambda _{i}\int {\frac {1}{x}}\,dx\\\ln y&=\lambda _{i}\ln x+C\\y&=c_{i}e^{\lambda _{i}\ln x}=c_{i}x^{\lambda _{i}}.\end{aligned}$$
Thus, the general solution is
$$y=c_{1}x^{\lambda _{1}}+c_{2}x^{\lambda _{2}}.$$
If $\lambda =\lambda _{1}=\lambda _{2}$, then we instead need to consider the solution of
$$(xD-\lambda I)^{2}y=0.$$
Let $z=(xD-\lambda I)y$, so that we can write
$$(xD-\lambda I)^{2}y=(xD-\lambda I)z=0.$$
As before, the solution of $(xD-\lambda I)z=0$ is of the form $z=c_{1}x^{\lambda }$.
So, we are left to solve
$$(xD-\lambda I)y=x{\frac {dy}{dx}}-\lambda y=c_{1}x^{\lambda }.$$
We then rewrite the equation as
$${\frac {dy}{dx}}-{\frac {\lambda }{x}}y=c_{1}x^{\lambda -1},$$
which one can recognize as being amenable to solution via an integrating factor.
Choose $M(x)=x^{-\lambda }$ as our integrating factor. Multiplying our equation through by $M(x)$ and recognizing the left-hand side as the derivative of a product, we then obtain
$$\begin{aligned}{\frac {d}{dx}}(x^{-\lambda }y)&=c_{1}x^{-1}\\x^{-\lambda }y&=\int c_{1}x^{-1}\,dx\\y&=x^{\lambda }(c_{1}\ln(x)+c_{2})\\&=c_{1}\ln(x)x^{\lambda }+c_{2}x^{\lambda }.\end{aligned}$$
=== Example ===
Given
$$x^{2}u''-3xu'+3u=0\,,$$
we substitute the simple solution $x^{m}$:
$$x^{2}\left(m\left(m-1\right)x^{m-2}\right)-3x\left(mx^{m-1}\right)+3x^{m}=m\left(m-1\right)x^{m}-3mx^{m}+3x^{m}=\left(m^{2}-4m+3\right)x^{m}=0\,.$$
For $x^{m}$ to be a solution, either $x=0$, which gives the trivial solution, or the coefficient of $x^{m}$ is zero. Solving the quadratic equation, we get $m=1,3$. The general solution is therefore
$$u=c_{1}x+c_{2}x^{3}\,.$$
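As a quick numerical check (not part of the article), the general solution can be verified by substituting exact derivatives back into the equation; the helper name `residual` is hypothetical.

```python
def residual(u, du, d2u, x):
    # residual of x^2 u'' - 3 x u' + 3 u = 0 at the point x
    return x**2 * d2u(x) - 3 * x * du(x) + 3 * u(x)

# u(x) = c1*x + c2*x^3 with arbitrarily chosen constants
c1, c2 = 2.0, -5.0
u   = lambda x: c1 * x + c2 * x**3
du  = lambda x: c1 + 3 * c2 * x**2
d2u = lambda x: 6 * c2 * x

for x in (0.5, 1.0, 3.0):
    assert abs(residual(u, du, d2u, x)) < 1e-9
print("residual vanishes at all test points")
```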
== Difference equation analogue ==
There is a difference equation analogue to the Cauchy–Euler equation. For a fixed $m>0$, define the sequence $f_{m}(n)$ as
$$f_{m}(n):=n(n+1)\cdots (n+m-1).$$
Applying the difference operator to $f_{m}$, we find that
$$\begin{aligned}Df_{m}(n)&=f_{m}(n+1)-f_{m}(n)\\&=m(n+1)(n+2)\cdots (n+m-1)={\frac {m}{n}}f_{m}(n).\end{aligned}$$
If we do this k times, we find that
$$\begin{aligned}f_{m}^{(k)}(n)&={\frac {m(m-1)\cdots (m-k+1)}{n(n+1)\cdots (n+k-1)}}f_{m}(n)\\&=m(m-1)\cdots (m-k+1){\frac {f_{m}(n)}{f_{k}(n)}},\end{aligned}$$
where the superscript (k) denotes applying the difference operator k times. Comparing this to the fact that the k-th derivative of $x^{m}$ equals
$$m(m-1)\cdots (m-k+1){\frac {x^{m}}{x^{k}}}$$
suggests that we can solve the N-th order difference equation
$$f_{N}(n)y^{(N)}(n)+a_{N-1}f_{N-1}(n)y^{(N-1)}(n)+\cdots +a_{0}y(n)=0,$$
in a similar manner to the differential equation case. Indeed, substituting the trial solution $y(n)=f_{m}(n)$ brings us to the same situation as the differential equation case,
$$m(m-1)\cdots (m-N+1)+a_{N-1}m(m-1)\cdots (m-N+2)+\dots +a_{1}m+a_{0}=0.$$
One may now proceed as in the differential equation case, since the general solution of an N-th order linear difference equation is also the linear combination of N linearly independent solutions. Applying reduction of order in case of a multiple root m1 will yield expressions involving a discrete version of ln,
$$\varphi (n)=\sum _{k=1}^{n}{\frac {1}{k-m_{1}}}.$$
(Compare with: $\ln(x-m_{1})=\int _{1+m_{1}}^{x}{\frac {dt}{t-m_{1}}}.$)
In cases where fractions become involved, one may use
$$f_{m}(n):={\frac {\Gamma (n+m)}{\Gamma (n)}}$$
instead (or simply use it in all cases), which coincides with the definition before for integer m.
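The rising-factorial sequence and the difference-operator identity above are easy to check numerically. A minimal sketch (function names are illustrative):

```python
from math import gamma

def f(m, n):
    """f_m(n) = n (n+1) ... (n+m-1), the rising factorial, for integer m."""
    out = 1
    for j in range(m):
        out *= n + j
    return out

def f_gamma(m, n):
    # Gamma-based definition, which also makes sense for non-integer m
    return gamma(n + m) / gamma(n)

m, n = 4, 7
# D f_m(n) = f_m(n+1) - f_m(n) = (m/n) f_m(n)
assert f(m, n + 1) - f(m, n) == m * f(m, n) // n
# both definitions agree for integer m
assert abs(f_gamma(m, n) - f(m, n)) < 1e-6
print(f(m, n))  # 7*8*9*10 = 5040
```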
== See also ==
Hypergeometric differential equation
Cauchy–Euler operator
== References ==
== Bibliography ==
Weisstein, Eric W. "Cauchy–Euler equation". MathWorld.
Cubic equations of state are a specific class of thermodynamic models for modeling the pressure of a gas as a function of temperature and density, which can be rewritten as a cubic function of the molar volume.
Equations of state are generally applied in the fields of physical chemistry and chemical engineering, particularly in the modeling of vapor–liquid equilibrium and chemical engineering process design.
== Van der Waals equation of state ==
The van der Waals equation of state may be written as
$$\left(p+{\frac {a}{V_{\text{m}}^{2}}}\right)\left(V_{\text{m}}-b\right)=RT$$
where $T$ is the absolute temperature, $p$ is the pressure, $V_{\text{m}}$ is the molar volume and $R$ is the universal gas constant. Note that $V_{\text{m}}=V/n$, where $V$ is the volume, and $n=N/N_{\text{A}}$, where $n$ is the number of moles, $N$ is the number of particles, and $N_{\text{A}}$ is the Avogadro constant. These definitions apply to all equations of state below as well.
Proposed in 1873, the van der Waals equation of state was one of the first to perform markedly better than the ideal gas law. In this equation, $a$ is usually called the attraction parameter and $b$ the repulsion parameter (or the effective molecular volume). While the equation is definitely superior to the ideal gas law and does predict the formation of a liquid phase, its agreement with experimental data for vapor–liquid equilibria is limited. The van der Waals equation is commonly referenced in textbooks and papers for historical and other reasons, but other equations of only slightly greater complexity have since been developed, many of which are far more accurate.
The van der Waals equation may be considered as an ideal gas law which has been "improved" by the inclusion of two non-ideal contributions to the equation. Consider the van der Waals equation in the form
$$p={\frac {RT}{V_{\text{m}}-b}}-{\frac {a}{V_{\text{m}}^{2}}}$$
as compared to the ideal gas equation
$$p={\frac {RT}{V_{\text{m}}}}$$
The form of the van der Waals equation can be motivated as follows:
Molecules are thought of as particles which occupy a finite volume. Thus the physical volume is not accessible to all molecules at any given moment, raising the pressure slightly compared to what would be expected for point particles. Thus ($V_{\text{m}}-b$), an "effective" molar volume, is used instead of $V_{\text{m}}$ in the first term.
While ideal gas molecules do not interact, real molecules will exhibit attractive van der Waals forces if they are sufficiently close together. The attractive forces, which are proportional to the density $\rho $, tend to retard the collisions that molecules have with the container walls and lower the pressure. The number of collisions that are so affected is also proportional to the density. Thus, the pressure is lowered by an amount proportional to $\rho ^{2}$, or inversely proportional to the squared molar volume.
The substance-specific constants $a$ and $b$ can be calculated from the critical properties $p_{\text{c}}$ and $V_{\text{c}}$ (noting that $V_{\text{c}}$ is the molar volume at the critical point and $p_{\text{c}}$ is the critical pressure) as:
$$a=3p_{\text{c}}V_{\text{c}}^{2},\qquad b={\frac {V_{\text{c}}}{3}}.$$
Expressions for $(a,b)$ written as functions of $(T_{\text{c}},p_{\text{c}})$ may also be obtained and are often used to parameterize the equation because the critical temperature and pressure are readily accessible to experiment. They are
$$a={\frac {27(RT_{\text{c}})^{2}}{64p_{\text{c}}}},\qquad b={\frac {RT_{\text{c}}}{8p_{\text{c}}}}.$$
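These parameterizations are straightforward to evaluate. A minimal sketch in SI units; the critical constants below are illustrative values approximately those of CO2, not taken from this article:

```python
R = 8.314462618  # J/(mol K), universal gas constant

def vdw_parameters(Tc, pc):
    """van der Waals a and b from critical temperature (K) and pressure (Pa)."""
    a = 27 * (R * Tc) ** 2 / (64 * pc)  # Pa m^6 / mol^2
    b = R * Tc / (8 * pc)               # m^3 / mol
    return a, b

# illustrative critical constants, roughly CO2: Tc ~ 304.13 K, pc ~ 7.3773 MPa
a, b = vdw_parameters(304.13, 7.3773e6)
print(a, b)
```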
With the reduced state variables, i.e. $V_{\text{r}}=V_{\text{m}}/V_{\text{c}}$, $P_{\text{r}}=p/p_{\text{c}}$ and $T_{\text{r}}=T/T_{\text{c}}$, the reduced form of the van der Waals equation can be formulated:
$$\left(P_{\text{r}}+{\frac {3}{V_{\text{r}}^{2}}}\right)\left(3V_{\text{r}}-1\right)=8T_{\text{r}}$$
The benefit of this form is that for given $T_{\text{r}}$ and $P_{\text{r}}$, the reduced volume of the liquid and gas can be calculated directly using Cardano's method for the reduced cubic form:
$$V_{\text{r}}^{3}-\left({\frac {1}{3}}+{\frac {8T_{\text{r}}}{3P_{\text{r}}}}\right)V_{\text{r}}^{2}+{\frac {3V_{\text{r}}}{P_{\text{r}}}}-{\frac {1}{P_{\text{r}}}}=0$$
For $P_{\text{r}}<1$ and $T_{\text{r}}<1$, the system is in a state of vapor–liquid equilibrium. In that situation, the reduced cubic equation of state yields 3 solutions. The largest and the smallest solution are the gas and liquid reduced volume. In this situation, the Maxwell construction is sometimes used to model the pressure as a function of molar volume.
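The reduced cubic can be solved numerically instead of via Cardano's formulas. A minimal sketch using numpy's polynomial root finder (an implementation choice, not part of the article):

```python
import numpy as np

def reduced_vdw_volumes(Tr, Pr):
    """Real roots of the reduced van der Waals cubic
    Vr^3 - (1/3 + 8 Tr/(3 Pr)) Vr^2 + (3/Pr) Vr - 1/Pr = 0,
    sorted ascending. Below the critical point the smallest root is the
    liquid and the largest the vapor reduced volume."""
    coeffs = [1.0, -(1.0 / 3.0 + 8.0 * Tr / (3.0 * Pr)), 3.0 / Pr, -1.0 / Pr]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

vols = reduced_vdw_volumes(Tr=0.9, Pr=0.5)
print(vols)  # three real roots: liquid, unstable middle, vapor
```

All physical roots satisfy V_r > 1/3, since 3V_r − 1 must be positive in the reduced equation.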
The compressibility factor $Z=PV_{\text{m}}/RT$ is often used to characterize non-ideal behavior. For the van der Waals equation in reduced form, this becomes
$$Z={\frac {V_{\text{r}}}{V_{\text{r}}-{\frac {1}{3}}}}-{\frac {9}{8V_{\text{r}}T_{\text{r}}}}$$
At the critical point, $Z_{\text{c}}=3/8=0.375$.
== Redlich–Kwong equation of state ==
Introduced in 1949, the Redlich–Kwong equation of state was considered to be a notable improvement to the van der Waals equation. It is still of interest primarily due to its relatively simple form.
While superior to the van der Waals equation in some respects, it performs poorly with respect to the liquid phase and thus cannot be used for accurately calculating vapor–liquid equilibria. However, it can be used in conjunction with separate liquid-phase correlations for this purpose. The equation is given below, as are relationships between its parameters and the critical constants:
$$\begin{aligned}p&={\frac {R\,T}{V_{\text{m}}-b}}-{\frac {a}{{\sqrt {T}}\,V_{\text{m}}\left(V_{\text{m}}+b\right)}}\\a&={\frac {\Omega _{a}\,R^{2}T_{\text{c}}^{\frac {5}{2}}}{p_{\text{c}}}}\approx 0.42748{\frac {R^{2}\,T_{\text{c}}^{\frac {5}{2}}}{p_{\text{c}}}}\\b&={\frac {\Omega _{b}\,RT_{\text{c}}}{p_{\text{c}}}}\approx 0.08664{\frac {R\,T_{\text{c}}}{p_{\text{c}}}}\\\Omega _{a}&=\left[9\left(2^{1/3}-1\right)\right]^{-1}\approx 0.42748\\\Omega _{b}&={\frac {2^{1/3}-1}{3}}\approx 0.08664\end{aligned}$$
Another, equivalent form of the Redlich–Kwong equation is the expression of the model's compressibility factor:
$$Z={\frac {pV_{\text{m}}}{RT}}={\frac {V_{\text{m}}}{V_{\text{m}}-b}}-{\frac {a}{RT^{3/2}\left(V_{\text{m}}+b\right)}}$$
The Redlich–Kwong equation is adequate for calculation of gas phase properties when the reduced pressure (defined in the previous section) is less than about one-half of the reduced temperature,
$$P_{\text{r}}<{\frac {T}{2T_{\text{c}}}}.$$
The Redlich–Kwong equation is consistent with the theorem of corresponding states. When the equation is expressed in reduced form, an identical equation is obtained for all gases:
$$P_{\text{r}}={\frac {3T_{\text{r}}}{V_{\text{r}}-b'}}-{\frac {1}{b'{\sqrt {T_{\text{r}}}}V_{\text{r}}\left(V_{\text{r}}+b'\right)}}$$
where $b'$ is:
$$b'=2^{1/3}-1\approx 0.25992$$
In addition, the compressibility factor at the critical point is the same for every substance:
$$Z_{\text{c}}={\frac {p_{\text{c}}V_{\text{c}}}{RT_{\text{c}}}}=1/3\approx 0.33333$$
This is an improvement over the van der Waals equation prediction of the critical compressibility factor, which is $Z_{\text{c}}=3/8=0.375$. Typical experimental values are $Z_{\text{c}}=0.274$ (carbon dioxide), $Z_{\text{c}}=0.235$ (water), and $Z_{\text{c}}=0.29$ (nitrogen).
== Soave modification of Redlich–Kwong ==
A modified form of the Redlich–Kwong equation was proposed by Soave. It takes the form
$$p={\frac {R\,T}{V_{\text{m}}-b}}-{\frac {a\alpha }{V_{\text{m}}\left(V_{\text{m}}+b\right)}}$$
$$a={\frac {\Omega _{a}\,R^{2}T_{\text{c}}^{2}}{P_{\text{c}}}}={\frac {0.42748\,R^{2}T_{\text{c}}^{2}}{P_{\text{c}}}},\qquad b={\frac {\Omega _{b}\,RT_{\text{c}}}{P_{\text{c}}}}={\frac {0.08664\,RT_{\text{c}}}{P_{\text{c}}}}$$
$$\alpha =\left(1+\left(0.48508+1.55171\,\omega -0.15613\,\omega ^{2}\right)\left(1-T_{\text{r}}^{0.5}\right)\right)^{2},\qquad T_{\text{r}}={\frac {T}{T_{\text{c}}}}$$
$$\Omega _{a}=\left[9\left(2^{1/3}-1\right)\right]^{-1}\approx 0.42748,\qquad \Omega _{b}={\frac {2^{1/3}-1}{3}}\approx 0.08664$$
where ω is the acentric factor for the species.
The formulation for $\alpha $ above is actually due to Graboski and Daubert. The original formulation from Soave is:
$$\alpha =\left(1+\left(0.480+1.574\,\omega -0.176\,\omega ^{2}\right)\left(1-T_{\text{r}}^{0.5}\right)\right)^{2}$$
for hydrogen:
$$\alpha =1.202\exp \left(-0.30288\,T_{\text{r}}\right).$$
By substituting the variables in the reduced form and the compressibility factor at the critical point
$$p_{\text{r}}=p/P_{\text{c}},\quad T_{\text{r}}=T/T_{\text{c}},\quad V_{\text{r}}=V_{\text{m}}/V_{\text{c}},\quad Z_{\text{c}}={\frac {P_{\text{c}}V_{\text{c}}}{RT_{\text{c}}}},$$
we obtain
$$\begin{aligned}p_{\text{r}}P_{\text{c}}&={\frac {R\,T_{\text{r}}T_{\text{c}}}{V_{\text{r}}V_{\text{c}}-b}}-{\frac {a\,\alpha \left(\omega ,T_{\text{r}}\right)}{V_{\text{r}}V_{\text{c}}\left(V_{\text{r}}V_{\text{c}}+b\right)}}\\&={\frac {R\,T_{\text{r}}T_{\text{c}}}{V_{\text{r}}V_{\text{c}}-{\frac {\Omega _{b}\,RT_{\text{c}}}{P_{\text{c}}}}}}-{\frac {{\frac {\Omega _{a}\,R^{2}T_{\text{c}}^{2}}{P_{\text{c}}}}\,\alpha \left(\omega ,T_{\text{r}}\right)}{V_{\text{r}}V_{\text{c}}\left(V_{\text{r}}V_{\text{c}}+{\frac {\Omega _{b}\,RT_{\text{c}}}{P_{\text{c}}}}\right)}}\\&={\frac {R\,T_{\text{r}}T_{\text{c}}}{V_{\text{c}}\left(V_{\text{r}}-{\frac {\Omega _{b}}{Z_{\text{c}}}}\right)}}-{\frac {{\frac {\Omega _{a}\,R^{2}T_{\text{c}}^{2}}{P_{\text{c}}}}\,\alpha \left(\omega ,T_{\text{r}}\right)}{V_{\text{r}}V_{\text{c}}^{2}\left(V_{\text{r}}+{\frac {\Omega _{b}}{Z_{\text{c}}}}\right)}},\end{aligned}$$
thus leading to
$$p_{\text{r}}={\frac {T_{\text{r}}}{Z_{\text{c}}\left(V_{\text{r}}-{\frac {\Omega _{b}}{Z_{\text{c}}}}\right)}}-{\frac {{\frac {\Omega _{a}}{Z_{\text{c}}^{2}}}\,\alpha \left(\omega ,T_{\text{r}}\right)}{V_{\text{r}}\left(V_{\text{r}}+{\frac {\Omega _{b}}{Z_{\text{c}}}}\right)}}$$
Thus, the Soave–Redlich–Kwong equation in reduced form depends not only on ω but also on $Z_{\text{c}}$ of the substance. This is contrary to both the VdW and RK equations, which are consistent with the theorem of corresponding states and whose reduced form is one and the same for all substances:
$$p_{\text{r}}={\frac {T_{\text{r}}}{Z_{\text{c}}\left(V_{\text{r}}-{\frac {\Omega _{b}}{Z_{\text{c}}}}\right)}}-{\frac {{\frac {\Omega _{a}}{Z_{\text{c}}^{2}}}\,\alpha \left(\omega ,T_{\text{r}}\right)}{V_{\text{r}}\left(V_{\text{r}}+{\frac {\Omega _{b}}{Z_{\text{c}}}}\right)}}$$
We can also write it in the polynomial form, with:
$$A={\frac {a\alpha P}{R^{2}T^{2}}},\qquad B={\frac {bP}{RT}}$$
In terms of the compressibility factor, we have:
$$0=Z^{3}-Z^{2}+Z\left(A-B-B^{2}\right)-AB.$$
This equation may have up to three roots. The maximal root of the cubic equation generally corresponds to a vapor state, while the minimal root is for a liquid state. This should be kept in mind when using cubic equations in calculations, e.g., of vapor-liquid equilibrium.
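The root-selection rule just described can be sketched as follows. This is a minimal illustration using numpy's root finder; the A and B values below are contrived to place the system in the three-root (two-phase) region and are not from the article:

```python
import numpy as np

def srk_z_roots(A, B):
    """Real roots of Z^3 - Z^2 + (A - B - B^2) Z - A B = 0.
    Returns (liquid-like, vapor-like): the smallest physical real root
    and the largest real root."""
    roots = np.roots([1.0, -1.0, A - B - B * B, -A * B])
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
    real = [z for z in real if z > B]   # Z <= B would mean Vm <= b: unphysical
    return real[0], real[-1]

# contrived illustrative values giving three real roots
z_liq, z_vap = srk_z_roots(A=0.327, B=0.055)
print(z_liq, z_vap)
```

In single-phase conditions the cubic has only one real root, in which case the two returned values coincide.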
In 1972 G. Soave replaced the $1/{\sqrt {T}}$ term of the Redlich–Kwong equation with a function α(T,ω) involving the temperature and the acentric factor (the resulting equation is also known as the Soave–Redlich–Kwong equation of state; SRK EOS). The α function was devised to fit the vapor pressure data of hydrocarbons and the equation does fairly well for these materials.
Note especially that this replacement changes the definition of $a$ slightly, as $T_{\text{c}}$ now appears to the second power.
== Volume translation of Peneloux et al. (1982) ==
The SRK EOS may be written as
$$p={\frac {R\,T}{V_{{\text{m}},{\text{SRK}}}-b}}-{\frac {a}{V_{{\text{m}},{\text{SRK}}}\left(V_{{\text{m}},{\text{SRK}}}+b\right)}}$$
where
$$\begin{aligned}a&=a_{\text{c}}\,\alpha \\a_{\text{c}}&\approx 0.42747{\frac {R^{2}\,T_{\text{c}}^{2}}{P_{\text{c}}}}\\b&\approx 0.08664{\frac {R\,T_{\text{c}}}{P_{\text{c}}}}\end{aligned}$$
and where $\alpha $ and the other parts of the SRK EOS are defined in the SRK EOS section.
A downside of the SRK EOS, and of other cubic EOSs, is that the liquid molar volume is significantly less accurate than the gas molar volume. Peneloux et al. (1982) proposed a simple correction for this by introducing a volume translation
$$V_{{\text{m}},{\text{SRK}}}=V_{\text{m}}+c$$
where $c$ is an additional fluid component parameter that translates the molar volume slightly. On the liquid branch of the EOS, a small change in molar volume corresponds to a large change in pressure. On the gas branch of the EOS, a small change in molar volume corresponds to a much smaller change in pressure than for the liquid branch. Thus, the perturbation of the molar gas volume is small. Unfortunately, there are two versions that occur in science and industry.
In the first version only $V_{{\text{m}},{\text{SRK}}}$ is translated, and the EOS becomes
$$p={\frac {R\,T}{V_{\text{m}}+c-b}}-{\frac {a}{\left(V_{\text{m}}+c\right)\left(V_{\text{m}}+c+b\right)}}$$
In the second version both $V_{{\text{m}},{\text{SRK}}}$ and $b_{\text{SRK}}$ are translated, or the translation of $V_{{\text{m}},{\text{SRK}}}$ is followed by a renaming of the composite parameter b − c. This gives
$$\begin{aligned}b_{\text{SRK}}&=b+c\quad {\text{or}}\quad b-c\curvearrowright b\\p&={\frac {R\,T}{V_{\text{m}}-b}}-{\frac {a}{\left(V_{\text{m}}+c\right)\left(V_{\text{m}}+2c+b\right)}}\end{aligned}$$
The c-parameter of a fluid mixture is calculated by
$$c=\sum _{i=1}^{n}z_{i}c_{i}$$
The c-parameter of the individual fluid components in a petroleum gas and oil can be estimated by the correlation
$$c_{i}\approx 0.40768\ {\frac {RT_{{\text{c}}i}}{P_{{\text{c}}i}}}\left(0.29441-Z_{{\text{RA}},i}\right)$$
where the Rackett compressibility factor $Z_{{\text{RA}},i}$ can be estimated by
$$Z_{{\text{RA}},i}\approx 0.29056-0.08775\,\omega _{i}$$
A nice feature with the volume translation method of Peneloux et al. (1982) is that it does not affect the vapor–liquid equilibrium calculations. This method of volume translation can also be applied to other cubic EOSs if the c-parameter correlation is adjusted to match the selected EOS.
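The two correlations above chain together directly. A minimal sketch (SI units; the inputs in the example are arbitrary illustrative values, not data from the article):

```python
R = 8.314462618  # J/(mol K)

def rackett_z(omega):
    """Rackett compressibility factor estimate from the acentric factor."""
    return 0.29056 - 0.08775 * omega

def peneloux_c(Tc, pc, omega):
    """Peneloux volume-translation parameter c_i for one component (m^3/mol)."""
    return 0.40768 * R * Tc / pc * (0.29441 - rackett_z(omega))

# arbitrary illustrative component: Tc = 300 K, pc = 4 MPa, omega = 0.1
c = peneloux_c(300.0, 4e6, 0.1)
print(c)
```

The mixture c then follows as the mole-fraction-weighted sum of the component values.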
== Peng–Robinson equation of state ==
The Peng–Robinson equation of state (PR EOS) was developed in 1976 at The University of Alberta by Ding-Yu Peng and Donald Robinson in order to satisfy the following goals:
The parameters should be expressible in terms of the critical properties and the acentric factor.
The model should provide reasonable accuracy near the critical point, particularly for calculations of the compressibility factor and liquid density.
The mixing rules should not employ more than a single binary interaction parameter, which should be independent of temperature, pressure, and composition.
The equation should be applicable to all calculations of all fluid properties in natural gas processes.
The equation is given as follows:
$$p={\frac {R\,T}{V_{\text{m}}-b}}-{\frac {a\,\alpha }{V_{\text{m}}^{2}+2bV_{\text{m}}-b^{2}}}$$
$$a=\Omega _{a}{\frac {R^{2}\,T_{\text{c}}^{2}}{p_{\text{c}}}};\qquad \Omega _{a}={\frac {8+40\eta _{c}}{49-37\eta _{c}}}\approx 0.45724$$
$$b=\Omega _{b}{\frac {R\,T_{\text{c}}}{p_{\text{c}}}};\qquad \Omega _{b}={\frac {\eta _{c}}{3+\eta _{c}}}\approx 0.07780$$
$$\eta _{c}=\left[1+(4-{\sqrt {8}})^{1/3}+(4+{\sqrt {8}})^{1/3}\right]^{-1}$$
$$\alpha =\left(1+\kappa \left(1-{\sqrt {T_{\text{r}}}}\right)\right)^{2};\qquad T_{\text{r}}={\frac {T}{T_{\text{c}}}}$$
$$\kappa \approx 0.37464+1.54226\,\omega -0.26992\,\omega ^{2}$$
In polynomial form:
$$A={\frac {\alpha ap}{R^{2}\,T^{2}}},\qquad B={\frac {bp}{RT}}$$
$$Z^{3}-(1-B)Z^{2}+\left(A-2B-3B^{2}\right)Z-\left(AB-B^{2}-B^{3}\right)=0$$
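The full Peng–Robinson workflow, from critical constants to compressibility-factor roots of the polynomial form, can be sketched as follows. The numerical inputs in the example are illustrative values loosely based on methane and are an assumption, not data from this article:

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def peng_robinson_z(T, p, Tc, pc, omega):
    """Real compressibility-factor roots of the Peng-Robinson EOS, sorted."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - (T / Tc) ** 0.5)) ** 2
    A = alpha * a * p / (R * T) ** 2
    B = b * p / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 2B - 3B^2) Z - (AB - B^2 - B^3) = 0
    roots = np.roots([1.0, -(1.0 - B), A - 2*B - 3*B**2, -(A*B - B**2 - B**3)])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# illustrative inputs, roughly methane: Tc ~ 190.6 K, pc ~ 4.599 MPa, omega ~ 0.011
z = peng_robinson_z(T=300.0, p=1e5, Tc=190.6, pc=4.599e6, omega=0.011)
print(z)  # near 1 at low pressure: almost ideal gas
```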
For the most part the Peng–Robinson equation exhibits performance similar to the Soave equation, although it is generally superior in predicting the liquid densities of many materials, especially nonpolar ones. Detailed performance of the original Peng–Robinson equation has been reported for density, thermal properties, and phase equilibria. Briefly, the original form exhibits deviations in vapor pressure and phase equilibria that are roughly three times as large as the updated implementations. The departure functions of the Peng–Robinson equation are given in a separate article.
The analytic values of its characteristic constants are:
$$Z_{\text{c}}={\frac {1}{32}}\left(11-2{\sqrt {7}}\sinh \left({\frac {1}{3}}\operatorname {arsinh} \left({\frac {13}{7{\sqrt {7}}}}\right)\right)\right)\approx 0.307401$$
$$b'={\frac {b}{V_{{\text{m}},{\text{c}}}}}={\frac {1}{3}}\left({\sqrt {8}}\sinh \left({\frac {1}{3}}\operatorname {arsinh} \left({\sqrt {8}}\right)\right)-1\right)\approx 0.253077\approx {\frac {0.07780}{Z_{\text{c}}}}$$
$${\frac {P_{\text{c}}V_{{\text{m}},{\text{c}}}^{2}}{a\,b'}}={\frac {3}{8}}\left(1+\cosh \left({\frac {1}{3}}\operatorname {arcosh} (3)\right)\right)\approx 0.816619\approx {\frac {Z_{\text{c}}^{2}}{0.45724\,b'}}$$
== Peng–Robinson–Stryjek–Vera equations of state ==
=== PRSV1 ===
A modification to the attraction term in the Peng–Robinson equation of state published by Stryjek and Vera in 1986 (PRSV) significantly improved the model's accuracy by introducing an adjustable pure component parameter and by modifying the polynomial fit of the acentric factor.
The modification is:
$$\begin{aligned}\kappa &=\kappa _{0}+\kappa _{1}\left(1+T_{\text{r}}^{\frac {1}{2}}\right)\left(0.7-T_{\text{r}}\right)\\\kappa _{0}&=0.378893+1.4897153\,\omega -0.17131848\,\omega ^{2}+0.0196554\,\omega ^{3}\end{aligned}$$
where $\kappa _{1}$ is an adjustable pure component parameter. Stryjek and Vera published pure component parameters for many compounds of industrial interest in their original journal article. At reduced temperatures above 0.7, they recommend setting $\kappa _{1}=0$ and simply using $\kappa =\kappa _{0}$. For alcohols and water the value of $\kappa _{1}$ may be used up to the critical temperature and set to zero at higher temperatures.
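The PRSV1 κ expression, including the recommended cutoff at reduced temperature 0.7, can be sketched as follows. The ω and κ₁ values in the example are hypothetical, chosen for illustration only:

```python
def prsv1_kappa(omega, kappa1, Tr):
    """PRSV1 kappa; kappa1 is the adjustable pure-component parameter.
    Following the Stryjek-Vera recommendation, kappa1 is dropped above Tr = 0.7."""
    kappa0 = (0.378893 + 1.4897153 * omega
              - 0.17131848 * omega**2 + 0.0196554 * omega**3)
    if Tr > 0.7:
        kappa1 = 0.0
    return kappa0 + kappa1 * (1 + Tr**0.5) * (0.7 - Tr)

# hypothetical parameter values for illustration only
print(prsv1_kappa(omega=0.2, kappa1=0.05, Tr=0.6))
print(prsv1_kappa(omega=0.2, kappa1=0.05, Tr=0.9))  # reduces to kappa0 here
```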
=== PRSV2 ===
A subsequent modification published in 1986 (PRSV2) further improved the model's accuracy by introducing two additional pure component parameters to the previous attraction term modification.
The modification is:
$$\begin{aligned}\kappa &=\kappa _{0}+\left[\kappa _{1}+\kappa _{2}\left(\kappa _{3}-T_{\text{r}}\right)\left(1-T_{\text{r}}^{\frac {1}{2}}\right)\right]\left(1+T_{\text{r}}^{\frac {1}{2}}\right)\left(0.7-T_{\text{r}}\right)\\\kappa _{0}&=0.378893+1.4897153\,\omega -0.17131848\,\omega ^{2}+0.0196554\,\omega ^{3}\end{aligned}$$
where $\kappa _{1}$, $\kappa _{2}$, and $\kappa _{3}$ are adjustable pure component parameters.
PRSV2 is particularly advantageous for VLE calculations. While PRSV1 does offer an advantage over the Peng–Robinson model for describing thermodynamic behavior, it is still not accurate enough, in general, for phase equilibrium calculations. The highly non-linear behavior of phase-equilibrium calculation methods tends to amplify what would otherwise be acceptably small errors. It is therefore recommended that PRSV2 be used for equilibrium calculations when applying these models to a design. However, once the equilibrium state has been determined, the phase specific thermodynamic values at equilibrium may be determined by one of several simpler models with a reasonable degree of accuracy.
Note that in the PRSV equation, the parameter fit is done in a particular temperature range, usually below the critical temperature. Above the critical temperature, the PRSV alpha function diverges and becomes arbitrarily large instead of tending towards 0, so alternative equations for alpha should be employed above the critical point. This is especially important for systems containing hydrogen, which is often found at temperatures far above its critical point. Several alternative formulations have been proposed; well known ones are by Twu et al. and by Mathias and Copeman. An extensive treatment of over 1700 compounds using the Twu method has been reported by Jaubert and coworkers. Detailed performance of the updated Peng–Robinson equation by Jaubert and coworkers has been reported for density, thermal properties, and phase equilibria. Briefly, the updated form exhibits deviations in vapor pressure and phase equilibria that are roughly a third as large as those of the original implementation.
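The divergence can be illustrated by comparing a Soave/PRSV-style α with a Twu-style α at high reduced temperature. The parameter values below are purely illustrative, not fitted values for any real compound:

```python
import math

def alpha_soave_type(Tr, kappa):
    # Soave/PRSV-style alpha: [1 + kappa*(1 - sqrt(Tr))]^2
    # grows roughly like kappa^2 * Tr far above the critical point
    return (1 + kappa * (1 - math.sqrt(Tr)))**2

def alpha_twu(Tr, L, M, N):
    # Twu-style alpha: Tr^(N*(M-1)) * exp(L*(1 - Tr^(N*M)))
    # decays toward zero as Tr grows (for typical positive L, N*M)
    return Tr**(N * (M - 1)) * math.exp(L * (1 - Tr**(N * M)))
```

At, say, Tr = 100 the Soave-type form is already far above 1 and still growing, while the Twu form has decayed to essentially zero, which is the physically expected limit for the attraction term.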
== Peng–Robinson–Babalola-Susu equation of state (PRBS) ==
Babalola and Susu modified the Peng–Robinson Equation of state as:
{\displaystyle P=\left({\frac {RT}{V_{\mathrm {m} }-b}}\right)-\left[{\frac {(a_{1}P+a_{2})\alpha }{V_{\mathrm {m} }(V_{\mathrm {m} }+b)+b(V_{\mathrm {m} }-b)}}\right]}
In the Peng–Robinson equation of state, the attractive force parameter ‘a’ is treated as a constant with respect to pressure. The modification treats ‘a’ as a variable with respect to pressure for multicomponent, multiphase, high-density reservoir systems, in order to improve the accuracy of property predictions of complex reservoir fluids for PVT modeling. The variation is represented with a linear equation, where a1 and a2 are the slope and the intercept, respectively, of the straight line obtained when values of parameter ‘a’ are plotted against pressure.
This modification increases the accuracy of the Peng–Robinson equation of state for heavier fluids, particularly at high pressures (>30 MPa), and eliminates the need for tuning the original Peng–Robinson equation of state; tuning is captured inherently by the modification.
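Although P appears on both sides of the PRBS form, it appears only linearly, so it can be isolated algebraically rather than solved iteratively. A sketch (all numeric inputs in any use of this function are placeholders, not fitted PRBS parameters):

```python
def pressure_prbs(T, Vm, a1, a2, alpha, b, R=8.314):
    """PRBS pressure (a sketch). P appears on both sides of the PRBS
    form but only linearly, so it can be isolated:
        P = [RT/(Vm - b) - a2*alpha/D] / (1 + a1*alpha/D),
    with D = Vm*(Vm + b) + b*(Vm - b)."""
    D = Vm * (Vm + b) + b * (Vm - b)
    return (R * T / (Vm - b) - a2 * alpha / D) / (1.0 + a1 * alpha / D)
```

Substituting the returned P back into the original implicit form reproduces it to rounding error, and setting a1 = 0 recovers the ordinary Peng–Robinson pressure expression with a = a2.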
The Peng–Robinson–Babalola–Susu (PRBS) equation of state was developed in 2005 and has since been applied to numerous reservoir field data sets over wide temperature and pressure conditions. It ranks among the more promising equations of state for accurate prediction of reservoir fluid properties, especially for challenging ultra-deep reservoirs at high-temperature high-pressure (HTHP) conditions.
The widely used Peng–Robinson (PR) EoS of 1976 predicts fluid properties of conventional reservoirs with good accuracy up to pressures of about 27 MPa (4,000 psi) but fails as pressure increases further; the PRBS EoS can accurately model the PVT behavior of complex ultra-deep reservoir fluid systems at very high pressures of up to 120 MPa (17,500 psi).
== Elliott–Suresh–Donohue equation of state ==
The Elliott–Suresh–Donohue (ESD) equation of state was proposed in 1990. The equation corrects the inaccurate van der Waals repulsive term that is also applied in the Peng–Robinson EOS. The attractive term includes a contribution that relates to the second virial coefficient of square-well spheres, and also shares some features of the Twu temperature dependence. The EOS accounts for the effect of the shape of any molecule and can be directly extended to polymers with molecular parameters characterized in terms of solubility parameter and liquid volume instead of using critical properties (as shown here). The EOS itself was developed through comparisons with computer simulations and should capture the essential physics of size, shape, and hydrogen bonding as inferred from straight chain molecules (like n-alkanes).
{\displaystyle {\frac {pV_{\text{m}}}{RT}}=Z=1+Z^{\rm {rep}}+Z^{\rm {att}}}
where:
{\displaystyle Z^{\rm {rep}}={\frac {4c\eta }{1-1.9\eta }}}
{\displaystyle Z^{\rm {att}}=-{\frac {z_{\text{m}}q\eta Y}{1+k_{1}\eta Y}}}
and {\displaystyle c} is a "shape factor", with {\displaystyle c=1} for spherical molecules.
For non-spherical molecules, the following relation between the shape factor and the acentric factor is suggested:
{\displaystyle c=1+3.535\omega +0.533\omega ^{2}}.
The reduced number density {\displaystyle \eta } is defined as {\displaystyle \eta =b\rho }, where {\displaystyle b} is the characteristic size parameter [cm3/mol], and {\displaystyle \rho ={\frac {1}{V_{\text{m}}}}=N/(N_{\text{A}}V)} is the molar density [mol/cm3].
The characteristic size parameter is related to {\displaystyle c} through

{\displaystyle b={\frac {RT_{\text{c}}}{P_{\text{c}}}}\Phi }
where

{\displaystyle \Phi ={\frac {Z_{\text{c}}^{2}}{2A_{q}}}{[-B_{q}+{\sqrt {B_{q}^{2}+4A_{q}C_{q}}}]}}
{\displaystyle 3Z_{\text{c}}=([(-0.173/{\sqrt {c}}+0.217)/{\sqrt {c}}-0.186]/{\sqrt {c}}+0.115)/{\sqrt {c}}+1}
{\displaystyle A_{q}=[1.9(9.5q-k_{1})+4ck_{1}](4c-1.9)}
{\displaystyle B_{q}=1.9k_{1}Z_{\text{c}}+3A_{q}/(4c-1.9)}
{\displaystyle C_{q}=(9.5q-k_{1})/Z_{\text{c}}}
The shape parameter {\displaystyle q} appearing in the attraction term and the term {\displaystyle Y} are given by

{\displaystyle q=1+k_{3}(c-1)}

(and is hence also equal to 1 for spherical molecules).
{\displaystyle Y=\exp \left({\frac {\epsilon }{kT}}\right)-k_{2}}
where {\displaystyle \epsilon } is the depth of the square-well potential, and

{\displaystyle Y_{\text{c}}=({\frac {RT_{\text{c}}}{bP_{\text{c}}}})^{2}{\frac {Z_{\text{c}}^{3}}{A_{q}}}}
{\displaystyle z_{\text{m}}}, {\displaystyle k_{1}}, {\displaystyle k_{2}} and {\displaystyle k_{3}} are constants in the equation of state: {\displaystyle z_{\text{m}}=9.5}, {\displaystyle k_{1}=1.7745}, {\displaystyle k_{2}=1.0617}, {\displaystyle k_{3}=1.90476.}
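A minimal sketch of the non-associating ESD compressibility factor, using the constants above; η and Y are assumed to be computed separately from the density and temperature:

```python
def shape_factor(omega):
    # c = 1 + 3.535*omega + 0.533*omega^2  (c = 1 for spheres)
    return 1.0 + 3.535 * omega + 0.533 * omega**2

def q_from_c(c, k3=1.90476):
    # q = 1 + k3*(c - 1)  (q = 1 for spheres)
    return 1.0 + k3 * (c - 1.0)

def Z_esd(eta, Y, c, q, zm=9.5, k1=1.7745):
    """Non-associating ESD compressibility factor.
    eta = b*rho is the reduced density; Y = exp(eps/kT) - k2."""
    Z_rep = 4.0 * c * eta / (1.0 - 1.9 * eta)
    Z_att = -zm * q * eta * Y / (1.0 + k1 * eta * Y)
    return 1.0 + Z_rep + Z_att
```

As a sanity check, both contributions vanish at zero density, so Z → 1 in the ideal-gas limit.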
The model can be extended to associating components and mixtures with non-associating components. Details are in the paper by J.R. Elliott, Jr. et al. (1990).
Noting that {\displaystyle 4(k_{3}-1)/k_{3}} = 1.900, {\displaystyle Z^{\text{rep}}} can be rewritten in the SAFT form as:
{\displaystyle Z^{\rm {rep}}=4q\eta g-(q-1){\frac {\eta }{g}}{\frac {dg}{d\eta }}={\frac {4q\eta }{1-1.9\eta }}-{\frac {(q-1)1.9\eta }{1-1.9\eta }};g={\frac {1}{1-1.9\eta }}}
If preferred, the {\displaystyle q} can be replaced by {\displaystyle m} in SAFT notation and the ESD EOS can be written:
{\displaystyle Z=1+m({\frac {4\eta }{1-1.9\eta }}-{\frac {9.5Y\eta }{1+k_{1}Y\eta }})-{\frac {(m-1)1.9\eta }{1-1.9\eta }}}
In this form, SAFT's segmental perspective is evident and all the results of Michael Wertheim are directly applicable and relatively succinct. In SAFT's segmental perspective, each molecule is conceived as comprising m spherical segments floating in space with their own spherical interactions, but then corrected for bonding into a tangent sphere chain by the (m − 1) term. When m is not an integer, it is simply considered as an "effective" number of tangent sphere segments.
Solving the equations in Wertheim's theory can be complicated, but simplifications can make their implementation less daunting. Briefly, a few extra steps are needed to compute
{\displaystyle Z^{\rm {assoc}}}
given density and temperature. For example, when the number of hydrogen bonding donors is equal to the number of acceptors, the ESD equation becomes:
{\displaystyle {\frac {pV_{\text{m}}}{RT}}=Z=1+Z^{\rm {rep}}+Z^{\rm {att}}+Z^{\rm {assoc}}}
where:
{\displaystyle Z^{\rm {assoc}}=-gN^{\text{AD}}(1-X^{\text{AD}});X^{\text{AD}}=2/[1+{\sqrt {1+4N^{\text{AD}}\alpha ^{\text{AD}}}}];\alpha ^{\text{AD}}=\rho N_{\text{A}}K^{\text{AD}}[\exp {(\epsilon ^{\text{AD}}/kT)-1]}}
{\displaystyle N_{\text{A}}} is the Avogadro constant, {\displaystyle K^{\text{AD}}} and {\displaystyle \epsilon ^{\text{AD}}} are stored input parameters representing the volume and energy of hydrogen bonding. Typically, {\displaystyle K^{\text{AD}}=\mathrm {0.001\ nm^{3}} } and {\displaystyle \epsilon ^{\text{AD}}/k_{\text{B}}=\mathrm {2000\ K} } are stored.
{\displaystyle N^{\text{AD}}} is the number of acceptors (equal to the number of donors in this example): {\displaystyle N^{\text{AD}}} = 1 for alcohols like methanol and ethanol, {\displaystyle N^{\text{AD}}} = 2 for water, and {\displaystyle N^{\text{AD}}} = the degree of polymerization for polyvinylphenol. The density and temperature are used to calculate {\displaystyle \alpha ^{\text{AD}}}, which in turn gives the other quantities. Technically, the ESD equation is no longer cubic when the association term is included, but no artifacts are introduced, so there are still only three roots in density. The extension to efficiently treat any number of electron acceptors (acids) and donors (bases), including mixtures of self-associating, cross-associating, and non-associating compounds, has been presented here. Detailed performance of the ESD equation has been reported for density, thermal properties, and phase equilibria. Briefly, the ESD equation exhibits deviations in vapor pressure and vapor–liquid equilibria that are roughly twice as large as those of the Peng–Robinson form as updated by Jaubert and coworkers, but deviations in liquid–liquid equilibria are roughly 40% smaller.
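For the equal-donor/acceptor case, the association fraction has the closed form given above and can be sketched as follows. The unit handling here is an assumption for illustration: ρ in mol/cm³, with the typical K^AD of 0.001 nm³ converted to cm³:

```python
import math

N_AVO = 6.02214076e23   # Avogadro constant [1/mol]
K_AD = 1.0e-24          # typical 0.001 nm^3 expressed in cm^3 (unit choice assumed)
EPS_OVER_KB = 2000.0    # typical hydrogen-bond energy / k_B [K]

def alpha_AD(rho, T):
    # rho: molar density [mol/cm^3]; alpha is dimensionless
    return rho * N_AVO * K_AD * (math.exp(EPS_OVER_KB / T) - 1.0)

def X_AD(rho, T, N_AD):
    # closed-form fraction of unbonded sites (equal donors and acceptors)
    a = alpha_AD(rho, T)
    return 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * N_AD * a))

def Z_assoc(g, rho, T, N_AD):
    # association contribution; g is the contact RDF value
    return -g * N_AD * (1.0 - X_AD(rho, T, N_AD))
```

In the zero-density limit every site is unbonded (X → 1) and the association contribution vanishes, while at liquid-like densities X drops well below 1.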
== Cubic-plus-association ==
The cubic-plus-association (CPA) equation of state combines the Soave–Redlich–Kwong equation with the association term from SAFT based on Chapman's extensions and simplifications of a theory of associating molecules due to Michael Wertheim. The development of the equation began in 1995 as a research project that was funded by Shell, and published in 1996.
{\displaystyle p={\frac {RT}{(V_{\mathrm {m} }-b)}}-{\frac {a}{V_{\mathrm {m} }(V_{\mathrm {m} }+b)}}+{\frac {RT}{V_{\mathrm {m} }}}\rho \sum _{A}\left[{\frac {1}{X^{\text{A}}}}-{\frac {1}{2}}\right]{\frac {\partial X^{\text{A}}}{\partial \rho }}}
In the association term, {\displaystyle X^{\text{A}}} is the mole fraction of molecules not bonded at site A.
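A sketch of the CPA pressure expression, with the association derivative taken numerically. This is an illustration only: production CPA codes solve for X^A self-consistently from the association strength and differentiate analytically, whereas here X_of_rho is a caller-supplied stand-in:

```python
def pressure_cpa(T, Vm, a, b, X_of_rho, sites, R=8.314, relstep=1e-6):
    """CPA pressure (sketch). X_of_rho(site, rho) returns the fraction
    of molecules not bonded at that site; dX/drho is approximated by a
    central difference. All parameter values are placeholders."""
    rho = 1.0 / Vm
    # SRK physical part
    p = R * T / (Vm - b) - a / (Vm * (Vm + b))
    h = relstep * rho
    for A in sites:
        X = X_of_rho(A, rho)
        dX_drho = (X_of_rho(A, rho + h) - X_of_rho(A, rho - h)) / (2 * h)
        # association contribution, summed over bonding sites
        p += (R * T / Vm) * rho * (1.0 / X - 0.5) * dX_drho
    return p
```

With X ≡ 1 (no association) the derivative vanishes and the expression reduces exactly to the Soave–Redlich–Kwong pressure.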
== Cubic-plus-chain equation of state ==
The cubic-plus-chain (CPC) equation of state hybridizes the classical cubic equation of state with the SAFT chain term. The addition of the chain term allows the model to be capable of capturing the physics of both short-chain and long-chain non-associating components ranging from alkanes to polymers. The CPC monomer term is not restricted to one classical cubic EOS form, instead many forms can be used within the same framework. The cubic-plus-chain (CPC) equation of state is written in terms of the reduced residual Helmholtz energy (
{\displaystyle F^{\mathrm {CPC} }}) as:
{\displaystyle F^{\mathrm {CPC} }={\frac {A^{\mathrm {R} }(T,V,{\textbf {n}})}{RT}}=m(F^{\mathrm {rep} }+F^{\mathrm {att} })+F^{\mathrm {chain} }}
where {\displaystyle A^{\mathrm {R} }} is the residual Helmholtz energy, {\displaystyle m} is the chain length, and "rep" and "att" are the monomer repulsive and attractive contributions of the cubic equation of state, respectively. The "chain" term accounts for the monomer bead bonding contribution from the SAFT equation of state. Using Redlich–Kwong (RK) for the monomer term, CPC can be written as:
{\displaystyle p={\frac {nRT}{V}}{\Biggl (}1+{\frac {{\bar {m}}^{2}B}{V-{\bar {m}}B}}{\Biggr )}-{\frac {{\bar {m}}^{2}A}{V(V+{\bar {m}}B)}}-{\frac {nRT}{V}}\left[\sum _{i}n_{i}(m_{i}-1)\beta {\frac {g'(\beta )}{g(\beta )}}\right]}
where A is the molecular interaction energy parameter, B is the co-volume parameter, {\displaystyle {\bar {m}}} is the mole-average chain length, g(β) is the radial distribution function (RDF) evaluated at contact, and β is the reduced volume.
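The CPC pressure expression above can be transcribed directly if the contact RDF g(β), its derivative, and the reduced volume β are supplied by the caller; their precise functional forms are defined in the CPC papers and are not assumed here:

```python
def pressure_cpc(T, V, n_total, A, B, mbar, beta, chain_terms, g, gprime, R=8.314):
    """CPC pressure with an RK-type monomer term (a sketch).

    chain_terms: list of (n_i, m_i) pairs for each component.
    g, gprime: contact RDF and its derivative as functions of beta;
    these, and beta itself, are caller-supplied inputs.
    """
    # monomer repulsive part
    p = (n_total * R * T / V) * (1.0 + mbar**2 * B / (V - mbar * B))
    # monomer attractive part
    p -= mbar**2 * A / (V * (V + mbar * B))
    # chain (bonding) contribution
    p -= (n_total * R * T / V) * sum(
        n_i * (m_i - 1.0) * beta * gprime(beta) / g(beta)
        for n_i, m_i in chain_terms)
    return p
```

For monomers (all m_i = 1) the chain sum vanishes and only the cubic monomer terms remain, as expected.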
The CPC model offers simplicity and speed compared to other, more complex models used to model polymers. Sisco et al. applied the CPC equation of state to various well-defined and polymer mixtures, analyzing factors including elevated pressure, temperature, solvent type, and polydispersity. The CPC model proved capable of modeling different systems, with results tested against experimental data.
Alajmi et al. incorporated short-range soft repulsion into the CPC framework to enhance vapor pressure and liquid density predictions. They provided a database for more than 50 components from different chemical families, including n-alkanes, alkenes, branched alkanes, cycloalkanes, benzene derivatives, and gases. This CPC version uses a temperature-dependent co-volume parameter based on perturbation theory to describe short-range soft repulsion between molecules.
== References ==
In ring theory and related areas of mathematics a central simple algebra (CSA) over a field K is a finite-dimensional associative K-algebra A that is simple, and for which the center is exactly K. (Note that not every simple algebra is a central simple algebra over its center: for instance, if K is a field of characteristic 0, then the Weyl algebra {\displaystyle K[X,\partial _{X}]} is a simple algebra with center K, but is not a central simple algebra over K as it has infinite dimension as a K-module.)
For example, the complex numbers C form a CSA over themselves, but not over the real numbers R (the center of C is all of C, not just R). The quaternions H form a 4-dimensional CSA over R, and in fact represent the only non-trivial element of the Brauer group of the reals (see below).
Given two central simple algebras A ~ M(n,S) and B ~ M(m,T) over the same field F, A and B are called similar (or Brauer equivalent) if their division rings S and T are isomorphic. The set of all equivalence classes of central simple algebras over a given field F, under this equivalence relation, can be equipped with a group operation given by the tensor product of algebras. The resulting group is called the Brauer group Br(F) of the field F. It is always a torsion group.
== Properties ==
According to the Artin–Wedderburn theorem a finite-dimensional simple algebra A is isomorphic to the matrix algebra M(n,S) for some division ring S. Hence, there is a unique division algebra in each Brauer equivalence class.
Every automorphism of a central simple algebra is an inner automorphism (this follows from the Skolem–Noether theorem).
The dimension of a central simple algebra as a vector space over its centre is always a square: the degree is the square root of this dimension. The Schur index of a central simple algebra is the degree of the equivalent division algebra: it depends only on the Brauer class of the algebra.
The period or exponent of a central simple algebra is the order of its Brauer class as an element of the Brauer group. It is a divisor of the index, and the two numbers are composed of the same prime factors.
If S is a simple subalgebra of a central simple algebra A then dimF S divides dimF A.
Every 4-dimensional central simple algebra over a field F is isomorphic to a quaternion algebra; in fact, it is either a two-by-two matrix algebra, or a division algebra.
If D is a central division algebra over K for which the index has prime factorisation

{\displaystyle \mathrm {ind} (D)=\prod _{i=1}^{r}p_{i}^{m_{i}}\ }

then D has a tensor product decomposition

{\displaystyle D=\bigotimes _{i=1}^{r}D_{i}\ }

where each component Di is a central division algebra of index {\displaystyle p_{i}^{m_{i}}}, and the components are uniquely determined up to isomorphism.
== Splitting field ==
We call a field E a splitting field for A over K if A⊗E is isomorphic to a matrix ring over E. Every finite dimensional CSA has a splitting field: indeed, in the case when A is a division algebra, then a maximal subfield of A is a splitting field. In general by theorems of Wedderburn and Koethe there is a splitting field which is a separable extension of K of degree equal to the index of A, and this splitting field is isomorphic to a subfield of A. As an example, the field C splits the quaternion algebra H over R with
{\displaystyle t+x\mathbf {i} +y\mathbf {j} +z\mathbf {k} \leftrightarrow \left({\begin{array}{*{20}c}t+xi&y+zi\\-y+zi&t-xi\end{array}}\right).}
We can use the existence of the splitting field to define reduced norm and reduced trace for a CSA A. Map A to a matrix ring over a splitting field and define the reduced norm and trace to be the composite of this map with determinant and trace respectively. For example, in the quaternion algebra H, the splitting above shows that the element t + x i + y j + z k has reduced norm t² + x² + y² + z² and reduced trace 2t.
The reduced norm is multiplicative and the reduced trace is additive. An element a of A is invertible if and only if its reduced norm is non-zero: hence a CSA is a division algebra if and only if the reduced norm is non-zero on the non-zero elements.
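The splitting of H over C can be checked numerically as a small sketch: mapping a quaternion to its 2×2 complex image, the determinant reproduces the reduced norm t² + x² + y² + z², the trace gives 2t, and the determinant is multiplicative:

```python
# Quaternion t + xi + yj + zk mapped to its 2x2 complex image under
# the splitting H -> M_2(C); det gives the reduced norm, trace the
# reduced trace.
def to_matrix(t, x, y, z):
    return [[complex(t, x), complex(y, z)],
            [complex(-y, z), complex(t, -x)]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

q1 = to_matrix(1.0, 2.0, 3.0, 4.0)
q2 = to_matrix(0.5, -1.0, 2.0, 0.0)
assert abs(det2(q1) - (1 + 4 + 9 + 16)) < 1e-9               # reduced norm
assert abs((q1[0][0] + q1[1][1]) - 2.0) < 1e-9               # reduced trace 2t
assert abs(det2(matmul2(q1, q2)) - det2(q1) * det2(q2)) < 1e-9  # multiplicative
```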
== Generalization ==
CSAs over a field K are a non-commutative analog to extension fields over K – in both cases, they have no non-trivial 2-sided ideals, and have a distinguished field in their center, though a CSA can be non-commutative and need not have inverses (need not be a division algebra). This is of particular interest in noncommutative number theory as generalizations of number fields (extensions of the rationals Q); see noncommutative number field.
== See also ==
Azumaya algebra, generalization of CSAs where the base field is replaced by a commutative local ring
Severi–Brauer variety
Posner's theorem
== References ==
Cohn, P.M. (2003). Further Algebra and Applications (2nd ed.). Springer. ISBN 1852336676. Zbl 1006.00001.
Jacobson, Nathan (1996). Finite-dimensional division algebras over fields. Berlin: Springer-Verlag. ISBN 3-540-57029-2. Zbl 0874.16002.
Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023.
Lorenz, Falko (2008). Algebra. Volume II: Fields with Structure, Algebras and Advanced Topics. Springer. ISBN 978-0-387-72487-4. Zbl 1130.12001.
=== Further reading ===
Albert, A.A. (1939). Structure of Algebras. Colloquium Publications. Vol. 24 (7th revised reprint ed.). American Mathematical Society. ISBN 0-8218-1024-3. Zbl 0023.19901.
Gille, Philippe; Szamuely, Tamás (2006). Central simple algebras and Galois cohomology. Cambridge Studies in Advanced Mathematics. Vol. 101. Cambridge: Cambridge University Press. ISBN 0-521-86103-9. Zbl 1137.12001.
In mathematics and theoretical physics, a Gerstenhaber algebra (sometimes called an antibracket algebra or braid algebra) is an algebraic structure discovered by Murray Gerstenhaber (1963) that combines the structures of a supercommutative ring and a graded Lie superalgebra. It is used in the Batalin–Vilkovisky formalism. It also appears in the generalization of Hamiltonian formalism known as the De Donder–Weyl theory, as the algebra of generalized Poisson brackets defined on differential forms.
== Definition ==
A Gerstenhaber algebra is a graded-commutative algebra with a Lie bracket of degree −1 satisfying the Poisson identity. Everything is understood to satisfy the usual superalgebra sign conventions. More precisely, the algebra has two products, one written as ordinary multiplication and one written as [,], and a Z-grading called degree (in theoretical physics sometimes called ghost number). The degree of an element a is denoted by |a|. These satisfy the identities
(ab)c = a(bc) (The product is associative)
ab = (−1)|a||b|ba (The product is (super) commutative)
|ab| = |a| + |b| (The product has degree 0)
|[a,b]| = |a| + |b| − 1 (The Lie bracket has degree −1)
[a,bc] = [a,b]c + (−1)(|a|−1)|b|b[a,c] (Poisson identity)
[a,b] = −(−1)(|a|−1)(|b|−1) [b,a] (Antisymmetry of Lie bracket)
[a,[b,c]] = [[a,b],c] + (−1)(|a|−1)(|b|−1)[b,[a,c]] (The Jacobi identity for the Lie bracket)
Gerstenhaber algebras differ from Poisson superalgebras in that the Lie bracket has degree −1 rather than degree 0. The Jacobi identity may also be expressed in a symmetrical form
{\displaystyle (-1)^{(|a|-1)(|c|-1)}[a,[b,c]]+(-1)^{(|b|-1)(|a|-1)}[b,[c,a]]+(-1)^{(|c|-1)(|b|-1)}[c,[a,b]]=0.\,}
== Examples ==
Gerstenhaber showed that the Hochschild cohomology H*(A,A) of an algebra A is a Gerstenhaber algebra.
A Batalin–Vilkovisky algebra has an underlying Gerstenhaber algebra if one forgets its second order Δ operator.
The exterior algebra of a Lie algebra is a Gerstenhaber algebra.
The differential forms on a Poisson manifold form a Gerstenhaber algebra.
The multivector fields on a manifold form a Gerstenhaber algebra using the Schouten–Nijenhuis bracket.
== References ==
Gerstenhaber, Murray (1963). "The cohomology structure of an associative ring". Annals of Mathematics. 78 (2): 267–288. doi:10.2307/1970343. JSTOR 1970343.
Getzler, Ezra (1994). "Batalin-Vilkovisky algebras and two-dimensional topological field theories". Communications in Mathematical Physics. 159 (2): 265–285. arXiv:hep-th/9212043. Bibcode:1994CMaPh.159..265G. doi:10.1007/BF02102639.
Kosmann-Schwarzbach, Yvette (2001) [1994], "Poisson algebra", Encyclopedia of Mathematics, EMS Press
Kanatchikov, Igor V. (1997). "On field theoretic generalizations of a Poisson algebra". Reports on Mathematical Physics. 40 (2): 225–234. arXiv:hep-th/9710069. Bibcode:1997RpMP...40..225K. doi:10.1016/S0034-4877(97)85919-8.
In mathematics, especially representation theory, a quiver is another name for a multidigraph; that is, a directed graph where loops and multiple arrows between two vertices are allowed. Quivers are commonly used in representation theory: a representation V of a quiver assigns a vector space V(x) to each vertex x of the quiver and a linear map V(a) to each arrow a.
In category theory, a quiver can be understood to be the underlying structure of a category, but without composition or a designation of identity morphisms. That is, there is a forgetful functor from Cat (the category of categories) to Quiv (the category of multidigraphs). Its left adjoint is a free functor which, from a quiver, makes the corresponding free category.
== Definition ==
A quiver Γ consists of:
The set V of vertices of Γ
The set E of edges of Γ
Two functions: {\displaystyle s:E\to V} giving the start or source of the edge, and {\displaystyle t:E\to V} giving the target of the edge.
This definition is identical to that of a multidigraph.
A morphism of quivers is a mapping from vertices to vertices which takes directed edges to directed edges. Formally, if {\displaystyle \Gamma =(V,E,s,t)} and {\displaystyle \Gamma '=(V',E',s',t')} are two quivers, then a morphism {\displaystyle m=(m_{v},m_{e})} of quivers consists of two functions {\displaystyle m_{v}:V\to V'} and {\displaystyle m_{e}:E\to E'} such that the corresponding squares commute, that is,

{\displaystyle m_{v}\circ s=s'\circ m_{e}}

and

{\displaystyle m_{v}\circ t=t'\circ m_{e}}
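For finite quivers the commuting-square condition can be checked mechanically; a sketch with quivers represented as (V, E, s, t) tuples:

```python
# A quiver as a (V, E, s, t) tuple with s, t given by dicts; a morphism
# is a vertex map mv and an edge map me making both squares commute:
#   mv(s(e)) = s'(me(e))  and  mv(t(e)) = t'(me(e))  for every edge e.
def is_quiver_morphism(Q1, Q2, mv, me):
    V1, E1, s1, t1 = Q1
    V2, E2, s2, t2 = Q2
    if any(mv[v] not in V2 for v in V1) or any(me[e] not in E2 for e in E1):
        return False
    return all(mv[s1[e]] == s2[me[e]] and mv[t1[e]] == t2[me[e]] for e in E1)

# One arrow a : 1 -> 2, collapsed onto a single-vertex loop quiver.
Q1 = ({1, 2}, {"a"}, {"a": 1}, {"a": 2})
Q2 = ({"*"}, {"l"}, {"l": "*"}, {"l": "*"})
assert is_quiver_morphism(Q1, Q2, {1: "*", 2: "*"}, {"a": "l"})
```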
== Category-theoretic definition ==
The above definition is based in set theory; the category-theoretic definition generalizes this into a functor from the free quiver to the category of sets.
The free quiver (also called the walking quiver, Kronecker quiver, 2-Kronecker quiver or Kronecker category) Q is a category with two objects, V and E, and four morphisms: {\displaystyle s:E\to V,} {\displaystyle t:E\to V,} and the identity morphisms {\displaystyle \mathrm {id} _{V}:V\to V} and {\displaystyle \mathrm {id} _{E}:E\to E.} That is, the free quiver is the category

{\displaystyle E\;{\begin{matrix}s\\[-6pt]\rightrightarrows \\[-4pt]t\end{matrix}}\;V}
A quiver is then a functor {\displaystyle \Gamma :Q\to \mathbf {Set} }. (That is to say, {\displaystyle \Gamma } specifies two sets {\displaystyle \Gamma (V)} and {\displaystyle \Gamma (E)}, and two functions {\displaystyle \Gamma (s),\Gamma (t)\colon \Gamma (E)\longrightarrow \Gamma (V)}; this is the full extent of what it means to be a functor from {\displaystyle Q} to {\displaystyle \mathbf {Set} }.)
More generally, a quiver in a category C is a functor {\displaystyle \Gamma :Q\to C.} The category Quiv(C) of quivers in C is the functor category where:

objects are functors {\displaystyle \Gamma :Q\to C,}
morphisms are natural transformations between functors.
Note that Quiv is the category of presheaves on the opposite category Qop.
== Path algebra ==
If Γ is a quiver, then a path in Γ is a sequence of arrows

{\displaystyle a_{n}a_{n-1}\dots a_{3}a_{2}a_{1}}

such that the head of ai+1 is the tail of ai for i = 1, …, n−1, using the convention of concatenating paths from right to left. Note that a path in graph theory has a stricter definition, and that this concept instead coincides with what in graph theory is called a walk.
If K is a field then the quiver algebra or path algebra K Γ is defined as a vector space having all the paths (of length ≥ 0) in the quiver as basis (including, for each vertex i of the quiver Γ, a trivial path ei of length 0; these paths are not assumed to be equal for different i), and multiplication given by concatenation of paths. If two paths cannot be concatenated because the end vertex of the first is not equal to the starting vertex of the second, their product is defined to be zero. This defines an associative algebra over K. This algebra has a unit element if and only if the quiver has only finitely many vertices. In this case, the modules over K Γ are naturally identified with the representations of Γ. If the quiver has infinitely many vertices, then K Γ has an approximate identity given by {\textstyle e_{F}:=\sum _{v\in F}1_{v}} where F ranges over finite subsets of the vertex set of Γ.
If the quiver has finitely many vertices and arrows, and the end vertex and starting vertex of any path are always distinct (i.e. Q has no oriented cycles), then K Γ is a finite-dimensional hereditary algebra over K. Conversely, if K is algebraically closed, then any finite-dimensional, hereditary, associative algebra over K is Morita equivalent to the path algebra of its Ext quiver (i.e., they have equivalent module categories).
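The concatenate-or-zero multiplication on basis paths can be sketched as follows, with trivial paths acting as local identities (paths are stored in travel order; arrow labels are illustrative):

```python
# Basis paths of a path algebra: a path is (start_vertex, (arrows...)),
# stored in travel order; the trivial path e_v is (v, ()). Arrows are
# (source, target, label) triples.
def path_end(p):
    v, arrows = p
    return arrows[-1][1] if arrows else v

def multiply(p, q):
    """Product p*q with the right-to-left convention: traverse q, then p.
    Returns None (the zero of the algebra) if concatenation is impossible."""
    if path_end(q) != p[0]:
        return None
    return (q[0], q[1] + p[1])

e1 = (1, ())
a = (1, ((1, 2, "a"),))   # arrow a : 1 -> 2
b = (2, ((2, 3, "b"),))   # arrow b : 2 -> 3
assert multiply(b, a) == (1, ((1, 2, "a"), (2, 3, "b")))  # the path "ba"
assert multiply(a, b) is None                             # product is zero
assert multiply(a, e1) == a                               # e_1 is a local identity
```

A full path-algebra element would be a K-linear combination of such basis paths, with multiplication extended bilinearly.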
== Representations of quivers ==
A representation of a quiver Q is an association of an R-module to each vertex of Q, and a morphism between each module for each arrow.
A representation V of a quiver Q is said to be trivial if {\displaystyle V(x)=0} for all vertices x in Q.
A morphism, {\displaystyle f:V\to V',} between representations of the quiver Q, is a collection of linear maps {\displaystyle f(x):V(x)\to V'(x)} such that for every arrow a in Q from x to y,
{\displaystyle V'(a)f(x)=f(y)V(a),}

i.e. the squares that f forms with the arrows of V and V′ all commute. A morphism, f, is an isomorphism if f(x) is invertible for all vertices x in the quiver. With these definitions the representations of a quiver form a category.
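The commuting-square condition for a morphism of representations can be checked directly with matrices over a field; a sketch for the quiver with one arrow 1 → 2:

```python
# Representations of the quiver with one arrow a : 1 -> 2, as matrices;
# a morphism f must satisfy V'(a) f(x) = f(y) V(a) for every arrow.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_rep_morphism(arrows, V, Vp, f):
    # arrows: dict label -> (source vertex, target vertex)
    return all(matmul(Vp[a], f[x]) == matmul(f[y], V[a])
               for a, (x, y) in arrows.items())

arrows = {"a": (1, 2)}
V  = {"a": [[1, 0]]}                  # V(a) : k^2 -> k^1
Vp = {"a": [[1, 0]]}                  # V'(a)
f  = {1: [[2, 0], [0, 2]], 2: [[2]]}  # multiplication by 2 at each vertex
assert is_rep_morphism(arrows, V, Vp, f)
```

Scalar multiples of the identity at every vertex always give a morphism, as here; changing f(2) to an unrelated scalar breaks the square.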
If V and W are representations of a quiver Q, then the direct sum of these representations, {\displaystyle V\oplus W,} is defined by {\displaystyle (V\oplus W)(x)=V(x)\oplus W(x)} for all vertices x in Q, and {\displaystyle (V\oplus W)(a)} is the direct sum of the linear mappings V(a) and W(a).
A representation is said to be decomposable if it is isomorphic to the direct sum of non-zero representations.
A categorical definition of a quiver representation can also be given. The quiver itself can be considered a category, where the vertices are objects and paths are morphisms. Then a representation of Q is just a covariant functor from this category to the category of finite dimensional vector spaces. Morphisms of representations of Q are precisely natural transformations between the corresponding functors.
For a finite quiver Γ (a quiver with finitely many vertices and edges), let K Γ be its path algebra. Let ei denote the trivial path at vertex i. Then we can associate to the vertex i the projective K Γ-module K Γei consisting of linear combinations of paths which have starting vertex i. This corresponds to the representation of Γ obtained by putting a copy of K at each vertex which lies on a path starting at i and 0 on each other vertex. To each edge joining two copies of K we associate the identity map.
This theory was related to cluster algebras by Derksen, Weyman, and Zelevinsky.
== Quiver with relations ==
To enforce commutativity of certain squares inside a quiver, one generalizes to the notion of quivers with relations (also called bound quivers).
A relation on a quiver Q is a K-linear combination of paths from Q.
A quiver with relations is a pair (Q, I) with Q a quiver and {\displaystyle I\subseteq K\Gamma } an ideal of the path algebra. The quotient K Γ / I is the path algebra of (Q, I).
=== Quiver variety ===
Given the dimensions of the vector spaces assigned to every vertex, one can form a variety which characterizes all representations of that quiver with those specified dimensions, and consider stability conditions. These give quiver varieties, as constructed by King (1994).
== Gabriel's theorem ==
A quiver is of finite type if it has only finitely many isomorphism classes of indecomposable representations. Gabriel (1972) classified all quivers of finite type, and also their indecomposable representations. More precisely, Gabriel's theorem states that:
A (connected) quiver is of finite type if and only if its underlying graph (when the directions of the arrows are ignored) is one of the ADE Dynkin diagrams: An, Dn, E6, E7, E8.
The indecomposable representations are in a one-to-one correspondence with the positive roots of the root system of the Dynkin diagram.
Dlab & Ringel (1973) found a generalization of Gabriel's theorem in which all Dynkin diagrams of finite dimensional semisimple Lie algebras occur. This was generalized to all quivers and their corresponding Kac–Moody algebras by Victor Kac.
== See also ==
ADE classification
Adhesive category
Assembly theory
Graph algebra
Group ring
Incidence algebra
Quiver diagram
Semi-invariant of a quiver
Toric variety
Derived noncommutative algebraic geometry - Quivers help encode the data of derived noncommutative schemes
== References ==
=== Books ===
Kirillov, Alexander (2016), Quiver Representations and Quiver Varieties, American Mathematical Society, ISBN 978-1-4704-2307-0
=== Lecture Notes ===
Crawley-Boevey, William, Lectures on Representations of Quivers (PDF), archived from the original on 2017-08-20
Quiver representations in toric geometry
=== Research ===
Projective toric varieties as fine moduli spaces of quiver representations
== Sources ==
Derksen, Harm; Weyman, Jerzy (February 2005), "Quiver Representations" (PDF), Notices of the American Mathematical Society, 52 (2)
Dlab, Vlastimil; Ringel, Claus Michael (1973), On algebras of finite representation type, Carleton Mathematical Lecture Notes, vol. 2, Department of Mathematics, Carleton Univ., Ottawa, Ont., MR 0347907
Crawley-Boevey, William (1992), Notes on Quiver Representations (PDF), Oxford University, archived from the original (PDF) on 2011-07-24, retrieved 2007-02-17
Gabriel, Peter (1972), "Unzerlegbare Darstellungen. I", Manuscripta Mathematica, 6 (1): 71–103, doi:10.1007/BF01298413, ISSN 0025-2611, MR 0332887.
Victor Kac, "Root systems, representations of quivers and invariant theory". Invariant theory (Montecatini, 1982), pp. 74–108, Lecture Notes in Math. 996, Springer-Verlag, Berlin 1983. ISBN 3-540-12319-9
King, Alastair (1994), "Moduli of representations of finite-dimensional algebras", Quart. J. Math., 45 (180): 515–530, doi:10.1093/qmath/45.4.515
Savage, Alistair (2006) [2005], "Finite-dimensional algebras and quivers", in Francoise, J.-P.; Naber, G. L.; Tsou, S.T. (eds.), Encyclopedia of Mathematical Physics, vol. 2, Elsevier, pp. 313–320, arXiv:math/0505082, Bibcode:2005math......5082S
Simson, Daniel; Skowronski, Andrzej; Assem, Ibrahim (2007), Elements of the Representation Theory of Associative Algebras, Cambridge University Press, ISBN 978-0-521-88218-7
Bernšteĭn, I. N.; Gelʹfand, I. M.; Ponomarev, V. A., "Coxeter functors, and Gabriel's theorem" (Russian), Uspekhi Mat. Nauk 28 (1973), no. 2(170), 19–33. Translation on Bernstein's website.
Quiver at the nLab
In mathematics, a separable algebra is a kind of semisimple algebra. It is a generalization to associative algebras of the notion of a separable field extension.
== Definition and first properties ==
A homomorphism of (unital, but not necessarily commutative) rings K → A is called separable if the multiplication map
μ : A ⊗K A → A, a ⊗ b ↦ ab
admits a section σ : A → A ⊗K A that is a homomorphism of A-A-bimodules.
If the ring K is commutative and K → A maps K into the center of A, we call A a separable algebra over K.
It is useful to describe separability in terms of the element
p := σ(1) = ∑ ai ⊗ bi ∈ A ⊗K A.
The reason is that a section σ is determined by this element. The condition that σ is a section of μ is equivalent to
∑ aibi = 1,
and the condition that σ is a homomorphism of A-A-bimodules is equivalent to the following requirement for any a in A:
∑ aai ⊗ bi = ∑ ai ⊗ bia.
Such an element p is called a separability idempotent, since regarded as an element of the algebra A ⊗ Aop it satisfies p² = p.
== Examples ==
For any commutative ring R, the (non-commutative) ring of n-by-n matrices Mn(R) is a separable R-algebra. For any 1 ≤ j ≤ n, a separability idempotent is given by ∑i=1,…,n eij ⊗ eji, where eij denotes the elementary matrix which is 0 except for the (i, j) entry, which is 1. In particular, this shows that separability idempotents need not be unique.
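This idempotent can be checked mechanically for a small n. The sketch below uses an ad hoc encoding (not a standard library): an element of Mn ⊗ Mn is a dict mapping (i, j, k, l) to the coefficient of eij ⊗ ekl, and the two defining conditions, ∑ aibi = 1 and the bimodule condition, are verified for p = ∑i eij ⊗ eji.

```python
from collections import defaultdict
from itertools import product

n, j0 = 3, 1  # illustrative size and column choice (any 0 <= j0 < n works)

# the candidate idempotent  sum_i e_{i,j0} (x) e_{j0,i}
p = {(i, j0, j0, i): 1 for i in range(n)}

def mu(t):
    """Multiplication map mu(x (x) y) = xy, as a dict (i, l) -> coeff."""
    out = defaultdict(int)
    for (i, j, k, l), c in t.items():
        if j == k:                       # e_ij e_kl = delta_{jk} e_il
            out[(i, l)] += c
    return {k: v for k, v in out.items() if v}

identity = {(i, i): 1 for i in range(n)}
assert mu(p) == identity                 # sum_i a_i b_i = 1

def left_act(r, s, t):
    """e_rs · (x (x) y) = (e_rs x) (x) y."""
    out = defaultdict(int)
    for (i, j, k, l), c in t.items():
        if i == s:                       # e_rs e_ij = delta_{si} e_rj
            out[(r, j, k, l)] += c
    return {k: v for k, v in out.items() if v}

def right_act(t, r, s):
    """(x (x) y) · e_rs = x (x) (y e_rs)."""
    out = defaultdict(int)
    for (i, j, k, l), c in t.items():
        if l == r:                       # e_kl e_rs = delta_{lr} e_ks
            out[(i, j, k, s)] += c
    return {k: v for k, v in out.items() if v}

# bimodule condition:  a·p = p·a  for every basis element a = e_rs
assert all(left_act(r, s, p) == right_act(p, r, s)
           for r, s in product(range(n), repeat=2))
```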
=== Separable algebras over a field ===
A field extension L/K of finite degree is a separable extension if and only if L is separable as an associative K-algebra. If L/K has a primitive element a with irreducible polynomial p(x) = (x − a) ∑i=0,…,n−1 bi x^i, then a separability idempotent is given by ∑i=0,…,n−1 a^i ⊗K (bi / p′(a)). The tensorands are dual bases for the trace map: if σ1, …, σn are the distinct K-monomorphisms of L into an algebraic closure of K, the trace mapping Tr of L into K is defined by Tr(x) = ∑i=1,…,n σi(x). The trace map and its dual bases exhibit L explicitly as a Frobenius algebra over K.
More generally, separable algebras over a field K can be classified as follows: they are the same as finite products of matrix algebras over finite-dimensional division algebras whose centers are finite-dimensional separable field extensions of the field K. In particular: Every separable algebra is itself finite-dimensional. If K is a perfect field – for example a field of characteristic zero, or a finite field, or an algebraically closed field – then every extension of K is separable, so that separable K-algebras are finite products of matrix algebras over finite-dimensional division algebras over field K. In other words, if K is a perfect field, there is no difference between a separable algebra over K and a finite-dimensional semisimple algebra over K.
It can be shown by a generalized theorem of Maschke that an associative K-algebra A is separable if for every field extension L/K the algebra A ⊗K L is semisimple.
=== Group rings ===
If K is a commutative ring and G is a finite group such that the order of G is invertible in K, then the group algebra K[G] is a separable K-algebra. A separability idempotent is given by (1/o(G)) ∑g∈G g ⊗ g⁻¹.
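As a sanity check, this idempotent can be verified directly on a small example. The encoding below is illustrative only: G is the cyclic group Z/3 written additively, and an element of K[G] ⊗ K[G] is a dict mapping (g, h) to a rational coefficient.

```python
from fractions import Fraction

N = 3                       # illustrative: the cyclic group Z/3, written additively
G = range(N)
inv = lambda g: (-g) % N

# the candidate idempotent  (1/|G|) sum_g g (x) g^{-1}
p = {(g, inv(g)): Fraction(1, N) for g in G}

# section condition: multiplying the tensorands gives the identity of K[G]
mu = {}
for (g, h), c in p.items():
    k = (g + h) % N         # group law: the product g·g^{-1}
    mu[k] = mu.get(k, 0) + c
assert mu == {0: Fraction(1)}

# bimodule condition: a·p = p·a for every group element a
for a in G:
    left  = {((a + g) % N, h): c for (g, h), c in p.items()}
    right = {(g, (h + a) % N): c for (g, h), c in p.items()}
    assert left == right
```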
== Equivalent characterizations of separability ==
There are several equivalent definitions of separable algebras. A K-algebra A is separable if and only if it is projective when considered as a left module over its enveloping algebra Ae in the usual way. Moreover, an algebra A is separable if and only if it is flat when considered as a right module over Ae in the usual way.
Separable algebras can also be characterized by means of split extensions: A is separable over K if and only if all short exact sequences of A-A-bimodules that are split as A-K-bimodules also split as A-A-bimodules. Indeed, this condition is necessary, since the multiplication mapping μ : A ⊗K A → A arising in the definition above is an A-A-bimodule epimorphism, which is split as an A-K-bimodule map by the right inverse mapping A → A ⊗K A given by a ↦ a ⊗ 1. The converse can be proven by a judicious use of the separability idempotent (similarly to the proof of Maschke's theorem, applying its components within and without the splitting maps).
Equivalently, the relative Hochschild cohomology groups H^n(R, S; M) of (R, S) in any coefficient bimodule M vanish for n > 0. Examples of separable extensions are many, including separable algebras themselves, where R is a separable algebra and S is the ground field embedded via the unit. Any ring R with elements a and b satisfying ab = 1, but ba different from 1, is a separable extension over the subring S generated by 1 and bRa.
== Relation to Frobenius algebras ==
A separable algebra is said to be strongly separable if there exists a separability idempotent that is symmetric, meaning
e = ∑i xi ⊗ yi = ∑i yi ⊗ xi.
An algebra is strongly separable if and only if its trace form is nondegenerate, thus making the algebra into a particular kind of Frobenius algebra called a symmetric algebra (not to be confused with the symmetric algebra arising as the quotient of the tensor algebra).
If K is commutative and A is a separable K-algebra that is finitely generated and projective as a K-module, then A is a symmetric Frobenius algebra.
== Relation to formally unramified and formally étale extensions ==
Any separable extension A / K of commutative rings is formally unramified. The converse holds if A is a finitely generated K-algebra. A separable flat (commutative) K-algebra A is formally étale.
== Further results ==
A theorem in the area is that of J. Cuadra that a separable Hopf–Galois extension R | S has finitely generated natural S-module R. A fundamental fact about a separable extension R | S is that it is left or right semisimple extension: a short exact sequence of left or right R-modules that is split as S-modules, is split as R-modules. In terms of G. Hochschild's relative homological algebra, one says that all R-modules are relative (R, S)-projective. Usually relative properties of subrings or ring extensions, such as the notion of separable extension, serve to promote theorems that say that the over-ring shares a property of the subring. For example, a separable extension R of a semisimple algebra S has R semisimple, which follows from the preceding discussion.
There is the celebrated Jans theorem that a finite group algebra A over a field of characteristic p is of finite representation type if and only if its Sylow p-subgroup is cyclic: the clearest proof is to note this fact for p-groups, then note that the group algebra is a separable extension of its Sylow p-subgroup algebra B as the index is coprime to the characteristic. The separability condition above will imply every finitely generated A-module M is isomorphic to a direct summand in its restricted, induced module. But if B has finite representation type, the restricted module is uniquely a direct sum of multiples of finitely many indecomposables, which induce to a finite number of constituent indecomposable modules of which M is a direct sum. Hence A is of finite representation type if B is. The converse is proven by a similar argument noting that every subgroup algebra B is a B-bimodule direct summand of a group algebra A.
== Citations ==
== References ==
DeMeyer, F.; Ingraham, E. (1971). Separable algebras over commutative rings. Lecture Notes in Mathematics. Vol. 181. Berlin-Heidelberg-New York: Springer-Verlag. ISBN 978-3-540-05371-2. Zbl 0215.36602.
Samuel Eilenberg and Tadasi Nakayama, On the dimension of modules and algebras. II. Frobenius algebras and quasi-Frobenius rings, Nagoya Math. J. Volume 9 (1955), 1–16.
Endo, Shizuo; Watanabe, Yutaka (1967), "On separable algebras over a commutative ring", Osaka Journal of Mathematics, 4: 233–242, MR 0227211
Ford, Timothy J. (2017), Separable algebras, Providence, RI: American Mathematical Society, ISBN 978-1-4704-3770-1, MR 3618889
Hirata, H.; Sugano, K. (1966), "On semisimple and separable extensions of noncommutative rings", J. Math. Soc. Jpn., 18: 360–373
Kadison, Lars (1999), New examples of Frobenius extensions, University Lecture Series, vol. 14, Providence, RI: American Mathematical Society, doi:10.1090/ulect/014, ISBN 0-8218-1962-3, MR 1690111
Reiner, I. (2003), Maximal Orders, London Mathematical Society Monographs. New Series, vol. 28, Oxford University Press, ISBN 0-19-852673-3, Zbl 1024.16008
Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
In mathematics, the tensor product of two algebras over a commutative ring R is also an R-algebra. This gives the tensor product of algebras. When the ring is a field, the most common application of such products is to describe the product of algebra representations.
== Definition ==
Let R be a commutative ring and let A and B be R-algebras. Since A and B may both be regarded as R-modules, their tensor product A ⊗R B is also an R-module. The tensor product can be given the structure of a ring by defining the product on elements of the form a ⊗ b by
(a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2
and then extending by linearity to all of A ⊗R B. This ring is an R-algebra, associative and unital, with identity element 1A ⊗ 1B, where 1A and 1B are the identity elements of A and B. If A and B are commutative, then the tensor product is commutative as well.
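Over a field, this multiplication rule is exactly the mixed-product property of the Kronecker product, which realizes the tensor product of two matrix algebras concretely inside a larger matrix algebra. A minimal sketch (the helper names are ad hoc):

```python
def matmul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker product: realizes a (x) b as a concrete matrix."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(n * q)] for i in range(m * p)]

a1, b1 = [[1, 2], [3, 4]], [[0, 1], [1, 0]]
a2, b2 = [[2, 0], [1, 1]], [[1, 1], [0, 1]]

# (a1 (x) b1)(a2 (x) b2) = (a1 a2) (x) (b1 b2)
assert matmul(kron(a1, b1), kron(a2, b2)) == kron(matmul(a1, a2), matmul(b1, b2))
```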
The tensor product turns the category of R-algebras into a symmetric monoidal category.
== Further properties ==
There are natural homomorphisms from A and B to A ⊗R B given by
a ↦ a ⊗ 1B
b ↦ 1A ⊗ b
These maps make the tensor product the coproduct in the category of commutative R-algebras. The tensor product is not the coproduct in the category of all R-algebras. There the coproduct is given by a more general free product of algebras. Nevertheless, the tensor product of non-commutative algebras can be described by a universal property similar to that of the coproduct:
Hom(A ⊗ B, X) ≅ {(f, g) ∈ Hom(A, X) × Hom(B, X) ∣ ∀a ∈ A, b ∈ B : [f(a), g(b)] = 0},
where [-, -] denotes the commutator.
The natural isomorphism is given by identifying a morphism φ : A ⊗ B → X on the left-hand side with the pair of morphisms (f, g) on the right-hand side, where f(a) := φ(a ⊗ 1) and similarly g(b) := φ(1 ⊗ b).
== Applications ==
The tensor product of commutative algebras is of frequent use in algebraic geometry. For affine schemes X, Y, Z with morphisms from X and Z to Y, so X = Spec(A), Y = Spec(R), and Z = Spec(B) for some commutative rings A, R, B, the fiber product scheme is the affine scheme corresponding to the tensor product of algebras:
X ×Y Z = Spec(A ⊗R B).
More generally, the fiber product of schemes is defined by gluing together affine fiber products of this form.
== Examples ==
The tensor product can be used as a means of taking intersections of two subschemes in a scheme: consider the C[x, y]-algebras C[x, y]/(f) and C[x, y]/(g); their tensor product is
C[x, y]/(f) ⊗C[x, y] C[x, y]/(g) ≅ C[x, y]/(f, g),
which describes the intersection of the algebraic curves f = 0 and g = 0 in the affine plane over C.
More generally, if A is a commutative ring and I, J ⊆ A are ideals, then
(A/I) ⊗A (A/J) ≅ A/(I + J),
with a unique isomorphism sending (a + I) ⊗ (b + J) to ab + I + J.
Tensor products can be used as a means of changing coefficients. For example,
Z[x, y]/(x³ + 5x² + x − 1) ⊗Z Z/5 ≅ (Z/5)[x, y]/(x³ + x − 1)
and
Z[x, y]/(f) ⊗Z C ≅ C[x, y]/(f).
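The first isomorphism simply reduces the integer coefficients modulo 5: the term 5x² vanishes and −1 becomes 4 ≡ −1. A small sketch, with polynomials encoded as exponent-to-coefficient dicts (an ad hoc representation, not a standard API):

```python
def reduce_mod(poly, n):
    """Apply  - (x)_Z Z/n  to a polynomial with integer coefficients:
    reduce each coefficient modulo n and drop terms that vanish."""
    return {e: c % n for e, c in poly.items() if c % n}

# x^3 + 5x^2 + x - 1, stored as exponent -> coefficient
f = {3: 1, 2: 5, 1: 1, 0: -1}

# modulo 5: the x^2 term disappears and -1 is represented by 4
assert reduce_mod(f, 5) == {3: 1, 1: 1, 0: 4}
```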
Tensor products also can be used for taking products of affine schemes over a field. For example,
C[x1, x2]/(f(x)) ⊗C C[y1, y2]/(g(y))
is isomorphic to the algebra
C[x1, x2, y1, y2]/(f(x), g(y)),
which corresponds to an affine surface in the affine space A⁴ over C if f and g are not zero.
Given R-algebras A and B whose underlying rings are graded-commutative rings, the tensor product A ⊗R B becomes a graded-commutative ring by defining
(a ⊗ b)(a′ ⊗ b′) = (−1)^(|b||a′|) aa′ ⊗ bb′
for homogeneous a, a′, b, and b′.
== See also ==
Extension of scalars
Tensor product of modules
Tensor product of fields
Linearly disjoint
Multilinear subspace learning
== Notes ==
== References ==
Kassel, Christian (1995), Quantum groups, Graduate texts in mathematics, vol. 155, Springer, ISBN 978-0-387-94370-1.
Lang, Serge (2002) [first published in 1993]. Algebra. Graduate Texts in Mathematics. Vol. 21. Springer. ISBN 0-387-95385-X.
In mathematics, a subring of a ring R is a subset of R that is itself a ring when the binary operations of addition and multiplication on R are restricted to the subset, and that shares the same multiplicative identity as R.
== Definition ==
A subring of a ring (R, +, *, 0, 1) is a subset S of R that preserves the structure of the ring, i.e. a ring (S, +, *, 0, 1) with S ⊆ R. Equivalently, it is both a subgroup of (R, +, 0) and a submonoid of (R, *, 1).
Equivalently, S is a subring if and only if it contains the multiplicative identity of R, and is closed under multiplication and subtraction. This is sometimes known as the subring test.
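The subring test is easy to apply mechanically to finite examples. A sketch (the helper and the Z/6 encoding are illustrative):

```python
def is_subring(S, sub, mul, one):
    """Subring test: S contains the multiplicative identity and is
    closed under subtraction and multiplication."""
    S = set(S)
    return (one in S
            and all(sub(x, y) in S for x in S for y in S)
            and all(mul(x, y) in S for x in S for y in S))

# the ring Z/6, with its operations passed in explicitly
n = 6
sub = lambda x, y: (x - y) % n
mul = lambda x, y: (x * y) % n

assert is_subring(range(6), sub, mul, 1)        # the whole ring Z/6
assert not is_subring({0, 2, 4}, sub, mul, 1)   # an ideal: closed, but misses 1
assert not is_subring({0, 1, 2}, sub, mul, 1)   # contains 1, but not closed
```

The middle example, the ideal {0, 2, 4} of Z/6, is closed under subtraction and multiplication yet fails the test because it lacks the identity, illustrating the distinction drawn in the Variations section below.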
=== Variations ===
Some mathematicians define rings without requiring the existence of a multiplicative identity (see Ring (mathematics) § History). In this case, a subring of R is a subset of R that is a ring for the operations of R (this does imply it contains the additive identity of R). This alternate definition gives a strictly weaker condition, even for rings that do have a multiplicative identity, in that all ideals become subrings, and they may have a multiplicative identity that differs from the one of R. With the definition requiring a multiplicative identity, which is used in the rest of this article, the only ideal of R that is a subring of R is R itself.
== Examples ==
The ring of integers Z is a subring of both the field of real numbers and the polynomial ring Z[X].
Z and its quotients Z/nZ have no subrings (with multiplicative identity) other than the full ring.
Every ring has a unique smallest subring, isomorphic to some ring Z/nZ with n a nonnegative integer (see Characteristic). The integers Z correspond to n = 0 in this statement, since Z is isomorphic to Z/0Z.
The center of a ring R is a subring of R, and R is an associative algebra over its center.
== Subring generated by a set ==
A special kind of subring of a ring R is the subring generated by a subset X, which is defined as the intersection of all subrings of R containing X. The subring generated by X is also the set of all linear combinations with integer coefficients of products of elements of X, including the additive identity ("empty combination") and multiplicative identity ("empty product").
Any intersection of subrings of R is itself a subring of R; therefore, the subring generated by X (denoted here as S) is indeed a subring of R. This subring S is the smallest subring of R containing X; that is, if T is any other subring of R containing X, then S ⊆ T.
Since R itself is a subring of R, if R is generated by X, it is said that the ring R is generated by X.
== Ring extension ==
Subrings generalize some aspects of field extensions. If S is a subring of a ring R, then equivalently R is said to be a ring extension of S.
=== Adjoining ===
If A is a ring and T is the subring of A generated by R ∪ S, where R is a subring, then T is a ring extension of R and is said to be S adjoined to R, denoted R[S]. Individual elements can also be adjoined to a subring, denoted R[a1, a2, ..., an].
For example, the ring of Gaussian integers Z[i] is a subring of C generated by Z ∪ {i}, and thus is the adjunction of the imaginary unit i to Z.
=== Prime subring ===
The intersection of all subrings of a ring R is a subring that may be called the prime subring of R by analogy with prime fields.
The prime subring of a ring R is a subring of the center of R, which is isomorphic either to the ring Z of the integers or to the ring of the integers modulo n, where n is the smallest positive integer such that the sum of n copies of 1 equals 0.
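Concretely, the prime subring consists of the multiples of the identity, and the n above is the characteristic. The sketch below computes it for the matrix ring M2(Z/6), using an ad hoc flat-tuple encoding of 2×2 matrices:

```python
def prime_subring(one, add, zero):
    """The multiples of 1 inside a finite ring: its prime subring,
    isomorphic to Z/n where n is the characteristic."""
    seen, x = [], zero
    while x not in seen:
        seen.append(x)
        x = add(x, one)
    return seen

# illustrative finite ring: 2x2 matrices over Z/6, as flat tuples (a, b, c, d)
n = 6
zero, one = (0, 0, 0, 0), (1, 0, 0, 1)
add = lambda A, B: tuple((a + b) % n for a, b in zip(A, B))

P = prime_subring(one, add, zero)
assert len(P) == 6                                          # characteristic 6
assert all(A == (a, 0, 0, a) for A, a in zip(P, range(6)))  # scalar matrices: central
```

The second assertion checks that the prime subring lands in the center, as stated: every multiple of the identity is a scalar matrix.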
== See also ==
Integral extension
Group extension
Algebraic extension
Ore extension
== Notes ==
== References ==
=== General references ===
Adamson, Iain T. (1972). Elementary rings and modules. University Mathematical Texts. Oliver and Boyd. pp. 14–16. ISBN 0-05-002192-3.
Sharpe, David (1987). Rings and factorization. Cambridge University Press. pp. 15–17. ISBN 0-521-33718-6.
In algebraic geometry, a sheaf of algebras on a ringed space X is a sheaf of commutative rings on X that is also a sheaf of OX-modules. It is quasi-coherent if it is so as a module.
When X is a scheme, just like a ring, one can take the global Spec of a quasi-coherent sheaf of algebras: this results in the contravariant functor SpecX from the category of quasi-coherent (sheaves of) OX-algebras on X to the category of schemes that are affine over X (defined below). Moreover, it is an equivalence: the quasi-inverse is given by sending an affine morphism f : Y → X to f∗OY.
== Affine morphism ==
A morphism of schemes f : X → Y is called affine if Y has an open affine cover by sets Ui such that each f⁻¹(Ui) is affine. For example, a finite morphism is affine. An affine morphism is quasi-compact and separated; in particular, the direct image of a quasi-coherent sheaf along an affine morphism is quasi-coherent.
The base change of an affine morphism is affine.
Let f : X → Y be an affine morphism between schemes and E a locally ringed space together with a map g : E → Y. Then the natural map between the sets
MorY(E, X) → HomOY-alg(f∗OX, g∗OE)
is bijective.
== Examples ==
Let f : X̃ → X be the normalization of an algebraic variety X. Then, since f is finite, f∗OX̃ is quasi-coherent and SpecX(f∗OX̃) = X̃.
Let E be a locally free sheaf of finite rank on a scheme X. Then Sym(E∗) is a quasi-coherent OX-algebra and SpecX(Sym(E∗)) → X is the associated vector bundle over X (called the total space of E).
More generally, if F is a coherent sheaf on X, then one still has SpecX(Sym(F)) → X, usually called the abelian hull of F; see Cone (algebraic geometry) § Examples.
== The formation of direct images ==
Given a ringed space S, there is the category CS of pairs (f, M) consisting of a ringed space morphism f : X → S and an OX-module M. Then the formation of direct images determines the contravariant functor from CS to the category of pairs consisting of an OS-algebra A and an A-module M that sends each pair (f, M) to the pair (f∗OX, f∗M).
Now assume S is a scheme and let AffS ⊂ CS be the subcategory consisting of pairs (f : X → S, M) such that f is an affine morphism between schemes and M a quasi-coherent sheaf on X. Then the above functor determines the equivalence between AffS and the category of pairs (A, M) consisting of an OS-algebra A and a quasi-coherent A-module M.
The above equivalence can be used (among other things) to do the following construction. As before, given a scheme S, let A be a quasi-coherent OS-algebra and take its global Spec: f : X = SpecS(A) → S. Then, for each quasi-coherent A-module M, there is a corresponding quasi-coherent OX-module M̃ such that f∗M̃ ≃ M, called the sheaf associated to M. Put another way, f∗ determines an equivalence between the category of quasi-coherent OX-modules and the category of quasi-coherent A-modules.
== See also ==
quasi-affine morphism
Serre's theorem on affineness
== References ==
Grothendieck, Alexandre; Dieudonné, Jean (1971). Éléments de géométrie algébrique: I. Le langage des schémas. Grundlehren der Mathematischen Wissenschaften (in French). Vol. 166 (2nd ed.). Berlin; New York: Springer-Verlag. ISBN 978-3-540-05113-8.
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
== External links ==
https://ncatlab.org/nlab/show/affine+morphism
In abstract algebra, a quasi-free algebra is an associative algebra that satisfies the lifting property similar to that of a formally smooth algebra in commutative algebra. The notion was introduced by Cuntz and Quillen for the applications to cyclic homology. A quasi-free algebra generalizes a free algebra, as well as the coordinate ring of a smooth affine complex curve. Because of the latter generalization, a quasi-free algebra can be thought of as signifying smoothness on a noncommutative space.
== Definition ==
Let A be an associative algebra over the complex numbers. Then A is said to be quasi-free if the following equivalent conditions are met:
Given a square-zero extension R → R/I, each homomorphism A → R/I lifts to A → R.
The cohomological dimension of A with respect to Hochschild cohomology is at most one.
Let (ΩA, d) denote the differential envelope of A; i.e., the universal differential-graded algebra generated by A. Then A is quasi-free if and only if Ω¹A is projective as a bimodule over A.
There is also a characterization in terms of a connection. Given an A-bimodule E, a right connection on E is a linear map
∇r : E → E ⊗A Ω¹A
that satisfies ∇r(as) = a∇r(s) and ∇r(sa) = ∇r(s)a + s ⊗ da. A left connection is defined in a similar way. Then A is quasi-free if and only if Ω¹A admits a right connection.
== Properties and examples ==
One of the basic properties of a quasi-free algebra is that it is left and right hereditary (i.e., a submodule of a projective left or right module is projective, or equivalently the left or right global dimension is at most one). This puts a strong restriction on which algebras can be quasi-free. For example, a hereditary (commutative) integral domain is precisely a Dedekind domain. In particular, a polynomial ring over a field is quasi-free if and only if the number of variables is at most one.
An analog of the tubular neighborhood theorem, called the formal tubular neighborhood theorem, holds for quasi-free algebras.
== References ==
=== Bibliography ===
Cuntz, Joachim (June 2013). "Quillen's work on the foundations of cyclic cohomology". Journal of K-Theory. 11 (3): 559–574. arXiv:1202.5958. doi:10.1017/is012011006jkt201. ISSN 1865-2433.
Cuntz, Joachim; Quillen, Daniel (1995). "Algebra Extensions and Nonsingularity". Journal of the American Mathematical Society. 8 (2): 251–289. doi:10.2307/2152819. ISSN 0894-0347.
Kontsevich, Maxim; Rosenberg, Alexander L. (2000). "Noncommutative Smooth Spaces". The Gelfand Mathematical Seminars, 1996–1999. Birkhäuser: 85–108. arXiv:math/9812158. doi:10.1007/978-1-4612-1340-6_5.
Maxim Kontsevich, Alexander Rosenberg, Noncommutative spaces, preprint MPI-2004-35
Vale, R. (2009). "notes on quasi-free algebras" (PDF).
== Further reading ==
https://ncatlab.org/nlab/show/quasi-free+algebra
In category theory, a branch of mathematics, a monoid (or monoid object, or internal monoid, or algebra) (M, μ, η) in a monoidal category (C, ⊗, I) is an object M together with two morphisms
μ: M ⊗ M → M called multiplication,
η: I → M called unit,
such that the pentagon diagram
and the unitor diagram
commute. In the above notation, 1 is the identity morphism of M, I is the unit element and α, λ and ρ are respectively the associativity, the left identity and the right identity of the monoidal category C.
Dually, a comonoid in a monoidal category C is a monoid in the dual category Cop.
Suppose that the monoidal category C has a braiding γ. A monoid M in C is commutative when μ ∘ γ = μ.
== Examples ==
A monoid object in Set, the category of sets (with the monoidal structure induced by the Cartesian product), is a monoid in the usual sense.
A monoid object in Top, the category of topological spaces (with the monoidal structure induced by the product topology), is a topological monoid.
A monoid object in the category of monoids (with the direct product of monoids) is just a commutative monoid. This follows easily from the Eckmann–Hilton argument.
A monoid object in the category of complete join-semilattices Sup (with the monoidal structure induced by the Cartesian product) is a unital quantale.
A monoid object in (Ab, ⊗Z, Z), the category of abelian groups, is a ring.
For a commutative ring R, a monoid object in
(R-Mod, ⊗R, R), the category of modules over R, is an R-algebra.
the category of graded modules is a graded R-algebra.
the category of chain complexes of R-modules is a differential graded algebra.
A monoid object in K-Vect, the category of K-vector spaces (again, with the tensor product), is a unital associative K-algebra, and a comonoid object is a K-coalgebra.
For any category C, the category [C, C] of its endofunctors has a monoidal structure induced by the composition and the identity functor IC. A monoid object in [C, C] is a monad on C.
For any category with a terminal object and finite products, every object becomes a comonoid object via the diagonal morphism ΔX : X → X × X. Dually, in a category with an initial object and finite coproducts, every object becomes a monoid object via the fold (codiagonal) morphism [idX, idX] : X ⊔ X → X.
== Categories of monoids ==
Given two monoids (M, μ, η) and (M′, μ′, η′) in a monoidal category C, a morphism f : M → M′ is a morphism of monoids when
f ∘ μ = μ′ ∘ (f ⊗ f),
f ∘ η = η′.
In other words, the corresponding diagrams commute.
The category of monoids in C and their monoid morphisms is written MonC.
== See also ==
Act-S, the category of monoids acting on sets
== References ==
Kilp, Mati; Knauer, Ulrich; Mikhalov, Alexander V. (2000). Monoids, Acts and Categories. Walter de Gruyter. ISBN 3-11-015248-7.
In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of the ring homomorphism of an element of K). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication.
A commutative algebra is an associative algebra for which the multiplication is commutative, or, equivalently, an associative algebra that is also a commutative ring.
In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital.
Every ring is an associative algebra over its center and over the integers.
== Definition ==
Let R be a commutative ring (so R could be a field). An associative R-algebra A (or more simply, an R-algebra A) is a ring A
that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies
r ⋅ (xy) = (r ⋅ x)y = x(r ⋅ y)
for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.)
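As a sanity check, the compatibility r ⋅ (xy) = (r ⋅ x)y = x(r ⋅ y) can be verified numerically for the standard first example, square matrices over a commutative ring (a small sketch assuming NumPy is available; the concrete values are arbitrary):

```python
import numpy as np

# Square matrices over the reals form an R-algebra: scalar multiplication is
# compatible with the ring (matrix) multiplication, r.(xy) = (r.x)y = x(r.y).
rng = np.random.default_rng(0)
r = 2.5
x = rng.integers(-3, 4, size=(2, 2)).astype(float)
y = rng.integers(-3, 4, size=(2, 2)).astype(float)

assert np.allclose(r * (x @ y), (r * x) @ y)
assert np.allclose(r * (x @ y), x @ (r * y))
```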
Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is (r, x) ↦ f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by r ↦ r ⋅ 1A. (See also § From ring homomorphisms below).
Every ring is an associative Z-algebra, where Z denotes the ring of the integers.
A commutative algebra is an associative algebra that is also a commutative ring.
=== As a monoid object in the category of modules ===
The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules.
Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map) corresponds to a unique R-linear map
m : A ⊗R A → A.
The associativity then refers to the identity:
m ∘ (id ⊗ m) = m ∘ (m ⊗ id).
=== From ring homomorphisms ===
An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism η : R → A whose image lies in the center of A, we can make A an R-algebra by defining
r ⋅ x = η(r)x
for all r ∈ R and x ∈ A. If A is an R-algebra, taking x = 1, the same formula in turn defines a ring homomorphism η : R → A whose image lies in the center.
If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism η : R → A.
The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms R → A for a fixed R (i.e., commutative R-algebras) and whose morphisms are ring homomorphisms A → A′ under R, i.e., such that the composite R → A → A′ equals R → A′ (this is the coslice category of the category of commutative rings under R). The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R.
How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: Generic matrix ring.
== Algebra homomorphisms ==
A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, φ : A1 → A2 is an associative algebra homomorphism if
φ(r ⋅ x) = r ⋅ φ(x)
φ(x + y) = φ(x) + φ(y)
φ(xy) = φ(x)φ(y)
φ(1) = 1
The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg.
The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings.
== Examples ==
The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics.
=== Algebra ===
Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent.
Any ring of characteristic n is a (Z/nZ)-algebra in the same way.
Given an R-module M, the endomorphism ring of M, denoted EndR(M) is an R-algebra by defining (r·φ)(x) = r·φ(x).
Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module.
In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K.
The complex numbers form a 2-dimensional commutative algebra over the real numbers.
The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions).
Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}.
The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E.
The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure).
Given a module M over a commutative ring R, the direct sum of modules R ⊕ M has the structure of an R-algebra obtained by regarding the elements of M as infinitesimals; i.e., the multiplication is given by (a + x)(b + y) = ab + ay + bx (the product of two elements of M vanishes). This construction is sometimes called the algebra of dual numbers.
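A minimal sketch of this construction in Python, for R = M = the reals (the class name `Dual` and its fields are illustrative, not from any library):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """An element a + x of R ⊕ M, with the elements of M treated as infinitesimals."""
    a: float  # component in R
    x: float  # infinitesimal component in M

    def __add__(self, other):
        return Dual(self.a + other.a, self.x + other.x)

    def __mul__(self, other):
        # (a + x)(b + y) = ab + ay + bx: the product of two infinitesimals vanishes
        return Dual(self.a * other.a, self.a * other.x + self.x * other.a)

eps = Dual(0.0, 1.0)
assert eps * eps == Dual(0.0, 0.0)  # the infinitesimal squares to zero
# Consequence: (3 + eps)^2 = 9 + 6*eps, the germ of forward-mode differentiation
assert Dual(3.0, 1.0) * Dual(3.0, 1.0) == Dual(9.0, 6.0)
```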
A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field.
=== Representation theory ===
The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra.
If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups.
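For a finite group the group algebra is concrete enough to compute with. A small sketch for G = Z/3Z and R = Z, storing a function G → R as the list of its three values (all names are illustrative):

```python
n = 3  # the cyclic group G = Z/3Z

def conv(f, g):
    # (f * g)(k) = sum over i + j ≡ k (mod n) of f(i) g(j)
    return [sum(f[i] * g[(k - i) % n] for i in range(n)) for k in range(n)]

def delta(j):
    # indicator function of the group element j; these form the standard basis
    return [1 if i == j else 0 for i in range(n)]

# Convolution of basis elements mirrors the group law: delta_1 * delta_2 = delta_0
assert conv(delta(1), delta(2)) == delta(0)
# delta_0, the indicator of the identity element, is the unit of the algebra
f = [2, -1, 5]
assert conv(delta(0), f) == f
```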
If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A.
A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph.
=== Analysis ===
Given any Banach space X, the continuous linear operators A : X → X form an associative algebra (using composition of operators as multiplication); this is a Banach algebra.
Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise.
The set of semimartingales defined on the filtered probability space (Ω, F, (Ft)t≥0, P) forms a ring under stochastic integration.
The Weyl algebra
An Azumaya algebra
=== Geometry and combinatorics ===
The Clifford algebras, which are useful in geometry and physics.
Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics.
The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra.
A differential graded algebra is an associative algebra together with a grading and a differential. For example, the de Rham algebra Ω(M) = ⨁p=0..n Ωp(M), where Ωp(M) consists of differential p-forms on a manifold M, is a differential graded algebra.
=== Mathematical physics ===
A Poisson algebra is a commutative associative algebra over a field together with a structure of a Lie algebra so that the Lie bracket {,} satisfies the Leibniz rule; i.e., {fg, h} = f{g, h} + g{f, h}.
Given a Poisson algebra 𝔞, consider the vector space 𝔞[[u]] of formal power series over 𝔞. If 𝔞[[u]] has a structure of an associative algebra with multiplication ∗ such that, for f, g ∈ 𝔞,
f ∗ g = fg − (1/2){f, g}u + ⋯,
then 𝔞[[u]] is called a deformation quantization of 𝔞.
A quantized enveloping algebra. The dual of such an algebra turns out to be an associative algebra (see § Dual of an associative algebra) and is, philosophically speaking, the (quantized) coordinate ring of a quantum group.
Gerstenhaber algebra
== Constructions ==
Subalgebras
A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A.
Quotient algebras
Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since r · x = (r1A)x. This gives the quotient ring A / I the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra.
Direct products
The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication.
Free products
One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras.
Tensor products
The tensor product of two R-algebras is also an R-algebra in a natural way. See tensor product of algebras for more details. Given a commutative ring R and any ring A the tensor product R ⊗Z A can be given the structure of an R-algebra by defining r · (s ⊗ a) = (rs ⊗ a). The functor which sends A to R ⊗Z A is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings.
Free algebra
A free algebra is an algebra generated by symbols. If one imposes commutativity; i.e., take the quotient by commutators, then one gets a polynomial algebra.
== Dual of an associative algebra ==
Let A be an associative algebra over a commutative ring R. Since A is in particular a module, we can take the dual module A* of A. A priori, the dual A* need not have a structure of an associative algebra. However, A may come with an extra structure (namely, that of a Hopf algebra) so that the dual is also an associative algebra.
For example, take A to be the ring of continuous functions on a compact group G. Then, not only is A an associative algebra, but it also comes with the co-multiplication Δ(f)(g, h) = f(gh) and co-unit ε(f) = f(1). The "co-" refers to the fact that they satisfy the dual of the usual multiplication and unit axioms of an algebra. Hence, the dual A* is an associative algebra. The co-multiplication and co-unit are also important in order to form a tensor product of representations of associative algebras (see § Representations below).
== Enveloping algebra ==
Given an associative algebra A over a commutative ring R, the enveloping algebra Ae of A is the algebra A ⊗R Aop or Aop ⊗R A, depending on authors.
Note that a bimodule over A is exactly a left module over Ae.
== Separable algebra ==
Let A be an algebra over a commutative ring R. Then the algebra A is a right module over Ae := Aop ⊗R A with the action x ⋅ (a ⊗ b) = axb. Then, by definition, A is said to be separable if the multiplication map A ⊗R A → A : x ⊗ y ↦ xy splits as an Ae-linear map, where A ⊗ A is an Ae-module by (x ⊗ y) ⋅ (a ⊗ b) = ax ⊗ yb. Equivalently,
A is separable if it is a projective module over Ae; thus, the Ae-projective dimension of A, sometimes called the bidimension of A, measures the failure of separability.
== Finite-dimensional algebra ==
Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring.
=== Commutative case ===
As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field, and thus the following are equivalent:
A is separable.
A ⊗ k̄ is reduced, where k̄ is some algebraic closure of k.
A ⊗ k̄ = k̄ⁿ for some n.
dimk A is the number of k-algebra homomorphisms A → k̄.
Let Γ = Gal(ks/k) = lim← Gal(k′/k), the profinite Galois group of the finite Galois extensions of k. Then
A ↦ XA = {k-algebra homomorphisms A → ks}
is an anti-equivalence of the category of finite-dimensional separable k-algebras to the category of finite sets with continuous Γ-actions.
=== Noncommutative case ===
Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., A = Mn(D). More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), a fact known as the Artin–Wedderburn theorem.
The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals.)
The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of A / I as a module over the enveloping algebra (A / I)e is at most one, then the natural surjection p : A → A / I splits; i.e., A contains a subalgebra B such that p|B : B ~→ A / I is an isomorphism. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras.
== Lattices and orders ==
Let R be a Noetherian integral domain with field of fractions K (for example, R = Z and K = Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, L ⊗R K = V.
Let AK be a finite-dimensional K-algebra. An order in AK is an R-subalgebra that is a lattice. In general, there are far fewer orders than lattices; e.g., (1/2)Z is a lattice in Q but not an order (since it is not an algebra).
A maximal order is an order that is maximal among all the orders.
== Related concepts ==
=== Coalgebras ===
An associative algebra over K is given by a K-vector space A endowed with a bilinear map A × A → A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K → A identifying the scalar multiples of the multiplicative identity. If the bilinear map A × A → A is reinterpreted as a linear map (i.e., morphism in the category of K-vector spaces) A ⊗ A → A (by the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form A ⊗ A → A and one of the form K → A) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra.
There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above.
== Representations ==
A representation of an algebra A is an algebra homomorphism ρ : A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V).
If A and B are two algebras, and ρ : A → End(V) and τ : B → End(W) are two representations, then there is a (canonical) representation A ⊗ B → End(V ⊗ W) of the tensor product algebra A ⊗ B on the vector space V ⊗ W. However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below.
=== Motivation for a Hopf algebra ===
Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that
ρ(x)(v ⊗ w) = (σ(x)(v)) ⊗ (τ(x)(w)).
However, such a map would not be linear, since one would have
ρ(kx) = σ(kx) ⊗ τ(kx) = kσ(x) ⊗ kτ(x) = k²(σ(x) ⊗ τ(x)) = k²ρ(x)
for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ : A → A ⊗ A, and defining the tensor product representation as
ρ = (σ ⊗ τ) ∘ Δ.
Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups).
=== Motivation for a Lie algebra ===
One can try to be more clever in defining a tensor product. Consider, for example,
x ↦ ρ(x) = σ(x) ⊗ IdW + IdV ⊗ τ(x)
so that the action on the tensor product space is given by
ρ(x)(v ⊗ w) = (σ(x)v) ⊗ w + v ⊗ (τ(x)w).
This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication:
ρ(xy) = σ(x)σ(y) ⊗ IdW + IdV ⊗ τ(x)τ(y).
But, in general, this does not equal
ρ(x)ρ(y) = σ(x)σ(y) ⊗ IdW + σ(x) ⊗ τ(y) + σ(y) ⊗ τ(x) + IdV ⊗ τ(x)τ(y).
This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra.
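The cancellation can be checked numerically: for arbitrary matrices, the tensor-sum construction x ↦ σ(x) ⊗ Id + Id ⊗ τ(x) fails to preserve products but does preserve commutators, which is why it defines a representation of a Lie algebra. A sketch assuming NumPy is available (np.kron plays the role of ⊗; the matrices stand in for σ(x), σ(y), τ(x), τ(y)):

```python
import numpy as np

def comm(p, q):
    # the commutator (Lie bracket) of two matrices
    return p @ q - q @ p

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 3, 3))  # sigma(x), sigma(y) acting on V
C, D = rng.standard_normal((2, 4, 4))  # tau(x), tau(y) acting on W
Iv, Iw = np.eye(3), np.eye(4)

def rho(s, t):
    # the tensor-sum construction: sigma(x) ⊗ Id_W + Id_V ⊗ tau(x)
    return np.kron(s, Iw) + np.kron(Iv, t)

# Products are not preserved, but commutators are: the middle terms cancel.
assert np.allclose(comm(rho(A, C), rho(B, D)), rho(comm(A, B), comm(C, D)))
```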
== Non-unital algebras ==
Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital.
One example of a non-unital associative algebra is given by the set of all functions f : R → R whose limit as x nears infinity is zero.
Another example is the vector space of continuous periodic functions, together with the convolution product.
== See also ==
Abstract algebra
Algebraic structure
Algebra over a field
Sheaf of algebras, a sort of an algebra over a ringed space
Deligne's conjecture on Hochschild cohomology
== Notes ==
== Citations ==
== References ==
In mathematics, a Brauer algebra is an associative algebra introduced by Richard Brauer in the context of the representation theory of the orthogonal group. It plays the same role that the symmetric group does for the representation theory of the general linear group in Schur–Weyl duality.
== Structure ==
The Brauer algebra 𝔅n(δ) is a Z[δ]-algebra depending on the choice of a positive integer n. Here δ is an indeterminate, but in practice δ is often specialised to the dimension of the fundamental representation of an orthogonal group O(δ). The Brauer algebra has the dimension
dim 𝔅n(δ) = (2n)! / (2ⁿ n!) = (2n − 1)!! = (2n − 1)(2n − 3) ⋯ 5 · 3 · 1
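The equality of the two counting formulas (perfect matchings of 2n points versus the double factorial) can be checked directly; a small sketch in Python (function names are illustrative):

```python
from math import factorial

def brauer_dim(n):
    # (2n)! / (2^n n!): the number of perfect matchings of 2n points
    return factorial(2 * n) // (2 ** n * factorial(n))

def double_factorial(m):
    # m!! = m (m-2) (m-4) ... down to 1 (or 2), with (-1)!! = 1 by convention
    return 1 if m <= 0 else m * double_factorial(m - 2)

for n in range(1, 8):
    assert brauer_dim(n) == double_factorial(2 * n - 1)

print([brauer_dim(n) for n in range(1, 5)])  # [1, 3, 15, 105]
```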
=== Diagrammatic definition ===
A basis of 𝔅n(δ) consists of all pairings on a set of 2n elements X1, ..., Xn, Y1, ..., Yn (that is, all perfect matchings of a complete graph K2n: any two of the 2n elements may be matched to each other, regardless of their symbols). The elements Xi are usually written in a row, with the elements Yi beneath them.
The product of two basis elements A and B is obtained by concatenation: first identifying the endpoints in the bottom row of A and the top row of B (Figure AB in the diagram), then deleting the endpoints in the middle row and joining endpoints in the remaining two rows if they are joined, directly or by a path, in AB (Figure AB=nn in the diagram). Thereby all closed loops in the middle of AB are removed. The product A ⋅ B of the basis elements is then defined to be the basis element corresponding to the new pairing multiplied by δʳ, where r is the number of deleted loops. In the example, A ⋅ B = δ²AB.
=== Generators and relations ===
𝔅n(δ) can also be defined as the Z[δ]-algebra with generators s1, ..., sn−1, e1, ..., en−1 satisfying the following relations:
Relations of the symmetric group:
si² = 1
si sj = sj si whenever |i − j| > 1
si si+1 si = si+1 si si+1
Almost-idempotent relation:
ei² = δ ei
Commutation:
ei ej = ej ei and si ej = ej si whenever |i − j| > 1
Tangle relations:
ei ej ei = ei, si sj ei = ej ei, and ei sj si = ei ej, for j = i ± 1
Untwisting:
si ei = ei si = ei, and ei sj ei = ei for j = i ± 1
In this presentation si represents the diagram in which Xk is always connected to Yk directly beneath it, except for Xi and Xi+1, which are connected to Yi+1 and Yi respectively. Similarly, ei represents the diagram in which Xk is always connected to Yk directly beneath it, except for Xi being connected to Xi+1 and Yi to Yi+1.
=== Basic properties ===
The Brauer algebra is a subalgebra of the partition algebra.
The Brauer algebra 𝔅n(δ) is semisimple if δ ∈ C − {0, ±1, ±2, ..., ±n}.
The subalgebra of 𝔅n(δ) generated by the generators si is the group algebra of the symmetric group Sn.
The subalgebra of 𝔅n(δ) generated by the generators ei is the Temperley–Lieb algebra TLn(δ).
The Brauer algebra is a cellular algebra.
For a pairing A, let n(A) be the number of closed loops formed by identifying Xi with Yi for each i = 1, 2, ..., n: then the Jones trace Tr(A) = δ^{n(A)} obeys Tr(AB) = Tr(BA), i.e. it is indeed a trace.
== Representations ==
=== Brauer-Specht modules ===
Brauer-Specht modules are finite-dimensional modules of the Brauer algebra.
If δ is such that 𝔅n(δ) is semisimple, they form a complete set of simple modules of 𝔅n(δ). These modules are parametrized by partitions, because they are built from the Specht modules of the symmetric group, which are themselves parametrized by partitions.
For 0 ≤ ℓ ≤ n with ℓ ≡ n mod 2, let Bn,ℓ be the set of perfect matchings of the n + ℓ elements X1, ..., Xn, Y1, ..., Yℓ such that each Yj is matched with one of the n elements X1, ..., Xn. For any ring k, the space kBn,ℓ is a left 𝔅n(δ)-module, where basis elements of 𝔅n(δ) act by graph concatenation. (This action can produce matchings that violate the restriction that Y1, ..., Yℓ cannot match with one another: such graphs must be modded out.) Moreover, the space kBn,ℓ is a right Sℓ-module.
Given a Specht module $V_\lambda$ of $kS_\ell$, where $\lambda$ is a partition of $\ell$ (i.e. $|\lambda|=\ell$), the corresponding Brauer-Specht module of $\mathfrak{B}_n(\delta)$ is
$$W_\lambda = kB_{n,|\lambda|}\otimes_{kS_{|\lambda|}}V_\lambda \qquad \big(|\lambda|\leq n,\ |\lambda|\equiv n\bmod 2\big)$$
A basis of this module is the set of elements $b\otimes v$, where $b\in B_{n,|\lambda|}$ is such that the $|\lambda|$ lines that end on elements $Y_j$ do not cross, and $v$ belongs to a basis of $V_\lambda$. The dimension is
$$\dim(W_\lambda) = \binom{n}{|\lambda|}\,(n-|\lambda|-1)!!\,\dim(V_\lambda)$$
i.e. the product of a binomial coefficient, a double factorial, and the dimension of the corresponding Specht module, which is given by the hook length formula.
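The dimension formula can be evaluated directly; the sketch below (plain Python, with a Specht dimension computed by the hook length formula) also checks the semisimple consistency condition that the squared dimensions of the simple modules add up to $\dim\mathfrak{B}_n(\delta) = (2n-1)!!$.

```python
from math import comb, factorial, prod

def double_factorial(m):
    # convention: 0!! = (-1)!! = 1, which covers the case |lambda| = n
    return prod(range(m, 0, -2)) if m > 0 else 1

def specht_dim(partition):
    """f^lambda = dim V_lambda, via the hook length formula."""
    rows = list(partition)
    if not rows:
        return 1
    cols = [sum(1 for r in rows if r > j) for j in range(rows[0])]
    hooks = prod(rows[i] - j + cols[j] - i - 1
                 for i in range(len(rows)) for j in range(rows[i]))
    return factorial(sum(rows)) // hooks

def brauer_specht_dim(n, partition):
    l = sum(partition)
    assert l <= n and (n - l) % 2 == 0
    return comb(n, l) * double_factorial(n - l - 1) * specht_dim(partition)

# For n = 3 the admissible partitions have |lambda| in {1, 3}:
parts_3 = [(1,), (3,), (2, 1), (1, 1, 1)]
print([brauer_specht_dim(3, p) for p in parts_3])           # [3, 1, 2, 1]
assert sum(brauer_specht_dim(3, p) ** 2 for p in parts_3) == 15  # = 5!!
```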
=== Schur-Weyl duality ===
Let $V=\mathbb{R}^d$ be a Euclidean vector space of dimension $d$, and $O(V)=O(d,\mathbb{R})$ the corresponding orthogonal group. Then write $B_n(d)$ for the specialisation $\mathbb{R}\otimes_{\mathbb{Z}[\delta]}\mathfrak{B}_n(\delta)$ where $\delta$ acts on $\mathbb{R}$ by multiplication with $d$. The tensor power $V^{\otimes n}:=\underbrace{V\otimes\cdots\otimes V}_{n\text{ times}}$ is naturally a $B_n(d)$-module: $s_i$ acts by switching the $i$th and $(i+1)$th tensor factors and $e_i$ acts by contraction followed by expansion in the $i$th and $(i+1)$th tensor factors, i.e. $e_i$ acts as
$$v_1\otimes\cdots\otimes v_{i-1}\otimes\big(v_i\otimes v_{i+1}\big)\otimes\cdots\otimes v_n \mapsto v_1\otimes\cdots\otimes v_{i-1}\otimes\left(\langle v_i,v_{i+1}\rangle\sum_{k=1}^d (w_k\otimes w_k)\right)\otimes\cdots\otimes v_n$$
where $w_1,\ldots,w_d$ is any orthonormal basis of $V$. (The sum is in fact independent of the choice of this basis.)
This action is useful in a generalisation of the Schur-Weyl duality: if $d\geq n$, the image of $B_n(d)$ inside $\operatorname{End}(V^{\otimes n})$ is the centraliser of $O(V)$ inside $\operatorname{End}(V^{\otimes n})$, and conversely the image of $O(V)$ is the centraliser of $B_n(d)$. The tensor power $V^{\otimes n}$ is therefore both an $O(V)$- and a $B_n(d)$-module and satisfies
$$V^{\otimes n} = \bigoplus_\lambda U_\lambda \boxtimes W_\lambda$$
where $\lambda$ runs over a subset of the partitions such that $|\lambda|\leq n$ and $|\lambda|\equiv n\bmod 2$, $U_\lambda$ is an irreducible $O(V)$-module, and $W_\lambda$ is a Brauer-Specht module of $B_n(d)$.
It follows that the Brauer algebra has a natural action on the space of polynomials on $V^n$, which commutes with the action of the orthogonal group.
If $\delta$ is a negative even integer, the Brauer algebra is related by Schur-Weyl duality to the symplectic group $\operatorname{Sp}_{-\delta}(\mathbb{C})$, rather than the orthogonal group.
== Walled Brauer algebra ==
The walled Brauer algebra $\mathfrak{B}_{r,s}(\delta)$ is a subalgebra of $\mathfrak{B}_{r+s}(\delta)$. Diagrammatically, it consists of diagrams where the only allowed pairings are of the types $X_{i\leq r}-X_{j>r}$, $Y_{i\leq r}-Y_{j>r}$, $X_{i\leq r}-Y_{j\leq r}$, $X_{i>r}-Y_{j>r}$. This amounts to having a wall that separates $X_{i\leq r},Y_{i\leq r}$ from $X_{i>r},Y_{i>r}$, and requiring that $X-X$ and $Y-Y$ pairings cross the wall while $X-Y$ pairings don't.
The walled Brauer algebra is generated by $\{s_i\}_{1\leq i\leq r+s-1,\ i\neq r}\cup\{e_r\}$. These generators obey the basic relations of $\mathfrak{B}_{r+s}(\delta)$ that involve them, plus the two relations
$$e_rs_{r+1}s_{r-1}e_rs_{r-1} = e_rs_{r+1}s_{r-1}e_rs_{r+1}\quad,\quad s_{r-1}e_rs_{r+1}s_{r-1}e_r = s_{r+1}e_rs_{r+1}s_{r-1}e_r$$
(In $\mathfrak{B}_{r+s}(\delta)$, these two relations follow from the basic relations.)
For $\delta$ a natural number, let $V$ be the natural representation of the general linear group $GL_\delta(\mathbb{C})$. The walled Brauer algebra $\mathfrak{B}_{r,s}(\delta)$ has a natural action on $V^{\otimes r}\otimes(V^*)^{\otimes s}$, which is related by Schur-Weyl duality to the action of $GL_\delta(\mathbb{C})$.
== See also ==
Birman–Wenzl algebra, a deformation of the Brauer algebra.
== References ==
In statistical mechanics, the Temperley–Lieb algebra is an algebra from which are built certain transfer matrices, invented by Neville Temperley and Elliott Lieb. It is also related to integrable models, knot theory and the braid groups, quantum groups and subfactors of von Neumann algebras.
== Structure ==
=== Generators and relations ===
Let $R$ be a commutative ring and fix $\delta\in R$. The Temperley–Lieb algebra $TL_n(\delta)$ is the $R$-algebra generated by the elements $e_1,e_2,\ldots,e_{n-1}$, subject to the Jones relations:
$$e_i^2 = \delta e_i \qquad \text{for all } 1\leq i\leq n-1$$
$$e_ie_{i+1}e_i = e_i \qquad \text{for all } 1\leq i\leq n-2$$
$$e_ie_{i-1}e_i = e_i \qquad \text{for all } 2\leq i\leq n-1$$
$$e_ie_j = e_je_i \qquad \text{for all } 1\leq i,j\leq n-1 \text{ such that } |i-j|\neq 1$$
Using these relations, any product of generators $e_i$ can be brought to Jones' normal form:
$$E = \big(e_{i_1}e_{i_1-1}\cdots e_{j_1}\big)\big(e_{i_2}e_{i_2-1}\cdots e_{j_2}\big)\cdots\big(e_{i_r}e_{i_r-1}\cdots e_{j_r}\big)$$
where $(i_1,i_2,\dots,i_r)$ and $(j_1,j_2,\dots,j_r)$ are two strictly increasing sequences in $\{1,2,\dots,n-1\}$. Elements of this type form a basis of the Temperley–Lieb algebra.
The dimensions of Temperley–Lieb algebras are Catalan numbers:
$$\dim(TL_n(\delta)) = \frac{(2n)!}{n!\,(n+1)!}$$
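The Catalan dimension can be cross-checked by directly enumerating Jones normal forms; the sketch below counts pairs of strictly increasing sequences with $j_k\leq i_k$ (the condition that makes each factor $e_{i_k}\cdots e_{j_k}$ well defined) and compares with the closed formula.

```python
from itertools import combinations
from math import comb

def tl_dimension_by_enumeration(n):
    """Count Jones normal forms: pairs of strictly increasing sequences
    (i_1..i_r), (j_1..j_r) in {1..n-1} with j_k <= i_k for every k."""
    gens = range(1, n)
    count = 0
    for r in range(0, n):                 # r = 0 gives the identity
        for i_seq in combinations(gens, r):
            for j_seq in combinations(gens, r):
                if all(j <= i for i, j in zip(i_seq, j_seq)):
                    count += 1
    return count

for n in range(1, 7):
    catalan = comb(2 * n, n) // (n + 1)   # (2n)! / (n! (n+1)!)
    assert tl_dimension_by_enumeration(n) == catalan
print([tl_dimension_by_enumeration(n) for n in range(1, 7)])  # [1, 2, 5, 14, 42, 132]
```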
The Temperley–Lieb algebra $TL_n(\delta)$ is a subalgebra of the Brauer algebra $\mathfrak{B}_n(\delta)$, and therefore also of the partition algebra $P_n(\delta)$. The Temperley–Lieb algebra $TL_n(\delta)$ is semisimple for $\delta\in\mathbb{C}-F_n$ where $F_n$ is a known, finite set. For a given $n$, all semisimple Temperley–Lieb algebras are isomorphic.
=== Diagram algebra ===
$TL_n(\delta)$ may be represented diagrammatically as the vector space over noncrossing pairings of $2n$ points on two opposite sides of a rectangle with $n$ points on each of the two sides.
The identity element is the diagram in which each point is connected to the one directly across the rectangle from it. The generator $e_i$ is the diagram in which the $i$-th and $(i+1)$-th point on the left side are connected to each other, similarly the two points opposite to these on the right side, and all other points are connected to the point directly across the rectangle.
The generators of $TL_5(\delta)$ are shown in a figure (omitted): from left to right, the unit 1 and the generators $e_1$, $e_2$, $e_3$, $e_4$.
Multiplication on basis elements can be performed by concatenation: placing two rectangles side by side, and replacing any closed loops by a factor $\delta$, for example
$$e_1e_4e_3e_2 \times e_2e_4e_3 = \delta\, e_1e_4e_3e_2e_4e_3$$
(the diagrammatic computation is shown in a figure, omitted).
The Jones relations can be seen graphically (figures omitted).
The five basis elements of $TL_3(\delta)$ are the following (figure omitted): from left to right, the unit 1, the generators $e_2$, $e_1$, and $e_1e_2$, $e_2e_1$.
== Representations ==
=== Structure ===
For $\delta$ such that $TL_n(\delta)$ is semisimple, a complete set $\{W_\ell\}$ of simple modules is parametrized by integers $0\leq\ell\leq n$ with $\ell\equiv n\bmod 2$. The dimension of a simple module is written in terms of binomial coefficients as
$$\dim(W_\ell) = \binom{n}{\frac{n-\ell}{2}} - \binom{n}{\frac{n-\ell}{2}-1}$$
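In the semisimple case the squared dimensions of the simple modules must add up to the Catalan dimension of the algebra; a short Python check:

```python
from math import comb

def dim_W(n, l):
    """dim W_l for TL_n, with the convention binom(n, -1) = 0."""
    k = (n - l) // 2
    return comb(n, k) - (comb(n, k - 1) if k > 0 else 0)

# sum of squared dimensions = dim TL_n = Catalan number
for n in range(1, 9):
    total = sum(dim_W(n, l) ** 2 for l in range(n % 2, n + 1, 2))
    assert total == comb(2 * n, n) // (n + 1)

print([dim_W(6, l) for l in (0, 2, 4, 6)])  # [5, 9, 5, 1]
```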
A basis of the simple module $W_\ell$ is the set $M_{n,\ell}$ of monic noncrossing pairings from $n$ points on the left to $\ell$ points on the right. (Monic means that each point on the right is connected to a point on the left.) There is a natural bijection between $\bigcup_{0\leq\ell\leq n,\ \ell\equiv n\bmod 2} M_{n,\ell}\times M_{n,\ell}$ and the set of diagrams that generate $TL_n(\delta)$: any such diagram can be cut into two elements of $M_{n,\ell}$ for some $\ell$.
Then $TL_n(\delta)$ acts on $W_\ell$ by diagram concatenation from the left. (Concatenation can produce non-monic pairings, which have to be modded out.) The module $W_\ell$ may be called a standard module or link module.
If $\delta = q+q^{-1}$ with $q$ a root of unity, $TL_n(\delta)$ may not be semisimple, and $W_\ell$ may not be irreducible:
$$W_\ell \text{ reducible} \iff \exists j\in\{1,2,\dots,\ell\},\ q^{2n-4\ell+2+2j}=1$$
If $W_\ell$ is reducible, then its quotient by its maximal proper submodule is irreducible.
=== Branching rules from the Brauer algebra ===
Simple modules of the Brauer algebra $\mathfrak{B}_n(\delta)$ can be decomposed into simple modules of the Temperley–Lieb algebra. The decomposition is called a branching rule, and it is a direct sum with positive integer coefficients:
$$W_\lambda\left(\mathfrak{B}_n(\delta)\right) = \bigoplus_{\substack{|\lambda|\leq\ell\leq n \\ \ell\equiv|\lambda|\bmod 2}} c_\ell^\lambda\, W_\ell\left(TL_n(\delta)\right)$$
The coefficients $c_\ell^\lambda$ do not depend on $n,\delta$, and are given by
$$c_\ell^\lambda = f^\lambda \sum_{r=0}^{\frac{\ell-|\lambda|}{2}} (-1)^r \binom{\ell-r}{r}\binom{\ell-2r}{\ell-|\lambda|-2r}\,(\ell-|\lambda|-2r)!!$$
where $f^\lambda$ is the number of standard Young tableaux of shape $\lambda$, given by the hook length formula.
== Affine Temperley-Lieb algebra ==
The affine Temperley–Lieb algebra $aTL_n(\delta)$ is an infinite-dimensional algebra such that $TL_n(\delta)\subset aTL_n(\delta)$. It is obtained by adding generators $e_n,\tau,\tau^{-1}$ such that
$$\tau e_i = e_{i+1}\tau \text{ for all } 1\leq i\leq n,$$
$$e_1\tau^2 = e_1e_2\cdots e_{n-1},$$
$$\tau\tau^{-1} = \tau^{-1}\tau = \text{id}.$$
The indices are supposed to be periodic, i.e. $e_{n+1}=e_1,\ e_n=e_0$, and the Temperley–Lieb relations are supposed to hold for all $1\leq i\leq n$. Then $\tau^n$ is central. A finite-dimensional quotient of the algebra $aTL_n(\delta)$, sometimes called the unoriented Jones-Temperley–Lieb algebra, is obtained by assuming $\tau^n=\text{id}$, and replacing non-contractible lines with the same factor $\delta$ as contractible lines (for example, in the case $n=4$, this implies $e_1e_3e_2e_4e_1e_3 = \delta^2 e_1e_3$).
The diagram algebra for $aTL_n(\delta)$ is deduced from the diagram algebra for $TL_n(\delta)$ by turning rectangles into cylinders. The algebra $aTL_n(\delta)$ is infinite-dimensional because lines can wind around the cylinder. If $n$ is even, there can even exist closed winding lines, which are non-contractible.
The Temperley–Lieb algebra is a quotient of the corresponding affine Temperley–Lieb algebra.
The cell module $W_{\ell,z}$ of $aTL_n(\delta)$ is generated by the set of monic pairings from $n$ points to $\ell$ points, just like the module $W_\ell$ of $TL_n(\delta)$. However, the pairings are now on a cylinder, and the right-multiplication with $\tau$ is identified with $z\cdot\text{id}$ for some $z\in\mathbb{C}^*$. If $\ell=0$, there is no right-multiplication by $\tau$, and it is the addition of a non-contractible loop on the right which is identified with $z+z^{-1}$. Cell modules are finite-dimensional, with
$$\dim(W_{\ell,z}) = \binom{n}{\frac{n-\ell}{2}}$$
The cell module $W_{\ell,z}$ is irreducible for all $z\in\mathbb{C}^*-R(\delta)$, where the set $R(\delta)$ is countable. For $z\in R(\delta)$, $W_{\ell,z}$ has an irreducible quotient. The irreducible cell modules and quotients thereof form a complete set of irreducible modules of $aTL_n(\delta)$. Cell modules of the unoriented Jones-Temperley–Lieb algebra must obey $z^\ell=1$ if $\ell\neq 0$, and $z+z^{-1}=\delta$ if $\ell=0$.
== Applications ==
=== Temperley–Lieb Hamiltonian ===
Consider an interaction-round-a-face model, e.g. a square lattice model, and let $n$ be the number of sites on the lattice. Following Temperley and Lieb we define the Temperley–Lieb Hamiltonian (the TL Hamiltonian) as
$$\mathcal{H} = \sum_{j=1}^{n-1}(\delta - e_j)$$
In what follows we consider the special case $\delta=1$.
We will firstly consider the case $n=3$. The TL Hamiltonian is $\mathcal{H} = 2 - e_1 - e_2$. We have two possible (link) states, on which $\mathcal{H}$ acts by diagram concatenation (figures omitted). Writing $\mathcal{H}$ as a matrix in the basis of possible states, we have
$$\mathcal{H} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
The eigenvector of $\mathcal{H}$ with the lowest eigenvalue is known as the ground state. In this case, the lowest eigenvalue for $\mathcal{H}$ is $\lambda_0=0$. The corresponding eigenvector is $\psi_0=(1,1)$. As we vary the number of sites $n$ we find the following table (not reproduced here), where we have used the notation $m_j=(m,\ldots,m)$ repeated $j$ times, e.g. $5_2=(5,5)$.
An interesting observation is that the largest components of the ground state of $\mathcal{H}$ have a combinatorial enumeration as we vary the number of sites, as was first observed by Murray Batchelor, Jan de Gier and Bernard Nienhuis. Using the resources of the on-line encyclopedia of integer sequences, Batchelor et al. found, for an even number of sites,
$$1,2,11,170,\ldots = \prod_{j=0}^{\frac{n-2}{2}} (3j+1)\,\frac{(2j)!\,(6j)!}{(4j)!\,(4j+1)!} \qquad (n=2,4,6,\dots)$$
and for an odd number of sites,
$$1,3,26,646,\ldots = \prod_{j=0}^{\frac{n-3}{2}} (3j+2)\,\frac{(2j+1)!\,(6j+3)!}{(4j+2)!\,(4j+3)!} \qquad (n=3,5,7,\dots)$$
Surprisingly, these sequences correspond to well known combinatorial objects. For $n$ even, this (sequence A051255 in the OEIS) corresponds to cyclically symmetric transpose complement plane partitions and for $n$ odd, (sequence A005156 in the OEIS), these correspond to alternating sign matrices symmetric about the vertical axis.
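The two product formulas can be evaluated exactly with rational arithmetic:

```python
from fractions import Fraction
from math import factorial as f

def even_ground(n):
    """Product formula for the largest ground-state component, n even."""
    p = Fraction(1)
    for j in range((n - 2) // 2 + 1):
        p *= (3 * j + 1) * Fraction(f(2 * j) * f(6 * j),
                                    f(4 * j) * f(4 * j + 1))
    return int(p)   # the product is in fact an integer

def odd_ground(n):
    """Product formula for the largest ground-state component, n odd."""
    p = Fraction(1)
    for j in range((n - 3) // 2 + 1):
        p *= (3 * j + 2) * Fraction(f(2 * j + 1) * f(6 * j + 3),
                                    f(4 * j + 2) * f(4 * j + 3))
    return int(p)

print([even_ground(n) for n in (2, 4, 6, 8)])  # [1, 2, 11, 170]
print([odd_ground(n) for n in (3, 5, 7, 9)])   # [1, 3, 26, 646]
```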
=== XXZ spin chain ===
== References ==
== Further reading ==
Kauffman, Louis H. (1991). Knots and Physics. World Scientific. ISBN 978-981-02-0343-6.
Kauffman, Louis H. (1987). "State Models and the Jones Polynomial". Topology. 26 (3): 395–407. doi:10.1016/0040-9383(87)90009-7. MR 0899057.
Baxter, Rodney J. (1982). Exactly solved models in statistical mechanics. London: Academic Press Inc. ISBN 0-12-083180-5. MR 0690578.
In mathematics, a quantum or quantized enveloping algebra is a q-analog of a universal enveloping algebra. Given a Lie algebra $\mathfrak{g}$, the quantum enveloping algebra is typically denoted as $U_q(\mathfrak{g})$. The notation was introduced by Drinfeld and independently by Jimbo.
Among the applications, studying the $q\to 0$ limit led to the discovery of crystal bases.
== The case of $\mathfrak{sl}_2$ ==
Michio Jimbo considered the algebras with three generators related by the three commutators
$$[h,e]=2e,\quad [h,f]=-2f,\quad [e,f]=\sinh(\eta h)/\sinh\eta.$$
When $\eta\to 0$, these reduce to the commutators that define the special linear Lie algebra $\mathfrak{sl}_2$. In contrast, for nonzero $\eta$, the algebra defined by these relations is not a Lie algebra but instead an associative algebra that can be regarded as a deformation of the universal enveloping algebra of $\mathfrak{sl}_2$.
== See also ==
Quantum group
== Notes ==
== References ==
Drinfel'd, V. G. (1987), "Quantum Groups", Proceedings of the International Congress of Mathematicians 986, 1, American Mathematical Society: 798–820
Tjin, T. (10 October 1992). "An introduction to quantized Lie groups and algebras". International Journal of Modern Physics A. 07 (25): 6175–6213. arXiv:hep-th/9111043. Bibcode:1992IJMPA...7.6175T. doi:10.1142/S0217751X92002805. ISSN 0217-751X. S2CID 119087306.
== External links ==
Quantized enveloping algebra at the nLab
Quantized enveloping algebras at $q=1$ at MathOverflow
Does there exist any "quantum Lie algebra" imbedded into the quantum enveloping algebra $U_q(g)$? at MathOverflow
The partition algebra is an associative algebra with a basis of set-partition diagrams and multiplication given by diagram concatenation. Its subalgebras include diagram algebras such as the Brauer algebra, the Temperley–Lieb algebra, or the group algebra of the symmetric group. Representations of the partition algebra are built from sets of diagrams and from representations of the symmetric group.
== Definition ==
=== Diagrams ===
A partition of $2k$ elements labelled $1,\bar{1},2,\bar{2},\dots,k,\bar{k}$ is represented as a diagram, with lines connecting elements in the same subset. In the following example, the subset $\{\bar{1},\bar{4},\bar{5},6\}$ gives rise to the lines $\bar{1}-\bar{4},\ \bar{4}-\bar{5},\ \bar{5}-6$, and could equivalently be represented by the lines $\bar{1}-6,\ \bar{4}-6,\ \bar{5}-6,\ \bar{1}-\bar{5}$ (for instance).
For $n\in\mathbb{C}$ and $k\in\mathbb{N}^*$, the partition algebra $P_k(n)$ is defined by a $\mathbb{C}$-basis made of partitions, and a multiplication given by diagram concatenation. The concatenated diagram comes with a factor $n^D$, where $D$ is the number of connected components that are disconnected from the top and bottom elements.
=== Generators and relations ===
The partition algebra $P_k(n)$ is generated by $3k-2$ elements $s_i, p_i, b_i$ (shown in a figure, omitted). These generators obey relations that include
$$s_i^2=1\quad,\quad s_is_{i+1}s_i = s_{i+1}s_is_{i+1}\quad,\quad p_i^2=np_i\quad,\quad b_i^2=b_i\quad,\quad p_ib_ip_i=p_i$$
Other elements that are useful for generating subalgebras include $e_i, l_i, r_i$ (shown in a figure, omitted). In terms of the original generators, these elements are
$$e_i = b_ip_ip_{i+1}b_i\quad,\quad l_i = s_ip_i\quad,\quad r_i = p_is_i$$
=== Properties ===
The partition algebra
P
k
(
n
)
{\displaystyle P_{k}(n)}
is an associative algebra. It has a multiplicative identity
The partition algebra
P
k
(
n
)
{\displaystyle P_{k}(n)}
is semisimple for
n
∈
C
−
{
0
,
1
,
…
,
2
k
−
2
}
{\displaystyle n\in \mathbb {C} -\{0,1,\dots ,2k-2\}}
. For any two
n
,
n
′
{\displaystyle n,n'}
in this set, the algebras
P
k
(
n
)
{\displaystyle P_{k}(n)}
and
P
k
(
n
′
)
{\displaystyle P_{k}(n')}
are isomorphic.
The partition algebra is finite-dimensional, with
dim
P
k
(
n
)
=
B
2
k
{\displaystyle \dim P_{k}(n)=B_{2k}}
(a Bell number).
== Subalgebras ==
=== Eight subalgebras ===
Subalgebras of the partition algebra can be defined by the following properties:
Whether they are planar, i.e. whether lines may cross in diagrams.
Whether subsets are allowed to have any size $1,2,\dots,2k$, or size $1,2$, or only size $2$.
Whether we allow top-top and bottom-bottom lines, or only top-bottom lines. In the latter case, the parameter $n$ is absent, or can be eliminated by $p_i\to\frac{1}{n}p_i$.
Combining these properties gives rise to 8 nontrivial subalgebras, in addition to the partition algebra itself:
The symmetric group algebra $\mathbb{C}S_k$ is the group ring of the symmetric group $S_k$ over $\mathbb{C}$. The Motzkin algebra is sometimes called the dilute Temperley–Lieb algebra in the physics literature.
=== Properties ===
The listed subalgebras are semisimple for $n\in\mathbb{C}-\{0,1,\dots,2k-2\}$.
Inclusions of planar into non-planar algebras:
$$PP_k(n)\subset P_k(n)\quad,\quad M_k(n)\subset RB_k(n)\quad,\quad TL_k(n)\subset B_k(n)\quad,\quad PR_k\subset R_k$$
Inclusions from constraints on subset size:
$$B_k(n)\subset RB_k(n)\subset P_k(n)\quad,\quad TL_k(n)\subset M_k(n)\subset PP_k(n)\quad,\quad \mathbb{C}S_k\subset R_k$$
Inclusions from allowing top-top and bottom-bottom lines:
$$R_k\subset RB_k(n)\quad,\quad PR_k\subset M_k(n)\quad,\quad \mathbb{C}S_k\subset B_k(n)$$
We have the isomorphism:
$$PP_k(n^2)\cong TL_{2k}(n)\quad,\quad \left\{\begin{array}{l} p_i\mapsto ne_{2i-1} \\ b_i\mapsto \frac{1}{n}e_{2i} \end{array}\right.$$
=== More subalgebras ===
In addition to the eight subalgebras described above, other subalgebras have been defined:
The totally propagating partition subalgebra $\text{prop}P_k$ is generated by diagrams whose blocks all propagate, i.e. partitions whose subsets all contain top and bottom elements. These diagrams form the dual symmetric inverse monoid, which is generated by $s_i,\ b_ip_{i+1}b_{i+1}$.
The quasi-partition algebra $QP_k(n)$ is generated by subsets of size at least two. Its generators are $s_i,b_i,e_i$ and its dimension is $1+\sum_{j=1}^{2k}(-1)^{j-1}B_{2k-j}$.
The uniform block permutation algebra $U_k$ is generated by subsets with as many top elements as bottom elements. It is generated by $s_i,b_i$.
An algebra with a half-integer index $k+\frac{1}{2}$ is defined from partitions of $2k+2$ elements by requiring that $k+1$ and $\overline{k+1}$ are in the same subset. For example, $P_{k+\frac{1}{2}}$ is generated by $s_{i\leq k-1},b_{i\leq k},p_{i\leq k}$, so that $P_k\subset P_{k+\frac{1}{2}}\subset P_{k+1}$, and $\dim P_{k+\frac{1}{2}} = B_{2k+1}$.
Periodic subalgebras are generated by diagrams that can be drawn on an annulus without line crossings. Such subalgebras include a translation element $u$ such that $u^k=1$. The translation element and its powers are the only combinations of $s_i$ that belong to periodic subalgebras.
== Representations ==
=== Structure ===
For an integer $0\leq\ell\leq k$, let $D_\ell$ be the set of partitions of $k+\ell$ elements $1,2,\dots,k$ (bottom) and $\bar{1},\bar{2},\dots,\bar{\ell}$ (top), such that no two top elements are in the same subset, and no top element is alone. Such partitions are represented by diagrams with no top-top lines, with at least one line for each top element. For example, in the case $k=12,\ \ell=5$ (figure omitted):
Partition diagrams act on $D_\ell$ from the bottom, while the symmetric group $S_\ell$ acts from the top. For any Specht module $V_\lambda$ of $S_\ell$ (with therefore $|\lambda|=\ell$), we define the representation of $P_k(n)$
$$\mathcal{P}_\lambda = \mathbb{C}D_{|\lambda|}\otimes_{\mathbb{C}S_{|\lambda|}}V_\lambda\ .$$
The dimension of this representation is
$$\dim\mathcal{P}_\lambda = f_\lambda \sum_{\ell=|\lambda|}^{k} \left\{{k \atop \ell}\right\}\binom{\ell}{|\lambda|}\ ,$$
where $\left\{{k \atop \ell}\right\}$ is a Stirling number of the second kind, $\binom{\ell}{|\lambda|}$ is a binomial coefficient, and $f_\lambda = \dim V_\lambda$ is given by the hook length formula.
A basis of $\mathcal{P}_\lambda$ can be described combinatorially in terms of set-partition tableaux: Young tableaux whose boxes are filled with the blocks of a set partition.
Assuming that $P_{k}(n)$ is semisimple, the representation ${\mathcal {P}}_{\lambda }$ is irreducible, and the set of irreducible finite-dimensional representations of the partition algebra is

$${\text{Irrep}}\left(P_{k}(n)\right)=\left\{{\mathcal {P}}_{\lambda }\right\}_{0\leq |\lambda |\leq k}\ .$$
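The dimension formula can be checked numerically. The following sketch (helper names are ours) computes $\dim {\mathcal {P}}_{\lambda }$ from Stirling numbers, binomial coefficients, and the hook length formula, and verifies that in the semisimple case the squared dimensions sum to $\dim P_{k}(n)=B_{2k}$, the Bell number counting partitions of a $2k$-element set:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def hook_dim(la):
    """Specht module dimension f_lambda via the hook length formula."""
    prod = 1
    for i, row in enumerate(la):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in la[i + 1:] if r > j)
            prod *= arm + leg + 1
    return factorial(sum(la)) // prod

def partitions(n, max_part=None):
    """All integer partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dim_P(la, k):
    """dim P_lambda = f_lambda * sum_l S(k, l) * C(l, |lambda|)."""
    m = sum(la)
    return hook_dim(la) * sum(stirling2(k, l) * comb(l, m) for l in range(m, k + 1))

def bell(n):
    return sum(stirling2(n, k) for k in range(n + 1))

k = 3
dims = {la: dim_P(la, k) for m in range(k + 1) for la in partitions(m)}
# Squared dimensions of the irreducibles sum to dim P_k(n) = B_{2k}.
assert sum(d * d for d in dims.values()) == bell(2 * k)  # 203 for k = 3
```

For $k=2$ the four irreducibles have dimensions $2,3,1,1$, and $4+9+1+1=15=B_{4}$, the dimension of $P_{2}(n)$.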
=== Representations of subalgebras ===
Representations of the non-planar subalgebras have structures similar to those of representations of the partition algebra. For example, the Brauer-Specht modules of the Brauer algebra are built from Specht modules and certain sets of partitions.
In the case of the planar subalgebras, planarity prevents nontrivial permutations, and Specht modules do not appear. For example, a standard module of the Temperley–Lieb algebra is parametrized by an integer $0\leq \ell \leq k$ with $\ell \equiv k{\bmod {2}}$, and a basis is simply given by a set of partitions.
The following table lists the irreducible representations of the partition algebra and eight subalgebras.
The irreducible representations of ${\text{prop}}P_{k}$ are indexed by partitions such that $0<|\lambda |\leq k$, and their dimensions are $f_{\lambda }\left\{{k \atop |\lambda |}\right\}$. The irreducible representations of $QP_{k}$ are indexed by partitions such that $0\leq |\lambda |\leq k$. The irreducible representations of $U_{k}$ are indexed by sequences of partitions.
== Schur-Weyl duality ==
Assume $n\in \mathbb {N} ^{*}$. For $V$ an $n$-dimensional vector space with basis $v_{1},\dots ,v_{n}$, there is a natural action of the partition algebra $P_{k}(n)$ on the vector space $V^{\otimes k}$. This action is defined by the matrix elements of a partition $\{1,{\bar {1}},2,{\bar {2}},\dots ,k,{\bar {k}}\}=\sqcup _{h}E_{h}$ in the basis $(v_{j_{1}}\otimes \cdots \otimes v_{j_{k}})$:

$$\left(\sqcup _{h}E_{h}\right)_{j_{1},j_{2},\dots ,j_{k}}^{j_{\bar {1}},j_{\bar {2}},\dots ,j_{\bar {k}}}=\mathbf {1} _{r,s\in E_{h}\implies j_{r}=j_{s}}\ .$$
This matrix element is one if all indices corresponding to any given partition subset coincide, and zero otherwise. For example, the action of a Temperley–Lieb generator is

$$e_{i}\left(v_{j_{1}}\otimes \cdots \otimes v_{j_{i}}\otimes v_{j_{i+1}}\otimes \cdots \otimes v_{j_{k}}\right)=\delta _{j_{i},j_{i+1}}\sum _{j=1}^{n}v_{j_{1}}\otimes \cdots \otimes v_{j}\otimes v_{j}\otimes \cdots \otimes v_{j_{k}}\ .$$
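This action is easy to realize concretely. The sketch below (index-flattening helper is ours) builds the matrix of $e_{1}$ on $V^{\otimes 2}$ directly from the matrix-element rule above and checks the Temperley–Lieb relation $e_{i}^{2}=n\,e_{i}$:

```python
import numpy as np
from itertools import product

n, k = 3, 2                  # dim V = 3, two tensor factors
dim = n ** k

def index(j):
    """Flatten a multi-index (j_1, ..., j_k) into a basis index of V^{(x) k}."""
    r = 0
    for x in j:
        r = r * n + x
    return r

# Matrix of e_1: sends v_{j1} (x) v_{j2} to delta_{j1, j2} * sum_j v_j (x) v_j.
E = np.zeros((dim, dim))
for j in product(range(n), repeat=k):
    if j[0] == j[1]:
        for a in range(n):
            E[index((a, a)), index(j)] = 1.0

# The Temperley-Lieb relation e_i^2 = n e_i holds for this action.
assert np.allclose(E @ E, n * E)
```

Closing a "cup-cap" loop in a diagram multiplication produces the scalar $n$, which is exactly what $E^{2}=nE$ records.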
=== Duality between the partition algebra and the symmetric group ===
Let $n\geq 2k$ be an integer. Take $V$ to be the natural permutation representation of the symmetric group $S_{n}$. This $n$-dimensional representation is a sum of two irreducible representations: the standard and trivial representations, $V=[n-1,1]\oplus [n]$.
Then the partition algebra $P_{k}(n)$ is the centralizer of the action of $S_{n}$ on the tensor product space $V^{\otimes k}$:

$$P_{k}(n)\cong {\text{End}}_{S_{n}}\left(V^{\otimes k}\right)\ .$$
Moreover, as a bimodule over $P_{k}(n)\times S_{n}$, the tensor product space decomposes into irreducible representations as

$$V^{\otimes k}=\bigoplus _{0\leq |\lambda |\leq k}{\mathcal {P}}_{\lambda }\otimes V_{[n-|\lambda |,\lambda ]}\ ,$$

where $[n-|\lambda |,\lambda ]$ is the Young diagram of size $n$ built by adding a first row to $\lambda$, and $V_{[n-|\lambda |,\lambda ]}$ is the corresponding Specht module of $S_{n}$.
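The decomposition can be sanity-checked on dimensions: it implies $n^{k}=\sum _{\lambda }\dim {\mathcal {P}}_{\lambda }\cdot f_{[n-|\lambda |,\lambda ]}$. A sketch (helper names are ours) reusing the hook length formula and the dimension formula for $\dim {\mathcal {P}}_{\lambda }$:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def hook_dim(la):
    """Specht module dimension f_lambda via the hook length formula."""
    prod = 1
    for i, row in enumerate(la):
        for j in range(row):
            prod *= (row - j - 1) + sum(1 for r in la[i + 1:] if r > j) + 1
    return factorial(sum(la)) // prod

def partitions(n, max_part=None):
    """All integer partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dim_P(la, k):
    """dim P_lambda = f_lambda * sum_l S(k, l) * C(l, |lambda|)."""
    m = sum(la)
    return hook_dim(la) * sum(stirling2(k, l) * comb(l, m) for l in range(m, k + 1))

n, k = 4, 2                      # the duality requires n >= 2k
total = 0
for m in range(k + 1):
    for la in partitions(m):
        mu = (n - m,) + la       # the Young diagram [n - |lambda|, lambda]
        total += dim_P(la, k) * hook_dim(mu)
assert total == n ** k           # dimensions on both sides agree
```

For $n=4,k=2$ the four summands contribute $2+9+2+3=16=4^{2}$.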
=== Dualities involving subalgebras ===
The duality between the symmetric group and the partition algebra generalizes the original Schur-Weyl duality between the general linear group and the symmetric group, and there are other generalizations. In the relevant tensor product spaces, we write $V_{n}$ for an irreducible $n$-dimensional representation of the first group or algebra:
== References ==
== Further reading ==
Kauffman, Louis H. (1991). Knots and Physics. World Scientific. ISBN 978-981-02-0343-6.
Kauffman, Louis H. (1990). "An invariant of regular isotopy". Transactions of the American Mathematical Society. 318 (2): 417–471. doi:10.1090/S0002-9947-1990-0958895-7. ISSN 0002-9947.
In mathematics – particularly in homological algebra, algebraic topology, and algebraic geometry – a differential graded algebra (or DGA, or DG algebra) is an algebraic structure often used to capture information about a topological or geometric space. Explicitly, a differential graded algebra is a graded associative algebra with a chain complex structure that is compatible with the algebra structure.
In geometry, the de Rham algebra of differential forms on a manifold has the structure of a differential graded algebra, and it encodes the de Rham cohomology of the manifold. In algebraic topology, the singular cochains of a topological space form a DGA encoding the singular cohomology. Moreover, American mathematician Dennis Sullivan developed a DGA to encode the rational homotopy type of topological spaces.
== Definitions ==
Let $A_{\bullet }=\bigoplus \nolimits _{i\in \mathbb {Z} }A_{i}$ be a $\mathbb {Z}$-graded algebra, with product $\cdot$, equipped with a map $d\colon A_{\bullet }\to A_{\bullet }$ of degree $-1$ (homologically graded) or degree $+1$ (cohomologically graded). We say that $(A_{\bullet },d,\cdot )$ is a differential graded algebra if $d$ is a differential, giving $A_{\bullet }$ the structure of a chain complex or cochain complex (depending on the degree), and satisfies a graded Leibniz rule. In what follows, we will denote the "degree" of a homogeneous element $a\in A_{i}$ by $|a|=i$. Explicitly, the map $d$ satisfies the conditions

$$d\circ d=0\ ,\qquad d(a\cdot b)=(da)\cdot b+(-1)^{|a|}\,a\cdot (db)\ .$$

Often one omits the differential and multiplication and simply writes $A_{\bullet }$ or $A$ to refer to the DGA $(A_{\bullet },d,\cdot )$.
A linear map $f:A_{\bullet }\to B_{\bullet }$ between graded vector spaces is said to be of degree $n$ if $f(A_{i})\subseteq B_{i+n}$ for all $i$. When considering (co)chain complexes, we restrict our attention to chain maps, that is, maps of degree 0 that commute with the differentials: $f\circ d_{A}=d_{B}\circ f$. The morphisms in the category of DGAs are chain maps that are also algebra homomorphisms.
=== Categorical Definition ===
One can also define DGAs more abstractly using category theory. There is a category of chain complexes over a ring $R$, often denoted $\operatorname {Ch} _{R}$, whose objects are chain complexes and whose morphisms are chain maps. We define the tensor product of chain complexes $(V,d_{V})$ and $(W,d_{W})$ by

$$(V\otimes W)_{n}=\bigoplus _{i+j=n}V_{i}\otimes _{R}W_{j}$$

with differential

$$d(v\otimes w)=(d_{V}v)\otimes w-(-1)^{|v|}v\otimes (d_{W}w)\ .$$

This operation makes $\operatorname {Ch} _{R}$ into a symmetric monoidal category. Then, we can equivalently define a differential graded algebra as a monoid object in $\operatorname {Ch} _{R}$. Heuristically, it is an object in $\operatorname {Ch} _{R}$ with an associative and unital multiplication.
=== Homology and Cohomology ===
Associated to any chain complex $(A_{\bullet },d)$ is its homology. Since $d\circ d=0$, it follows that $\operatorname {im} (d:A_{i+1}\to A_{i})$ is a subobject of $\operatorname {ker} (d:A_{i}\to A_{i-1})$. Thus, we can form the quotient

$$H_{i}(A_{\bullet })=\operatorname {ker} (d:A_{i}\to A_{i-1})/\operatorname {im} (d:A_{i+1}\to A_{i})\ .$$

This is called the $i$th homology group, and together they form a graded vector space $H_{\bullet }(A)$. In fact, the homology groups form a DGA with zero differential. Analogously, one can define the cohomology groups of a cochain complex, which also form a graded algebra with zero differential.
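Over a field, the dimensions of these homology groups reduce to linear algebra: $\dim H_{i}=\dim \ker d_{i}-\operatorname {rank} d_{i+1}$. A small numerical sketch (the toy complex is our own example, the simplicial chain complex of a circle):

```python
import numpy as np

# Simplicial chain complex of a circle with two vertices v0, v1 and two
# edges e1, e2, where d(e1) = v1 - v0 and d(e2) = v0 - v1.
d1 = np.array([[-1.0,  1.0],
               [ 1.0, -1.0]])      # d_1 : A_1 -> A_0, columns indexed by edges

rank_d1 = np.linalg.matrix_rank(d1)
dim_A0, dim_A1 = 2, 2

h0 = dim_A0 - rank_d1              # ker(A_0 -> 0) / im d_1
h1 = dim_A1 - rank_d1              # ker d_1 / im(0 -> A_1)
assert (h0, h1) == (1, 1)          # matches H_0(S^1) and H_1(S^1)
```

The ranks encode exactly the kernel/image dimensions in the quotient above.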
Every chain map $f:(A_{\bullet },d_{A})\to (B_{\bullet },d_{B})$ of complexes induces a map on (co)homology, often denoted $f_{*}:H_{\bullet }(A)\to H_{\bullet }(B)$ (respectively $f^{*}:H^{\bullet }(B)\to H^{\bullet }(A)$). If this induced map is an isomorphism on all (co)homology groups, the map $f$ is called a quasi-isomorphism. In many contexts, this is the natural notion of equivalence one uses for (co)chain complexes. We say a morphism of DGAs is a quasi-isomorphism if the chain map on the underlying (co)chain complexes is.
== Properties of DGAs ==
=== Commutative Differential Graded Algebras ===
A commutative differential graded algebra (or CDGA) is a differential graded algebra $(A_{\bullet },d,\cdot )$ which satisfies a graded version of commutativity. Namely,

$$a\cdot b=(-1)^{|a||b|}b\cdot a$$

for homogeneous elements $a\in A_{i},b\in A_{j}$. Many of the DGAs commonly encountered in mathematics are CDGAs, like the de Rham algebra of differential forms.
=== Differential graded Lie algebras ===
A differential graded Lie algebra (or DGLA) is a differential graded analogue of a Lie algebra. That is, it is a differential graded vector space $(L_{\bullet },d)$ together with an operation $[\cdot ,\cdot ]:L_{i}\otimes L_{j}\to L_{i+j}$, satisfying graded analogues of the Lie algebra axioms: graded antisymmetry $[a,b]=-(-1)^{|a||b|}[b,a]$, a graded Jacobi identity, and the requirement that $d$ be a derivation of the bracket.
An example of a DGLA is the de Rham algebra $\Omega ^{\bullet }(M)$ tensored with a Lie algebra ${\mathfrak {g}}$, with the bracket given by combining the exterior product of differential forms with the Lie bracket; elements of this DGLA are known as Lie algebra–valued differential forms. DGLAs also arise frequently in the study of deformations of algebraic structures where, over a field of characteristic 0, "nice" deformation problems are described by the space of Maurer-Cartan elements of some suitable DGLA.
=== Formal DGAs ===
A (co)chain complex $C_{\bullet }$ is called formal if there is a chain map to its (co)homology $H_{\bullet }(C_{\bullet })$ (respectively $H^{\bullet }(C_{\bullet })$), thought of as a complex with zero differential, that is a quasi-isomorphism. We say that a DGA $A$ is formal if there exists a morphism of DGAs $A\to H_{\bullet }(A)$ (respectively $A\to H^{\bullet }(A)$) that is a quasi-isomorphism. This notion is important, for instance, when one wants to consider quasi-isomorphic chain complexes or DGAs as being equivalent, as in the derived category.
== Examples ==
=== Trivial DGAs ===
Notice that any graded algebra $A=\bigoplus \nolimits _{i}A_{i}$ has the structure of a DGA with trivial differential, i.e., $d=0$. In particular, as noted above, the (co)homology of any DGA forms a trivial DGA, since it is a graded algebra.
=== The de Rham algebra ===
Let $M$ be a manifold. Then the differential forms on $M$, denoted by $\Omega ^{\bullet }(M)$, naturally have the structure of a (cohomologically graded) DGA. The underlying graded vector space is $\Omega ^{\bullet }(M)$, where the grading is given by form degree. This vector space has a product, given by the exterior product, which makes it into a graded algebra. Finally, the exterior derivative $d:\Omega ^{i}(M)\to \Omega ^{i+1}(M)$ satisfies $d^{2}=0$ and the graded Leibniz rule. In fact, the exterior product is graded-commutative, which makes the de Rham algebra an example of a CDGA.
=== Singular Cochains ===
Let $X$ be a topological space. Recall that we can associate to $X$ its complex of singular cochains with coefficients in a ring $R$, denoted $(C^{\bullet }(X;R),d)$, whose cohomology is the singular cohomology of $X$. On $C^{\bullet }(X;R)$, one can define the cup product of cochains, which gives this cochain complex the structure of a DGA. In the case where $X$ is a smooth manifold and $R=\mathbb {R}$, the de Rham theorem states that the singular cohomology is isomorphic to the de Rham cohomology and, moreover, that the cup product and the exterior product of differential forms induce the same operation on cohomology.

Note, however, that while the cup product induces a graded-commutative operation on cohomology, it is not graded-commutative directly on cochains. This is an important distinction, and the failure of a DGA to be commutative is referred to as the "commutative cochain problem". This problem is important because if, to any topological space $X$, one can associate a commutative DGA whose cohomology is the singular cohomology of $X$ over $R$, then this CDGA determines the $R$-homotopy type of $X$.
=== The Free DGA ===
Let $V$ be a (non-graded) vector space over a field $k$. The tensor algebra $T(V)$ is defined to be the graded algebra

$$T(V)=\bigoplus _{i\geq 0}T^{i}(V)=\bigoplus _{i\geq 0}V^{\otimes i}$$

where, by convention, we take $T^{0}(V)=k$. This vector space can be made into a graded algebra with the multiplication $T^{i}(V)\otimes T^{j}(V)\to T^{i+j}(V)$ given by the tensor product $\otimes$. This is the free algebra on $V$, and can be thought of as the algebra of all non-commuting polynomials in the elements of $V$.
One can give the tensor algebra the structure of a DGA as follows. Let $f:V\to k$ be any linear map. Then this extends uniquely to a derivation of $T(V)$ of degree $-1$ (homologically graded) by the formula

$$d_{f}(v_{1}\otimes \cdots \otimes v_{n})=\sum _{i=1}^{n}(-1)^{i-1}v_{1}\otimes \cdots \otimes f(v_{i})\otimes \cdots \otimes v_{n}\ .$$

One can think of the minus signs on the right-hand side as coming from "jumping" the map $f$ over the elements $v_{1},\ldots ,v_{i-1}$, which are all of degree 1 in $T(V)$. This is commonly referred to as the Koszul sign rule.
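This derivation can be sketched in code (the dictionary representation of tensors is our own device): elements of $T(V)$ are stored as maps from basis words to coefficients, $f$ is a list of scalars $f(v_{i})$, and we check that $d_{f}\circ d_{f}=0$:

```python
from collections import defaultdict

# f : V -> k on a basis v_0, v_1, v_2, recorded as the scalars f(v_i).
f = [2.0, -1.0, 3.0]

def d_f(element):
    """d_f(v_{i1} (x) ... (x) v_{in}) = sum_j (-1)^(j-1) f(v_{ij}) * (word with v_{ij} removed)."""
    out = defaultdict(float)
    for word, coeff in element.items():
        for j, letter in enumerate(word):
            sign = (-1) ** j            # (-1)^(j-1) for 1-based j is (-1)^j for 0-based j
            shorter = word[:j] + word[j + 1:]
            out[shorter] += sign * f[letter] * coeff
    return {w: c for w, c in out.items() if c != 0}

x = {(0, 1, 2, 1): 1.0}                 # the word v0 (x) v1 (x) v2 (x) v1
assert d_f(d_f(x)) == {}                # the Koszul signs make d_f a differential
```

On a two-letter word the cancellation is visible by hand: $d_{f}(d_{f}(v_{a}\otimes v_{b}))=f(v_{a})f(v_{b})-f(v_{b})f(v_{a})=0$.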
One can extend this construction to differential graded vector spaces. Let $(V_{\bullet },d_{V})$ be a differential graded vector space, i.e., $d_{V}:V_{i}\to V_{i-1}$ and $d_{V}^{2}=0$. Here we work with a homologically graded DG vector space, but this construction works equally well for a cohomologically graded one. Then we can endow the tensor algebra $T(V)$ with a DGA structure which extends the DG structure on $V$. The differential is given by

$$d(v_{1}\otimes \cdots \otimes v_{n})=\sum _{i=1}^{n}(-1)^{|v_{1}|+\ldots +|v_{i-1}|}v_{1}\otimes \cdots \otimes d_{V}(v_{i})\otimes \cdots \otimes v_{n}\ .$$

This is similar to the previous case, except that now the elements of $V$ can have different degrees, and $T(V)$ is no longer graded by the number of tensor factors but instead by the sum of the degrees of the elements of $V$, i.e., $|v_{1}\otimes \cdots \otimes v_{n}|=|v_{1}|+\ldots +|v_{n}|$.
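The Koszul signs are what make this square to zero. A minimal sketch (the basis, degrees, and $d_{V}$ below are our own toy data) with one generator $b_{1}$ of degree 1 mapped by $d_{V}$ to a generator $b_{0}$ of degree 0:

```python
from collections import defaultdict

deg = {"b0": 0, "b1": 1}           # degrees of the basis elements of V
dV = {"b1": "b0", "b0": None}      # d_V(b1) = b0, d_V(b0) = 0

def d(element):
    """Extend d_V to T(V) with the Koszul sign (-1)^(|v_1| + ... + |v_{i-1}|)."""
    out = defaultdict(int)
    for word, coeff in element.items():
        for i, letter in enumerate(word):
            if dV[letter] is None:
                continue
            sign = (-1) ** sum(deg[x] for x in word[:i])
            new_word = word[:i] + (dV[letter],) + word[i + 1:]
            out[new_word] += sign * coeff
    return {w: c for w, c in out.items() if c != 0}

x = {("b1", "b1"): 1}
once = d(x)            # the two terms pick up opposite signs
assert d(once) == {}   # d^2 = 0, thanks to the Koszul signs
```

Without the sign, $d(d(b_{1}\otimes b_{1}))$ would be $2\,(b_{0}\otimes b_{0})$ rather than zero, so the sign rule is forced.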
=== The Free CDGA ===
Similar to the previous case, one can also construct the free CDGA. Given a graded vector space $V_{\bullet }$, we define the free graded-commutative algebra on it by

$$S(V)=\operatorname {Sym} \left(\bigoplus _{i=2k}V_{i}\right)\otimes \bigwedge \left(\bigoplus _{i=2k+1}V_{i}\right)$$

where $\operatorname {Sym}$ denotes the symmetric algebra on the even-degree part and $\bigwedge$ denotes the exterior algebra on the odd-degree part. If we begin with a DG vector space $(V_{\bullet },d)$ (either homologically or cohomologically graded), then we can extend $d$ to $S(V)$ in a unique way such that $(S(V),d)$ is a CDGA.
== Models for DGAs ==
As mentioned previously, one is often most interested in the (co)homology of a DGA. As such, the specific (co)chain complex used is less important, as long as it has the right (co)homology. Given a DGA $A$, we say that another DGA $M$ is a model for $A$ if it comes with a surjective DGA morphism $p:M\to A$ that is a quasi-isomorphism.
=== Minimal Models ===
Since one could form arbitrarily large (co)chain complexes with the same cohomology, it is useful to consider the "smallest" possible model of a DGA. We say that a DGA $(A,d,\cdot )$ is minimal if it is free as a graded algebra and its differential is decomposable, i.e., $d(A)\subseteq A^{+}\cdot A^{+}$, where $A^{+}$ denotes the part of positive degree.

Note that some conventions, often used in algebraic topology, additionally require that $A$ be simply connected, which means that $A^{0}=k$ and $A^{1}=0$. This condition on the 0th and 1st degree components of $A$ mirrors the (co)homology groups of a simply connected space.
Finally, we say that $M$ is a minimal model for $A$ if it is both minimal and a model for $A$. The fundamental theorem of minimal models states that if $A$ is simply connected then it admits a minimal model, and that if a minimal model exists, it is unique up to (non-unique) isomorphism.
=== The Sullivan minimal model ===
Minimal models were used with great success by Dennis Sullivan in his work on rational homotopy theory. Given a simplicial complex $X$, one can define a rational analogue of the (real) de Rham algebra: the DGA $A_{PL}(X)$ of "piecewise polynomial" differential forms with $\mathbb {Q}$-coefficients. Then $A_{PL}(X)$ has the structure of a CDGA over the field $\mathbb {Q}$, and in fact its cohomology is isomorphic to the singular cohomology of $X$. In particular, if $X$ is a simply connected topological space then $A_{PL}(X)$ is simply connected as a DGA, and thus a minimal model exists.
Moreover, since $A_{PL}(X)$ is a CDGA whose cohomology is the singular cohomology of $X$ with $\mathbb {Q}$-coefficients, it is a solution to the commutative cochain problem. Thus, if $X$ is a simply connected CW complex with finite-dimensional rational homology groups, the minimal model of the CDGA $A_{PL}(X)$ captures the rational homotopy type of $X$ entirely.
== See also ==
Differential graded Lie algebra
Rational homotopy theory
Homotopy associative algebra
== Notes ==
== References ==
In mathematics – particularly in homological algebra, algebraic topology, and algebraic geometry – a differential graded algebra (or DGA, or DG algebra) is an algebraic structure often used to capture information about a topological or geometric space. Explicitly, a differential graded algebra is a graded associative algebra with a chain complex structure that is compatible with the algebra structure.
In geometry, the de Rham algebra of differential forms on a manifold has the structure of a differential graded algebra, and it encodes the de Rham cohomology of the manifold. In algebraic topology, the singular cochains of a topological space form a DGA encoding the singular cohomology. Moreover, American mathematician Dennis Sullivan developed a DGA to encode the rational homotopy type of topological spaces.
== Definitions ==
Let
A
∙
=
⨁
i
∈
Z
A
i
{\displaystyle A_{\bullet }=\bigoplus \nolimits _{i\in \mathbb {Z} }A_{i}}
be a
Z
{\displaystyle \mathbb {Z} }
-graded algebra, with product
⋅
{\displaystyle \cdot }
, equipped with a map
d
:
A
∙
→
A
∙
{\displaystyle d\colon A_{\bullet }\to A_{\bullet }}
of degree
−
1
{\displaystyle -1}
(homologically graded) or degree
+
1
{\displaystyle +1}
(cohomologically graded). We say that
(
A
∙
,
d
,
⋅
)
{\displaystyle (A_{\bullet },d,\cdot )}
is a differential graded algebra if
d
{\displaystyle d}
is a differential, giving
A
∙
{\displaystyle A_{\bullet }}
the structure of a chain complex or cochain complex (depending on the degree), and satisfies a graded Leibniz rule. In what follows, we will denote the "degree" of a homogeneous element
a
∈
A
i
{\displaystyle a\in A_{i}}
by
|
a
|
=
i
{\displaystyle |a|=i}
. Explicitly, the map
d
{\displaystyle d}
satisfies the conditions
Often one omits the differential and multiplication and simply writes
A
∙
{\displaystyle A_{\bullet }}
or
A
{\displaystyle A}
to refer to the DGA
(
A
∙
,
d
,
⋅
)
{\displaystyle (A_{\bullet },d,\cdot )}
.
A linear map
f
:
A
∙
→
B
∙
{\displaystyle f:A_{\bullet }\to B_{\bullet }}
between graded vector spaces is said to be of degree n if
f
(
A
i
)
⊆
B
i
+
n
{\displaystyle f(A_{i})\subseteq B_{i+n}}
for all
i
{\displaystyle i}
. When considering (co)chain complexes, we restrict our attention to chain maps, that is, maps of degree 0 that commute with the differentials
f
∘
d
A
=
d
B
∘
f
{\displaystyle f\circ d_{A}=d_{B}\circ f}
. The morphisms in the category of DGAs are chain maps that are also algebra homomorphisms.
=== Categorical Definition ===
One can also define DGAs more abstractly using category theory. There is a category of chain complexes over a ring
R
{\displaystyle R}
, often denoted
Ch
R
{\displaystyle \operatorname {Ch} _{R}}
, whose objects are chain complexes and whose morphisms are chain maps. We define the tensor product of chain complexes
(
V
,
d
V
)
{\displaystyle (V,d_{V})}
and
(
W
,
d
W
)
{\displaystyle (W,d_{W})}
by
(
V
⊗
W
)
n
=
⨁
i
+
j
=
n
V
i
⊗
R
W
j
{\displaystyle (V\otimes W)_{n}=\bigoplus _{i+j=n}V_{i}\otimes _{R}W_{j}}
with differential
d
(
v
⊗
w
)
=
(
d
V
v
)
⊗
w
−
(
−
1
)
|
v
|
v
⊗
(
d
W
w
)
{\displaystyle d(v\otimes w)=(d_{V}v)\otimes w-(-1)^{|v|}v\otimes (d_{W}w)}
This operation makes
Ch
R
{\displaystyle \operatorname {Ch} _{R}}
into a symmetric monoidal category. Then, we can equivalently define a differential graded algebra as a monoid object in
Ch
R
{\displaystyle \operatorname {Ch} _{R}}
. Heuristically, it is an object in
Ch
R
{\displaystyle \operatorname {Ch} _{R}}
with an associative and unital multiplication.
=== Homology and Cohomology ===
Associated to any chain complex
(
A
∙
,
d
)
{\displaystyle (A_{\bullet },d)}
is its homology. Since
d
∘
d
=
0
{\displaystyle d\circ d=0}
, it follows that
im
(
d
:
A
i
+
1
→
A
i
)
{\displaystyle \operatorname {im} (d:A_{i+1}\to A_{i})}
is a subobject of
ker
(
d
:
A
i
→
A
i
−
1
)
{\displaystyle \operatorname {ker} (d:A_{i}\to A_{i-1})}
. Thus, we can form the quotient
H
i
(
A
∙
)
=
ker
(
d
:
A
i
→
A
i
−
1
)
/
im
(
d
:
A
i
+
1
→
A
i
)
{\displaystyle H_{i}(A_{\bullet })=\operatorname {ker} (d:A_{i}\to A_{i-1})/\operatorname {im} (d:A_{i+1}\to A_{i})}
This is called the
i
{\displaystyle i}
th homology group, and all together they form a graded vector space
H
∙
(
A
)
{\displaystyle H_{\bullet }(A)}
. In fact, the homology groups form a DGA with zero differential. Analogously, one can define the cohomology groups of a cochain complex, which also form a graded algebra with zero differential.
Every chain map
f
:
(
A
∙
,
d
A
)
→
(
B
∙
,
d
B
)
{\displaystyle f:(A_{\bullet },d_{A})\to (B_{\bullet },d_{B})}
of complexes induces a map on (co)homology, often denoted
f
∗
:
H
∙
(
A
)
→
H
∙
(
B
)
{\displaystyle f_{*}:H_{\bullet }(A)\to H_{\bullet }(B)}
(respectively
f
∗
:
H
∙
(
B
)
→
H
∙
(
A
)
{\displaystyle f^{*}:H^{\bullet }(B)\to H^{\bullet }(A)}
). If this induced map is an isomorphism on all (co)homology groups, the map
f
{\displaystyle f}
is called a quasi-isomorphism. In many contexts, this is the natural notion of equivalence one uses for (co)chain complexes. We say a morphism of DGAs is a quasi-isomorphism if the chain map on the underlying (co)chain complexes is.
== Properties of DGAs ==
=== Commutative Differential Graded Algebras ===
A commutative differential graded algebra (or CDGA) is a differential graded algebra,
(
A
∙
,
d
,
⋅
)
{\displaystyle (A_{\bullet },d,\cdot )}
, which satisfies a graded version of commutativity. Namely,
a
⋅
b
=
(
−
1
)
|
a
|
|
b
|
b
⋅
a
{\displaystyle a\cdot b=(-1)^{|a||b|}b\cdot a}
for homogeneous elements
a
∈
A
i
,
b
∈
A
j
{\displaystyle a\in A_{i},b\in A_{j}}
. Many of the DGAs commonly encountered in math happen to be CDGAs, like the de Rham algebra of differential forms.
=== Differential graded Lie algebras ===
A differential graded Lie algebra (or DGLA) is a differential graded analogue of a Lie algebra. That is, it is a differential graded vector space,
(
L
∙
,
d
)
{\displaystyle (L_{\bullet },d)}
, together with an operation
[
,
]
:
L
i
⊗
L
j
→
L
i
+
j
{\displaystyle [,]:L_{i}\otimes L_{j}\to L_{i+j}}
, satisfying the following graded analogues of the Lie algebra axioms.
An example of a DGLA is the de Rham algebra
Ω
∙
(
M
)
{\displaystyle \Omega ^{\bullet }(M)}
tensored with a Lie algebra
g
{\displaystyle {\mathfrak {g}}}
, with the bracket given by the exterior product of the differential forms and Lie bracket; elements of this DGLA are known as Lie algebra–valued differential forms. DGLAs also arise frequently in the study of deformations of algebraic structures where, over a field of characteristic 0, "nice" deformation problems are described by the space of Maurer-Cartan elements of some suitable DGLA.
=== Formal DGAs ===
A (co)chain complex
C
∙
{\displaystyle C_{\bullet }}
is called formal if there is a chain map to its (co)homology
H
∙
(
C
∙
)
{\displaystyle H_{\bullet }(C_{\bullet })}
(respectively
H
∙
(
C
∙
)
{\displaystyle H^{\bullet }(C_{\bullet })}
), thought of as a complex with 0 differential, that is a quasi-isomorphism. We say that a DGA
A
{\displaystyle A}
is formal if there exists a morphism of DGAs
A
→
H
∙
(
A
)
{\displaystyle A\to H_{\bullet }(A)}
(respectively
A
→
H
∙
(
A
)
{\displaystyle A\to H^{\bullet }(A)}
) that is a quasi-isomorphism. This notion is important, for instance, when one wants to consider quasi-isomorphic chain complexes or DGAs as being equivalent, as in the derived category.
== Examples ==
=== Trivial DGAs ===
Notice that any graded algebra
A
=
⨁
i
A
i
{\displaystyle A=\bigoplus \nolimits _{i}A_{i}}
has the structure of a DGA with trivial differential, i.e.,
d
=
0
{\displaystyle d=0}
. In particular, as noted above, the (co)homology of any DGA forms a trivial DGA, since it is a graded algebra.
=== The de-Rham algebra ===
Let
M
{\displaystyle M}
be a manifold. Then, the differential forms on
M
{\displaystyle M}
, denoted by
Ω
∙
(
M
)
{\displaystyle \Omega ^{\bullet }(M)}
, naturally have the structure of a (cohomologically graded) DGA. The graded vector space is
Ω
∙
(
M
)
{\displaystyle \Omega ^{\bullet }(M)}
, where the grading is given by form degree. This vector space has a product, given by the exterior product, which makes it into a graded algebra. Finally, the exterior derivative
d
:
Ω
i
(
M
)
→
Ω
i
+
1
(
M
)
{\displaystyle d:\Omega ^{i}(M)\to \Omega ^{i+1}(M)}
satisfies
d
2
=
0
{\displaystyle d^{2}=0}
and the graded Leibniz rule. In fact, the exterior product is graded-commutative, which makes the de Rham algebra an example of a CDGA.
=== Singular Cochains ===
Let
X
{\displaystyle X}
be a topological space. Recall that we can associate to
X
{\displaystyle X}
its complex of singular cochains with coefficients in a ring
R
{\displaystyle R}
, denoted
(
C
∙
(
X
;
R
)
,
d
)
{\displaystyle (C^{\bullet }(X;R),d)}
, whose cohomology is the singular cohomology of
X
{\displaystyle X}
. On
C
∙
(
X
;
R
)
{\displaystyle C^{\bullet }(X;R)}
, one can define the cup product of cochains, which gives this cochain complex the structure of a DGA. In the case where
X
{\displaystyle X}
is a smooth manifold and
R
=
R
{\displaystyle R=\mathbb {R} }
, the de Rham theorem states that the singular cohomology is isomorphic to the de Rham cohomology and, moreover, the cup product and exterior product of differential forms induce the same operation on cohomology.
Note, however, that while the cup product induces a graded-commutative operation on cohomology, it is not graded commutative directly on cochains. This is an important distinction, and the failure of a DGA to be commutative is referred to as the "commutative cochain problem". This problem is important because if, for any topological space
X
{\displaystyle X}
, one can associate a commutative DGA whose cohomology is the singular cohomology of
X
{\displaystyle X}
over
R
{\displaystyle R}
, then this CDGA determines the
R
{\displaystyle R}
-homotopy type of
X
{\displaystyle X}
.
=== The Free DGA ===
Let $V$ be a (non-graded) vector space over a field $k$. The tensor algebra $T(V)$ is defined to be the graded algebra
$$T(V)=\bigoplus_{i\geq 0}T^{i}(V)=\bigoplus_{i\geq 0}V^{\otimes i}$$
where, by convention, we take $T^{0}(V)=k$. This vector space can be made into a graded algebra with the multiplication $T^{i}(V)\otimes T^{j}(V)\to T^{i+j}(V)$ given by the tensor product $\otimes$. This is the free algebra on $V$, and can be thought of as the algebra of all non-commuting polynomials in the elements of $V$.
One can give the tensor algebra the structure of a DGA as follows. Let $f:V\to k$ be any linear map. Then $f$ extends uniquely to a derivation of $T(V)$ of degree $-1$ (homologically graded) by the formula
$$d_{f}(v_{1}\otimes\cdots\otimes v_{n})=\sum_{i=1}^{n}(-1)^{i-1}v_{1}\otimes\cdots\otimes f(v_{i})\otimes\cdots\otimes v_{n}.$$
One can think of the minus signs on the right-hand side as coming from "jumping" the map $f$ over the elements $v_{1},\ldots,v_{i-1}$, which are all of degree 1 in $T(V)$. This is commonly referred to as the Koszul sign rule.
One can extend this construction to differential graded vector spaces. Let $(V_{\bullet},d_{V})$ be a differential graded vector space, i.e., $d_{V}:V_{i}\to V_{i-1}$ and $d_{V}^{2}=0$. Here we work with a homologically graded DG vector space, but this construction works equally well for a cohomologically graded one. Then we can endow the tensor algebra $T(V)$ with a DGA structure which extends the DG structure on $V$. The differential is given by
$$d(v_{1}\otimes\cdots\otimes v_{n})=\sum_{i=1}^{n}(-1)^{|v_{1}|+\ldots+|v_{i-1}|}v_{1}\otimes\cdots\otimes d_{V}(v_{i})\otimes\cdots\otimes v_{n}.$$
This is similar to the previous case, except that now the elements of $V$ can have different degrees, and $T(V)$ is no longer graded by the number of tensor factors but instead by the sum of the degrees of the elements of $V$, i.e., $|v_{1}\otimes\cdots\otimes v_{n}|=|v_{1}|+\ldots+|v_{n}|$.
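The sign rule in this formula can be checked mechanically. Below is a small sketch (with a hypothetical two-element basis: $x$ in degree 1 and $y$ in degree 0, and $d_{V}(x)=y$, $d_{V}(y)=0$) that implements the induced differential on tensor words and verifies $d\circ d=0$:

```python
from collections import defaultdict

# Hypothetical basis of V with homological degrees, and d_V on basis elements.
deg = {"x": 1, "y": 0}
dV = {"x": [(1, "y")], "y": []}  # symbol -> list of (coefficient, symbol)

def d(element):
    """Induced differential on T(V); element is a dict word(tuple) -> coeff.
    The Koszul sign (-1)^(|v_1|+...+|v_{i-1}|) accumulates as we pass each factor."""
    out = defaultdict(int)
    for word, c in element.items():
        sign_exp = 0
        for i, v in enumerate(word):
            for cf, w in dV[v]:
                new_word = word[:i] + (w,) + word[i + 1:]
                out[new_word] += ((-1) ** sign_exp) * c * cf
            sign_exp += deg[v]
    return {w: c for w, c in out.items() if c != 0}

elem = {("x", "x"): 1}
once = d(elem)    # y⊗x - x⊗y, by the sign rule
twice = d(once)   # the two terms y⊗y cancel: d∘d = 0
```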
=== The Free CDGA ===
Similar to the previous case, one can also construct the free CDGA. Given a graded vector space $V_{\bullet}$, we define the free graded-commutative algebra on it by
$$S(V)=\operatorname{Sym}\left(\bigoplus_{i=2k}V_{i}\right)\otimes\bigwedge\left(\bigoplus_{i=2k+1}V_{i}\right)$$
where $\operatorname{Sym}$ denotes the symmetric algebra and $\bigwedge$ denotes the exterior algebra. If we begin with a DG vector space $(V_{\bullet},d)$ (either homologically or cohomologically graded), then we can extend $d$ to $S(V)$ in a unique way such that $(S(V),d)$ is a CDGA.
== Models for DGAs ==
As mentioned previously, one is often most interested in the (co)homology of a DGA. As such, the specific (co)chain complex used is less important, as long as it has the right (co)homology. Given a DGA $A$, we say that another DGA $M$ is a model for $A$ if it comes with a surjective DGA morphism $p:M\to A$ that is a quasi-isomorphism.
=== Minimal Models ===
Since one could form arbitrarily large (co)chain complexes with the same cohomology, it is useful to consider the "smallest" possible model of a DGA. We say that a DGA $(A,d,\cdot)$ is minimal if it satisfies the following conditions.
Note that some conventions, often used in algebraic topology, additionally require that $A$ be simply connected, which means that $A^{0}=k$ and $A^{1}=0$. This condition on the 0th and 1st degree components of $A$ mirrors the (co)homology groups of a simply connected space.
Finally, we say that $M$ is a minimal model for $A$ if it is both minimal and a model for $A$. The fundamental theorem of minimal models states that if $A$ is simply connected then it admits a minimal model, and that if a minimal model exists it is unique up to (non-unique) isomorphism.
=== The Sullivan minimal model ===
Minimal models were used with great success by Dennis Sullivan in his work on rational homotopy theory. Given a simplicial complex $X$, one can define a rational analogue of the (real) de Rham algebra: the DGA $A_{PL}(X)$ of "piecewise polynomial" differential forms with $\mathbb{Q}$-coefficients. Then $A_{PL}(X)$ has the structure of a CDGA over the field $\mathbb{Q}$, and in fact its cohomology is isomorphic to the singular cohomology of $X$. In particular, if $X$ is a simply connected topological space, then $A_{PL}(X)$ is simply connected as a DGA, so there exists a minimal model.
Moreover, since $A_{PL}(X)$ is a CDGA whose cohomology is the singular cohomology of $X$ with $\mathbb{Q}$-coefficients, it is a solution to the commutative cochain problem. Thus, if $X$ is a simply connected CW complex with finite-dimensional rational homology groups, the minimal model of the CDGA $A_{PL}(X)$ captures the rational homotopy type of $X$ entirely.
== See also ==
Differential graded Lie algebra
Rational homotopy theory
Homotopy associative algebra
== Notes ==
== References ==
In mathematics, an order in the sense of ring theory is a subring $\mathcal{O}$ of a ring $A$, such that
$A$ is a finite-dimensional algebra over the field $\mathbb{Q}$ of rational numbers,
$\mathcal{O}$ spans $A$ over $\mathbb{Q}$, and
$\mathcal{O}$ is a $\mathbb{Z}$-lattice in $A$.
The last two conditions can be stated in less formal terms: additively, $\mathcal{O}$ is a free abelian group generated by a basis for $A$ over $\mathbb{Q}$.
More generally, for $R$ an integral domain with fraction field $K$, an $R$-order in a finite-dimensional $K$-algebra $A$ is a subring $\mathcal{O}$ of $A$ which is a full $R$-lattice; i.e., a finitely generated $R$-module with the property that $\mathcal{O}\otimes_{R}K=A$.
When $A$ is not a commutative ring, the idea of an order is still important, but the phenomena are different. For example, the Hurwitz quaternions form a maximal order in the quaternions with rational coordinates; they are not the quaternions with integer coordinates in the most obvious sense. Maximal orders exist in general, but need not be unique: there is in general no largest order, but rather a number of maximal orders. An important class of examples is that of integral group rings.
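The Hurwitz phenomenon can be verified with exact rational arithmetic. The following sketch (illustrative, not from the source) checks that the Hurwitz element $\omega=(1+i+j+k)/2$ satisfies $\omega^{2}=\omega-1$, so it is a root of $X^{2}-X+1$ and hence integral over $\mathbb{Z}$ despite having non-integer coordinates:

```python
from fractions import Fraction as Fr

def qmul(p, q):
    """Hamilton quaternion product of 4-tuples (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

half = Fr(1, 2)
omega = (half, half, half, half)  # (1 + i + j + k)/2, a Hurwitz quaternion
omega_sq = qmul(omega, omega)
# omega_sq equals omega - 1 = (-1/2, 1/2, 1/2, 1/2), so omega is integral
# over Z even though its coordinates are half-integers.
```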
== Examples ==
Some examples of orders are:
If $A$ is the matrix ring $M_{n}(K)$ over $K$, then the matrix ring $M_{n}(R)$ over $R$ is an $R$-order in $A$.
If $R$ is an integral domain and $L$ a finite separable extension of $K$, then the integral closure $S$ of $R$ in $L$ is an $R$-order in $L$.
If $a\in A$ is an integral element over $R$, then the polynomial ring $R[a]$ is an $R$-order in the algebra $K[a]$.
If $A$ is the group ring $K[G]$ of a finite group $G$, then $R[G]$ is an $R$-order in $K[G]$.
A fundamental property of $R$-orders is that every element of an $R$-order is integral over $R$.
If the integral closure $S$ of $R$ in $A$ is an $R$-order, then the integrality of every element of every $R$-order shows that $S$ must be the unique maximal $R$-order in $A$. However, $S$ need not always be an $R$-order: indeed, $S$ need not even be a ring, and even if $S$ is a ring (for example, when $A$ is commutative), then $S$ need not be an $R$-lattice.
== Algebraic number theory ==
The leading example is the case where $A$ is a number field $K$ and $\mathcal{O}$ is its ring of integers. In algebraic number theory there are examples, for any $K$ other than the rational field, of proper subrings of the ring of integers that are also orders. For example, in the field extension $A=\mathbb{Q}(i)$ of Gaussian rationals over $\mathbb{Q}$, the integral closure of $\mathbb{Z}$ is the ring of Gaussian integers $\mathbb{Z}[i]$, and so this is the unique maximal $\mathbb{Z}$-order: all other orders in $A$ are contained in it. For example, we can take the subring of complex numbers of the form $a+2bi$, with $a$ and $b$ integers.
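That this set is closed under multiplication follows from $(a+2bi)(c+2di)=(ac-4bd)+2(ad+bc)i$, whose imaginary part is again even. A brute-force sketch (illustrative only) of this closure, together with a check that the order is properly contained in $\mathbb{Z}[i]$:

```python
# Sanity check that O = { a + 2bi : a, b in Z } is closed under
# multiplication, hence a suborder of Z[i], and that it is proper.

def mul(p, q):
    """Multiply Gaussian numbers given as (real, imag) integer pairs."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def in_order(z):
    """Membership test for O: the imaginary part must be even."""
    return z[1] % 2 == 0

# Check closure on a small grid of elements a + 2bi.
elems = [(a, 2 * b) for a in range(-3, 4) for b in range(-3, 4)]
closed = all(in_order(mul(p, q)) for p in elems for q in elems)
# i itself lies in Z[i] but not in O, so O is a proper suborder.
proper = not in_order((0, 1))
```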
The maximal order question can be examined at a local field level. This technique is applied in algebraic number theory and modular representation theory.
== See also ==
Hurwitz quaternion order – An example of ring order
== Notes ==
== References ==
Pohst, M.; Zassenhaus, H. (1989). Algorithmic Algebraic Number Theory. Encyclopedia of Mathematics and its Applications. Vol. 30. Cambridge University Press. ISBN 0-521-33060-2. Zbl 0685.12001.
Reiner, I. (2003). Maximal Orders. London Mathematical Society Monographs. New Series. Vol. 28. Oxford University Press. ISBN 0-19-852673-3. Zbl 1024.16008.
In mathematics, an Azumaya algebra is a generalization of central simple algebras to $R$-algebras where $R$ need not be a field. Such a notion was introduced in a 1951 paper of Goro Azumaya, for the case where $R$ is a commutative local ring. The notion was developed further in ring theory, and in algebraic geometry, where Alexander Grothendieck made it the basis for his geometric theory of the Brauer group in Bourbaki seminars from 1964–65. There are now several points of access to the basic definitions.
== Over a ring ==
An Azumaya algebra over a commutative ring $R$ is an $R$-algebra $A$ obeying any of the following equivalent conditions:
There exists an $R$-algebra $B$ such that the tensor product of $R$-algebras $B\otimes_{R}A$ is Morita equivalent to $R$.
The $R$-algebra $A^{\mathrm{op}}\otimes_{R}A$ is Morita equivalent to $R$, where $A^{\mathrm{op}}$ is the opposite algebra of $A$.
The center of $A$ is $R$, and $A$ is separable.
$A$ is finitely generated, faithful, and projective as an $R$-module, and the tensor product $A\otimes_{R}A^{\mathrm{op}}$ is isomorphic to $\operatorname{End}_{R}(A)$ via the map sending $a\otimes b$ to the endomorphism $x\mapsto axb$ of $A$.
=== Examples over a field ===
Over a field $k$, Azumaya algebras are completely classified by the Artin–Wedderburn theorem, since they are the same as central simple algebras. These are the algebras isomorphic to a matrix ring $\mathrm{M}_{n}(D)$ for some division algebra $D$ over $k$ whose center is just $k$. For example, quaternion algebras provide examples of central simple algebras.
=== Examples over local rings ===
Given a local commutative ring $(R,\mathfrak{m})$, an $R$-algebra $A$ is Azumaya if and only if $A$ is free of positive finite rank as an $R$-module and the algebra $A\otimes_{R}(R/\mathfrak{m})$ is a central simple algebra over $R/\mathfrak{m}$; hence all examples come from central simple algebras over $R/\mathfrak{m}$.
=== Cyclic algebras ===
There is a class of Azumaya algebras called cyclic algebras, which generate all similarity classes of Azumaya algebras over a field $K$, hence all elements in the Brauer group $\operatorname{Br}(K)$ (defined below). Given a finite cyclic Galois field extension $L/K$ of degree $n$, for every $b\in K^{*}$ and any generator $\sigma\in\operatorname{Gal}(L/K)$ there is a twisted polynomial ring $L[x]_{\sigma}$, also denoted $A(\sigma,b)$, generated by an element $x$ such that $x^{n}=b$ and the following commutation property holds:
$$x\cdot l=\sigma(l)\cdot x\quad\text{for all }l\in L.$$
As a vector space over $L$, $L[x]_{\sigma}$ has basis $1,x,x^{2},\ldots,x^{n-1}$, with multiplication given by
$$x^{i}\cdot x^{j}={\begin{cases}x^{i+j}&{\text{if }}i+j<n\\x^{i+j-n}b&{\text{if }}i+j\geq n\end{cases}}$$
Note that, given a geometrically integral variety $X/K$, there is also an associated cyclic algebra for the quotient field extension $\operatorname{Frac}(X_{L})/\operatorname{Frac}(X)$.
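For a concrete instance (a sketch, under the commutation convention $x\cdot l=\sigma(l)\cdot x$, not taken from the source), let $L=\mathbb{Q}(i)$, let $\sigma$ be complex conjugation, and let $b=-1$: the cyclic algebra $A(\sigma,-1)$ is Hamilton's quaternions, with $x$ playing the role of $j$:

```python
B = -1  # the parameter b in A(sigma, b); with L = Q(i) and sigma = conjugation,
        # this reproduces Hamilton's quaternions.

def cmul(p, q):
    """Multiply p = l0 + l1*x and q = m0 + m1*x in L[x]_sigma,
    using x*l = sigma(l)*x and x**2 = B; coefficients are Python complexes."""
    l0, l1 = p
    m0, m1 = q
    return (l0 * m0 + B * l1 * m1.conjugate(),   # coefficient of 1
            l0 * m1 + l1 * m0.conjugate())       # coefficient of x

i = (1j, 0j)     # the element i of L inside the algebra
j = (0j, 1 + 0j) # the element x
k = cmul(i, j)   # i*j, which should behave like the quaternion k
# One checks i^2 = j^2 = -1 and i*j = -j*i, the quaternion relations.
```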
== Brauer group of a ring ==
Over fields, there is a cohomological classification of Azumaya algebras using étale cohomology. In fact this group, called the Brauer group, can also be defined as the similarity classes of Azumaya algebras over a ring $R$, where rings $A,A'$ are similar if there is an isomorphism of rings $A\otimes_{R}M_{n}(R)\cong A'\otimes_{R}M_{m}(R)$ for some natural numbers $n,m$. This similarity is in fact an equivalence relation, and if $A_{1}\sim A_{1}'$ and $A_{2}\sim A_{2}'$, then $A_{1}\otimes_{R}A_{2}\sim A_{1}'\otimes_{R}A_{2}'$, showing that $[A_{1}]\otimes[A_{2}]=[A_{1}\otimes_{R}A_{2}]$ is a well-defined operation. This operation makes the set of equivalence classes into a group, called the Brauer group and denoted $\operatorname{Br}(R)$. Another definition is given by the torsion subgroup of the étale cohomology group
$$\operatorname{Br}_{\text{coh}}(R):=\operatorname{H}_{et}^{2}(\operatorname{Spec}(R),\mathbb{G}_{m})_{\text{tors}},$$
which is called the cohomological Brauer group. These two definitions agree when $R$ is a field.
=== Brauer group using Galois cohomology ===
There is another equivalent definition of the Brauer group using Galois cohomology. For a field extension $E/F$ there is a cohomological Brauer group defined as
$$\operatorname{Br}^{\text{coh}}(E/F):=H_{\text{Gal}}^{2}(\operatorname{Gal}(E/F),E^{\times})$$
and the cohomological Brauer group of $F$ is defined as
$$\operatorname{Br}^{\text{coh}}(F)=\underset{E/F}{\operatorname{colim}}\,H_{\text{Gal}}^{2}(\operatorname{Gal}(E/F),E^{\times})$$
where the colimit is taken over all finite Galois field extensions.
==== Computation for a local field ====
Over a local non-archimedean field $F$, such as the $p$-adic numbers $\mathbb{Q}_{p}$, local class field theory gives an isomorphism of abelian groups
$$\operatorname{Br}^{\text{coh}}(F)\cong\mathbb{Q}/\mathbb{Z}.$$
This is because, given abelian field extensions $E_{2}/E_{1}/F$, there is a short exact sequence of Galois groups
$$0\to\operatorname{Gal}(E_{2}/E_{1})\to\operatorname{Gal}(E_{2}/F)\to\operatorname{Gal}(E_{1}/F)\to 0$$
and from local class field theory there is the following commutative diagram:
$${\begin{matrix}H_{\text{Gal}}^{2}(\operatorname{Gal}(E_{2}/F),E_{1}^{\times})&\to&H_{\text{Gal}}^{2}(\operatorname{Gal}(E_{1}/F),E_{1}^{\times})\\\downarrow&&\downarrow\\\left({\frac{1}{[E_{2}:E_{1}]}}\mathbb{Z}\right)/\mathbb{Z}&\to&\left({\frac{1}{[E_{1}:F]}}\mathbb{Z}\right)/\mathbb{Z}\end{matrix}}$$
where the vertical maps are isomorphisms and the horizontal maps are injections.
=== n-torsion for a field ===
Recall that there is the Kummer sequence
$$1\to\mu_{n}\to\mathbb{G}_{m}\xrightarrow{\cdot n}\mathbb{G}_{m}\to 1,$$
giving a long exact sequence in cohomology for a field $F$. Since Hilbert's Theorem 90 implies $H^{1}(F,\mathbb{G}_{m})=0$, there is an associated short exact sequence
$$0\to H_{et}^{2}(F,\mu_{n})\to\operatorname{Br}(F)\xrightarrow{\cdot n}\operatorname{Br}(F)\to 0,$$
showing that the second étale cohomology group with coefficients in the $n$th roots of unity $\mu_{n}$ is
$$H_{et}^{2}(F,\mu_{n})=\operatorname{Br}(F)_{n\text{-tors}}.$$
=== Generators of n-torsion classes in the Brauer group over a field ===
The Galois symbol, or norm-residue symbol, is a map from the $n$-torsion Milnor K-theory group $K_{2}^{M}(F)\otimes\mathbb{Z}/n$ to the étale cohomology group $H_{et}^{2}(F,\mu_{n}^{\otimes 2})$, denoted by
$$R_{n,F}:K_{2}^{M}(F)\otimes_{\mathbb{Z}}\mathbb{Z}/n\mathbb{Z}\to H_{et}^{2}(F,\mu_{n}^{\otimes 2}).$$
It comes from the composition of the cup product in étale cohomology with the Hilbert's Theorem 90 isomorphism
$$\chi_{n,F}:F^{*}\otimes_{\mathbb{Z}}\mathbb{Z}/n\to H_{\text{et}}^{1}(F,\mu_{n}),$$
hence
$$R_{n,F}(\{a,b\})=\chi_{n,F}(a)\cup\chi_{n,F}(b).$$
It turns out this map factors through $H_{\text{et}}^{2}(F,\mu_{n})=\operatorname{Br}(F)_{n\text{-tors}}$, and the class corresponding to $\{a,b\}$ is represented by a cyclic algebra $[A(\sigma,b)]$. For the Kummer extension $E/F$ where $E=F({\sqrt[{n}]{a}})$, take a generator $\sigma\in\operatorname{Gal}(E/F)$ of the cyclic group, and construct $[A(\sigma,b)]$.
There is an alternative, yet equivalent, construction through Galois cohomology and étale cohomology. Consider the short exact sequence of trivial $\operatorname{Gal}({\overline{F}}/F)$-modules
$$0\to\mathbb{Z}\to\mathbb{Z}\to\mathbb{Z}/n\to 0.$$
The long exact sequence yields a map
$$H_{\text{Gal}}^{1}(F,\mathbb{Z}/n)\xrightarrow{\delta}H_{\text{Gal}}^{2}(F,\mathbb{Z}).$$
For the unique character $\chi:\operatorname{Gal}(E/F)\to\mathbb{Z}/n$ with $\chi(\sigma)=1$, there is a unique lift $\overline{\chi}:\operatorname{Gal}({\overline{F}}/F)\to\mathbb{Z}/n$, and
$$\delta({\overline{\chi}})\cup(b)=[A(\sigma,b)]\in\operatorname{Br}(F);$$
note that the class $(b)$ comes from the Hilbert's Theorem 90 map $\chi_{n,F}(b)$. Then, since there exists a primitive root of unity $\zeta\in\mu_{n}\subset F$, there is also a class
$$\delta({\overline{\chi}})\cup(b)\cup(\zeta)\in H_{\text{et}}^{2}(F,\mu_{n}^{\otimes 2}).$$
It turns out this is precisely the class $R_{n,F}(\{a,b\})$. Because of the norm residue isomorphism theorem, $R_{n,F}$ is an isomorphism and the $n$-torsion classes in $\operatorname{Br}(F)_{n\text{-tors}}$ are generated by the cyclic algebras $[A(\sigma,b)]$.
== Skolem–Noether theorem ==
One of the important structure results about Azumaya algebras is the Skolem–Noether theorem: given a local commutative ring $R$ and an Azumaya algebra $R\to A$, the only automorphisms of $A$ are inner. Meaning, the following map is surjective:
$${\begin{cases}A^{*}\to\operatorname{Aut}(A)\\a\mapsto(x\mapsto a^{-1}xa)\end{cases}}$$
where $A^{*}$ is the group of units of $A$. This is important because it directly relates to the cohomological classification of similarity classes of Azumaya algebras over a scheme. In particular, it implies that an Azumaya algebra has structure group $\operatorname{PGL}_{n}$ for some $n$, and that the Čech cohomology group
$${\check{H}}^{1}(X_{et},\operatorname{PGL}_{n})$$
gives a cohomological classification of such bundles. Then, this can be related to $H_{\text{et}}^{2}(X,\mathbb{G}_{m})$ using the exact sequence
$$1\to\mathbb{G}_{m}\to\operatorname{GL}_{n}\to\operatorname{PGL}_{n}\to 1.$$
It turns out the image of $H^{1}$ is a subgroup of the torsion subgroup $H_{\text{et}}^{2}(X,\mathbb{G}_{m})_{tors}$.
== On a scheme ==
An Azumaya algebra on a scheme $X$ with structure sheaf $\mathcal{O}_{X}$, according to the original Grothendieck seminar, is a sheaf $\mathcal{A}$ of $\mathcal{O}_{X}$-algebras that is étale-locally isomorphic to a matrix algebra sheaf; one should, however, add the condition that each matrix algebra sheaf is of positive rank. This definition makes an Azumaya algebra on $(X,\mathcal{O}_{X})$ into a 'twisted form' of the sheaf $M_{n}(\mathcal{O}_{X})$. Milne, Étale Cohomology, starts instead from the definition that it is a sheaf $\mathcal{A}$ of $\mathcal{O}_{X}$-algebras whose stalk $\mathcal{A}_{x}$ at each point $x$ is an Azumaya algebra over the local ring $\mathcal{O}_{X,x}$ in the sense given above.
Two Azumaya algebras $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ are equivalent if there exist locally free sheaves $\mathcal{E}_{1}$ and $\mathcal{E}_{2}$ of finite positive rank at every point such that
$$\mathcal{A}_{1}\otimes\mathrm{End}_{\mathcal{O}_{X}}(\mathcal{E}_{1})\simeq\mathcal{A}_{2}\otimes\mathrm{End}_{\mathcal{O}_{X}}(\mathcal{E}_{2}),$$
where $\mathrm{End}_{\mathcal{O}_{X}}(\mathcal{E}_{i})$ is the endomorphism sheaf of $\mathcal{E}_{i}$. The Brauer group $B(X)$ of $X$ (an analogue of the Brauer group of a field) is the set of equivalence classes of Azumaya algebras. The group operation is given by the tensor product, and the inverse is given by the opposite algebra. Note that this is distinct from the cohomological Brauer group, which is defined as $H_{\text{et}}^{2}(X,\mathbb{G}_{m})$.
=== Example over Spec(Z[1/n]) ===
The construction of a quaternion algebra over a field can be globalized to $\operatorname{Spec}(\mathbb{Z}[1/n])$ by considering the noncommutative $\mathbb{Z}[1/n]$-algebra
$$A_{a,b}={\frac{\mathbb{Z}[1/n]\langle i,j,k\rangle}{(i^{2}-a,\,j^{2}-b,\,ij-k,\,ji+k)}};$$
then, as a sheaf of $\mathcal{O}_{X}$-algebras, $\mathcal{A}_{a,b}$ has the structure of an Azumaya algebra. The reason for restricting to the open affine set $\operatorname{Spec}(\mathbb{Z}[1/n])\hookrightarrow\operatorname{Spec}(\mathbb{Z})$ is that the quaternion algebra over the points $(p)$ is governed by the Hilbert symbol: it splits exactly when $(a,b)_{p}=1$, which is true at all but finitely many primes.
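The last claim can be spot-checked numerically. The following sketch (illustrative only; it uses the standard fact that a smooth conic over $\mathbb{F}_{p}$ has a point, which lifts to $\mathbb{Q}_{p}$ by Hensel's lemma when $p\nmid 2ab$) decides the symbol by searching for a nontrivial solution of $z^{2}=ax^{2}+by^{2}$ over $\mathbb{F}_{p}$:

```python
def hilbert_symbol_unramified(a, b, p):
    """For an odd prime p with p not dividing ab, decide (a,b)_p by brute-force
    search for a nontrivial solution of z^2 = a*x^2 + b*y^2 over F_p."""
    for x in range(p):
        for y in range(p):
            for z in range(p):
                if (x, y, z) != (0, 0, 0) and (z*z - a*x*x - b*y*y) % p == 0:
                    return 1
    return -1

# Away from 2, a, and b the symbol is +1: the quaternion algebra splits there.
results = [hilbert_symbol_unramified(-1, -1, p) for p in (3, 5, 7, 11, 13)]
```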
=== Example over Pn ===
Over $\mathbb{P}_{k}^{n}$, Azumaya algebras can be constructed as $\mathcal{End}_{k}(\mathcal{E})\otimes_{k}A$ for an Azumaya algebra $A$ over a field $k$. For example, the endomorphism sheaf of $\mathcal{O}(a)\oplus\mathcal{O}(b)$ is the matrix sheaf
$$\mathcal{End}_{k}(\mathcal{O}(a)\oplus\mathcal{O}(b))={\begin{pmatrix}\mathcal{O}&\mathcal{O}(b-a)\\\mathcal{O}(a-b)&\mathcal{O}\end{pmatrix}},$$
so an Azumaya algebra over $\mathbb{P}_{k}^{n}$ can be constructed from this sheaf tensored with an Azumaya algebra $A$ over $k$, such as a quaternion algebra.
== Applications ==
There have been significant applications of Azumaya algebras in diophantine geometry, following work of Yuri Manin. The Manin obstruction to the Hasse principle is defined using the Brauer group of schemes.
== See also ==
Gerbe
Class field theory
Algebraic K-theory
Motivic cohomology
Norm residue isomorphism theorem
== References ==
In category theory, a branch of mathematics, a section is a right inverse of some morphism. Dually, a retraction is a left inverse of some morphism.
In other words, if $f:X\to Y$ and $g:Y\to X$ are morphisms whose composition $f\circ g:Y\to Y$ is the identity morphism on $Y$, then $g$ is a section of $f$, and $f$ is a retraction of $g$.
Every section is a monomorphism (every morphism with a left inverse is left-cancellative), and every retraction is an epimorphism (every morphism with a right inverse is right-cancellative).
In algebra, sections are also called split monomorphisms and retractions are also called split epimorphisms. In an abelian category, if $f:X\to Y$ is a split epimorphism with split monomorphism $g:Y\to X$, then $X$ is isomorphic to the direct sum of $Y$ and the kernel of $f$. The synonym coretraction for section is sometimes seen in the literature, although rarely in recent work.
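As a toy instance of this splitting (a sketch, not from the source): the reduction map $\mathbb{Z}/6\to\mathbb{Z}/3$ is a split epimorphism, and the section exhibits $\mathbb{Z}/6\cong\mathbb{Z}/2\oplus\mathbb{Z}/3$:

```python
# f: Z/6 -> Z/3, reduction mod 3, is a split epimorphism: g(m) = 4m mod 6
# is a homomorphism (3 * 4 = 12 = 0 mod 6) with f∘g = id. The kernel {0, 3}
# (a copy of Z/2) and the image {0, 2, 4} (a copy of Z/3) decompose Z/6.

def f(n):
    return n % 3

def g(m):
    return (4 * m) % 6

section_ok = all(f(g(m)) == m for m in range(3))  # g is a section of f
kernel = [n for n in range(6) if f(n) == 0]
image = sorted(g(m) for m in range(3))
# Every element of Z/6 is uniquely kernel element + image element.
decomposition = sorted((a + b) % 6 for a in kernel for b in image)
```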
== Properties ==
A section that is also an epimorphism is an isomorphism. Dually a retraction that is also a monomorphism is an isomorphism.
== Terminology ==
The concept of a retraction in category theory comes from the essentially similar notion of a retraction in topology: $f:X\to Y$, where $Y$ is a subspace of $X$, is a retraction in the topological sense if it is a retraction of the inclusion map $i:Y\hookrightarrow X$ in the category-theoretic sense. The concept in topology was defined by Karol Borsuk in 1931.
Borsuk's student, Samuel Eilenberg, was with Saunders Mac Lane the founder of category theory, and (as the earliest publications on category theory concerned various topological spaces) one might have expected this term to have been used initially. In fact, their earlier publications, up to, e.g., Mac Lane (1963)'s Homology, used the term right inverse. It was not until 1965, when Eilenberg and John Coleman Moore coined the dual term 'coretraction', that Borsuk's term was lifted to category theory in general. The term coretraction gave way to the term section by the end of the 1960s.
Both the left/right-inverse and the section/retraction usages are commonly seen in the literature: the former has the advantage of being familiar from the theory of semigroups and monoids; the latter is considered less confusing by some because one does not have to think about 'which way around' composition goes, an issue that has become greater with the increasing popularity of the synonym $f\circ g$ for $g\circ f$.
== Examples ==
In the category of sets, every monomorphism (injective function) with a non-empty domain is a section, and every epimorphism (surjective function) is a retraction; the latter statement is equivalent to the axiom of choice.
In the category of vector spaces over a field K, every monomorphism and every epimorphism splits; this follows from the fact that linear maps can be uniquely defined by specifying their values on a basis.
In the category of abelian groups, the epimorphism Z → Z/2Z which sends every integer to its remainder modulo 2 does not split; in fact the only morphism Z/2Z → Z is the zero map. Similarly, the natural monomorphism Z/2Z → Z/4Z doesn't split even though there is a non-trivial morphism Z/4Z → Z/2Z.
The categorical concept of a section is important in homological algebra, and is also closely related to the notion of a section of a fiber bundle in topology: in the latter case, a section of a fiber bundle is a section of the bundle projection map of the fiber bundle.
Given a quotient space $\bar{X}$ with quotient map $\pi\colon X\to\bar{X}$, a section of $\pi$ is called a transversal.
== Bibliography ==
Mac Lane, Saunders (1978). Categories for the working mathematician (2nd ed.). Springer Verlag.
Mitchell, Barry (1965). Theory of categories. Academic Press.
== See also ==
Splitting lemma
Inverse function § Left and right inverses
Transversal (combinatorics)
== Notes == | Wikipedia/Retract_(category_theory) |
Zermelo set theory (sometimes denoted by Z-), as set out in a seminal paper in 1908 by Ernst Zermelo, is the ancestor of modern Zermelo–Fraenkel set theory (ZF) and its extensions, such as von Neumann–Bernays–Gödel set theory (NBG). It bears certain differences from its descendants, which are not always understood, and are frequently misquoted. This article sets out the original axioms, with the original text (translated into English) and original numbering.
== The axioms of Zermelo set theory ==
The axioms of Zermelo set theory are stated for objects, some of which (but not necessarily all) are sets, and the remaining objects are urelements and not sets. Zermelo's language implicitly includes a membership relation ∈, an equality relation = (if it is not included in the underlying logic), and a unary predicate saying whether an object is a set. Later versions of set theory often assume that all objects are sets so there are no urelements and there is no need for the unary predicate.
AXIOM I. Axiom of extensionality (Axiom der Bestimmtheit) "If every element of a set M is also an element of N and vice versa ... then M ≡ N. Briefly, every set is determined by its elements."
AXIOM II. Axiom of elementary sets (Axiom der Elementarmengen) "There exists a set, the null set, ∅, that contains no element at all. If a is any object of the domain, there exists a set {a} containing a and only a as an element. If a and b are any two objects of the domain, there always exists a set {a, b} containing as elements a and b but no object x distinct from them both." See Axiom of empty set and Axiom of pairing.
AXIOM III. Axiom of separation (Axiom der Aussonderung) "Whenever the propositional function 𝔈(x) is defined for all elements of a set M, M possesses a subset M' containing as elements precisely those elements x of M for which 𝔈(x) is true."
AXIOM IV. Axiom of the power set (Axiom der Potenzmenge) "To every set T there corresponds a set T' , the power set of T, that contains as elements precisely all subsets of T ."
AXIOM V. Axiom of the union (Axiom der Vereinigung) "To every set T there corresponds a set ∪T, the union of T, that contains as elements precisely all elements of the elements of T ."
AXIOM VI. Axiom of choice (Axiom der Auswahl) "If T is a set whose elements all are sets that are different from ∅ and mutually disjoint, its union ∪T includes at least one subset S1 having one and only one element in common with each element of T ."
AXIOM VII. Axiom of infinity (Axiom des Unendlichen) "There exists in the domain at least one set Z that contains the null set as an element and is so constituted that to each of its elements a there corresponds a further element of the form {a}, in other words, that with each of its elements a it also contains the corresponding set {a} as element."
== Connection with standard set theory ==
The most widely used and accepted set theory is known as ZFC, which consists of Zermelo–Fraenkel set theory including the axiom of choice (AC). The links show where the axioms of Zermelo's theory correspond. There is no exact match for "elementary sets". (It was later shown that the singleton set can be derived from what is now called the "axiom of pairing": if a exists, then the pair {a, a} exists, and by extensionality {a, a} = {a}.) The empty set axiom is already assumed by the axiom of infinity, and is now included as part of it.
Zermelo set theory does not include the axioms of replacement and regularity. The axiom of replacement was first published in 1922 by Abraham Fraenkel and Thoralf Skolem, who had independently discovered that Zermelo's axioms cannot prove the existence of the set {Z0, Z1, Z2, ...} where Z0 is the set of natural numbers and Zn+1 is the power set of Zn. They both realized that the axiom of replacement is needed to prove this. The following year, John von Neumann pointed out that the axiom of regularity is necessary to build his theory of ordinals. The axiom of regularity was stated by von Neumann in 1925.
In the modern ZFC system, the "propositional function" referred to in the axiom of separation is interpreted as "any property definable by a first-order formula with parameters", so the separation axiom is replaced by an axiom schema. The notion of "first order formula" was not known in 1908 when Zermelo published his axiom system, and he later rejected this interpretation as being too restrictive. Zermelo set theory is usually taken to be a first-order theory with the separation axiom replaced by an axiom scheme with an axiom for each first-order formula. It can also be considered as a theory in second-order logic, where now the separation axiom is just a single axiom. The second-order interpretation of Zermelo set theory is probably closer to Zermelo's own conception of it, and is stronger than the first-order interpretation.
Since (Vλ, Vλ+1), where Vα is the rank-α set in the cumulative hierarchy, forms a model of second-order Zermelo set theory within ZFC whenever λ is a limit ordinal greater than the smallest infinite ordinal ω, it follows that the consistency of second-order Zermelo set theory (and therefore also that of first-order Zermelo set theory) is a theorem of ZFC. If we let λ = ω·2, the existence of an uncountable strong limit cardinal is not satisfied in such a model; thus the existence of ℶω (the smallest uncountable strong limit cardinal) cannot be proved in second-order Zermelo set theory. Similarly, the set Vω·2 ∩ L (where L is the constructible universe) forms a model of first-order Zermelo set theory wherein the existence of an uncountable weak limit cardinal is not satisfied, showing that first-order Zermelo set theory cannot even prove the existence of the smallest singular cardinal, ℵω. Within such a model, the only infinite cardinals are the aleph numbers restricted to finite index ordinals.
The axiom of infinity is usually now modified to assert the existence of the first infinite von Neumann ordinal ω; the original Zermelo axioms cannot prove the existence of this set, nor can the modified Zermelo axioms prove Zermelo's axiom of infinity. Zermelo's axioms (original or modified) cannot prove the existence of Vω as a set nor of any rank of the cumulative hierarchy of sets with infinite index. In any formulation, Zermelo set theory cannot prove the existence of the von Neumann ordinal ω·2, despite proving the existence of such an order type; thus the von Neumann definition of ordinals is not employed for Zermelo set theory.
Zermelo allowed for the existence of urelements that are not sets and contain no elements; these are now usually omitted from set theories.
== Mac Lane set theory ==
Mac Lane set theory, introduced by Mac Lane (1986), is Zermelo set theory with the axiom of separation restricted to first-order formulas in which every quantifier is bounded.
Mac Lane set theory is similar in strength to topos theory with a natural number object, or to the system in Principia mathematica. It is strong enough to carry out almost all ordinary mathematics not directly connected with set theory or logic.
== The aim of Zermelo's paper ==
The introduction states that the very existence of the discipline of set theory "seems to be threatened by certain contradictions or 'antinomies', that can be derived from its principles – principles necessarily governing our thinking, it seems – and to which no entirely satisfactory solution has yet been found". Zermelo is of course referring to the "Russell antinomy".
He says he wants to show how the original theory of Georg Cantor and Richard Dedekind can be reduced to a few definitions and seven principles or axioms. He says he has not been able to prove that the axioms are consistent.
A non-constructivist argument for their consistency goes as follows. Define Vα for α one of the ordinals 0, 1, 2, ...,ω, ω+1, ω+2,..., ω·2 as follows:
V0 is the empty set.
For α a successor of the form β+1, Vα is defined to be the collection of all subsets of Vβ.
For α a limit (e.g. ω, ω·2) then Vα is defined to be the union of Vβ for β<α.
Then the axioms of Zermelo set theory are consistent because they are true in the model Vω·2. While a non-constructivist might regard this as a valid argument, a constructivist would probably not: while there are no problems with the construction of the sets up to Vω, the construction of Vω+1 is less clear because one cannot constructively define every subset of Vω. This argument can be turned into a valid proof with the addition of a single new axiom of infinity to Zermelo set theory, simply that Vω·2 exists. This is presumably not convincing for a constructivist, but it shows that the consistency of Zermelo set theory can be proved with a theory which is not very different from Zermelo theory itself, only a little more powerful.
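The finite levels of this construction can be mirrored in a few lines of Python, with frozensets standing in for pure sets; the helper names `powerset` and `v_levels` are ours, chosen for illustration:

```python
from itertools import combinations

def powerset(s):
    """All subsets of a frozenset, as frozensets."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def v_levels(n):
    """Return [V0, V1, ..., Vn], where V0 = {} and V(a+1) = powerset(Va)."""
    levels = [frozenset()]                 # V0 is the empty set
    for _ in range(n):
        levels.append(frozenset(powerset(levels[-1])))
    return levels

levels = v_levels(4)
print([len(v) for v in levels])            # [0, 1, 2, 4, 16]
```

The doubling-by-exponentiation of the sizes (|Vα+1| = 2^|Vα|) is visible immediately, and each level contains the previous one, as expected of the cumulative hierarchy.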
== The axiom of separation ==
Zermelo comments that Axiom III of his system is the one responsible for eliminating the antinomies. It differs from the original definition by Cantor, as follows.
Sets cannot be independently defined by any arbitrary logically definable notion. They must be constructed in some way from previously constructed sets. For example, they can be constructed by taking powersets, or they can be separated as subsets of sets already "given". This, he says, eliminates contradictory ideas like "the set of all sets" or "the set of all ordinal numbers".
He disposes of the Russell paradox by means of this Theorem: "Every set M possesses at least one subset M0 that is not an element of M". Let M0 be the subset of M which, by AXIOM III, is separated out by the notion "x ∉ x". Then M0 cannot be in M. For:
If M0 is in M0, then M0 contains an element x for which x is in x (i.e. M0 itself), which would contradict the definition of M0.
If M0 is not in M0, and assuming M0 is an element of M, then M0 is an element of M that satisfies the definition "x ∉ x", and so is in M0, which is a contradiction.
Therefore, the assumption that M0 is in M is wrong, proving the theorem. Hence not all objects of the universal domain B can be elements of one and the same set. "This disposes of the Russell antinomy as far as we are concerned".
This left the problem of "the domain B" which seems to refer to something. This led to the idea of a proper class.
== Cantor's theorem ==
Zermelo's paper may be the first to mention the name "Cantor's theorem".
Cantor's theorem: "If M is an arbitrary set, then always M < P(M) [the power set of M]. Every set is of lower cardinality than the set of its subsets".
Zermelo proves this by considering a function φ: M → P(M). By Axiom III this defines the following set M' :
M' = {m: m ∉ φ(m)}.
But no element m' of M could correspond to M' , i.e. such that φ(m' ) = M' . Otherwise we can construct a contradiction:
If m' is in M' then by definition m' ∉ φ(m' ) = M' , which is the first part of the contradiction
If m' is not in M' but in M then by definition m' ∉ M' = φ(m' ) which by definition implies that m' is in M' , which is the second part of the contradiction.
so by contradiction m' does not exist. Note the close resemblance of this proof to the way Zermelo disposes of Russell's paradox.
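For a small finite set, this diagonal argument can be checked exhaustively. The sketch below (the names `phi` and `diagonal` are ours) enumerates every function φ: M → P(M) for a three-element M and verifies that the set M' = {m : m ∉ φ(m)} never lies in the image of φ:

```python
from itertools import combinations, product

M = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(M) + 1)
           for c in combinations(M, r)]            # P(M): 8 subsets

# Enumerate all 8**3 = 512 functions phi: M -> P(M).
for values in product(subsets, repeat=len(M)):
    phi = dict(zip(sorted(M), values))
    diagonal = frozenset(m for m in M if m not in phi[m])
    # Zermelo's M' is never a value of phi, so no phi is surjective.
    assert all(phi[m] != diagonal for m in M)
print("checked: no phi hits M'")
```

The assertion holds for every φ because diagonal and φ(m) always disagree at m itself, which is exactly the two-case contradiction in the proof above.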
== See also ==
S (set theory)
== References ==
=== Works cited ===
Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought, Birkhäuser, ISBN 978-3-7643-8349-7.
=== General references ===
Mac Lane, Saunders (1986), Mathematics, form and function, New York: Springer-Verlag, doi:10.1007/978-1-4612-4872-9, ISBN 0-387-96217-4, MR 0816347.
Zermelo, Ernst (1908), "Untersuchungen über die Grundlagen der Mengenlehre I" (PDF), Mathematische Annalen, 65 (2): 261–281, doi:10.1007/bf01449999, S2CID 120085563. English translation: Heijenoort, Jean van (1967), "Investigations in the foundations of set theory", From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Source Books in the History of the Sciences, Harvard Univ. Press, pp. 199–215, ISBN 978-0-674-32449-7. | Wikipedia/Zermelo_set_theory |
In category theory, a branch of mathematics, a pushout (also called a fibered coproduct or fibered sum or cocartesian square or amalgamated sum) is the colimit of a diagram consisting of two morphisms f : Z → X and g : Z → Y with a common domain. The pushout consists of an object P along with two morphisms X → P and Y → P that complete a commutative square with the two given morphisms f and g. In fact, the defining universal property of the pushout (given below) essentially says that the pushout is the "most general" way to complete this commutative square. Common notations for the pushout are
P = X ⊔Z Y and P = X +Z Y.
The pushout is the categorical dual of the pullback.
== Universal property ==
Explicitly, the pushout of the morphisms f and g consists of an object P and two morphisms i1 : X → P and i2 : Y → P such that the diagram
commutes and such that (P, i1, i2) is universal with respect to this diagram. That is, for any other such triple (Q, j1, j2) for which the following diagram commutes, there must exist a unique u : P → Q also making the diagram commute:
As with all universal constructions, the pushout, if it exists, is unique up to a unique isomorphism.
== Examples of pushouts ==
Here are some examples of pushouts in familiar categories. Note that in each case, we are only providing a construction of an object in the isomorphism class of pushouts; as mentioned above, though there may be other ways to construct it, they are all equivalent.
Suppose that X, Y, and Z as above are sets, and that f : Z → X and g : Z → Y are set functions. The pushout of f and g is the disjoint union of X and Y, where elements sharing a common preimage (in Z) are identified, together with the morphisms i1, i2 from X and Y, i.e.
P = (X ⊔ Y)/~ where ~ is the finest equivalence relation such that f(z) ~ g(z) for all z in Z. In particular, if X and Y are subsets of some larger set W and Z is their intersection, with f and g the inclusion maps of Z into X and Y, then the pushout can be canonically identified with the union X ∪ Y ⊆ W.
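This construction can be carried out directly for finite sets. The sketch below (the function name `pushout` and the tagging scheme are ours) computes the quotient of the tagged disjoint union with a small union-find:

```python
def pushout(X, Y, Z, f, g):
    """Pushout of f: Z -> X and g: Z -> Y in Set: the tagged disjoint
    union of X and Y modulo the relation generated by f(z) ~ g(z)."""
    parent = {('X', x): ('X', x) for x in X}
    parent.update({('Y', y): ('Y', y) for y in Y})

    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for z in Z:                        # glue f(z) to g(z)
        parent[find(('X', f[z]))] = find(('Y', g[z]))

    classes = {}
    for a in list(parent):
        classes.setdefault(find(a), set()).add(a)
    P = [frozenset(c) for c in classes.values()]
    i1 = {x: next(c for c in P if ('X', x) in c) for x in X}
    i2 = {y: next(c for c in P if ('Y', y) in c) for y in Y}
    return P, i1, i2

# Gluing X and Y along their intersection recovers the union:
X, Y = {1, 2, 3}, {2, 3, 4}
Z = {2, 3}
ident = {z: z for z in Z}
P, i1, i2 = pushout(X, Y, Z, ident, ident)
print(len(P))                          # 4, the size of X ∪ Y
```

Running it on two overlapping sets glued along their intersection produces one equivalence class per element of the union, as the text predicts, and the two induced maps agree on the images of Z.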
A specific case of this is the cograph of a function. If f : X → Y is a function, then the cograph of the function is the pushout of f along the identity function of X. In elementary terms, the cograph is the quotient of X ⊔ Y by the equivalence relation generated by identifying x ∈ X ⊆ X ⊔ Y with f(x) ∈ Y ⊆ X ⊔ Y. A function may be recovered from its cograph because each equivalence class in X ⊔ Y contains precisely one element of Y. Cographs are dual to graphs of functions, since the graph may be defined as the pullback of f along the identity of Y.
The construction of adjunction spaces is an example of pushouts in the category of topological spaces. More precisely, if Z is a subspace of Y and g : Z → Y is the inclusion map we can "glue" Y to another space X along Z using an "attaching map" f : Z → X. The result is the adjunction space
X ∪f Y, which is just the pushout of f and g. More generally, all identification spaces may be regarded as pushouts in this way.
A special case of the above is the wedge sum or one-point union; here we take X and Y to be pointed spaces and Z the one-point space. Then the pushout is
X ∨ Y, the space obtained by gluing the basepoint of X to the basepoint of Y.
In the category of abelian groups, pushouts can be thought of as "direct sum with gluing" in the same way we think of adjunction spaces as "disjoint union with gluing". The zero group is a subgroup of every group, so for any abelian groups A and B, we have homomorphisms
f : 0 → A and g : 0 → B. The pushout of these maps is the direct sum of A and B. Generalizing to the case where f and g are arbitrary homomorphisms from a common domain Z, one obtains for the pushout a quotient group of the direct sum; namely, we mod out by the subgroup consisting of pairs (f(z), −g(z)). Thus we have "glued" along the images of Z under f and g. A similar approach yields the pushout in the category of R-modules for any ring R.
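For finite cyclic groups this quotient description can be checked by brute force. In the sketch below, the specific choice Z = Z/2, A = Z/4, B = Z/6 with f(z) = 2z and g(z) = 3z is ours, purely for illustration; we enumerate the cosets of the subgroup generated by (f(z), −g(z)) inside the direct sum:

```python
from itertools import product

# Pushout of f: Z/2 -> Z/4, z -> 2z, and g: Z/2 -> Z/6, z -> 3z,
# computed as (Z/4 + Z/6) modulo the subgroup N generated by (f(z), -g(z)).
mA, mB = 4, 6
f = lambda z: (2 * z) % mA
g = lambda z: (3 * z) % mB

gens = {(f(z), (-g(z)) % mB) for z in (0, 1)}
N = {(0, 0)}                                    # close N up under addition
while True:
    new = {((a + c) % mA, (b + d) % mB)
           for (a, b) in N for (c, d) in gens} - N
    if not new:
        break
    N |= new

cosets = {frozenset(((a + c) % mA, (b + d) % mB) for (c, d) in N)
          for a, b in product(range(mA), range(mB))}
print(len(N), len(cosets))                      # 2 12: |N| = 2, so 24/2 = 12 cosets
```

The direct sum has 24 elements and the glued subgroup N has 2, so the pushout has 12 elements, exactly the quotient the text describes.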
In the category of groups, the pushout is called the free product with amalgamation. It shows up in the Seifert–van Kampen theorem of algebraic topology (see below).
In CRing, the category of commutative rings (a full subcategory of the category of rings), the pushout is given by the tensor product of rings
A ⊗C B with the morphisms g′ : A → A ⊗C B and f′ : B → A ⊗C B that satisfy f′ ∘ g = g′ ∘ f. In fact, since the pushout is the colimit of a span and the pullback is the limit of a cospan, we can think of the tensor product of rings and the fibered product of rings (see the examples section) as dual notions to each other. In particular, let A, B, and C be objects (commutative rings with identity) in CRing and let f : C → A and g : C → B be morphisms (ring homomorphisms) in CRing. Then the tensor product is:
A ⊗C B = { ∑i∈I (ai, bi) | ai ∈ A, bi ∈ B } / ⟨ (f(c)a, b) − (a, g(c)b) | a ∈ A, b ∈ B, c ∈ C ⟩
See Free product of associative algebras for the case of non-commutative rings.
In the multiplicative monoid of positive integers
Z+, considered as a category with one object, the pushout of two positive integers m and n is just the pair (lcm(m, n)/m, lcm(m, n)/n), where the numerators are both the least common multiple of m and n. Note that the same pair is also the pullback.
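In code this pushout is a one-liner; the helper name `monoid_pushout` is ours:

```python
from math import gcd

def monoid_pushout(m, n):
    """Pushout of m and n in the multiplicative monoid of positive
    integers: the pair of factors completing each number to lcm(m, n)."""
    lcm = m * n // gcd(m, n)
    return lcm // m, lcm // n

print(monoid_pushout(12, 18))   # (3, 2), since 12*3 == 18*2 == lcm(12, 18) == 36
```

Both returned factors carry m and n to their least common multiple, which is the universal object completing the square in this one-object category.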
== Properties ==
Whenever the pushout A ⊔C B exists, then B ⊔C A exists as well and there is a natural isomorphism A ⊔C B ≅ B ⊔C A.
In an abelian category all pushouts exist, and they preserve cokernels in the following sense: if (P, i1, i2) is the pushout of f : Z → X and g : Z → Y, then the natural map coker(f) → coker(i2) is an isomorphism, and so is the natural map coker(g) → coker(i1).
There is a natural isomorphism (A ⊔C B) ⊔B D ≅ A ⊔C D. Explicitly, this means:
if maps f : C → A, g : C → B and h : B → D are given and
the pushout of f and g is given by i : A → P and j : B → P, and
the pushout of j and h is given by k : P → Q and l : D → Q,
then the pushout of f and hg is given by ki : A → Q and l : D → Q.
Graphically this means that two pushout squares, placed side by side and sharing one morphism, form a larger pushout square when ignoring the inner shared morphism.
== Construction via coproducts and coequalizers ==
Pushouts are equivalent to coproducts and coequalizers (if there is an initial object) in the sense that:
Coproducts are a pushout from the initial object, and the coequalizer of f, g : X → Y is the pushout of [f, g] and [1X, 1X], so if there are pushouts (and an initial object), then there are coequalizers and coproducts;
Pushouts can be constructed from coproducts and coequalizers, as described below (the pushout is the coequalizer of the maps to the coproduct).
All of the above examples may be regarded as special cases of the following very general construction, which works in any category C satisfying:
For any objects A and B of C, their coproduct exists in C;
For any morphisms j and k of C with the same domain and the same target, the coequalizer of j and k exists in C.
In this setup, we obtain the pushout of morphisms f : Z → X and g : Z → Y by first forming the coproduct of the targets X and Y. We then have two morphisms from Z to this coproduct. We can either go from Z to X via f, then include into the coproduct, or we can go from Z to Y via g, then include into the coproduct. The pushout of f and g is the coequalizer of these new maps.
== Application: the Seifert–van Kampen theorem ==
The Seifert–van Kampen theorem answers the following question. Suppose we have a path-connected space X, covered by path-connected open subspaces A and B whose intersection A ∩ B is also path-connected. (Assume also that the basepoint ∗ lies in the intersection of A and B.) If we know the fundamental groups of A, B and A ∩ B, can we recover the fundamental group of X? The answer is yes, provided we also know the induced homomorphisms π1(A ∩ B, ∗) → π1(A, ∗) and π1(A ∩ B, ∗) → π1(B, ∗). The theorem then says that the fundamental group of X is the pushout of these two induced maps. Of course, X is the pushout of the two inclusion maps of A ∩ B into A and B. Thus we may interpret the theorem as confirming that the fundamental group functor preserves pushouts of inclusions. We might expect this to be simplest when A ∩ B is simply connected, since then both homomorphisms above have trivial domain. Indeed, this is the case, since then the pushout (of groups) reduces to the free product, which is the coproduct in the category of groups. In the most general case we will be speaking of a free product with amalgamation.
There is a detailed exposition of this, in a slightly more general setting (covering groupoids) in the book by J. P. May listed in the references.
== References ==
May, J. P. A concise course in algebraic topology. University of Chicago Press, 1999.
An introduction to categorical approaches to algebraic topology: the focus is on the algebra, and assumes a topological background.
Ronald Brown, "Topology and Groupoids" (pdf available). Gives an account of some categorical methods in topology, using the fundamental groupoid on a set of base points to give a generalisation of the Seifert–van Kampen theorem.
Philip J. Higgins, "Categories and Groupoids" (free download). Explains some uses of groupoids in group theory and topology.
== External links ==
pushout in nLab | Wikipedia/Pushout_(category_theory) |
In category theory, a branch of mathematics, a section is a right inverse of some morphism. Dually, a retraction is a left inverse of some morphism.
In other words, if f : X → Y and g : Y → X are morphisms whose composition f ∘ g : Y → Y is the identity morphism on Y, then g is a section of f, and f is a retraction of g.
Every section is a monomorphism (every morphism with a left inverse is left-cancellative), and every retraction is an epimorphism (every morphism with a right inverse is right-cancellative).
In algebra, sections are also called split monomorphisms and retractions are also called split epimorphisms. In an abelian category, if f : X → Y is a split epimorphism with split monomorphism g : Y → X, then X is isomorphic to the direct sum of Y and the kernel of f. The synonym coretraction for section is sometimes seen in the literature, although rarely in recent work.
== Properties ==
A section that is also an epimorphism is an isomorphism. Dually a retraction that is also a monomorphism is an isomorphism.
== Terminology ==
The concept of a retraction in category theory comes from the essentially similar notion of a retraction in topology: f : X → Y, where Y is a subspace of X, is a retraction in the topological sense if it is a retraction of the inclusion map i : Y ↪ X in the category-theory sense. The concept in topology was defined by Karol Borsuk in 1931.
Borsuk's student, Samuel Eilenberg, was with Saunders Mac Lane the founder of category theory, and (as the earliest publications on category theory concerned various topological spaces) one might have expected this term to have initially been used. In fact, their earlier publications, up to, e.g., Mac Lane's Homology (1963), used the term right inverse. It was not until 1965, when Eilenberg and John Coleman Moore coined the dual term 'coretraction', that Borsuk's term was lifted to category theory in general. The term coretraction gave way to the term section by the end of the 1960s.
Both use of left/right inverse and section/retraction are commonly seen in the literature: the former use has the advantage that it is familiar from the theory of semigroups and monoids; the latter is considered less confusing by some because one does not have to think about 'which way around' composition goes, an issue that has become greater with the increasing popularity of the synonym f ∘ g for g ∘ f.
== Examples ==
In the category of sets, every monomorphism (injective function) with a non-empty domain is a section, and every epimorphism (surjective function) is a retraction; the latter statement is equivalent to the axiom of choice.
In the category of vector spaces over a field K, every monomorphism and every epimorphism splits; this follows from the fact that linear maps can be uniquely defined by specifying their values on a basis.
In the category of abelian groups, the epimorphism Z → Z/2Z which sends every integer to its remainder modulo 2 does not split; in fact the only morphism Z/2Z → Z is the zero map. Similarly, the natural monomorphism Z/2Z → Z/4Z doesn't split even though there is a non-trivial morphism Z/4Z → Z/2Z.
The categorical concept of a section is important in homological algebra, and is also closely related to the notion of a section of a fiber bundle in topology: in the latter case, a section of a fiber bundle is a section of the bundle projection map of the fiber bundle.
Given a quotient space X̄ with quotient map π : X → X̄, a section of π is called a transversal.
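The first example above, sections and retractions in the category of sets, can be made concrete for finite sets. In the sketch below (the helper names are ours), a retraction of an injective map sends each image point back to its unique preimage, while a section of a surjective map chooses one preimage per point, which is exactly where the axiom of choice enters for infinite families:

```python
def retraction_of(f, X, Y):
    """Left inverse of an injective f: X -> Y (X non-empty): send each
    image point back to its preimage, everything else to a default."""
    default = next(iter(X))
    back = {f[x]: x for x in X}              # well defined: f is injective
    return {y: back.get(y, default) for y in Y}

def section_of(f, X, Y):
    """Right inverse of a surjective f: X -> Y: choose one preimage for
    each point of Y (a finite instance of the axiom of choice)."""
    return {y: next(x for x in X if f[x] == y) for y in Y}

X, Y = {0, 1, 2}, {'a', 'b', 'c', 'd'}
mono = {0: 'a', 1: 'b', 2: 'c'}              # injective X -> Y
r = retraction_of(mono, X, Y)
print(all(r[mono[x]] == x for x in X))       # True: r after mono is id_X

epi = {'a': 0, 'b': 0, 'c': 1, 'd': 2}      # surjective Y -> X
s = section_of(epi, Y, X)
print(all(epi[s[x]] == x for x in X))        # True: epi after s is id_X
```

Note that the retraction is not unique (the default value is arbitrary), and neither is the section when a point has several preimages; sections and retractions are one-sided inverses, not inverses.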
== Bibliography ==
Mac Lane, Saunders (1978). Categories for the working mathematician (2nd ed.). Springer Verlag.
Mitchell, Barry (1965). Theory of categories. Academic Press.
== See also ==
Splitting lemma
Inverse function § Left and right inverses
Transversal (combinatorics)
== Notes == | Wikipedia/Section_(category_theory) |
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.
Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.
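The contrast can be illustrated in Python (the function names here are ours, chosen for illustration): an impure procedure reads hidden mutable state, so repeated calls with the same argument disagree, while a pure function cannot; and because functions are first-class values they can be composed into new functions:

```python
from functools import reduce

# Impure: the result depends on hidden mutable state.
counter = 0
def impure_increment(x):
    global counter
    counter += 1
    return x + counter        # same argument, different results over time

# Pure: the output depends only on the arguments.
def add(x, y):
    return x + y

def compose(*fns):
    """Treat functions as values: build a new function from old ones."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

double = lambda x: 2 * x
inc = lambda x: x + 1
pipeline = compose(double, inc)   # pipeline(x) == double(inc(x))

print(impure_increment(10), impure_increment(10))  # 11 12: not referentially transparent
print(add(2, 3), add(2, 3))                        # 5 5: always the same
print(pipeline(5))                                 # 12
```

Because `add` and `pipeline` are pure, any call can be replaced by its value without changing the program's behaviour, which is the referential transparency that makes equational reasoning and testing easier.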
Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).
== History ==
The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.
Church later developed a weaker system, the simply typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.
Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language.
John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.
The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. Because Miranda was proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.
More recently, functional programming has found use in niches such as parametric CAD via the OpenSCAD language built on the CGAL framework, although OpenSCAD's restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.
Functional programming continues to be used in commercial settings.
== Concepts ==
A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.
=== First-class and higher-order functions ===
Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f.
Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).
Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
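As a brief sketch in JavaScript, the successor-function example can be written with a curried addition operator:

```javascript
// A curried addition function: applying it to one argument returns a
// new function that expects the next argument.
const add = (x) => (y) => x + y;

// The successor function, expressed as addition partially applied to 1.
const successor = add(1);

console.log(successor(41)); // 42
```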
=== Pure functions ===
Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code:
If the result of a pure expression is not used, it can be removed without affecting other expressions.
If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.)
If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe).
If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation).
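The caching property can be illustrated with a small memoization helper, sketched here in JavaScript (the helper names are illustrative):

```javascript
// Because a pure function always returns the same result for the same
// arguments, its results can be cached (memoized) safely.
const memoize = (f) => {
  const cache = new Map();
  return (x) => {
    if (!cache.has(x)) cache.set(x, f(x));
    return cache.get(x);
  };
};

const square = (x) => x * x; // pure: no side effects
const fastSquare = memoize(square);

console.log(fastSquare(9)); // 81 (computed)
console.log(fastSquare(9)); // 81 (served from the cache)
```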
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added constexpr keyword with similar semantics.
=== Recursion ===
Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linear in the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
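The two shapes of recursion can be sketched in JavaScript (noting that most JavaScript engines do not actually perform tail-call optimization, so this only illustrates the form a compiler could optimize):

```javascript
// Plain recursion: each call waits for the result of the next,
// so the stack grows linearly with n.
const sum = (n) => (n === 0 ? 0 : n + sum(n - 1));

// Tail recursion: the recursive call is the last action and carries an
// accumulator, so a compiler that optimizes tail calls can reuse the
// current stack frame instead of growing the stack.
const sumTail = (n, acc = 0) => (n === 0 ? acc : sumTail(n - 1, acc + n));

console.log(sum(10));     // 55
console.log(sumTail(10)); // 55
```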
The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.
Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.
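A fold can be written once and reused for many list computations; in JavaScript the built-in reduce plays this role (the hand-written fold below is an illustrative sketch):

```javascript
// A fold ("catamorphism") abstracts the recursion pattern of consuming
// a list element by element into a single higher-order function.
const fold = (f, init, xs) =>
  xs.length === 0 ? init : fold(f, f(init, xs[0]), xs.slice(1));

console.log(fold((acc, x) => acc + x, 0, [1, 2, 3, 4])); // 10

// reduce is JavaScript's built-in fold.
console.log([1, 2, 3, 4].reduce((acc, x) => acc + x, 0)); // 10
```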
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.
=== Strict versus non-strict evaluation ===
Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression:
print length([2+1, 3*2, 1/0, 5-4])
fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
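JavaScript is strictly evaluated, but thunks (zero-argument functions) can mimic the lazy behaviour of the list example; this is a sketch, not how lazy languages are actually implemented:

```javascript
// In a strict language every element would be evaluated before `length`
// is applied. Representing elements as unevaluated thunks defers the
// failing computation, so taking the length never triggers it.
const fail = () => { throw new Error("division by zero"); };
const lazyList = [() => 2 + 1, () => 3 * 2, fail, () => 5 - 4];

console.log(lazyList.length); // 4 -- no thunk is forced
console.log(lazyList[1]());   // 6 -- a thunk is evaluated only on demand
```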
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.
=== Type systems ===
Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time, at the risk of false positive errors (rejecting some valid programs). By contrast, Lisp and its variants (such as Scheme) use the untyped lambda calculus, which accepts all valid programs at compilation time, at the risk of false negative errors (invalid programs are rejected only at runtime, once enough information is available). The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.
Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified.
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#.
=== Referential transparency ===
Functional programs do not have assignment statements, that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.
Consider the C assignment statement x = x * 10; it changes the value assigned to the variable x. Suppose the initial value of x was 1; then two consecutive evaluations of the expression yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives the program a different meaning, so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.
By contrast, a function such as int plusone(int x) { return x + 1; } is transparent, as it does not implicitly change the input x and thus has no such side effects.
Functional programs exclusively use this type of function and are therefore referentially transparent.
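The substitution property can be shown directly; in this JavaScript sketch, a call to a pure function can be replaced by its value anywhere without changing the program's meaning:

```javascript
// plusone is referentially transparent: plusone(4) can always be
// replaced by the value 5.
const plusone = (x) => x + 1;

console.log(plusone(4) * 2); // 10
console.log(5 * 2);          // 10 -- same meaning after substitution
```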
=== Data structures ===
Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data-structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.
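Persistence and structural sharing can be sketched with the simplest purely functional structure, a singly linked list built from cons cells (a JavaScript illustration, not any particular library's API):

```javascript
// A persistent singly linked list: "inserting" at the front creates one
// new cell and shares the entire old list instead of copying it.
const cons = (head, tail) => ({ head, tail });

const oldList = cons(2, cons(3, null));
const newList = cons(1, oldList); // oldList is unchanged and shared

console.log(newList.head);             // 1
console.log(newList.tail === oldList); // true -- structural sharing
console.log(oldList.head);             // 2 -- previous version persists
```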
== Comparison to imperative programming ==
Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency.
Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item.
=== Imperative vs. functional programming ===
The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result.
Traditional imperative loop:
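The imperative version might be written as follows (a sketch; variable names are illustrative):

```javascript
// Multiply all even numbers in the array by 10 and sum them,
// mutating an accumulator variable as the loop runs.
const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let result = 0;
for (let i = 0; i < numList.length; i++) {
  if (numList[i] % 2 === 0) {
    result += numList[i] * 10;
  }
}
console.log(result); // 300
```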
Functional programming with higher-order functions:
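The functional version, using the higher-order functions filter, map and reduce, might look like this (a sketch):

```javascript
// The same computation as a pipeline of higher-order functions:
// keep the evens, scale each by 10, then fold the results into a sum.
// No variable is mutated along the way.
const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  .filter((n) => n % 2 === 0)
  .map((n) => n * 10)
  .reduce((acc, n) => acc + n, 0);

console.log(result); // 300
```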
Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule).
=== Simulating state ===
There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.
The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).
Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.
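A minimal sketch of this state-passing style in JavaScript, using the bank-account example from above (function names are illustrative):

```javascript
// Each operation takes the current balance and returns a pair of
// [result, newBalance]; the old state value is never mutated.
const deposit = (amount, balance) => [amount, balance + amount];
const withdraw = (amount, balance) => [amount, balance - amount];

const [, b1] = deposit(100, 0);      // b1 = 100
const [cash, b2] = withdraw(30, b1); // cash = 30, b2 = 70

console.log(b2); // 70 -- new state, while b1 still holds 100
```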
Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.
Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.
=== Efficiency issues ===
Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Although the copying that may seem implicit when dealing with persistent immutable data structures might appear computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability, which involves immutable references and a concept called lifetimes.
Immutable data with separation of identity and state, together with shared-nothing schemes, can also be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards: operations on immutable values are usually atomic, which can eliminate the need for locks. This is how, for example, some java.util.concurrent classes are implemented: immutable variants of corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir and Akka.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them.
However, the most general implementations of lazy evaluation, which make extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles).
==== Abstraction cost ====
Some functional programming languages might not optimize abstractions such as higher order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure:
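The two variants might be written as follows (a reconstruction; the exact snippets did not survive formatting):

```clojure
;; Idiomatic: the higher-level predicate
(even? 5)             ; => false

;; Direct invocation of the underlying Java equals method
(.equals (mod 5 2) 0) ; => false
```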
When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as:
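The clojure.core definition of even? looks roughly like the following (quoted approximately; consult the Clojure sources for the authoritative definition):

```clojure
(defn even?
  "Returns true if n is even, throws an exception if n is not an integer"
  [n]
  (if (integer? n)
    (zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1))
    (throw (IllegalArgumentException. (str "Argument must be an integer: " n)))))
```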
has the mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of the difference can be attributed to the type checking and exception handling involved in the implementation of even?. A contrasting data point comes from the lo library for Go, which implements various higher-order functions common in functional programming languages using generics: in a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining.
One distinguishing feature of Rust is zero-cost abstractions. This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone assembly instruction, without the overhead of the loop controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime.
=== Functional programming in non-functional languages ===
It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions.
JavaScript, Lua, Python and Go had first class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.
In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming.
In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.
In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes.
In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.
Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.
Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.
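For example, in JavaScript:

```javascript
// Object.freeze makes an object's own properties immutable.
const point = Object.freeze({ x: 1, y: 2 });

// Attempted mutation is ignored (or throws, in strict mode).
try { point.x = 99; } catch (e) { /* TypeError in strict mode */ }

console.log(point.x); // 1 -- the frozen property is unchanged
```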
== Comparison to logic programming ==
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.
For example, the function mother(X) = Y (every X has exactly one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:
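A minimal program of this kind might look as follows (the individual names are illustrative):

```prolog
mother(william, diana).
mother(harry, diana).
mother(diana, frances).
```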
The program can be queried, like a functional program, to generate mothers from children:
But it can also be queried backwards, to generate children:
It can even be used to generate all instances of the mother relation:
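In Prolog syntax, with illustrative names, the three query patterns might be written as:

```prolog
?- mother(harry, M).  % forwards: generates the mother of harry
?- mother(C, diana).  % backwards: generates the children of diana
?- mother(C, M).      % enumerates every child-mother pair
```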
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:
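For instance, in functional syntax (a sketch):

```
maternal_grandmother(X) = mother(mother(X))
```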
The same definition in relational notation needs to be written in the unnested form:
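In relational notation, the unnested definition is a conjunction of two goals over an intermediate variable:

```prolog
maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
```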
Here :- means if and , means and.
However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:
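In Ciao's functional notation the nested definition might read as follows (a sketch; consult the Ciao manual for exact syntax):

```prolog
maternal_grandmother(X) := mother(mother(X)).
```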
Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.
== Applications ==
=== Text editors ===
Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. Richard Stallman, the original author of the most popular Emacs implementation, GNU Emacs, and of Emacs Lisp, considers Lisp one of his favorite programming languages.
Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family.
=== Spreadsheets ===
Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature.
=== Microservices ===
Due to their composability, functional programming paradigms can be suitable for microservices-based architectures.
=== Academia ===
Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.
=== Industry ===
Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming.
Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in data science, while ClojureScript, Elm and PureScript are some of the functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified-ads platform of Allegro, one of the biggest e-commerce platforms in Poland.
Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.
=== Education ===
Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods.
Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics.
In particular, Scheme has been a relatively popular choice for teaching programming for years.
== See also ==
Eager evaluation
Functional reactive programming
Inductive functional programming
List of functional programming languages
List of functional programming topics
Nested function
Purely functional programming
== Notes and references ==
== Further reading ==
Abelson, Hal; Sussman, Gerald Jay (1985). Structure and Interpretation of Computer Programs. MIT Press. Bibcode:1985sicp.book.....A.
Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998.
Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958.
Curry, Haskell B.; Hindley, J. Roger; Seldin, Jonathan P. (1972). Combinatory Logic. Vol. II. Amsterdam: North Holland. ISBN 978-0-7204-2208-5.
Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005.
Felleisen, Matthias; Findler, Robert; Flatt, Matthew; Krishnamurthi, Shriram (2018). How to Design Programs. MIT Press.
Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996.
MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990.
Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5.
O'Sullivan, Brian; Stewart, Don; Goerzen, John (2008). Real World Haskell. O'Reilly.
Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996.
Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998.
Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996.
== External links ==
Ford, Neal. "Functional thinking". Retrieved 2021-11-10.
Akhmechet, Slava (2006-06-19). "defmacro – Functional Programming For The Rest of Us". Retrieved 2013-02-24. An introduction
Functional programming in Python (by David Mertz): part 1, part 2, part 3 | Wikipedia/Functional_programming |
Applied category theory is an academic discipline in which methods from category theory are used to study other fields including but not limited to computer science, physics (in particular quantum mechanics), natural language processing, control theory, probability theory and causality. The application of category theory in these domains can take different forms. In some cases the formalization of the domain into the language of category theory is the goal, the idea here being that this would elucidate the important structure and properties of the domain. In other cases the formalization is used to leverage the power of abstraction in order to prove new results about the field.
== List of applied category theorists ==
Samson Abramsky
John C. Baez
Bob Coecke
Joachim Lambek
Valeria de Paiva
Gordon Plotkin
Dana Scott
David Spivak
== See also ==
Categorical quantum mechanics
ZX-calculus
DisCoCat
Petri net
Univalent foundations
String diagrams
== External links ==
Journals:
Compositionality
Conferences:
Applied category theory
Symposium on Compositional Structures (SYCO)
Books:
Picturing Quantum Processes
Categories for Quantum Theory
An Invitation to Applied Category Theory (preprint)
Category Theory for the Sciences (preprint)
Institutes:
the Quantum Group at the University of Oxford
TallCat, a research group at Tallinn University of Technology
Topos Institute
Cybercat Institute
Software:
DisCoPy, a Python toolkit for computing with string diagrams
CatLab.jl, a framework for applied category theory in the Julia language
CQL, a query language based on Kan extensions
Companies:
Conexus AI, a data integration company
Symbolica, a machine learning company
Mascots:
Gremlin-Morgoth
== References == | Wikipedia/Applied_category_theory |
In category theory, a branch of mathematics, a pullback (also called a fiber product, fibre product, fibered product or Cartesian square) is the limit of a diagram consisting of two morphisms f : X → Z and g : Y → Z with a common codomain. The pullback is written
P = X ×f, Z, g Y.
Usually the morphisms f and g are omitted from the notation, and then the pullback is written
P = X ×Z Y.
The pullback comes equipped with two natural morphisms P → X and P → Y. The pullback of two morphisms f and g need not exist, but if it does, it is essentially uniquely defined by the two morphisms. In many situations, X ×Z Y may intuitively be thought of as consisting of pairs of elements (x, y) with x in X, y in Y, and f(x) = g(y). For the general definition, a universal property is used, which essentially expresses the fact that the pullback is the "most general" way to complete the two given morphisms to a commutative square.
The dual concept of the pullback is the pushout.
== Universal property ==
Explicitly, a pullback of the morphisms f and g consists of an object P and two morphisms p1 : P → X and p2 : P → Y for which the diagram
commutes. Moreover, the pullback (P, p1, p2) must be universal with respect to this diagram. That is, for any other such triple (Q, q1, q2) where q1 : Q → X and q2 : Q → Y are morphisms with f q1 = g q2, there must exist a unique u : Q → P such that
p1 ∘ u = q1,  p2 ∘ u = q2.
This situation is illustrated in the following commutative diagram.
As with all universal constructions, a pullback, if it exists, is unique up to isomorphism. In fact, given two pullbacks (A, a1, a2) and (B, b1, b2) of the same cospan X → Z ← Y, there is a unique isomorphism between A and B respecting the pullback structure.
== Pullback and product ==
The pullback is similar to the product, but not the same. One may obtain the product by "forgetting" that the morphisms f and g exist, and forgetting that the object Z exists. One is then left with a discrete category containing only the two objects X and Y, and no arrows between them. This discrete category may be used as the index set to construct the ordinary binary product. Thus, the pullback can be thought of as the ordinary (Cartesian) product, but with additional structure. Instead of "forgetting" Z, f, and g, one can also "trivialize" them by specializing Z to be the terminal object (assuming it exists). f and g are then uniquely determined and thus carry no information, and the pullback of this cospan can be seen to be the product of X and Y.
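This point can be illustrated concretely for finite sets. In the Python sketch below (the helper `pullback` and all names are illustrative, not from any standard library), pulling back over a one-point set, which is terminal in Set, recovers the ordinary Cartesian product.

```python
# Illustrative sketch for finite sets only.
def pullback(X, Y, f, g):
    """Pullback of f : X -> Z and g : Y -> Z in Set: pairs agreeing in Z."""
    return {(x, y) for x in X for y in Y if f(x) == g(y)}

X, Y = {1, 2}, {"a", "b"}
to_T = lambda _: "*"   # the unique map into a terminal (one-point) set

# Pulling back over the terminal object yields the Cartesian product.
assert pullback(X, Y, to_T, to_T) == {(x, y) for x in X for y in Y}
```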
== Examples ==
=== Commutative rings ===
In the category of commutative rings (with identity), the pullback is called the fibered product. Let A, B, and C be commutative rings (with identity) and α : A → C and β : B → C (identity preserving) ring homomorphisms. Then the pullback of this diagram exists and is given by the subring of the product ring A × B defined by
A ×C B = {(a, b) ∈ A × B | α(a) = β(b)}
along with the morphisms β′ : A ×C B → A and α′ : A ×C B → B
given by β′(a, b) = a and α′(a, b) = b for all (a, b) ∈ A ×C B. We then have
α ∘ β′ = β ∘ α′.
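For a concrete finite instance (chosen purely for illustration), the following Python sketch forms the fibered product Z/4 ×_{Z/2} Z/6 along the mod-2 reduction maps and checks that the resulting subset is closed under the componentwise ring operations, i.e. that it really is a subring of the product ring.

```python
# Illustrative finite example: Z/4 x_{Z/2} Z/6 along mod-2 reduction.
A = range(4)              # Z/4
B = range(6)              # Z/6
alpha = lambda a: a % 2   # A -> Z/2
beta  = lambda b: b % 2   # B -> Z/2

fib = {(a, b) for a in A for b in B if alpha(a) == beta(b)}
assert len(fib) == 12     # six pairs of each parity

# Closed under componentwise addition and multiplication,
# hence a subring of the product ring A x B.
for (a1, b1) in fib:
    for (a2, b2) in fib:
        assert ((a1 + a2) % 4, (b1 + b2) % 6) in fib
        assert ((a1 * a2) % 4, (b1 * b2) % 6) in fib
```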
=== Groups and modules ===
In complete analogy to the example of commutative rings above, one can show that all pullbacks exist in the category of groups and in the category of modules over some fixed ring.
=== Sets ===
In the category of sets, the pullback of functions f : X → Z and g : Y → Z always exists and is given by the set
X ×Z Y = {(x, y) ∈ X × Y | f(x) = g(y)} = ⋃_{z ∈ f(X) ∩ g(Y)} f−1[{z}] × g−1[{z}],
together with the restrictions of the projection maps π1 and π2 to X ×Z Y.
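This set-theoretic description can be computed directly for finite sets. The following Python sketch (with illustrative names) builds the pullback and checks the defining condition that f and g agree after projecting.

```python
# Illustrative sketch: the pullback of two finite-set functions.
def pullback(X, Y, f, g):
    return {(x, y) for x in X for y in Y if f(x) == g(y)}

X = {0, 1, 2, 3}
Y = {"a", "b", "c"}
f = lambda x: x % 2                      # X -> Z = {0, 1}
g = lambda y: 0 if y == "a" else 1       # Y -> Z = {0, 1}

P = pullback(X, Y, f, g)
# ({0, 2} x {"a"})  union  ({1, 3} x {"b", "c"})
assert P == {(0, "a"), (2, "a"), (1, "b"), (1, "c"), (3, "b"), (3, "c")}

# The projections satisfy f . pi1 == g . pi2 on P:
assert all(f(x) == g(y) for (x, y) in P)
```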
Alternatively one may view the pullback in Set asymmetrically:
X ×Z Y ≅ ∐_{x ∈ X} g−1[{f(x)}] ≅ ∐_{y ∈ Y} f−1[{g(y)}]
where ∐
is the disjoint union of sets (the sets involved are not disjoint on their own unless f, respectively g, is injective). In the first case, the projection π1 extracts the index x while π2 forgets the index, leaving elements of Y.
This example motivates another way of characterizing the pullback: as the equalizer of the morphisms f ∘ p1, g ∘ p2 : X × Y → Z where X × Y is the binary product of X and Y and p1 and p2 are the natural projections. This shows that pullbacks exist in any category with binary products and equalizers. In fact, by the existence theorem for limits, all finite limits exist in a category with binary products and equalizers; equivalently, all finite limits exist in a category with terminal object and pullbacks (by the fact that binary product is equal to pullback on the terminal object, and that an equalizer is a pullback involving binary product).
==== Graphs of functions ====
A specific example of a pullback is given by the graph of a function. Suppose that
f : X → Y
is a function. The graph of f is the set
Γf = {(x, f(x)) : x ∈ X} ⊆ X × Y.
The graph can be reformulated as the pullback of f and the identity function on Y. By definition, this pullback is
X ×f, Y, 1Y Y = {(x, y) : f(x) = 1Y(y)} = {(x, y) : f(x) = y} ⊆ X × Y,
and this equals Γf.
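For finite sets this reformulation can be checked directly; the Python sketch below (illustrative names only) verifies that the pullback of f and the identity on Y is exactly the graph of f.

```python
# Illustrative sketch: the graph of f as a pullback along the identity.
def pullback(X, Y, f, g):
    return {(x, y) for x in X for y in Y if f(x) == g(y)}

X, Y = {1, 2, 3}, {1, 4, 9, 16}
f = lambda x: x * x
identity = lambda y: y

graph = {(x, f(x)) for x in X}
assert pullback(X, Y, f, identity) == graph   # {(1, 1), (2, 4), (3, 9)}
```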
=== Fiber bundles ===
Another example of a pullback comes from the theory of fiber bundles: given a bundle map π : E → B and a continuous map f : X → B, the pullback (formed in the category of topological spaces with continuous maps) X ×B E is a fiber bundle over X called the pullback bundle. The associated commutative diagram is a morphism of fiber bundles. A special case is the pullback of two fiber bundles E1, E2 → B. In this case E1 × E2 is a fiber bundle over B × B, and pulling back along the diagonal map B → B × B gives a space homeomorphic (diffeomorphic) to E1 ×B E2, which is a fiber bundle over B. All statements here hold true for differentiable manifolds as well. Differentiable maps f : M → N and g : P → N are transverse if and only if their product M × P → N × N is transverse to the diagonal of N. Thus, the pullback of two transverse differentiable maps into the same differentiable manifold is also a differentiable manifold, and the tangent space of the pullback is the pullback of the tangent spaces along the differential maps.
=== Preimages and intersections ===
Preimages of sets under functions can be described as pullbacks as follows:
Suppose f : A → B, B0 ⊆ B. Let g be the inclusion map B0 ↪ B. Then a pullback of f and g (in Set) is given by the preimage f−1[B0] together with the inclusion of the preimage in A
f−1[B0] ↪ A
and the restriction of f to f−1[B0]
f−1[B0] → B0.
Because of this example, in a general category the pullback of a morphism f and a monomorphism g can be thought of as the "preimage" under f of the subobject specified by g. Similarly, pullbacks of two monomorphisms can be thought of as the "intersection" of the two subobjects.
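A small Python sketch (all names illustrative) of the preimage-as-pullback description: pulling back f along the inclusion of B0 recovers f−1[B0], up to the evident bijection (a, f(a)) ↔ a.

```python
# Illustrative sketch: preimage as a pullback along an inclusion.
def pullback(A, B0, f, g):
    return {(a, b) for a in A for b in B0 if f(a) == g(b)}

A = range(10)
B0 = {0, 1}                 # a subset of the codomain
f = lambda a: a % 3
incl = lambda b: b          # inclusion B0 -> B

P = pullback(A, B0, f, incl)
preimage = {a for a in A if f(a) in B0}

# Up to the bijection (a, f(a)) <-> a, the pullback is f^{-1}[B0]:
assert {a for (a, b) in P} == preimage
assert all(b == f(a) for (a, b) in P)
```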
=== Least common multiple ===
Consider the multiplicative monoid of positive integers Z+ as a category with one object. In this category, the pullback of two positive integers m and n is just the pair
(lcm(m, n)/m, lcm(m, n)/n), where the numerators are both the least common multiple of m and n. The same pair is also the pushout.
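A short Python sketch of this example (function names are my own): in the one-object category (Z+, ×), a cone over the cospan given by m and n is a pair (q1, q2) with m·q1 = n·q2, the universal such pair is (lcm(m, n)/m, lcm(m, n)/n), and any other commuting pair factors through it by multiplication.

```python
from math import gcd

def lcm(m, n):
    return m * n // gcd(m, n)

def pullback_pair(m, n):
    """The universal pair (p1, p2) with m * p1 == n * p2 in (Z+, x)."""
    return lcm(m, n) // m, lcm(m, n) // n

p1, p2 = pullback_pair(4, 6)
assert (p1, p2) == (3, 2) and 4 * p1 == 6 * p2 == 12

# Universality: any other commuting pair (q1, q2) factors through it,
# i.e. q1 = p1 * u and q2 = p2 * u for a unique positive integer u.
q1, q2 = 9, 6            # 4 * 9 == 6 * 6 == 36
u = q1 // p1
assert q1 == p1 * u and q2 == p2 * u
```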
== Properties ==
In any category with a terminal object T, the pullback X ×T Y is just the ordinary product X × Y.
Monomorphisms are stable under pullback: if the arrow f in the diagram is monic, then so is the arrow p2. Similarly, if g is monic, then so is p1.
Isomorphisms are also stable, and hence, for example, X ×X Y ≅ Y for any map Y → X (where the implied map X → X is the identity).
In an abelian category all pullbacks exist, and they preserve kernels, in the following sense: if
is a pullback diagram, then the induced morphism ker(p2) → ker(f) is an isomorphism, and so is the induced morphism ker(p1) → ker(g). Every pullback diagram thus gives rise to a commutative diagram of the following form, where all rows and columns are exact:
Furthermore, in an abelian category, if X → Z is an epimorphism, then so is its pullback P → Y, and symmetrically: if Y → Z is an epimorphism, then so is its pullback P → X. In these situations, the pullback square is also a pushout square.
There is a natural isomorphism (A×CB)×B D ≅ A×CD. Explicitly, this means:
if maps f : A → C, g : B → C and h : D → B are given and
the pullback of f and g is given by r : P → A and s : P → B, and
the pullback of s and h is given by t : Q → P and u : Q → D ,
then the pullback of f and gh is given by rt : Q → A and u : Q → D.
Graphically this means that two pullback squares, placed side by side and sharing one morphism, form a larger pullback square when ignoring the inner shared morphism.
Any category with pullbacks and products has equalizers.
== Weak pullbacks ==
A weak pullback of a cospan X → Z ← Y is a cone over the cospan that is only weakly universal, that is, the mediating morphism u : Q → P above is not required to be unique.
== See also ==
Pullbacks in differential geometry
Equijoin in relational algebra
Fiber product of schemes
== Notes ==
== References ==
Adámek, Jiří, Herrlich, Horst, & Strecker, George E.; (1990). Abstract and Concrete Categories (4.2MB PDF). Originally publ. John Wiley & Sons. ISBN 0-471-60922-6. (now free on-line edition).
Cohn, Paul M.; Universal Algebra (1981), D. Reidel Publishing, Holland, ISBN 90-277-1213-1 (Originally published in 1965, by Harper & Row).
Mitchell, Barry (1965). Theory of Categories. Academic Press.
== External links ==
Interactive web page which generates examples of pullbacks in the category of finite sets. Written by Jocelyn Paine.
pullback at the nLab | Wikipedia/Pullback_(category_theory) |
In category theory, an end of a functor
S : C^op × C → X
is a universal dinatural transformation from an object e of X to S.
More explicitly, this is a pair
(e, ω)
, where e is an object of X and
ω : e →̈ S
is an extranatural transformation such that for every extranatural transformation
β : x →̈ S
there exists a unique morphism
h : x → e
of X with
βa = ωa ∘ h
for every object a of C.
By abuse of language the object e is often called the end of the functor S (forgetting ω) and is written
e = ∫_c S(c, c)  or just  ∫_C S.
Characterization as limit: If X is complete and C is small, the end can be described as the equalizer in the diagram
∫_c S(c, c) → ∏_{c ∈ C} S(c, c) ⇉ ∏_{c → c′} S(c, c′),
where the first morphism being equalized is induced by S(c, c) → S(c, c′)
and the second is induced by S(c′, c′) → S(c, c′).
== Coend ==
The definition of the coend of a functor
S : C^op × C → X
is the dual of the definition of an end.
Thus, a coend of S consists of a pair
(d, ζ)
, where d is an object of X and
ζ : S →̈ d
is an extranatural transformation, such that for every extranatural transformation
γ : S →̈ x
there exists a unique morphism
g : d → x
of X with
γa = g ∘ ζa
for every object a of C.
The coend d of the functor S is written
d = ∫^c S(c, c)  or  ∫^C S.
Characterization as colimit: Dually, if X is cocomplete and C is small, then the coend can be described as the coequalizer in the diagram
∫^c S(c, c) ← ∐_{c ∈ C} S(c, c) ⇇ ∐_{c → c′} S(c′, c).
== Examples ==
Natural transformations:
Suppose we have functors
F, G : C → X
then Hom_X(F(−), G(−)) : C^op × C → Set.
In this case, the category of sets is complete, so we need only form the equalizer and in this case
∫_c Hom_X(F(c), G(c)) = Nat(F, G)
the natural transformations from F to G. Intuitively, a natural transformation from F to G is a morphism from F(c) to G(c) for every c in the category, subject to compatibility conditions. Looking at the equalizer diagram defining the end makes the equivalence clear.
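For a finite toy case, this end can be computed literally as the equalizer inside the product of hom-sets. The Python sketch below (the chosen category and all names are illustrative) takes C to be the arrow category 0 → 1, represents F and G by their object images and their action on the single non-identity arrow, and enumerates exactly the families satisfying the naturality condition.

```python
from itertools import product

# Toy instance of the end formula: C is the arrow category 0 -> 1.
F0, F1 = [0, 1], ["x", "y"]       # F on objects
G0, G1 = ["a", "b"], ["a", "b"]   # G on objects
F_arr = {0: "x", 1: "y"}          # F(0 -> 1)
G_arr = {"a": "b", "b": "b"}      # G(0 -> 1)

def natural_transformations():
    """Families (eta0, eta1) in Prod_c Hom(F(c), G(c)) lying in the
    equalizer, i.e. satisfying G_arr . eta0 == eta1 . F_arr."""
    homs0 = [dict(zip(F0, imgs)) for imgs in product(G0, repeat=len(F0))]
    homs1 = [dict(zip(F1, imgs)) for imgs in product(G1, repeat=len(F1))]
    return [(e0, e1) for e0 in homs0 for e1 in homs1
            if all(G_arr[e0[c]] == e1[F_arr[c]] for c in F0)]

nats = natural_transformations()
# Every family in the list satisfies the naturality square by construction.
assert all(G_arr[e0[c]] == e1[F_arr[c]] for (e0, e1) in nats for c in F0)
```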
Geometric realizations:
Let T be a simplicial set. That is, T is a functor Δ^op → Set
. The discrete topology gives a functor
d : Set → Top, where Top
is the category of topological spaces. Moreover, there is a map
γ : Δ → Top
sending the object [n] of Δ to the standard n-simplex inside R^{n+1}
. Finally there is a functor
Top × Top → Top
that takes the product of two topological spaces.
Define S to be the composition of this product functor with dT × γ
. The coend of S is the geometric realization of T.
== Notes ==
== References ==
== External links ==
end at the nLab | Wikipedia/End_(category_theory) |
In mathematical logic, descriptive set theory (DST) is the study of certain classes of "well-behaved" subsets of the real line and other Polish spaces. As well as being one of the primary areas of research in set theory, it has applications to other areas of mathematics such as functional analysis, ergodic theory, the study of operator algebras and group actions, and mathematical logic.
== Polish spaces ==
Descriptive set theory begins with the study of Polish spaces and their Borel sets.
A Polish space is a second-countable topological space that is metrizable with a complete metric. Heuristically, it is a complete separable metric space whose metric has been "forgotten". Examples include the real line
ℝ, the Baire space 𝒩, the Cantor space 𝒞, and the Hilbert cube I^ℕ.
=== Universality properties ===
The class of Polish spaces has several universality properties, which show that there is no loss of generality in considering Polish spaces of certain restricted forms.
Every Polish space is homeomorphic to a Gδ subspace of the Hilbert cube, and every Gδ subspace of the Hilbert cube is Polish.
Every Polish space is obtained as a continuous image of Baire space; in fact every Polish space is the image of a continuous bijection defined on a closed subset of Baire space. Similarly, every compact Polish space is a continuous image of Cantor space.
Because of these universality properties, and because the Baire space 𝒩 has the convenient property that it is homeomorphic to 𝒩^ω, many results in descriptive set theory are proved in the context of Baire space alone.
== Borel sets ==
The class of Borel sets of a topological space X consists of all sets in the smallest σ-algebra containing the open sets of X. This means that the Borel sets of X are the smallest collection of sets such that:
Every open subset of X is a Borel set.
If A is a Borel set, so is X ∖ A. That is, the class of Borel sets is closed under complementation.
If An is a Borel set for each natural number n, then the union ⋃ An is a Borel set. That is, the Borel sets are closed under countable unions.
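On a finite universe these closure conditions can be run to a fixed point directly, since countable unions reduce to finite ones. The Python sketch below (illustrative, not a standard-library routine) generates the smallest family of sets containing the given generators and closed under complements and unions.

```python
# The closure conditions above, run to a fixed point on a finite universe.
def generated_sigma_algebra(universe, generators):
    universe = frozenset(universe)
    sets = {frozenset(g) for g in generators} | {frozenset(), universe}
    changed = True
    while changed:
        changed = False
        for a in list(sets):
            comp = universe - a          # closure under complements
            if comp not in sets:
                sets.add(comp); changed = True
        for a in list(sets):
            for b in list(sets):
                u = a | b                # closure under (finite) unions
                if u not in sets:
                    sets.add(u); changed = True
    return sets

sigma = generated_sigma_algebra({1, 2, 3, 4}, [{1}, {2}])
# Atoms are {1}, {2}, {3, 4}; the sigma-algebra has 2**3 = 8 members.
assert len(sigma) == 8
assert frozenset({3, 4}) in sigma
```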
A fundamental result shows that any two uncountable Polish spaces X and Y are Borel isomorphic: there is a bijection from X to Y such that the preimage of any Borel set is Borel, and the image of any Borel set is Borel. This gives additional justification to the practice of restricting attention to Baire space and Cantor space, since these and any other Polish spaces are all isomorphic at the level of Borel sets.
=== Borel hierarchy ===
Each Borel set of a Polish space is classified in the Borel hierarchy based on how many times the operations of countable union and complementation must be used to obtain the set, beginning from open sets. The classification is in terms of countable ordinal numbers. For each nonzero countable ordinal α there are classes
Σ^0_α, Π^0_α, and Δ^0_α.
Every open set is declared to be Σ^0_1.
A set is declared to be Π^0_α if and only if its complement is Σ^0_α.
A set A is declared to be Σ^0_δ, δ > 1, if there is a sequence ⟨Ai⟩ of sets, each of which is Π^0_{λ(i)} for some λ(i) < δ, such that A = ⋃ Ai.
A set is Δ^0_α if and only if it is both Σ^0_α and Π^0_α.
A theorem shows that any set that is Σ^0_α or Π^0_α is Δ^0_{α+1}, and any Δ^0_β set is both Σ^0_α and Π^0_α for all α > β. Thus the hierarchy has the following structure, where arrows indicate inclusion.
=== Regularity properties of Borel sets ===
Classical descriptive set theory includes the study of regularity properties of Borel sets. For example, all Borel sets of a Polish space have the property of Baire and the perfect set property. Modern descriptive set theory includes the study of the ways in which these results generalize, or fail to generalize, to other classes of subsets of Polish spaces.
== Analytic and coanalytic sets ==
Just beyond the Borel sets in complexity are the analytic sets and coanalytic sets. A subset of a Polish space X is analytic if it is the continuous image of a Borel subset of some other Polish space. Although any continuous preimage of a Borel set is Borel, not all analytic sets are Borel sets. A set is coanalytic if its complement is analytic.
== Projective sets and Wadge degrees ==
Many questions in descriptive set theory ultimately depend upon set-theoretic considerations and the properties of ordinal and cardinal numbers. This phenomenon is particularly apparent in the projective sets. These are defined via the projective hierarchy on a Polish space X:
A set is declared to be Σ^1_1 if it is analytic.
A set is Π^1_1 if it is coanalytic.
A set A is Σ^1_{n+1} if there is a Π^1_n subset B of X × X such that A is the projection of B to the first coordinate.
A set A is Π^1_{n+1} if its complement is Σ^1_{n+1}.
A set is Δ^1_n if it is both Π^1_n and Σ^1_n.
As with the Borel hierarchy, for each n, any Δ^1_n set is both Σ^1_{n+1} and Π^1_{n+1}.
The properties of the projective sets are not completely determined by ZFC. Under the assumption V = L, not all projective sets have the perfect set property or the property of Baire. However, under the assumption of projective determinacy, all projective sets have both the perfect set property and the property of Baire. This is related to the fact that ZFC proves Borel determinacy, but not projective determinacy.
There are also generic extensions of L, for any natural number n > 2, in which 𝒫(ω) ∩ L consists of all the lightface Δ^1_n subsets of ω.
More generally, the entire collection of sets of elements of a Polish space X can be grouped into equivalence classes, known as Wadge degrees, that generalize the projective hierarchy. These degrees are ordered in the Wadge hierarchy. The axiom of determinacy implies that the Wadge hierarchy on any Polish space is well-founded and of length Θ, with structure extending the projective hierarchy.
== Borel equivalence relations ==
A contemporary area of research in descriptive set theory studies Borel equivalence relations. A Borel equivalence relation on a Polish space X is a Borel subset of
X × X that is an equivalence relation on X.
== Effective descriptive set theory ==
The area of effective descriptive set theory combines the methods of descriptive set theory with those of generalized recursion theory (especially hyperarithmetical theory). In particular, it focuses on lightface analogues of hierarchies of classical descriptive set theory. Thus the hyperarithmetic hierarchy is studied instead of the Borel hierarchy, and the analytical hierarchy instead of the projective hierarchy. This research is related to weaker versions of set theory such as Kripke–Platek set theory and second-order arithmetic.
== Table ==
== See also ==
Pointclass
Prewellordering
Scale property
== References ==
Kechris, Alexander S. (1994). Classical Descriptive Set Theory. Springer-Verlag. ISBN 0-387-94374-9.
Moschovakis, Yiannis N. (1980). Descriptive Set Theory. North Holland. p. 2. ISBN 0-444-70199-0.
=== Citations ===
== External links ==
Descriptive set theory, David Marker, 2002. Lecture notes. | Wikipedia/Descriptive_set_theory |
In category theory and its applications to other branches of mathematics, kernels are a generalization of the kernels of group homomorphisms, the kernels of module homomorphisms and certain other kernels from algebra. Intuitively, the kernel of the morphism f : X → Y is the "most general" morphism k : K → X that yields zero when composed with (followed by) f.
Note that kernel pairs and difference kernels (also known as binary equalisers) sometimes go by the name "kernel"; while related, these aren't quite the same thing and are not discussed in this article.
== Definition ==
Let C be a category.
In order to define a kernel in the general category-theoretical sense, C needs to have zero morphisms.
In that case, if f : X → Y is an arbitrary morphism in C, then a kernel of f is an equaliser of f and the zero morphism from X to Y.
In symbols:
ker(f) = eq(f, 0XY)
To be more explicit, the following universal property can be used. A kernel of f is an object K together with a morphism k : K → X such that:
f ∘k is the zero morphism from K to Y;
Given any morphism k′ : K′ → X such that f ∘k′ is the zero morphism, there is a unique morphism u : K′ → K such that k∘u = k′.
As for every universal property, there is a unique isomorphism between two kernels of the same morphism, and the morphism k is always a monomorphism (in the categorical sense). So, it is common to talk of the kernel of a morphism. In concrete categories, one can thus take a subset of X for K, in which case the morphism k is the inclusion map. This allows one to talk of K as the kernel, since k is implicitly defined by K. There are non-concrete categories, where one can similarly define a "natural" kernel, such that K defines k implicitly.
Not every morphism needs to have a kernel, but if it does, then all its kernels are isomorphic in a strong sense: if k : K → X and ℓ : L → X are kernels of f : X → Y, then there exists a unique isomorphism φ : K → L such that ℓ∘φ = k.
== Examples ==
Kernels are familiar in many categories from abstract algebra, such as the category of groups or the category of (left) modules over a fixed ring (including vector spaces over a fixed field). To be explicit, if f : X → Y is a homomorphism in one of these categories, and K is its kernel in the usual algebraic sense, then K is a subobject of X and the inclusion homomorphism from K to X is a kernel in the categorical sense.
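A minimal computational illustration (all names are my own): for the group homomorphism f : Z/6 → Z/3 given by x ↦ x mod 3, the algebraic kernel {0, 3} together with its inclusion into Z/6 satisfies the categorical description, and any homomorphism that composes with f to zero lands inside it.

```python
# Illustrative sketch: the kernel of f : Z/6 -> Z/3, f(x) = x mod 3.
Z6 = range(6)
f = lambda x: x % 3

K = [x for x in Z6 if f(x) == 0]      # the usual algebraic kernel
assert K == [0, 3]

# The inclusion k : K -> Z/6 composed with f is the zero morphism:
assert all(f(x) == 0 for x in K)

# Any homomorphism k' : Z/2 -> Z/6 with f . k' == 0 lands inside K,
# so it factors (uniquely) through the inclusion.
kprime = {0: 0, 1: 3}                 # a homomorphism Z/2 -> Z/6
assert all(f(kprime[t]) == 0 for t in kprime)
assert all(kprime[t] in K for t in kprime)
```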
Note that in the category of monoids, category-theoretic kernels exist just as for groups, but these kernels don't carry sufficient information for algebraic purposes. Therefore, the notion of kernel studied in monoid theory is slightly different (see #Relationship to algebraic kernels below).
In the category of unital rings, there are no kernels in the category-theoretic sense; indeed, this category does not even have zero morphisms. Nevertheless, there is still a notion of kernel studied in ring theory that corresponds to kernels in the category of non-unital rings.
In the category of pointed topological spaces, if f : X → Y is a continuous pointed map, then the preimage of the distinguished point, K, is a subspace of X. The inclusion map of K into X is the categorical kernel of f.
== Relation to other categorical concepts ==
The dual concept to that of kernel is that of cokernel.
That is, the kernel of a morphism is its cokernel in the opposite category, and vice versa.
As mentioned above, a kernel is a type of binary equaliser, or difference kernel.
Conversely, in a preadditive category, every binary equaliser can be constructed as a kernel.
To be specific, the equaliser of the morphisms f and g is the kernel of the difference g − f.
In symbols:
eq (f, g) = ker (g − f).
It is because of this fact that binary equalisers are called "difference kernels", even in non-preadditive categories where morphisms cannot be subtracted.
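A small Python sketch of the identity eq(f, g) = ker(g − f) in the preadditive category of abelian groups (the particular morphisms are arbitrary illustrations): the equalizer of f and g coincides with the kernel of the difference g − f.

```python
# Illustrative sketch: two endomorphisms f, g : Z/8 -> Z/8.
n = 8
f = lambda x: (3 * x) % n
g = lambda x: (7 * x) % n
diff = lambda x: (g(x) - f(x)) % n    # the morphism g - f

equalizer = {x for x in range(n) if f(x) == g(x)}
kernel_of_diff = {x for x in range(n) if diff(x) == 0}
assert equalizer == kernel_of_diff    # eq(f, g) = ker(g - f)
```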
Every kernel, like any other equaliser, is a monomorphism.
Conversely, a monomorphism is called normal if it is the kernel of some morphism.
A category is called normal if every monomorphism is normal.
Abelian categories, in particular, are always normal.
In this situation, the kernel of the cokernel of any morphism (which always exists in an abelian category) turns out to be the image of that morphism; in symbols:
im f = ker coker f (in an abelian category)
When m is a monomorphism, it must be its own image; thus, not only are abelian categories normal, so that every monomorphism is a kernel, but we also know which morphism the monomorphism is a kernel of, to wit, its cokernel.
In symbols:
m = ker (coker m) (for monomorphisms in an abelian category)
== Relationship to algebraic kernels ==
Universal algebra defines a notion of kernel for homomorphisms between two algebraic structures of the same kind.
This concept of kernel measures how far the given homomorphism is from being injective.
There is some overlap between this algebraic notion and the categorical notion of kernel since both generalize the situation of groups and modules mentioned above.
In general, however, the universal-algebraic notion of kernel is more like the category-theoretic concept of kernel pair.
In particular, kernel pairs can be used to interpret kernels in monoid theory or ring theory in category-theoretic terms.
== Sources ==
Awodey, Steve (2010) [2006]. Category Theory (PDF). Oxford Logic Guides. Vol. 49 (2nd ed.). Oxford University Press. ISBN 978-0-19-923718-0. Archived from the original (PDF) on 2018-05-21. Retrieved 2018-06-29.
Kernel at the nLab | Wikipedia/Kernel_(category_theory) |