"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is a 1960 article written by the physicist Eugene Wigner, published in Communication in Pure and Applied Mathematics. In it, Wigner observes that a theoretical physics's mathematical structure often points the way to further advances in that theory and to empirical predictions. Mathematical theories often have predictive power in describing nature.
== Observations and arguments ==
Wigner argues that mathematical concepts have applicability far beyond the context in which they were originally developed. He writes: "It is important to point out that the mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena." He adds that the observation "the laws of nature are written in the language of mathematics," properly made by Galileo three hundred years ago, "is now truer than ever before."
Wigner's first example is the law of gravitation formulated by Isaac Newton. Originally used to model freely falling bodies on the surface of the Earth, this law was extended based on what Wigner terms "very scanty observations" to describe the motion of the planets, where it "has proved accurate beyond all reasonable expectations." Wigner says that "Newton ... noted that the parabola of the thrown rock's path on the earth and the circle of the moon's path in the sky are particular cases of the same mathematical object of an ellipse, and postulated the universal law of gravitation on the basis of a single, and at that time very approximate, numerical coincidence."
Wigner's second example comes from quantum mechanics: Max Born "noticed that some rules of computation, given by Heisenberg, were formally identical with the rules of computation with matrices, established a long time before by mathematicians. Born, Jordan, and Heisenberg then proposed to replace by matrices the position and momentum variables of the equations of classical mechanics. They applied the rules of matrix mechanics to a few highly idealized problems and the results were quite satisfactory. However, there was, at that time, no rational evidence that their matrix mechanics would prove correct under more realistic conditions." But Wolfgang Pauli found their work accurately described the hydrogen atom: "This application gave results in agreement with experience." The helium atom, with two electrons, is more complex, but "nevertheless, the calculation of the lowest energy level of helium, as carried out a few months ago by Kinoshita at Cornell and by Bazley at the Bureau of Standards, agrees with the experimental data within the accuracy of the observations, which is one part in ten million. Surely in this case we 'got something out' of the equations that we did not put in." The same is true of the atomic spectra of heavier elements.
Wigner's last example comes from quantum electrodynamics: "Whereas Newton's theory of gravitation still had obvious connections with experience, experience entered the formulation of matrix mechanics only in the refined or sublimated form of Heisenberg's prescriptions. The quantum theory of the Lamb shift, as conceived by Bethe and established by Schwinger, is a purely mathematical theory and the only direct contribution of experiment was to show the existence of a measurable effect. The agreement with calculation is better than one part in a thousand."
There are examples beyond the ones mentioned by Wigner. Another often cited example is Maxwell's equations, derived to model the elementary electrical and magnetic phenomena known in the mid-19th century. The equations also describe radio waves, discovered by David Edward Hughes in 1879, around the time of James Clerk Maxwell's death.
== Responses ==
The responses the thesis received include:
Richard Hamming in computer science, "The Unreasonable Effectiveness of Mathematics".
Arthur Lesk in molecular biology, "The Unreasonable Effectiveness of Mathematics in Molecular Biology".
Peter Norvig in artificial intelligence, "The Unreasonable Effectiveness of Data".
Max Tegmark in physics, "The Mathematical Universe".
Ivor Grattan-Guinness in mathematics, "Solving Wigner's mystery: The reasonable (though perhaps limited) effectiveness of mathematics in the natural sciences".
Vela Velupillai in economics, "The Unreasonable Ineffectiveness of Mathematics in Economics".
Terrence Joseph Sejnowski in artificial intelligence, "The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence".
=== Richard Hamming ===
Mathematician and Turing Award laureate Richard Hamming reflected on and extended Wigner's Unreasonable Effectiveness in 1980, discussing four "partial explanations" for it, and concluding that they were unsatisfactory. They were:
1. Humans see what they look for. The belief that science is experimentally grounded is only partially true. Hamming gives four examples of nontrivial physical phenomena he believes arose from the mathematical tools employed and not from the intrinsic properties of physical reality.
Hamming proposes that Galileo discovered the law of falling bodies not by experimenting, but by simple, though careful, thinking. Hamming imagines Galileo as having engaged in the following thought experiment (the experiment, which Hamming calls "scholastic reasoning", is described in Galileo's book On Motion):
Suppose that a falling body broke into two pieces. Of course, the two pieces would immediately slow down to their appropriate speeds. But suppose further that one piece happened to touch the other one. Would they now be one piece and both speed up? Suppose I tie the two pieces together. How tightly must I do it to make them one piece? A light string? A rope? Glue? When are two pieces one?
There is simply no way a falling body can "answer" such hypothetical "questions." Hence Galileo would have concluded that "falling bodies need not know anything if they all fall with the same velocity, unless interfered with by another force." After coming up with this argument, Hamming found a related discussion in Pólya (1963: 83-85). Hamming's account does not reveal an awareness of the 20th-century scholarly debate over just what Galileo did.
The inverse square law of universal gravitation necessarily follows from the conservation of energy and of space having three dimensions. Measuring the exponent in the law of universal gravitation is more a test of whether space is Euclidean than a test of the properties of the gravitational field.
The inequality at the heart of the uncertainty principle of quantum mechanics follows from the properties of Fourier integrals and from assuming time invariance.
Hamming argues that Albert Einstein's pioneering work on special relativity was largely "scholastic" in its approach. He knew from the outset what the theory should look like (although he only knew this because of the Michelson–Morley experiment), and explored candidate theories with mathematical tools, not actual experiments. Hamming alleges that Einstein was so confident that his relativity theories were correct that the outcomes of observations designed to test them did not much interest him. If the observations were inconsistent with his theories, it would be the observations that were at fault.
2. Humans create and select the mathematics that fit a situation. The mathematics at hand does not always work. For example, when mere scalars proved awkward for understanding forces, first vectors, then tensors, were invented.
3. Mathematics addresses only a part of human experience. Much of human experience does not fall under science or mathematics but under the philosophy of value, including ethics, aesthetics, and political philosophy. To assert that the world can be explained via mathematics amounts to an act of faith.
4. Evolution has primed humans to think mathematically. The earliest lifeforms must have contained the seeds of the human ability to create and follow long chains of close reasoning.
=== Max Tegmark ===
Physicist Max Tegmark argued that the effectiveness of mathematics in describing external physical reality is because the physical world is an abstract mathematical structure. This theory, referred to as the mathematical universe hypothesis, mirrors ideas previously advanced by Peter Atkins. However, Tegmark explicitly states that "the true mathematical structure isomorphic to our world, if it exists, has not yet been found." Rather, mathematical theories in physics are successful because they approximate more complex and predictive mathematics. According to Tegmark, "Our successful theories are not mathematics approximating physics, but simple mathematics approximating more complex mathematics."
=== Ivor Grattan-Guinness ===
Ivor Grattan-Guinness found the effectiveness in question eminently reasonable and explicable in terms of concepts such as analogy, generalization, and metaphor. He emphasizes that Wigner largely ignores "the effectiveness of the natural sciences in mathematics, in that much mathematics has been motivated by interpretations in the sciences".
=== Michael Atiyah ===
The tables were turned by Michael Atiyah with his essay "The unreasonable effectiveness of physics in mathematics." He argued that the toolbox of physics enables a practitioner like Edward Witten to go beyond standard mathematics, in particular the geometry of 4-manifolds. The tools of a physicist are cited as quantum field theory, special relativity, non-abelian gauge theory, spin, chirality, supersymmetry, and the electromagnetic duality.
== Prior versions of the argument ==
German scholar Moritz Drobisch was known to have revered the "mathematical fundament" of most sciences, as it put students in the position to "awe at the teleological coherence" and "recognise a superhuman, ordering wisdom whose purposes [...] [they] will gradually understand". Relevantly, he stated of astronomy: "[T]he harmonious order, in which the celestial bodies describe their orbits, the eternally consistent regularity, touches a deep sounding string within us and elevates us – far from just letting the dead mechanism of chance unwind before us – to the notion of the supreme wise being."
== See also ==
== References ==
== Further reading ==
A classical field theory is a physical theory that predicts how one or more fields in physics interact with matter through field equations, without considering effects of quantization; theories that incorporate quantum mechanics are called quantum field theories. In most contexts, 'classical field theory' is specifically intended to describe electromagnetism and gravitation, two of the fundamental forces of nature.
A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change.
The first field theories, Newtonian gravitation and Maxwell's equations of the electromagnetic field, were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as non-relativistic or relativistic. Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles.
== History ==
Michael Faraday coined the term "field" and introduced lines of force to explain electric and magnetic phenomena. In 1851, Lord Kelvin formalized the concept of a field across different areas of physics.
== Non-relativistic field theories ==
Some of the simplest physical fields are vector force fields. Historically, fields were first taken seriously with Faraday's lines of force describing the electric field. The gravitational field was then described similarly.
=== Newtonian gravitation ===
The first field theory of gravity was Newton's theory of gravitation in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting the motion of planets around the Sun.
Any massive body M has a gravitational field g which describes its influence on other massive bodies. The gravitational field of M at a point r in space is found by determining the force F that M exerts on a small test mass m located at r, and then dividing by m:
{\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}.}
Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M.
According to Newton's law of universal gravitation, F(r) is given by
{\displaystyle \mathbf {F} (\mathbf {r} )=-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }},}
where {\displaystyle {\hat {\mathbf {r} }}} is a unit vector pointing along the line from M to m, and G is Newton's gravitational constant. Therefore, the gravitational field of M is
{\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}=-{\frac {GM}{r^{2}}}{\hat {\mathbf {r} }}.}
The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity.
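As a quick numerical illustration of the definitions above (not from the article; the constants are nominal textbook values), the following Python sketch evaluates the field strength g = GM/r² at the Earth's surface:

```python
# Newton's law: g(r) = G*M / r^2, directed toward the mass.
# Illustrative constants: CODATA-style G, nominal Earth mass and mean radius.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def gravitational_field_strength(M, r):
    """Magnitude of g at distance r from a point (or spherical) mass M."""
    return G * M / r**2

g_surface = gravitational_field_strength(M_EARTH, R_EARTH)
```

The result is close to the familiar 9.8 m/s², and doubling r reduces g by a factor of four, as the inverse square law requires.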
For a discrete collection of masses Mi, located at points ri, the gravitational field at a point r due to the masses is
{\displaystyle \mathbf {g} (\mathbf {r} )=-G\sum _{i}{\frac {M_{i}(\mathbf {r} -\mathbf {r_{i}} )}{|\mathbf {r} -\mathbf {r} _{i}|^{3}}}\,,}
If we have a continuous mass distribution ρ instead, the sum is replaced by an integral,
{\displaystyle \mathbf {g} (\mathbf {r} )=-G\iiint _{V}{\frac {\rho (\mathbf {x} )d^{3}\mathbf {x} (\mathbf {r} -\mathbf {x} )}{|\mathbf {r} -\mathbf {x} |^{3}}}\,,}
Note that the direction of the field points from the position r to the position of the masses ri; this is ensured by the minus sign. In a nutshell, this means all masses attract.
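The superposition sum above translates directly into code. A minimal Python sketch (illustrative; the function name and unit choices are my own) for the field of a set of point masses:

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2 (illustrative value)

def gravitational_field(r, masses):
    """g(r) = -G * sum_i M_i (r - r_i) / |r - r_i|^3 for point masses.

    r is an (x, y, z) tuple; masses is a list of (M_i, r_i) pairs.
    """
    gx = gy = gz = 0.0
    for M, (xi, yi, zi) in masses:
        dx, dy, dz = r[0] - xi, r[1] - yi, r[2] - zi
        d3 = math.hypot(dx, dy, dz) ** 3  # 3-argument hypot needs Python >= 3.8
        gx -= G * M * dx / d3
        gy -= G * M * dy / d3
        gz -= G * M * dz / d3
    return (gx, gy, gz)

# Two equal masses straddling the origin: their fields cancel at the midpoint.
pair = [(1e20, (-1.0, 0.0, 0.0)), (1e20, (1.0, 0.0, 0.0))]
g_mid = gravitational_field((0.0, 0.0, 0.0), pair)
```

The minus sign makes each contribution point from r toward the corresponding mass, so for a single mass the field is attractive, and for the symmetric pair the midpoint field vanishes.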
In the integral form Gauss's law for gravity is
{\displaystyle \iint \mathbf {g} \cdot d\mathbf {S} =-4\pi GM}
while in differential form it is
{\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho _{m}}
Therefore, the gravitational field g can be written in terms of the gradient of a gravitational potential φ(r):
{\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla \phi (\mathbf {r} ).}
This is a consequence of the gravitational force F being conservative.
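That g is the gradient of a potential can be checked numerically: for the point-mass potential φ(r) = −GM/r, a central-difference estimate of −∇φ should reproduce −GM r̂/r². A Python sketch (constants, step size, and tolerances are illustrative choices):

```python
import math

G, M = 6.674e-11, 5.972e24  # illustrative values

def phi(x, y, z):
    """Gravitational potential of a point mass M at the origin."""
    return -G * M / math.hypot(x, y, z)

def minus_grad_phi(x, y, z, h=1.0):
    """Central-difference estimate of -grad(phi)."""
    return (-(phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
            -(phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
            -(phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))

def g_exact(x, y, z):
    """Closed-form field -GM r_hat / r^2 of the same point mass."""
    r = math.hypot(x, y, z)
    s = -G * M / r**3
    return (s * x, s * y, s * z)

p = (4.0e6, 3.0e6, 1.2e7)   # an arbitrary point, 1.3e7 m from the origin
approx = minus_grad_phi(*p)
exact = g_exact(*p)
```

The finite-difference gradient and the closed-form field agree to high precision, which is exactly what "g = −∇φ" asserts.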
=== Electromagnetism ===
==== Electrostatics ====
A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E generated by the source charge Q so that F = qE:
{\displaystyle \mathbf {E} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{q}}.}
Using this and Coulomb's law the electric field due to a single charged particle is
{\displaystyle \mathbf {E} ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Q}{r^{2}}}{\hat {\mathbf {r} }}\,.}
The electric field is conservative, and hence is given by the gradient of a scalar potential, V(r)
{\displaystyle \mathbf {E} (\mathbf {r} )=-\nabla V(\mathbf {r} )\,.}
Gauss's law for electricity is in integral form
{\displaystyle \iint \mathbf {E} \cdot d\mathbf {S} ={\frac {Q}{\varepsilon _{0}}}}
while in differential form
{\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho _{e}}{\varepsilon _{0}}}\,.}
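Gauss's law can be illustrated numerically: integrating E · dS for a point charge over any closed surface enclosing it should give Q/ε₀. The following Python sketch (grid resolution and constants are illustrative choices) performs the surface integral over one face of a cube centered on the charge and uses symmetry for the other five faces:

```python
import math

EPS0 = 8.854e-12                 # F/m (illustrative value)
K = 1.0 / (4.0 * math.pi * EPS0)  # Coulomb constant

def point_charge_E(q, x, y, z):
    """Coulomb field of a point charge q at the origin."""
    r = math.hypot(x, y, z)
    s = K * q / r**3
    return (s * x, s * y, s * z)

def flux_through_cube(q, a=1.0, n=200):
    """Midpoint-rule estimate of the flux of E through a cube of half-width a."""
    h = 2.0 * a / n
    total = 0.0
    # By symmetry all six faces carry equal flux; integrate the +x face only.
    for i in range(n):
        for j in range(n):
            y = -a + (i + 0.5) * h
            z = -a + (j + 0.5) * h
            Ex, _, _ = point_charge_E(q, a, y, z)
            total += Ex * h * h   # outward normal of the +x face is +x
    return 6.0 * total

q = 1e-9                          # 1 nC
flux = flux_through_cube(q)
expected = q / EPS0
```

The numerical flux matches Q/ε₀ to within the quadrature error, independent of the cube's size, since the enclosed charge is all that matters.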
==== Magnetostatics ====
A steady current I flowing along a path ℓ will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is
{\displaystyle \mathbf {F} (\mathbf {r} )=q\mathbf {v} \times \mathbf {B} (\mathbf {r} ),}
where B(r) is the magnetic field, which is determined from I by the Biot–Savart law:
{\displaystyle \mathbf {B} (\mathbf {r} )={\frac {\mu _{0}I}{4\pi }}\int {\frac {d{\boldsymbol {\ell }}\times {\hat {\mathbf {r} }}}{r^{2}}}.}
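The Biot–Savart integral can be evaluated numerically for a long straight wire and compared with the textbook limit B = μ₀I/(2πd). A Python sketch (wire length, sample count, and the field-point geometry are illustrative choices):

```python
import math

MU0 = 4.0e-7 * math.pi  # T m / A

def biot_savart_straight_wire(I, d, half_length, n=100000):
    """B at perpendicular distance d from the midpoint of a straight wire.

    The wire runs along z from -half_length to +half_length carrying current I;
    the field point sits at (d, 0, 0).  Sums mu0*I/(4*pi) * |dl x r_hat| / r^2;
    by symmetry the field is purely azimuthal, so a scalar sum suffices.
    """
    dz = 2.0 * half_length / n
    total = 0.0
    for k in range(n):
        z = -half_length + (k + 0.5) * dz
        r2 = d * d + z * z
        # |dl x r_hat| = dz * d / sqrt(r2)
        total += dz * d / (r2 * math.sqrt(r2))
    return MU0 * I / (4.0 * math.pi) * total

I, d = 10.0, 0.05                            # 10 A, field point 5 cm away
B_num = biot_savart_straight_wire(I, d, half_length=5.0)
B_infinite = MU0 * I / (2.0 * math.pi * d)   # infinite-wire closed form
```

For a wire much longer than d, the numerical integral reproduces the familiar infinite-wire formula to a fraction of a percent.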
The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r):
{\displaystyle \mathbf {B} (\mathbf {r} )=\nabla \times \mathbf {A} (\mathbf {r} )}
Gauss's law for magnetism in integral form is
{\displaystyle \iint \mathbf {B} \cdot d\mathbf {S} =0,}
while in differential form it is
{\displaystyle \nabla \cdot \mathbf {B} =0.}
The physical interpretation is that there are no magnetic monopoles.
==== Electrodynamics ====
In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to the electric charge density (charge per unit volume) ρ and current density (electric current per unit area) J.
Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations
{\displaystyle \mathbf {E} =-\nabla V-{\frac {\partial \mathbf {A} }{\partial t}}}
{\displaystyle \mathbf {B} =\nabla \times \mathbf {A} .}
=== Continuum mechanics ===
==== Fluid dynamics ====
Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid,
{\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} )=\nabla \cdot {\boldsymbol {\tau }}+\rho \mathbf {b} }
if the density ρ, pressure p, deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The velocity field u is the vector field to solve for.
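The continuity equation above can be illustrated with a minimal finite-difference scheme. The following Python sketch (a first-order upwind method with constant velocity and periodic boundaries; all parameters are illustrative) shows that the discrete update conserves total mass:

```python
def advect_density(rho, u, dx, dt, steps):
    """First-order upwind update of the 1-D continuity equation
    d(rho)/dt + d(rho*u)/dx = 0 with constant u > 0 and periodic boundaries."""
    n = len(rho)
    c = u * dt / dx            # Courant number; the scheme is stable for c <= 1
    assert 0 < c <= 1
    for _ in range(steps):
        rho = [rho[i] - c * (rho[i] - rho[i - 1]) for i in range(n)]
    return rho

# A density bump on a periodic domain, advected to the right.
n, dx = 100, 0.1
rho0 = [1.0 + (0.5 if 40 <= i < 60 else 0.0) for i in range(n)]
rho1 = advect_density(rho0, u=1.0, dx=dx, dt=0.05, steps=200)

mass0 = sum(rho0) * dx
mass1 = sum(rho1) * dx
```

The update is in conservative (flux) form, so the total mass is unchanged up to rounding, and because each new value is a convex combination of old values, the density stays within its initial bounds.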
=== Other examples ===
In 1839, James MacCullagh presented field equations to describe reflection and refraction in "An essay toward a dynamical theory of crystalline reflection and refraction".
== Potential theory ==
The term "potential theory" arises from the fact that, in 19th century physics, the fundamental forces of nature were believed to be derived from scalar potentials which satisfied Laplace's equation. Poisson addressed the question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation from the perturbation forces, and derived the Poisson's equation, named after him. The general form of this equation is
{\displaystyle \nabla ^{2}\phi =\sigma }
where σ is a source function (as a density, a quantity per unit volume) and φ the scalar potential to solve for.
In Newtonian gravitation, masses are the sources of the field, so that field lines terminate at objects that have mass. Similarly, charges are the sources and sinks of electrostatic fields: positive charges emanate electric field lines, and field lines terminate at negative charges. These field concepts are also illustrated in the general divergence theorem, specifically Gauss's laws for gravity and electricity. For the cases of time-independent gravity and electromagnetism, the fields are gradients of corresponding potentials
{\displaystyle \mathbf {g} =-\nabla \phi _{g}\,,\quad \mathbf {E} =-\nabla \phi _{e}}
so substituting these into Gauss' law for each case obtains
{\displaystyle \nabla ^{2}\phi _{g}=4\pi G\rho _{g}\,,\quad \nabla ^{2}\phi _{e}=-4\pi k_{e}\rho _{e}=-{\rho _{e} \over \varepsilon _{0}}}
where ρg is the mass density, ρe the charge density, G the gravitational constant and ke = 1/4πε0 the electric force constant.
Incidentally, this similarity arises from the similarity between Newton's law of gravitation and Coulomb's law.
In the case where there is no source term (e.g. vacuum, or paired charges), these potentials obey Laplace's equation:
{\displaystyle \nabla ^{2}\phi =0.}
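Laplace's equation is easy to solve numerically by relaxation: each interior grid point is repeatedly replaced by the average of its four neighbours, which is the discrete form of ∇²φ = 0. A Python sketch (grid size, boundary data, and iteration count are illustrative):

```python
def solve_laplace(grid, iterations=1500):
    """Jacobi relaxation for the discrete Laplace equation: interior points
    converge to the average of their four neighbours (boundary held fixed)."""
    n = len(grid)
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

# Potential fixed to 1 on the top edge and 0 on the other three edges.
n = 21
phi = [[0.0] * n for _ in range(n)]
phi[0] = [1.0] * n
phi = solve_laplace(phi)
center = phi[n // 2][n // 2]
```

With the potential fixed to 1 on one edge and 0 on the others, superposing the four rotated problems gives the constant solution 1, so symmetry forces the value at the centre of the square to 1/4, which the relaxation reproduces. All values stay between the boundary extremes, illustrating the maximum principle for harmonic functions.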
For a distribution of mass (or charge), the potential can be expanded in a series of spherical harmonics, and the nth term in the series can be viewed as a potential arising from the 2n-moments (see multipole expansion). For many purposes only the monopole, dipole, and quadrupole terms are needed in calculations.
== Relativistic field theory ==
Modern formulations of classical field theories generally require Lorentz covariance, as this is now recognised as a fundamental aspect of nature. A field theory tends to be expressed mathematically by using a Lagrangian: a function that, when subjected to an action principle, gives rise to the field equations and a conservation law for the theory. The action is a Lorentz scalar, from which the field equations and symmetries can be readily derived.
Throughout we use units such that the speed of light in vacuum is 1, i.e. c = 1.
=== Lagrangian dynamics ===
Given a field tensor {\displaystyle \phi }, a scalar called the Lagrangian density {\displaystyle {\mathcal {L}}(\phi ,\partial \phi ,\partial \partial \phi ,\ldots ,x)} can be constructed from {\displaystyle \phi } and its derivatives.
From this density, the action functional can be constructed by integrating over spacetime,
{\displaystyle {\mathcal {S}}=\int {{\mathcal {L}}{\sqrt {-g}}\,\mathrm {d} ^{4}x}.}
where {\displaystyle {\sqrt {-g}}\,\mathrm {d} ^{4}x} (with {\displaystyle g\equiv \det(g_{\mu \nu })}) is the volume form in curved spacetime.
Therefore, the Lagrangian itself is equal to the integral of the Lagrangian density over all space.
Then by enforcing the action principle, the Euler–Lagrange equations are obtained
{\displaystyle {\frac {\delta {\mathcal {S}}}{\delta \phi }}={\frac {\partial {\mathcal {L}}}{\partial \phi }}-\partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi )}}\right)+\cdots +(-1)^{m}\partial _{\mu _{1}}\partial _{\mu _{2}}\cdots \partial _{\mu _{m-1}}\partial _{\mu _{m}}\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu _{1}}\partial _{\mu _{2}}\cdots \partial _{\mu _{m-1}}\partial _{\mu _{m}}\phi )}}\right)=0.}
== Relativistic fields ==
Two of the most well-known Lorentz-covariant classical field theories are now described.
=== Electromagnetism ===
Historically, the first (classical) field theories were those describing the electric and magnetic fields (separately). After numerous experiments, it was found that these two fields were related, or, in fact, two aspects of the same field: the electromagnetic field. Maxwell's theory of electromagnetism describes the interaction of charged matter with the electromagnetic field. The first formulation of this field theory used vector fields to describe the electric and magnetic fields. With the advent of special relativity, a more complete formulation using tensor fields was found. Instead of using two vector fields describing the electric and magnetic fields, a tensor field representing these two fields together is used.
The electromagnetic four-potential is defined to be Aa = (−φ, A), and the electromagnetic four-current ja = (−ρ, j). The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor
{\displaystyle F_{ab}=\partial _{a}A_{b}-\partial _{b}A_{a}.}
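The components of the field tensor can be written out explicitly. The sketch below (in Python, using one common sign convention with c = 1; texts differ on signs and index placement) packs the six components of E and B into the antisymmetric 4×4 array:

```python
def field_tensor(E, B):
    """Antisymmetric field tensor F_ab built from E = (Ex, Ey, Ez) and
    B = (Bx, By, Bz).  Convention assumed here (one of several in use, c = 1):
    F_0i = E_i, and the spatial block carries B via F_12 = -Bz, F_13 = By,
    F_23 = -Bx."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return [[0.0,  Ex,  Ey,  Ez],
            [-Ex, 0.0, -Bz,  By],
            [-Ey,  Bz, 0.0, -Bx],
            [-Ez, -By,  Bx, 0.0]]

# Example: E along x, B along z.
F = field_tensor((1.0, 0.0, 0.0), (0.0, 0.0, 2.0))
antisymmetric = all(F[a][b] == -F[b][a] for a in range(4) for b in range(4))
```

The single tensor replaces the two separate vector fields: its time-space components carry E and its space-space components carry B, and antisymmetry (F_ab = -F_ba) holds by construction.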
==== The Lagrangian ====
To obtain the dynamics for this field, we try to construct a scalar from the field. In the vacuum, we have
{\displaystyle {\mathcal {L}}=-{\frac {1}{4\mu _{0}}}F^{ab}F_{ab}\,.}
We can use gauge field theory to get the interaction term, and this gives us
{\displaystyle {\mathcal {L}}=-{\frac {1}{4\mu _{0}}}F^{ab}F_{ab}-j^{a}A_{a}\,.}
==== The equations ====
To obtain the field equations, the electromagnetic tensor in the Lagrangian density needs to be replaced by its definition in terms of the 4-potential A, and it is this potential which enters the Euler–Lagrange equations. The EM field F is not varied in the Euler–Lagrange equations. Therefore,
{\displaystyle \partial _{b}\left({\frac {\partial {\mathcal {L}}}{\partial \left(\partial _{b}A_{a}\right)}}\right)={\frac {\partial {\mathcal {L}}}{\partial A_{a}}}\,.}
Evaluating the derivative of the Lagrangian density with respect to the field components
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial A_{a}}}=\mu _{0}j^{a}\,,}
and the derivatives of the field components
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial (\partial _{b}A_{a})}}=F^{ab}\,,}
obtains Maxwell's equations in vacuum. The source equations (Gauss' law for electricity and the Maxwell-Ampère law) are
{\displaystyle \partial _{b}F^{ab}=\mu _{0}j^{a}\,.}
while the other two (Gauss' law for magnetism and Faraday's law) are obtained from the fact that F is the 4-curl of A, or, in other words, from the fact that the Bianchi identity holds for the electromagnetic field tensor.
{\displaystyle 6F_{[ab,c]}\,=F_{ab,c}+F_{ca,b}+F_{bc,a}=0.}
where the comma indicates a partial derivative.
=== Gravitation ===
After Newtonian gravitation was found to be inconsistent with special relativity, Albert Einstein formulated a new theory of gravitation called general relativity. This treats gravitation as a geometric phenomenon ('curved spacetime') caused by masses, and represents the gravitational field mathematically by a tensor field called the metric tensor. The Einstein field equations,
{\displaystyle G_{ab}=\kappa T_{ab}}
describe how this curvature is produced by matter and radiation, where Gab is the Einstein tensor,
{\displaystyle G_{ab}\,=R_{ab}-{\frac {1}{2}}Rg_{ab}}
written in terms of the Ricci tensor Rab and Ricci scalar R = Rabgab, Tab is the stress–energy tensor and κ = 8πG/c4 is a constant. In the absence of matter and radiation (including sources) the vacuum field equations,
{\displaystyle G_{ab}=0}
can be derived by varying the Einstein–Hilbert action,
{\displaystyle S=\int R{\sqrt {-g}}\,d^{4}x}
with respect to the metric, where g is the determinant of the metric tensor gab. Solutions of the vacuum field equations are called vacuum solutions. An alternative interpretation, due to Arthur Eddington, is that
R is fundamental, T is merely one aspect of R, and κ is forced by the choice of units.
=== Further examples ===
Further examples of Lorentz-covariant classical field theories are
Klein–Gordon theory for real or complex scalar fields
Dirac theory for a Dirac spinor field
Yang–Mills theory for a non-abelian gauge field
== Unification attempts ==
Attempts to create a unified field theory based on classical physics are classical unified field theories. During the years between the two World Wars, the idea of unification of gravity with electromagnetism was actively pursued by several mathematicians and physicists like Albert Einstein, Theodor Kaluza, Hermann Weyl, Arthur Eddington, Gustav Mie and Ernst Reichenbacher.
Early attempts to create such a theory were based on the incorporation of electromagnetic fields into the geometry of general relativity. The first geometrization of the electromagnetic field was proposed in 1918 by Hermann Weyl.
In 1919, Theodor Kaluza suggested a five-dimensional approach, from which the theory called Kaluza–Klein theory was developed. It attempts to unify gravitation and electromagnetism in a five-dimensional space-time.
There are several ways of extending the representational framework for a unified field theory which have been considered by Einstein and other researchers. These extensions are in general based on two options. The first is relaxing the conditions imposed on the original formulation, and the second is introducing other mathematical objects into the theory. An example of the first option is relaxing the restriction to four-dimensional space-time by considering higher-dimensional representations, as in Kaluza–Klein theory. For the second, the most prominent example arises from the concept of the affine connection, which was introduced into the theory of general relativity mainly through the work of Tullio Levi-Civita and Hermann Weyl.
Further development of quantum field theory changed the focus of searching for unified field theory from classical to quantum description. Because of that, many theoretical physicists gave up looking for a classical unified field theory. Quantum field theory would include unification of two other fundamental forces of nature, the strong and weak nuclear force which act on the subatomic level.
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Thidé, Bo. "Electromagnetic Field Theory" (PDF). Archived from the original (PDF) on September 17, 2003. Retrieved February 14, 2006.
Carroll, Sean M. (1997). "Lecture Notes on General Relativity". arXiv:gr-qc/9712019. Bibcode:1997gr.qc....12019C. {{cite journal}}: Cite journal requires |journal= (help)
Binney, James J. "Lecture Notes on Classical Fields" (PDF). Retrieved April 30, 2007.
Sardanashvily, G. (November 2008). "Advanced Classical Field Theory". International Journal of Geometric Methods in Modern Physics. 5 (7): 1163–1189. arXiv:0811.0331. Bibcode:2008IJGMM..05.1163S. doi:10.1142/S0219887808003247. ISBN 978-981-283-895-7. S2CID 13884729.
In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to Diophantine equations, and explicit methods in arithmetic geometry.
Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program.
== Software packages ==
Magma computer algebra system
SageMath
Number Theory Library
PARI/GP
Fast Library for Number Theory
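As a small illustration of the basic tasks these packages automate, here is a self-contained Python sketch (not tied to any of the libraries above) of two core operations of computational number theory: probabilistic primality testing via Miller–Rabin and naive integer factorization by trial division.

```python
# Minimal sketches of two basic tasks in computational number theory:
# primality testing (Miller-Rabin) and integer factorization (trial division).
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True

def trial_factor(n):
    """Factor n by trial division; suitable for small n only."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

print(is_probable_prime(2**31 - 1))   # True: 2147483647 is a Mersenne prime
print(trial_factor(3233))             # {53: 1, 61: 1}
```

Production systems replace trial division with sieves, Pollard's rho, or the number field sieve, which is what makes factoring the large moduli used in RSA a serious research problem.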
== Further reading ==
Michael E. Pohst (1993). Computational Algebraic Number Theory. Springer. ISBN 978-3-0348-8589-8.
Eric Bach; Jeffrey Shallit (1996). Algorithmic Number Theory, Volume 1: Efficient Algorithms. MIT Press. ISBN 0-262-02405-5.
David M. Bressoud (1989). Factorisation and Primality Testing. Springer-Verlag. ISBN 0-387-97040-1.
Joe P. Buhler; Peter Stevenhagen, eds. (2008). Algorithmic Number Theory: Lattices, Number Fields, Curves and Cryptography. MSRI Publications. Vol. 44. Cambridge University Press. ISBN 978-0-521-20833-8. Zbl 1154.11002.
Henri Cohen (1993). A Course In Computational Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 138. Springer-Verlag. doi:10.1007/978-3-662-02945-9. ISBN 0-387-55640-0.
Henri Cohen (2000). Advanced Topics in Computational Number Theory. Graduate Texts in Mathematics. Vol. 193. Springer-Verlag. doi:10.1007/978-1-4419-8489-0. ISBN 0-387-98727-4.
Henri Cohen (2007). Number Theory – Volume I: Tools and Diophantine Equations. Graduate Texts in Mathematics. Vol. 239. Springer-Verlag. doi:10.1007/978-0-387-49923-9. ISBN 978-0-387-49922-2.
Henri Cohen (2007). Number Theory – Volume II: Analytic and Modern Tools. Graduate Texts in Mathematics. Vol. 240. Springer-Verlag. doi:10.1007/978-0-387-49894-2. ISBN 978-0-387-49893-5.
Richard Crandall; Carl Pomerance (2001). Prime Numbers: A Computational Perspective. Springer-Verlag. doi:10.1007/978-1-4684-9316-0. ISBN 0-387-94777-9.
Hans Riesel (1994). Prime Numbers and Computer Methods for Factorization. Progress in Mathematics. Vol. 126 (second ed.). Birkhäuser. ISBN 0-8176-3743-5. Zbl 0821.11001.
Victor Shoup (2012). A Computational Introduction to Number Theory and Algebra. Cambridge University Press. doi:10.1017/CBO9781139165464. ISBN 9781139165464.
Samuel S. Wagstaff, Jr. (2013). The Joy of Factoring. American Mathematical Society. ISBN 978-1-4704-1048-3.
Peter Giblin (1993). Primes and Programming: An Introduction to Number Theory with Computing. Cambridge University Press. ISBN 0-521-40988-8.
Nigel P. Smart (1998). The Algorithmic Resolution of Diophantine Equations. Cambridge University Press. ISBN 0-521-64633-2.
Ramanujachary Kumanduri and Cristina Romero (1998). Number Theory with Computer Applications. Prentice Hall. ISBN 0-13-801812-X.
Fernando Rodriguez Villegas (2007). Experimental Number Theory. Oxford University Press. ISBN 978-0-19-922730-3.
Harold M. Edwards (2008). Higher Arithmetic: An Algorithmic Introduction to Number Theory. American Mathematical Society. ISBN 978-1-4704-2153-3.
Lasse Rempe-Gillen and Rebecca Waldecker (2014). Primality Testing for Beginners. American Mathematical Society. ISBN 978-0-8218-9883-3.
== References ==
== External links ==
Media related to Computational number theory at Wikimedia Commons | Wikipedia/Computational_number_theory |
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In regular perturbation theory, the solution is expressed as a power series in a small parameter ε. The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of ε usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, often keeping only the first two terms: the solution to the known problem and the 'first-order' perturbation correction.
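A minimal worked example, on an assumed toy equation not taken from the article: the positive root of x² + εx − 1 = 0 has the perturbation series x = 1 − ε/2 + ε²/8 + ⋯, and a short numerical check confirms that truncating after the first-order term leaves an error of about ε²/8.

```python
# Regular perturbation sketch (assumed toy problem): the positive root of
#   x^2 + eps*x - 1 = 0.
# Exact root:           x = (-eps + sqrt(eps^2 + 4)) / 2
# Perturbation series:  x = 1 - eps/2 + eps^2/8 + ...
import math

def exact_root(eps):
    return (-eps + math.sqrt(eps**2 + 4)) / 2

def first_order(eps):
    return 1 - eps / 2            # A0 + eps*A1 with A0 = 1, A1 = -1/2

for eps in (0.1, 0.01):
    err = abs(exact_root(eps) - first_order(eps))
    print(eps, err)               # error shrinks roughly like eps^2 / 8
```

Shrinking ε by a factor of 10 shrinks the truncation error by a factor of about 100, the signature of a first-order (error O(ε²)) approximation.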
Perturbation theory is used in a wide range of fields and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
== Description ==
Perturbation theory develops an expression for the desired solution in terms of a formal power series, known as a perturbation series, in some "small" parameter that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution due to the deviation from the initial problem. Formally, the approximation to the full solution A is a series in the small parameter (here called ε), like the following:

A ≡ A₀ + ε¹A₁ + ε²A₂ + ε³A₃ + ⋯
In this example, A₀ would be the known solution to the exactly solvable initial problem, and the terms A₁, A₂, A₃, … represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small ε these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction
A → A₀ + εA₁  for  ε → 0
Some authors use big O notation to indicate the order of the error in the approximate solution:

A = A₀ + εA₁ + O(ε²).
If the power series in ε converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem. In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution. However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at the point where its terms are smallest. This is called an asymptotic series. If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers ε^(1/2) or negative powers ε^(−2)), then the perturbation problem is called a singular perturbation problem. Many special techniques in perturbation theory have been developed to analyze singular perturbation problems.
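As a hypothetical illustration of a singular perturbation problem, consider εx² + x − 1 = 0 (an assumed example): setting ε = 0 degenerates the quadratic to the single root x = 1, while the second root runs off to infinity like −1/ε and cannot be reached by any ordinary power series in ε.

```python
# Singular perturbation sketch (assumed toy problem): eps*x^2 + x - 1 = 0.
# As eps -> 0 the equation degenerates to x = 1, and one of the two roots
# escapes to infinity like -1/eps, the hallmark of a singular problem.
import math

def roots(eps):
    disc = math.sqrt(1 + 4 * eps)
    return ((-1 + disc) / (2 * eps), (-1 - disc) / (2 * eps))

for eps in (0.1, 0.01, 0.001):
    regular, singular = roots(eps)
    print(eps, regular, singular)
# The "regular" root tends to 1 (expansion 1 - eps + 2*eps^2 - ...),
# while the "singular" root behaves like -1/eps - 1.
```

Capturing the escaping root requires a rescaling such as x = y/ε, one of the special techniques mentioned above.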
== Prototypical example ==
The earliest use of what would now be called perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.
Perturbation methods start with a simplified form of the original problem, which is simple enough to be solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under Newtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon) but not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the Solar System) and not quite correct when the gravitational interaction is stated using formulations from general relativity.
== Perturbative expansion ==
Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. The perturbative expansion is created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution and the equations describing the system in full. Write D for this collection of equations; that is, let the symbol D stand in for the problem to be solved. Quite often, these are differential equations, thus the letter "D".

The process is generally mechanical, if laborious. One begins by writing the equations D so that they split into two parts: some collection of equations D₀ which can be solved exactly, and some additional remaining part εD₁ for some small ε ≪ 1.
The solution A₀ (to D₀) is known, and one seeks the general solution A to D = D₀ + εD₁.
Next the approximation A ≈ A₀ + εA₁ is inserted into εD₁. This results in an equation for A₁, which, in the general case, can be written in closed form as a sum over integrals over A₀. Thus, one has obtained the first-order correction A₁, and A ≈ A₀ + εA₁ is a good approximation to A. It is a good approximation precisely because the parts that were ignored are of size ε². The process can then be repeated to obtain corrections A₂, and so on.
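The iterative recipe above can be sketched on the toy fixed-point problem x = 1 + εx² (an assumed example, with D₀ the equation x = 1 and εD₁ = εx² the perturbation): each re-insertion of the truncated series into the right-hand side yields one further correction.

```python
# Iterative perturbation sketch for the assumed toy problem x = 1 + eps*x^2:
# start from A0 = 1 and repeatedly substitute the current truncated series
# back into the right-hand side, gaining one correct order per pass.

def perturbation_coefficients(order):
    """Coefficients A0..A_order of the series solution of x = 1 + eps*x^2."""
    x = [1] + [0] * order                      # start from A0 = 1
    for _ in range(order):
        sq = [0] * (order + 1)                 # square the truncated series
        for i in range(order + 1):
            for j in range(order + 1 - i):
                sq[i + j] += x[i] * x[j]
        x = [1] + sq[:order]                   # re-insert: x = 1 + eps*x^2
    return x

print(perturbation_coefficients(4))            # [1, 1, 2, 5, 14]
```

The coefficients 1, 1, 2, 5, 14, … are the Catalan numbers, which also shows how quickly the "profusion of terms" grows even in a one-line toy problem.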
In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand. Isaac Newton is reported to have said, regarding the problem of the Moon's orbit, that "It causeth my head to ache." This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher-order terms. One of the fundamental breakthroughs in quantum mechanics for controlling the expansion is the Feynman diagram, which allows quantum mechanical perturbation series to be represented by sketches.
== Examples ==
Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations" D include algebraic equations, differential equations (e.g., the equations of motion and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer, and Hamiltonian operators in quantum mechanics.
Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion (e.g., the trajectory of a particle), the statistical average of some physical quantity (e.g., average magnetization), and the ground state energy of a quantum mechanical problem.
Examples of exactly solvable problems that can be used as starting points include linear equations, including linear equations of motion (harmonic oscillator, linear wave equation), statistical or quantum-mechanical systems of non-interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom).
Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion, interactions between particles, terms of higher powers in the Hamiltonian/free energy.
For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) using Feynman diagrams.
=== In chemistry ===
Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Implicit perturbation theory works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such. Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method.
=== Shell-crossing ===
A shell-crossing (sc) occurs in perturbation theory when matter trajectories intersect, forming a singularity. This limits the predictive power of physical simulations at small scales.
== History ==
Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Kepler's orbital equations only solve Newton's gravitational equations when the latter are limited to just two bodies interacting. The gradually increasing accuracy of astronomical observations led to incremental demands in the accuracy of solutions to Newton's gravitational equations, which led many eminent 18th and 19th century mathematicians, notably Joseph-Louis Lagrange and Pierre-Simon Laplace, to extend and generalize the methods of perturbation theory.
These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development of quantum mechanics in 20th century atomic and subatomic physics. Paul Dirac developed quantum perturbation theory in 1927 to evaluate the rate at which a particle would be emitted by radioactive elements; the result was later named Fermi's golden rule. Perturbation theory in quantum mechanics is fairly accessible, mainly because quantum mechanics is limited to linear wave equations, but also because the quantum mechanical notation allows expressions to be written in fairly compact form, making them easier to comprehend. This resulted in an explosion of applications, ranging from the Zeeman effect to the hyperfine splitting in the hydrogen atom.
Despite the simpler notation, perturbation theory applied to quantum field theory still easily gets out of hand. Richard Feynman developed the celebrated Feynman diagrams by observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams, and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to many other perturbative series (although not always worthwhile).
In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly led to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions.
The improved understanding of dynamical systems coming from chaos theory helped shed light on what was termed the small denominator problem or small divisor problem. In the 19th century Poincaré observed (as perhaps had earlier mathematicians) that sometimes second- and higher-order terms in the perturbative series have "small denominators": that is, they have the general form

ψₙVϕₘ / (ωₙ − ωₘ)

where ψₙ, V, and ϕₘ are some complicated expressions pertinent to the problem to be solved, and ωₙ and ωₘ are real numbers; very often they are the energies of normal modes. The small divisor problem arises when the difference ωₙ − ωₘ is small, causing the perturbative correction to "blow up", becoming as large as or larger than the zeroth-order term. This situation signals a breakdown of perturbation theory: it stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is an asymptotic series: a useful approximation for a few terms, but at some point it becomes less accurate if even more terms are added. The breakthrough from chaos theory was an explanation of why this happened: the small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other.
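A schematic numerical sketch of the blow-up, with an assumed fixed matrix element V and a shrinking level gap standing in for ωₙ − ωₘ:

```python
# Small-denominator sketch: the size of a schematic first-order correction
# term V / (w_n - w_m) as the gap between two frequencies shrinks.
# V is an assumed fixed small coupling, not a value from the text.
V = 0.01
for gap in (1.0, 0.1, 0.01, 0.001):
    print(gap, V / gap)        # correction grows as the denominator shrinks
# Once the gap is comparable to V, the "correction" V/gap is of order one,
# as large as the zeroth-order term itself, and the series breaks down.
```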
=== Beginnings in the study of planetary motion ===
Since the planets are very remote from each other, and since their mass is small as compared to the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of the two-body problem, the two bodies being the planet and the Sun.
Since astronomic data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-body problem; thus, in studying the system Moon-Earth-Sun, the mass ratio between the Moon and the Earth was chosen as the "small parameter". Lagrange and Laplace were the first to advance the view that the so-called "constants" which describe the motion of a planet around the Sun gradually change: They are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory".
Perturbation theory was investigated by the classical scholars – Laplace, Siméon Denis Poisson, Carl Friedrich Gauss – as a result of which the computations could be performed with very high accuracy. The discovery of the planet Neptune in 1846 by Urbain Le Verrier was based on the deviations in the motion of the planet Uranus; he sent the coordinates to J. G. Galle, who successfully observed Neptune through his telescope – a triumph of perturbation theory.
== Perturbation orders ==
The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requires singular perturbation. In the singular case extra care must be taken, and the theory is slightly more elaborate.
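To make the orders concrete, here is a hedged sketch (an assumed 2×2 toy Hamiltonian, not an example from the text) comparing second-order perturbation theory with exact diagonalization:

```python
# Second-order perturbation theory vs. exact diagonalization for an assumed
# 2x2 toy Hamiltonian H = H0 + eps*V, with H0 = diag(0, 1) and
# V = [[0, 1], [1, 0]]. The ground-state energy is compared both ways.
import math

def exact_ground(eps):
    # lower eigenvalue of [[0, eps], [eps, 1]]
    return (1 - math.sqrt(1 + 4 * eps**2)) / 2

def pt2_ground(eps):
    # E ~ E0 + eps*<0|V|0> + eps^2 * |<1|V|0>|^2 / (E0 - E1)
    #   =  0 +     0       + eps^2 * 1 / (0 - 1)  =  -eps^2
    return -eps**2

for eps in (0.1, 0.01):
    print(eps, exact_ground(eps), pt2_ground(eps))
# The difference is O(eps^4), as expected for a second-order truncation.
```

Note that the denominator E₀ − E₁ here is the same "small denominator" discussed above: had the unperturbed levels been nearly degenerate, the second-order term would blow up and degenerate perturbation theory would be required.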
== See also ==
== References ==
== External links ==
van den Eijnden, Eric. "Introduction to regular perturbation theory" (PDF). Archived (PDF) from the original on 2004-09-20.
Chow, Carson C. (23 October 2007). "Perturbation method of multiple scales". Scholarpedia. 2 (10): 1617. doi:10.4249/scholarpedia.1617.
Alternative approach to quantum perturbation theory Martínez-Carranza, J.; Soto-Eguibar, F.; Moya-Cessa, H. (2012). "Alternative analysis to perturbation theory in quantum mechanics". The European Physical Journal D. 66 (1): 22. arXiv:1110.0723. Bibcode:2012EPJD...66...22M. doi:10.1140/epjd/e2011-20654-5. S2CID 117362666. | Wikipedia/Perturbation_theory |
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines field theory and the principle of relativity with ideas behind quantum mechanics.: xi QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on QFT.
== History ==
Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory.
=== Theoretical background ===
Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity.: xi A brief overview of these theoretical precursors follows.
The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact".: 4 It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.: 18
Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day.: 301 : 2
The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted.: 19
Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.: Ch.2 Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles.
In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.: 22–23
In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.: 19 It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations.
Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators.
=== Quantum electrodynamics ===
Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.: 1
Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.: 1 With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.: 22
In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.: 71
In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.: 71–72
The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.: 22–23
It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory.: 72 : 23 QFT naturally incorporated antiparticles in its formalism.: 24
=== Infinities and renormalization ===
Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.: 25 It was not until 20 years later that a systematic approach to remove such infinities was developed.
A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. These achievements, however, went largely unrecognized by the theoretical community.
Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.: 26
In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.: 28 Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations.
The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. As Tomonaga said in his Nobel lecture:

Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with [the] Americans'.
By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities".
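The leading renormalized prediction here is Schwinger's famous one-loop result for the anomalous magnetic moment, a = (g − 2)/2 = α/2π. As a quick numerical sketch (the value of α is assumed here, and all higher-order terms are dropped):

```python
import math

# Leading (one-loop) QED correction to the electron g-factor:
# a_e = (g - 2)/2 = alpha / (2*pi)   (Schwinger, 1948)
alpha = 1 / 137.035999          # fine-structure constant (assumed value)
a_e = alpha / (2 * math.pi)

print(a_e)                      # ~0.0011614, vs. measured ~0.0011597
assert abs(a_e - 0.0011614) < 1e-6
```

The remaining discrepancy with experiment is accounted for by the higher-order diagrams that this one-line estimate ignores.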
At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.: 2 The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.: 5
It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.: 2
=== Non-renormalizability ===
Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.: 30
The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.: 30
The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.: 31
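The contrast can be made concrete with a toy series whose n-th term scales like gⁿ (a deliberate oversimplification: real perturbation series also carry growing coefficients, which this sketch ignores):

```python
# Toy comparison of term sizes in a perturbative expansion, term_n ~ g^n.
def term_sizes(g, orders=4):
    return [g ** n for n in range(1, orders + 1)]

qed_terms = term_sizes(1 / 137)   # coupling ~ fine-structure constant
strong_terms = term_sizes(1.0)    # strong interaction: coupling of order one

# Each extra order in QED is suppressed by ~1/137...
assert qed_terms[1] / qed_terms[0] < 0.01
# ...while at g ~ 1 the fourth-order term is as large as the first.
assert strong_terms[3] == strong_terms[0]
```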
With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as a guiding principle, but not as a basis for quantitative calculations.: 31
=== Source theory ===
Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory,: 454 but in 1951 he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields. Motivated by these findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration space parameters known as Lagrange multipliers. He summarized his source theory in 1966 and then expanded the theory's applications to quantum electrodynamics in his three-volume set, Particles, Sources, and Fields. Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed.
In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general. Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities.: 467
Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury. The neglect of source theory by the physics community was a major disappointment for Schwinger:

The lack of appreciation of these facts by others was depressing, but understandable. – J. Schwinger

See "the shoes incident" between J. Schwinger and S. Weinberg.
=== Standard model ===
In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups.: 5 In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.: 32
Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.
Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.: 5–6
By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,: 6 until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion.
Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) : 11 Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.: 32
These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades.: 3 The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model.
=== Other developments ===
The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory.: 4
Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry theories only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973,: 7 but to date have not been widely accepted as part of the Standard Model due to lack of experimental evidence.
Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory,: 6 itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity.
=== Condensed matter physics ===
Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics.
Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter.
Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle: the phonon. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems.
Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.
== Principles ==
For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one.
=== Classical fields ===
A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom.
Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields.
Canonical quantization and path integrals are two common formulations of QFT.: 61 To motivate the fundamentals of QFT, an overview of classical field theory follows.
The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as ϕ(x, t), where x is the position vector, and t is the time. Suppose the Lagrangian of the field,
L, is
{\displaystyle L=\int d^{3}x\,{\mathcal {L}}=\int d^{3}x\,\left[{\frac {1}{2}}{\dot {\phi }}^{2}-{\frac {1}{2}}(\nabla \phi )^{2}-{\frac {1}{2}}m^{2}\phi ^{2}\right],}
where 𝓛 is the Lagrangian density, ϕ̇ is the time-derivative of the field, ∇ is the gradient operator, and m is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian:: 16
{\displaystyle {\frac {\partial }{\partial t}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial t)}}\right]+\sum _{i=1}^{3}{\frac {\partial }{\partial x^{i}}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial x^{i})}}\right]-{\frac {\partial {\mathcal {L}}}{\partial \phi }}=0,}
we obtain the equations of motion for the field, which describe the way it varies in time and space:
{\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}+m^{2}\right)\phi =0.}
This is known as the Klein–Gordon equation.: 17
The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:
{\displaystyle \phi (\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left(a_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+a_{\mathbf {p} }^{*}e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right),}
where each ap is a complex number (normalized by convention), * denotes complex conjugation, and ωp is the frequency of the normal mode:
{\displaystyle \omega _{\mathbf {p} }={\sqrt {|\mathbf {p} |^{2}+m^{2}}}.}
Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ωp.: 21,26
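One can verify symbolically that a single plane-wave mode solves the Klein–Gordon equation exactly when ωp satisfies this dispersion relation; a minimal check using the sympy library (variable names are illustrative):

```python
import sympy as sp

t, x, y, z, m = sp.symbols('t x y z m', real=True)
px, py, pz = sp.symbols('p_x p_y p_z', real=True)
omega = sp.sqrt(px**2 + py**2 + pz**2 + m**2)   # the dispersion relation

# one plane-wave normal mode of the field
phi = sp.exp(sp.I * (-omega * t + px * x + py * y + pz * z))

# Klein-Gordon operator: second time derivative - Laplacian + m^2
kg = (sp.diff(phi, t, 2)
      - sp.diff(phi, x, 2) - sp.diff(phi, y, 2) - sp.diff(phi, z, 2)
      + m**2 * phi)
assert sp.simplify(kg) == 0   # vanishes identically
```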
=== Canonical quantization ===
The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator.
The displacement of a classical harmonic oscillator is described by
{\displaystyle x(t)={\frac {1}{\sqrt {2\omega }}}ae^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}a^{*}e^{i\omega t},}
where a is a complex number (normalized by convention), and ω is the oscillator's frequency. Note that x is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label x of a quantum field.
For a quantum harmonic oscillator, x(t) is promoted to a linear operator x̂(t):
{\displaystyle {\hat {x}}(t)={\frac {1}{\sqrt {2\omega }}}{\hat {a}}e^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}{\hat {a}}^{\dagger }e^{i\omega t}.}
Complex numbers a and a* are replaced by the annihilation operator â and the creation operator â†, respectively, where † denotes Hermitian conjugation. The commutation relation between the two is
{\displaystyle \left[{\hat {a}},{\hat {a}}^{\dagger }\right]=1.}
The Hamiltonian of the simple harmonic oscillator can be written as
{\displaystyle {\hat {H}}=\hbar \omega {\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\hbar \omega .}
The vacuum state |0⟩, which is the lowest energy state, is defined by â|0⟩ = 0 and has energy ħω/2.
One can easily check that
{\displaystyle [{\hat {H}},{\hat {a}}^{\dagger }]=\hbar \omega {\hat {a}}^{\dagger },}
which implies that â† increases the energy of the simple harmonic oscillator by ħω. For example, the state â†|0⟩ is an eigenstate of energy 3ħω/2.
Any energy eigenstate of a single harmonic oscillator can be obtained from |0⟩ by successively applying the creation operator â†,: 20 and any state of the system can be expressed as a linear combination of the states
{\displaystyle |n\rangle \propto \left({\hat {a}}^{\dagger }\right)^{n}|0\rangle .}
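The ladder-operator algebra above can be checked numerically by representing â as a matrix in a truncated number basis (the truncation size and ω = 1 are illustrative choices; in this truncated space the relation [Ĥ, â†] = ħωâ† holds exactly):

```python
import numpy as np

N = 6            # keep only the lowest 6 oscillator levels
omega = 1.0      # natural units, hbar = 1

# annihilation operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# H = omega * (a†a + 1/2)
H = omega * (adag @ a + 0.5 * np.eye(N))

# [H, a†] = omega * a†, so a† raises the energy by omega
comm = H @ adag - adag @ H
assert np.allclose(comm, omega * adag)

# a†|0> is an energy eigenstate with eigenvalue 3*omega/2
vac = np.zeros(N); vac[0] = 1.0
state = adag @ vac
assert np.allclose(H @ state, 1.5 * omega * state)
```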
A similar procedure can be applied to the real scalar field ϕ, by promoting it to a quantum field operator ϕ̂, while the annihilation operator âp, the creation operator âp†, and the angular frequency ωp are now associated with a particular p:
{\displaystyle {\hat {\phi }}(\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left({\hat {a}}_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+{\hat {a}}_{\mathbf {p} }^{\dagger }e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right).}
Their commutation relations are:: 21
{\displaystyle \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=(2\pi )^{3}\delta (\mathbf {p} -\mathbf {q} ),\quad \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }\right]=\left[{\hat {a}}_{\mathbf {p} }^{\dagger },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=0,}
where δ is the Dirac delta function. The vacuum state |0⟩ is defined by
{\displaystyle {\hat {a}}_{\mathbf {p} }|0\rangle =0,\quad {\text{for all }}\mathbf {p} .}
Any quantum state of the field can be obtained from |0⟩ by successively applying creation operators âp† (or by a linear combination of such states), e.g.: 22
{\displaystyle \left({\hat {a}}_{\mathbf {p} _{3}}^{\dagger }\right)^{3}{\hat {a}}_{\mathbf {p} _{2}}^{\dagger }\left({\hat {a}}_{\mathbf {p} _{1}}^{\dagger }\right)^{2}|0\rangle .}
While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization.: 19
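The Fock-space bookkeeping can be sketched by representing a state as a multiset of momentum labels (normalization factors are deliberately omitted; the labels and helper names are illustrative):

```python
from collections import Counter

# A Fock state is represented as a multiset of momentum labels.
def create(state, p):
    s = Counter(state)
    s[p] += 1
    return s

def annihilate(state, p):
    if state[p] == 0:
        return None          # acting on an empty mode: a_p|0> = 0
    s = Counter(state)
    s[p] -= 1
    return s

vacuum = Counter()
two_particles = create(create(vacuum, 'p1'), 'p2')
assert sum(two_particles.values()) == 2     # total particle number is not fixed
assert annihilate(vacuum, 'p1') is None     # the vacuum is annihilated
```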
The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields,: 52 vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary.
The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:: 77
{\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi )\left(\partial ^{\mu }\phi \right)-{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {\lambda }{4!}}\phi ^{4},}
where μ is a spacetime index, {\displaystyle \partial _{0}=\partial /\partial t,\ \partial _{1}=\partial /\partial x^{1}}
, etc. The summation over the index μ has been omitted following the Einstein notation. If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory.
=== Path integrals ===
The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state
|ϕI⟩ at time t = 0 to some final state |ϕF⟩ at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. generator of time evolution), then: 10
{\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int d\phi _{1}\int d\phi _{2}\cdots \int d\phi _{N-1}\,\langle \phi _{F}|e^{-iHT/N}|\phi _{N-1}\rangle \cdots \langle \phi _{2}|e^{-iHT/N}|\phi _{1}\rangle \langle \phi _{1}|e^{-iHT/N}|\phi _{I}\rangle .}
Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:: 282 : 12
{\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int {\mathcal {D}}\phi (t)\,\exp \left\{i\int _{0}^{T}dt\,L\right\},}
where L is the Lagrangian involving ϕ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian H via Legendre transformation. The initial and final conditions of the path integral are respectively
{\displaystyle \phi (0)=\phi _{I},\quad \phi (T)=\phi _{F}.}
In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand.
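The time-slicing idea behind this limit can be illustrated on a toy two-level system rather than a field: splitting the evolution into N short steps, with H split into two non-commuting pieces, reproduces the exact evolution as N grows (a Trotter-style check with made-up matrices):

```python
import numpy as np
from scipy.linalg import expm

# Toy Hamiltonian split into two non-commuting pieces
A = np.diag([0.3, 1.1])                 # "kinetic" part
B = np.array([[0.0, 0.5],
              [0.5, 0.0]])              # "potential" part
H = A + B
T = 1.0

exact = expm(-1j * H * T)

def sliced(N):
    # product of N short-time evolutions, as in the sliced amplitude
    step = expm(-1j * A * T / N) @ expm(-1j * B * T / N)
    return np.linalg.matrix_power(step, N)

err = lambda N: np.abs(sliced(N) - exact).max()
assert err(1000) < err(10) < err(2)     # finer slicing -> smaller error
assert err(1000) < 1e-3
```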
=== Two-point correlation function ===
In calculations, one often encounters expressions like
{\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \quad {\text{or}}\quad \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle }
in the free or interacting theory, respectively. Here, x and y are position four-vectors, T is the time-ordering operator that shuffles its operands so the time components x⁰ and y⁰ increase from right to left, and |Ω⟩ is the ground state (vacuum state) of the interacting theory, different from the free ground state |0⟩. This expression represents the probability amplitude for the field to propagate from y to x, and goes by multiple names: the two-point propagator, two-point correlation function, two-point Green's function, or two-point function for short.: 82
The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be: 31,288 : 23
{\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \equiv D_{F}(x-y)=\lim _{\epsilon \to 0}\int {\frac {d^{4}p}{(2\pi )^{4}}}{\frac {i}{p_{\mu }p^{\mu }-m^{2}+i\epsilon }}e^{-ip_{\mu }(x^{\mu }-y^{\mu })}.}
In an interacting theory, where the Lagrangian or Hamiltonian contains terms
LI(t) or HI(t) that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function.
In canonical quantization, the two-point correlation function can be written as:: 87
{\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\left\langle 0\left|T\left\{\phi _{I}(x)\phi _{I}(y)\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }{\left\langle 0\left|T\left\{\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }},}
where ε is an infinitesimal number and ϕI is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in
ϕ4-theory, the interacting term of the Hamiltonian is {\textstyle H_{I}(t)=\int d^{3}x\,{\frac {\lambda }{4!}}\phi _{I}(x)^{4}},: 84 and the expansion of the two-point correlator in terms of λ becomes
{\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle ={\frac {\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(x)\phi _{I}(y)\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }{\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }}.}
This perturbation expansion expresses the interacting two-point function in terms of quantities ⟨0|⋯|0⟩ that are evaluated in the free theory.
In the path integral formulation, the two-point correlation function can be written: 284
{\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\int {\mathcal {D}}\phi \,\phi (x)\phi (y)\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}{\int {\mathcal {D}}\phi \,\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}},}
where 𝓛 is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in λ, reducing the interacting two-point function to quantities in the free theory.
Wick's theorem further reduces any n-point correlation function in the free theory to a sum of products of two-point correlation functions. For example,
{\displaystyle {\begin{aligned}\langle 0|T\{\phi (x_{1})\phi (x_{2})\phi (x_{3})\phi (x_{4})\}|0\rangle &=\langle 0|T\{\phi (x_{1})\phi (x_{2})\}|0\rangle \langle 0|T\{\phi (x_{3})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{3})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{4})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{3})\}|0\rangle .\end{aligned}}}
Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.: 90 This makes the Feynman propagator one of the most important quantities in quantum field theory.
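The combinatorics of Wick's theorem, in which every way of pairing up the fields contributes one product of propagators, can be enumerated directly; a small illustrative helper (point labels are arbitrary):

```python
def pairings(points):
    """All ways to pair up an even number of field insertion points."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for partner in rest:
        remaining = [p for p in rest if p != partner]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

# four points -> exactly the 3 products of two-point functions above
assert len(pairings(['x1', 'x2', 'x3', 'x4'])) == 3

# in general, a free 2n-point function has (2n-1)!! = 1*3*5*... terms
assert len(pairings(list(range(6)))) == 15      # 5!! = 15
```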
=== Feynman diagram ===
Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the λ1 term in the two-point correlation function in the ϕ4 theory is
{\displaystyle {\frac {-i\lambda }{4!}}\int d^{4}z\,\langle 0|T\{\phi (x)\phi (y)\phi (z)\phi (z)\phi (z)\phi (z)\}|0\rangle .}
After applying Wick's theorem, one of the terms is
{\displaystyle 12\cdot {\frac {-i\lambda }{4!}}\int d^{4}z\,D_{F}(x-z)D_{F}(y-z)D_{F}(z-z).}
This term can instead be obtained from the corresponding Feynman diagram.
The diagram consists of:
external vertices, connected to one edge each and represented by dots (here labeled x and y);
internal vertices, connected to four edges each and represented by dots (here labeled z);
edges connecting the vertices, represented by lines.
Every vertex corresponds to a single ϕ field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules:
For every internal vertex zi, write down a factor −iλ ∫ d4zi.
For every edge that connects two vertices zi and zj, write down a factor DF(zi − zj).
Divide by the symmetry factor of the diagram.
With the symmetry factor 2, following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space.: 91–94
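The combinatorial factor 12 quoted above (and hence the symmetry factor 12/4! = 1/2) can be verified by brute force: among all 15 Wick contractions of ⟨0|T{ϕ(x)ϕ(y)ϕ(z)ϕ(z)ϕ(z)ϕ(z)}|0⟩, exactly 12 yield the product DF(x−z)DF(y−z)DF(z−z). The script below is a hedged sanity check; the labeling scheme is purely illustrative:

```python
def pairings(items):
    """Enumerate all perfect pairings of a list of items."""
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    out = []
    for i in range(len(rest)):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            out.append([(first, rest[i])] + sub)
    return out

# Six field insertions: phi(x), phi(y), and four phi(z)'s.
labels = ['x', 'y', 'z', 'z', 'z', 'z']
target = 0
for p in pairings(list(range(6))):
    props = sorted(tuple(sorted((labels[a], labels[b]))) for a, b in p)
    if props == [('x', 'z'), ('y', 'z'), ('z', 'z')]:
        target += 1

print(target)       # 12 contractions give D_F(x-z) D_F(y-z) D_F(z-z)
print(target / 24)  # 12 / 4! = 0.5, i.e. a symmetry factor of 2
```

The count matches the prefactor 12 in the expression above: 4 choices of z-leg for x, times 3 remaining choices for y, with the last two z-legs forced to contract with each other.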
In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise,
{\displaystyle \langle \Omega |T\{\phi (x_{1})\cdots \phi (x_{n})\}|\Omega \rangle }
is equal to the sum of (expressions corresponding to) all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the ϕ4 interaction theory discussed above, every vertex must have four legs.: 98
In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.: 102–115
Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.: 44 Lines whose end points are vertices can be thought of as the propagation of virtual particles.: 31
=== Renormalization ===
Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalization procedure is a systematic process for removing such infinities.
Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning — m, λ, and the field strength ϕ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ → ∞. This is an example of regularization, a class of methods to treat divergences in QFT, with Λ being the regulator.
The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of ϕ4 theory, the field strength is first redefined:
{\displaystyle \phi =Z^{1/2}\phi _{r},}
where ϕ is the bare field, ϕr is the renormalized field, and Z is a constant to be determined. The Lagrangian density becomes:
{\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}m_{r}^{2}\phi _{r}^{2}-{\frac {\lambda _{r}}{4!}}\phi _{r}^{4}+{\frac {1}{2}}\delta _{Z}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}\delta _{m}\phi _{r}^{2}-{\frac {\delta _{\lambda }}{4!}}\phi _{r}^{4},}
where mr and λr are the experimentally measurable renormalized mass and coupling constant, respectively, and
{\displaystyle \delta _{Z}=Z-1,\quad \delta _{m}=m^{2}Z-m_{r}^{2},\quad \delta _{\lambda }=\lambda Z^{2}-\lambda _{r}}
are constants to be determined. The first three terms are the ϕ4 Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". Since the Lagrangian now contains more terms, the Feynman diagrams must include additional elements, each with its own Feynman rules. The procedure is outlined as follows. First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator Λ. Compute Feynman diagrams, in which divergent terms will depend on Λ. Then, define δZ, δm, and δλ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ → ∞ is taken. In this way, meaningful finite quantities are obtained.: 323–326
It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalizable QFT,: 719–727 while quantum gravity is non-renormalizable.: 798 : 421
==== Renormalization group ====
The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.: 393 The way in which each parameter changes with scale is described by its β function.: 417 Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.: 410–411
As an example, the coupling constant in QED, namely the elementary charge e, has the following β function:
{\displaystyle \beta (e)\equiv {\frac {1}{\Lambda }}{\frac {de}{d\Lambda }}={\frac {e^{3}}{12\pi ^{2}}}+O{\mathord {\left(e^{5}\right)}},}
where Λ is the energy scale under which the measurement of e is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.: 420
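The one-loop β function above can be integrated in closed form: writing de/d ln Λ = e³/(12π²) gives d(e⁻²)/d ln Λ = −1/(6π²), so 1/e² decreases linearly in ln Λ. The sketch below evaluates that solution; the starting value α ≈ 1/137 and the sample scales are illustrative, and higher-loop and multi-fermion contributions are ignored:

```python
import math

def running_e(e0, scale0, scale):
    """One-loop QED running coupling from the beta function
    de/d(ln Λ) = e^3 / (12 π^2), solved in closed form:
    1/e(Λ)^2 = 1/e0^2 - ln(Λ/Λ0) / (6 π^2)."""
    inv_e2 = 1.0 / e0**2 - math.log(scale / scale0) / (6.0 * math.pi**2)
    return 1.0 / math.sqrt(inv_e2)

# e at the electron-mass scale, from alpha = e^2 / (4 pi) ≈ 1/137
e0 = math.sqrt(4.0 * math.pi / 137.036)

# Run from 0.511 MeV up to 100 GeV (scales in GeV, illustrative only).
e_high = running_e(e0, 0.511e-3, 100.0)
print(e0, e_high)
print(e_high > e0)  # the observed elementary charge grows with energy
```

This makes the statement in the text quantitative: at one loop the effective charge increases slowly (logarithmically) with the energy scale.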
The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function:
{\displaystyle \beta (g)\equiv {\frac {1}{\Lambda }}{\frac {dg}{d\Lambda }}={\frac {g^{3}}{16\pi ^{2}}}\left(-11+{\frac {2}{3}}N_{f}\right)+O{\mathord {\left(g^{5}\right)}},}
where Nf is the number of quark flavours. In the case where Nf ≤ 16 (the Standard Model has Nf = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.: 531
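The sign of the one-loop coefficient (−11 + 2Nf/3) decides between asymptotic freedom and a growing coupling. A minimal numerical check of the flavour threshold stated in the text:

```python
def beta_coefficient(n_f):
    """One-loop coefficient multiplying g^3/(16 pi^2) in the QCD beta function."""
    return -11.0 + 2.0 * n_f / 3.0

print(beta_coefficient(6))   # Standard Model, N_f = 6: negative -> asymptotic freedom
print(beta_coefficient(16))  # still (barely) negative
print(beta_coefficient(17))  # positive: the sign would flip beyond 16 flavours
```

For Nf ≤ 16 the coefficient is negative, so g decreases with energy, in agreement with the asymptotic-freedom statement above.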
Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and N = 4 supersymmetric Yang–Mills theory.
According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory.: 402–403 The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.: 2 According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.: 156
=== Other theories ===
The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and ϕ4 theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction.
As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field Aμ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is:
{\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\gamma ^{\mu }\partial _{\mu }-m\right)\psi -{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }-e{\bar {\psi }}\gamma ^{\mu }\psi A_{\mu },}
where γμ are Dirac matrices,
ψ̄ = ψ†γ0, and
Fμν = ∂μAν − ∂νAμ
is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.: 78
Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating into an off-shell photon, which then decays into a new electron–positron pair. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg.
==== Gauge symmetry ====
If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant:
{\displaystyle \psi (x)\to e^{i\alpha (x)}\psi (x),\quad A_{\mu }(x)\to A_{\mu }(x)+ie^{-1}e^{-i\alpha (x)}\partial _{\mu }e^{i\alpha (x)},}
where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.: 482–483 Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations
e^{iα(x)} and e^{iα′(x)} is yet another symmetry transformation e^{i[α(x)+α′(x)]}. For any α(x), e^{iα(x)} is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.: 496 The photon field Aμ may be referred to as the U(1) gauge boson.
U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories).: 489 Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry. It contains three Dirac fields ψi, i = 1,2,3 representing quark fields as well as eight vector fields Aa,μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.: 547 The QCD Lagrangian density is:: 490–491
{\displaystyle {\mathcal {L}}=i{\bar {\psi }}^{i}\gamma ^{\mu }(D_{\mu })^{ij}\psi ^{j}-{\frac {1}{4}}F_{\mu \nu }^{a}F^{a,\mu \nu }-m{\bar {\psi }}^{i}\psi ^{i},}
where Dμ is the gauge covariant derivative:
{\displaystyle D_{\mu }=\partial _{\mu }-igA_{\mu }^{a}t^{a},}
where g is the coupling constant, ta are the eight generators of SU(3) in the fundamental representation (3×3 matrices),
{\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c},}
and fabc are the structure constants of SU(3). Repeated indices i,j,a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation:
{\displaystyle \psi ^{i}(x)\to U^{ij}(x)\psi ^{j}(x),\quad A_{\mu }^{a}(x)t^{a}\to U(x)\left[A_{\mu }^{a}(x)t^{a}+ig^{-1}\partial _{\mu }\right]U^{\dagger }(x),}
where U(x) is an element of SU(3) at every spacetime point x:
{\displaystyle U(x)=e^{i\alpha (x)^{a}t^{a}}.}
The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called an anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density
L[ϕ, ∂μϕ] under a certain local transformation of the fields, the measure ∫Dϕ of the path integral may change.: 243 For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.: 705–707
The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.
Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law.: 17–18 : 73 For example, the U(1) symmetry of QED implies charge conservation.
Gauge transformations do not relate distinct quantum states; rather, they relate two equivalent mathematical descriptions of the same quantum state. As an example, the photon field Aμ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing Aμ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.: 168
To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.: 512–515 A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization.: 517
==== Spontaneous symmetry-breaking ====
Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.: 347
To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density:
{\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\phi ^{i}\right)\left(\partial ^{\mu }\phi ^{i}\right)+{\frac {1}{2}}\mu ^{2}\phi ^{i}\phi ^{i}-{\frac {\lambda }{4}}\left(\phi ^{i}\phi ^{i}\right)^{2},}
where μ and λ are real parameters. The theory admits an O(N) global symmetry:
{\displaystyle \phi ^{i}\to R^{ij}\phi ^{j},\quad R\in \mathrm {O} (N).}
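The defining property of this O(N) symmetry is that the invariant combination ϕⁱϕⁱ is unchanged by any orthogonal rotation, which is what makes every term in the Lagrangian invariant. A small numerical sketch (random matrix and field values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Build a random orthogonal matrix R in O(N) via QR decomposition
# of a Gaussian random matrix.
R, _ = np.linalg.qr(rng.normal(size=(N, N)))
phi = rng.normal(size=N)

phi_rot = R @ phi

# R is orthogonal, so the O(N)-invariant phi^i phi^i is preserved.
print(np.allclose(R.T @ R, np.eye(N)))            # True
print(np.allclose(phi @ phi, phi_rot @ phi_rot))  # True
```

Since both the kinetic term and the potential depend on the fields only through ϕⁱϕⁱ (and its derivatives), this invariance of the quadratic form is exactly the global symmetry of the Lagrangian.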
The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field ϕ0 satisfying
{\displaystyle \phi _{0}^{i}\phi _{0}^{i}={\frac {\mu ^{2}}{\lambda }}.}
Without loss of generality, let the ground state be in the N-th direction:
{\displaystyle \phi _{0}^{i}=\left(0,\cdots ,0,{\frac {\mu }{\sqrt {\lambda }}}\right).}
The original N fields can be rewritten as:
{\displaystyle \phi ^{i}(x)=\left(\pi ^{1}(x),\cdots ,\pi ^{N-1}(x),{\frac {\mu }{\sqrt {\lambda }}}+\sigma (x)\right),}
and the original Lagrangian density as:
{\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\pi ^{k}\right)\left(\partial ^{\mu }\pi ^{k}\right)+{\frac {1}{2}}\left(\partial _{\mu }\sigma \right)\left(\partial ^{\mu }\sigma \right)-{\frac {1}{2}}\left(2\mu ^{2}\right)\sigma ^{2}-{\sqrt {\lambda }}\mu \sigma ^{3}-{\sqrt {\lambda }}\mu \pi ^{k}\pi ^{k}\sigma -{\frac {\lambda }{2}}\pi ^{k}\pi ^{k}\sigma ^{2}-{\frac {\lambda }{4}}\left(\pi ^{k}\pi ^{k}\right)^{2},}
where k = 1, ..., N − 1. The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N − 1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.: 349–350
Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, O(N) has N(N − 1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N − 1) has (N − 1)(N − 2)/2. The number of broken symmetries is their difference, N − 1, which corresponds to the N − 1 massless fields πk.: 351
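The Goldstone counting in the paragraph above reduces to simple arithmetic on Lie-algebra dimensions, which can be checked directly:

```python
def dim_o(n):
    """Dimension of the Lie algebra of O(n): the number of independent rotations."""
    return n * (n - 1) // 2

# Breaking O(N) down to O(N-1) leaves N - 1 broken generators,
# hence N - 1 massless Goldstone fields pi^k.
for N in (2, 3, 4, 10):
    broken = dim_o(N) - dim_o(N - 1)
    print(N, dim_o(N), dim_o(N - 1), broken)  # broken is always N - 1
```

Indeed N(N − 1)/2 − (N − 1)(N − 2)/2 = (N − 1)[N − (N − 2)]/2 = N − 1, matching the N − 1 massless πᵏ fields in the rewritten Lagrangian.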
On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.: 743–744
In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.: 199 In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs boson, a process called the Higgs mechanism.: 690
==== Supersymmetry ====
All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.: 795 : 443
The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations Pμ and the Lorentz transformations Jμν.: 58–60 In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators Qα, called supercharges, which themselves transform as Weyl fermions.: 795 : 444 The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, QαI, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.: 795 : 450 Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory.
The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.: 448 Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang–Mills theory,: 450 and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.: 444
If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity.
Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its super-partner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.: 796–797
Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments.: 797 : 443
==== Other spacetimes ====
The ϕ4 theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime.
In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases. In high-energy physics, string theory is a type of (1+1)-dimensional QFT,: 452 while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.: 428–429
In Minkowski space, the flat metric ημν is used to raise and lower spacetime indices in the Lagrangian, e.g.
{\displaystyle A_{\mu }A^{\mu }=\eta _{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,}
where ημν is the inverse of ημν satisfying ημρηρν = δμν.
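Index raising and lowering with the Minkowski metric is just matrix–vector arithmetic, which the sketch below makes concrete (the signature convention (+,−,−,−) and the sample four-vector are assumptions for illustration):

```python
import numpy as np

# Minkowski metric eta_{mu nu} with signature (+,-,-,-).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

A_upper = np.array([2.0, 1.0, 0.0, 3.0])  # components A^mu
A_lower = eta @ A_upper                   # A_mu = eta_{mu nu} A^nu

# The Lorentz-invariant contraction A_mu A^mu, computed two ways:
print(A_lower @ A_upper)            # 4 - 1 - 0 - 9 = -6
print(A_upper @ eta @ A_upper)      # same value

# The inverse metric eta^{mu nu} satisfies eta^{mu rho} eta_{rho nu} = delta^mu_nu.
print(np.allclose(np.linalg.inv(eta) @ eta, np.eye(4)))  # True
```

For the flat metric the inverse has the same components as the metric itself, so raising and lowering an index only flips the sign of the spatial components.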
For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used:
{\displaystyle A_{\mu }A^{\mu }=g_{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =g^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,}
where gμν is the inverse of gμν.
For a real scalar field, the Lagrangian density in a general spacetime background is
{\displaystyle {\mathcal {L}}={\sqrt {|g|}}\left({\frac {1}{2}}g^{\mu \nu }\nabla _{\mu }\phi \nabla _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right),}
where g = det(gμν), and ∇μ denotes the covariant derivative. The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background.
==== Topological quantum field theory ====
The correlation functions and physical predictions of a QFT depend on the spacetime metric gμν. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.: 36 QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.: 1–5 The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the
link invariants in mathematics. TQFTs applicable to frontier research on topological quantum matter include the Chern–Simons–Witten gauge theories in (2+1) spacetime dimensions, as well as other new exotic TQFTs in (3+1) spacetime dimensions and beyond.
=== Perturbative and non-perturbative methods ===
Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model.
== Mathematical rigor ==
In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined.
However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory provides a rigorous formulation of perturbative renormalization that combines the effective-field-theory approaches of Kadanoff, Wilson, and Polchinski with the Batalin–Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired by finite-dimensional integration theory, can be given a sound mathematical interpretation from their finite-dimensional analogues.
Since the 1950s, theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,: 2 which has led to such results as the CPT theorem, the spin–statistics theorem, and Goldstone's theorem, and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g. two-dimensional scalar field theories with arbitrary polynomial interactions, three-dimensional scalar field theories with a quartic interaction, etc.
Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms.
Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include Wightman axioms and Haag–Kastler axioms.: 2–3 One way to construct theories satisfying Wightman axioms is to use Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).: 10
Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows.
Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on ℝ4 and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973) and Osterwalder & Schrader (1975).
== See also ==
== References ==
Bibliography
Streater, R.; Wightman, A. (1964). PCT, Spin and Statistics and all That. W. A. Benjamin.
Osterwalder, K.; Schrader, R. (1973). "Axioms for Euclidean Green's functions". Communications in Mathematical Physics. 31 (2): 83–112. Bibcode:1973CMaPh..31...83O. doi:10.1007/BF01645738. S2CID 189829853.
Osterwalder, K.; Schrader, R. (1975). "Axioms for Euclidean Green's functions II". Communications in Mathematical Physics. 42 (3): 281–305. Bibcode:1975CMaPh..42..281O. doi:10.1007/BF01608978. S2CID 119389461.
== Further reading ==
General readers
Pais, A. (1994) [1986]. Inward Bound: Of Matter and Forces in the Physical World (reprint ed.). Oxford, New York, Toronto: Oxford University Press. ISBN 978-0198519973.
Schweber, S. S. (1994). QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton University Press. ISBN 9780691033273.
Feynman, R.P. (2001) [1964]. The Character of Physical Law. MIT Press. ISBN 978-0-262-56003-0.
Feynman, R.P. (2006) [1985]. QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 978-0-691-12575-6.
Gribbin, J. (1998). Q is for Quantum: Particle Physics from A to Z. Weidenfeld & Nicolson. ISBN 978-0-297-81752-9.
Carroll, Sean (2024). The Biggest Ideas in the Universe: Quanta and Fields. Dutton. ISBN 978-0-593-18660-2.
Introductory texts
Pierre van Baal (2016). A Course in Field Theory. CRC Press. ISBN 9780429073601.
McMahon, D. (2008). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-154382-8.
Bogolyubov, N.; Shirkov, D. (1982). Quantum Fields. Benjamin Cummings. ISBN 978-0-8053-0983-6.
Frampton, P.H. (2000). Gauge Field Theories. Frontiers in Physics (2nd ed.). Wiley; Frampton, Paul H. (2008). Gauge Field Theories (3rd ed.). John Wiley & Sons. ISBN 978-3527408351.
Greiner, W.; Müller, B. (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0.
Itzykson, C.; Zuber, J.-B. (1980). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-032071-0.
Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Group. ISBN 978-0-201-11749-3.
Kleinert, H.; Schulte-Frohlinde, Verena (2001). Critical Properties of φ4-Theories. World Scientific. ISBN 978-981-02-4658-7.
Kleinert, H. (2008). Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation (PDF). World Scientific. ISBN 978-981-279-170-2.
Lancaster, Tom; Blundell, Stephen (2014). Quantum field theory for the gifted amateur. Oxford: Oxford University Press. ISBN 978-0-19-969933-9. OCLC 859651399.
Loudon, R. (1983). The Quantum Theory of Light. Oxford University Press. ISBN 978-0-19-851155-7.
Mandl, F.; Shaw, G. (1993). Quantum Field Theory. John Wiley & Sons. ISBN 978-0-471-94186-6.
Ryder, L.H. (1985). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-33859-2.
Schwartz, M.D. (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. ISBN 978-1107034730. Archived from the original on 2018-03-22. Retrieved 2020-05-13.
Ynduráin, F.J. (1996). Relativistic Quantum Mechanics and Introduction to Field Theory (1st ed.). Springer. Bibcode:1996rqmi.book.....Y. doi:10.1007/978-3-642-61057-8. ISBN 978-3-540-60453-2.
Greiner, W.; Reinhardt, J. (1996). Field Quantization. Springer. ISBN 978-3-540-59179-5.
Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0-201-50397-5.
Scharf, Günter (2014) [1989]. Finite Quantum Electrodynamics: The Causal Approach (third ed.). Dover Publications. ISBN 978-0486492735.
Srednicki, M. (2007). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-86449-7.
Tong, David (2015). "Lectures on Quantum Field Theory". Retrieved 2016-02-09.
Williams, A.G. (2022). Introduction to Quantum Field Theory: Classical Mechanics to Gauge Field Theories. Cambridge University Press. ISBN 978-1108470902.
Zee, Anthony (2010). Quantum Field Theory in a Nutshell (2nd ed.). Princeton University Press. ISBN 978-0691140346.
Advanced texts
Heitler, W. (1953). The Quantum Theory of Radiation. Dover Publications, Inc. ISBN 0-486-64558-4.
Umezawa, H. (1956). Quantum Field Theory. North Holland Publishing.
Barton, G. (1963). Introduction to Advanced Field Theory. Interscience Publishers.
Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-46946-3.
Bogoliubov, N.; Logunov, A.A.; Oksak, A.I.; Todorov, I.T. (1990). General Principles of Quantum Field Theory. Kluwer Academic Publishers. ISBN 978-0-7923-0540-8.
Weinberg, S. (1995). The Quantum Theory of Fields. Vol. 1. Cambridge University Press. ISBN 978-0521550017.
== External links ==
Media related to Quantum field theory at Wikimedia Commons
"Quantum field theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stanford Encyclopedia of Philosophy: "Quantum Field Theory", by Meinard Kuhlmann.
Siegel, Warren (2005). Fields. arXiv:hep-th/9912205.
Quantum Field Theory by P. J. Mulders | Wikipedia/Quantum_field_theory |
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by best and simpler will depend on the application.
A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials.
One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations.
The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function.
Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment.
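As an illustration of this domain-reduction idea, here is a minimal Python sketch for exp (not taken from any actual library): the argument is reduced to a small interval around 0 using exp(x) = 2^k · exp(r), and a plain degree-6 Taylor polynomial stands in for the tuned minimax polynomial a real library would use.

```python
import math

def exp_approx(x):
    # Range reduction: write x = k*ln(2) + r with |r| <= ln(2)/2 ~ 0.347.
    k = round(x / math.log(2))
    r = x - k * math.log(2)
    # Degree-6 Taylor polynomial for exp(r), evaluated in Horner form.
    # A production library would use a minimax polynomial here instead.
    p = 1 + r * (1 + r * (1/2 + r * (1/6 + r * (1/24 + r * (1/120 + r / 720)))))
    return math.ldexp(p, k)  # p * 2**k

# Relative error stays below 1e-6 over a wide range of arguments.
print(abs(exp_approx(3.7) - math.exp(3.7)) / math.exp(3.7))
```

Because the polynomial only ever sees |r| ≤ ln(2)/2, a low degree suffices; approximating exp over a wide interval directly would require a far higher degree for the same accuracy.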
== Optimal polynomials ==
Once the domain (typically an interval) and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of
{\displaystyle \mid P(x)-f(x)\mid }
, where P(x) is the approximating polynomial, f(x) is the actual function, and x varies over the chosen interval. For well-behaved functions, there exists an Nth-degree polynomial that will lead to an error curve that oscillates back and forth between
{\displaystyle +\varepsilon } and {\displaystyle -\varepsilon }
a total of N+2 times, giving a worst-case error of
{\displaystyle \varepsilon }
. It is seen that there exists an Nth-degree polynomial that can interpolate N+1 points in a curve. That such a polynomial is always optimal is asserted by the equioscillation theorem. It is possible to make contrived functions f(x) for which no such polynomial exists, but these occur rarely in practice.
For example, the graphs shown to the right show the error in approximating log(x) and exp(x) for N = 4. The red curves, for the optimal polynomial, are level, that is, they oscillate between
{\displaystyle +\varepsilon } and {\displaystyle -\varepsilon }
exactly. In each case, the number of extrema is N+2, that is, 6. Two of the extrema are at the end points of the interval, at the left and right edges of the graphs.
To prove this is true in general, suppose P is a polynomial of degree N having the property described, that is, it gives rise to an error function that has N + 2 extrema, of alternating signs and equal magnitudes. The red graph to the right shows what this error function might look like for N = 4. Suppose Q(x) (whose error function is shown in blue to the right) is another N-degree polynomial that is a better approximation to f than P. In particular, Q is closer to f than P for each value xi where an extreme of P−f occurs, so
{\displaystyle |Q(x_{i})-f(x_{i})|<|P(x_{i})-f(x_{i})|.}
When a maximum of P−f occurs at xi, then
{\displaystyle Q(x_{i})-f(x_{i})\leq |Q(x_{i})-f(x_{i})|<|P(x_{i})-f(x_{i})|=P(x_{i})-f(x_{i}),}
And when a minimum of P−f occurs at xi, then
{\displaystyle f(x_{i})-Q(x_{i})\leq |Q(x_{i})-f(x_{i})|<|P(x_{i})-f(x_{i})|=f(x_{i})-P(x_{i}).}
So, as can be seen in the graph, [P(x) − f(x)] − [Q(x) − f(x)] must alternate in sign for the N + 2 values of xi. But [P(x) − f(x)] − [Q(x) − f(x)] reduces to P(x) − Q(x) which is a polynomial of degree N. This function changes sign at least N+1 times so, by the Intermediate value theorem, it has N+1 zeroes, which is impossible for a polynomial of degree N.
== Chebyshev approximation ==
One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree.
This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions.
If one calculates the coefficients in the Chebyshev expansion for a function:
{\displaystyle f(x)\sim \sum _{i=0}^{\infty }c_{i}T_{i}(x)}
and then cuts off the series after the
{\displaystyle T_{N}}
term, one gets an Nth-degree polynomial approximating f(x).
The reason this polynomial is nearly optimal is that, for functions with rapidly converging power series, if the series is cut off after some term, the total error arising from the cutoff is close to the first term after the cutoff. That is, the first term after the cutoff dominates all later terms. The same is true if the expansion is in terms of Chebyshev polynomials. If a Chebyshev expansion is cut off after
{\displaystyle T_{N}}
, the error will take a form close to a multiple of
{\displaystyle T_{N+1}}
. The Chebyshev polynomials have the property that they are level – they oscillate between +1 and −1 in the interval [−1, 1].
{\displaystyle T_{N+1}}
has N+2 level extrema. This means that the error between f(x) and its Chebyshev expansion out to
{\displaystyle T_{N}}
is close to a level function with N+2 extrema, so it is close to the optimal Nth-degree polynomial.
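This near-optimality is easy to check numerically. The sketch below uses NumPy's Chebyshev utilities (`chebinterpolate` interpolates at Chebyshev points, which closely tracks the truncated expansion) to approximate exp on [−1, 1] with N = 4:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 4
c = C.chebinterpolate(np.exp, N)       # degree-4 Chebyshev approximation of exp
x = np.linspace(-1.0, 1.0, 2001)
err = np.exp(x) - C.chebval(x, c)      # pointwise error on a fine grid

# The maximum error is on the order of 1e-3, close to the minimax error
# achievable by any degree-4 polynomial on this interval.
print(np.max(np.abs(err)))
```

The error curve oscillates with N + 2 = 6 extrema of nearly equal magnitude, which is exactly the level behavior described above.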
In the graphs above, the blue error function is sometimes better than (inside of) the red function, but sometimes worse, meaning that it is not quite the optimal polynomial. The discrepancy is less serious for the exp function, which has an extremely rapidly converging power series, than for the log function.
Chebyshev approximation is the basis for Clenshaw–Curtis quadrature, a numerical integration technique.
== Remez's algorithm ==
The Remez algorithm (sometimes spelled Remes) is used to produce an optimal polynomial P(x) approximating a given function f(x) over a given interval. It is an iterative algorithm that converges to a polynomial that has an error function with N+2 level extrema. By the theorem above, that polynomial is optimal.
Remez's algorithm uses the fact that one can construct an Nth-degree polynomial that leads to level and alternating error values, given N+2 test points.
Given N+2 test points
{\displaystyle x_{1}}, {\displaystyle x_{2}}, ... {\displaystyle x_{N+2}} (where {\displaystyle x_{1}} and {\displaystyle x_{N+2}} are presumably the end points of the interval of approximation), these equations need to be solved:
{\displaystyle {\begin{aligned}P(x_{1})-f(x_{1})&=+\varepsilon \\P(x_{2})-f(x_{2})&=-\varepsilon \\P(x_{3})-f(x_{3})&=+\varepsilon \\&\ \ \vdots \\P(x_{N+2})-f(x_{N+2})&=\pm \varepsilon .\end{aligned}}}
The right-hand sides alternate in sign.
That is,
{\displaystyle {\begin{aligned}P_{0}+P_{1}x_{1}+P_{2}x_{1}^{2}+P_{3}x_{1}^{3}+\dots +P_{N}x_{1}^{N}-f(x_{1})&=+\varepsilon \\P_{0}+P_{1}x_{2}+P_{2}x_{2}^{2}+P_{3}x_{2}^{3}+\dots +P_{N}x_{2}^{N}-f(x_{2})&=-\varepsilon \\&\ \ \vdots \end{aligned}}}
Since {\displaystyle x_{1}}, ..., {\displaystyle x_{N+2}} were given, all of their powers are known, and {\displaystyle f(x_{1})}, ..., {\displaystyle f(x_{N+2})} are also known. That means that the above equations are just N+2 linear equations in the N+2 variables {\displaystyle P_{0}}, {\displaystyle P_{1}}, ..., {\displaystyle P_{N}}, and {\displaystyle \varepsilon }. Given the test points {\displaystyle x_{1}}, ..., {\displaystyle x_{N+2}}, one can solve this system to get the polynomial P and the number {\displaystyle \varepsilon }.
The graph below shows an example of this, producing a fourth-degree polynomial approximating
{\displaystyle e^{x}}
over [−1, 1]. The test points were set at
−1, −0.7, −0.1, +0.4, +0.9, and 1. Those values are shown in green. The resultant value of
{\displaystyle \varepsilon } is 4.43 × 10−4.
The error graph does indeed take on the values
{\displaystyle \pm \varepsilon }
at the six test points, including the end points, but those points are not extrema. If the four interior test points had been extrema (that is, if the function P(x) − f(x) had maxima or minima there), the polynomial would be optimal.
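The linear system above is straightforward to set up and solve. The following NumPy sketch reproduces the worked example (degree 4, exp on [−1, 1], with the six test points listed above):

```python
import numpy as np

N = 4
xs = np.array([-1.0, -0.7, -0.1, 0.4, 0.9, 1.0])   # the N+2 test points
signs = (-1.0) ** np.arange(N + 2)                  # alternating +1, -1, +1, ...

# Row i encodes: P_0 + P_1 x_i + ... + P_N x_i^N - (+/-)eps = f(x_i)
A = np.hstack([np.vander(xs, N + 1, increasing=True), -signs[:, None]])
b = np.exp(xs)
sol = np.linalg.solve(A, b)
coeffs, eps = sol[:-1], sol[-1]
print(eps)   # close to the 4.43e-4 quoted above
```

The unknown vector contains the N + 1 polynomial coefficients and ε itself, so a single dense solve of an (N+2) × (N+2) system yields both.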
The second step of Remez's algorithm consists of moving the test points to the approximate locations where the error function had its actual local maxima or minima. For example, one can tell from looking at the graph that the point at −0.1 should have been at about −0.28. The way to do this in the algorithm is to use a single round of Newton's method. Since one knows the first and second derivatives of P(x) − f(x), one can calculate approximately how far a test point has to be moved so that the derivative will be zero.
Calculating the derivatives of a polynomial is straightforward. One must also be able to calculate the first and second derivatives of f(x). Remez's algorithm requires an ability to calculate
{\displaystyle f(x)}, {\displaystyle f'(x)}, and {\displaystyle f''(x)}
to extremely high precision. The entire algorithm must be carried out to higher precision than the desired precision of the result.
After moving the test points, the linear equation part is repeated, getting a new polynomial, and Newton's method is used again to move the test points again. This sequence is continued until the result converges to the desired accuracy. The algorithm converges very rapidly. Convergence is quadratic for well-behaved functions—if the test points are within
{\displaystyle 10^{-15}}
of the correct result, they will be approximately within
{\displaystyle 10^{-30}}
of the correct result after the next round.
Remez's algorithm is typically started by choosing the extrema of the Chebyshev polynomial
{\displaystyle T_{N+1}}
as the initial points, since the final error function will be similar to that polynomial.
== Main journals ==
Journal of Approximation Theory
Constructive Approximation
East Journal on Approximations
== See also ==
== References ==
Achiezer (Akhiezer), N.I. (2013) [1956]. Theory of approximation. Translated by Hyman, C.J. Dover. ISBN 978-0-486-15313-1. OCLC 1067500225.
Timan, A.F. (2014) [1963]. Theory of approximation of functions of a real variable. International Series in Pure and Applied Mathematics. Vol. 34. Elsevier. ISBN 978-1-4831-8481-4.
Hastings, Jr., C. (2015) [1955]. Approximations for Digital Computers. Princeton University Press. ISBN 978-1-4008-7559-7.
Hart, J.F.; Cheney, E.W.; Lawson, C.L.; Maehly, H.J.; Mesztenyi, C.K.; Rice, Jr., J.R.; Thacher, H.C.; Witzgall, C. (1968). Computer Approximations. Wiley. OCLC 0471356301.
Fox, L.; Parker, I.B. (1968). Chebyshev Polynomials in Numerical Analysis. Oxford mathematical handbooks. Oxford University Press. ISBN 978-0-19-859614-1. OCLC 9036207.
Press, WH; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007). "§5.8 Chebyshev Approximation". Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8.
Cody, Jr., W.J.; Waite, W. (1980). Software Manual for the Elementary Functions. Prentice-Hall. ISBN 0-13-822064-6. OCLC 654695035.
Remes (Remez), E. (1934). "Sur le calcul effectif des polynomes d'approximation de Tschebyschef". C. R. Acad. Sci. (in French). 199: 337–340.
Steffens, K.-G. (2006). Anastassiou, George A. (ed.). The History of Approximation Theory: From Euler to Bernstein. Birkhauser. doi:10.1007/0-8176-4475-X. ISBN 0-8176-4353-2.
Erdélyi, T. (2008). "Extensions of the Bloch-Pólya theorem on the number of distinct real zeros of polynomials". Journal de théorie des nombres de Bordeaux. 20: 281–7. doi:10.5802/jtnb.627.
Erdélyi, T. (2009). "The Remez inequality for linear combinations of shifted Gaussians". Mathematical Proceedings of the Cambridge Philosophical Society. 146 (3): 523–530. doi:10.1017/S0305004108001849 (inactive February 25, 2025).{{cite journal}}: CS1 maint: DOI inactive as of February 2025 (link)
Trefethen, L.N. (2020). Approximation theory and approximation practice. SIAM. ISBN 978-1-61197-594-9. Ch. 1–6 of 2013 edition
== External links ==
History of Approximation Theory (HAT)
Surveys in Approximation Theory (SAT) | Wikipedia/Approximation_theory |
In physics, Lagrangian mechanics is a formulation of classical mechanics founded on d'Alembert's principle of virtual work. It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique.
Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function
{\textstyle L}
within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively.
The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (specifically, a maximum, minimum, or saddle point) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
== Introduction ==
Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach is nightmarishly complicated. For example, in calculating the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraint forces (such as those tied to the angular velocity of the torus and the motion of the pearl relative to the torus) make it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems.
In particular, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object. This allows a general form of the Lagrangian (total kinetic energy minus potential energy of the system) to be written down; summing it over all possible paths of motion of the particles yields a formula for the 'action', which is minimized to give a generalized set of equations. This summed quantity is minimized along the path that the particle actually takes. This choice eliminates the need for the constraint force to enter into the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment.
For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2) and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus
{\displaystyle \mathbf {v} _{1}={\frac {d\mathbf {r} _{1}}{dt}},\mathbf {v} _{2}={\frac {d\mathbf {r} _{2}}{dt}},\ldots ,\mathbf {v} _{N}={\frac {d\mathbf {r} _{N}}{dt}}.}
In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law "net force equals mass times acceleration",
{\displaystyle \sum \mathbf {F} =m{\frac {d^{2}\mathbf {r} }{dt^{2}}},}
applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for.
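For a single particle those equations can be integrated directly. A minimal sketch (classical fourth-order Runge–Kutta, with illustrative initial conditions in uniform gravity) compares the numerical trajectory with the analytic one:

```python
import numpy as np

g = np.array([0.0, 0.0, -9.81])        # uniform gravitational acceleration

def deriv(state):
    # state = (r, v); Newton's second law gives dr/dt = v, dv/dt = g.
    return np.concatenate([state[3:], g])

state = np.array([0.0, 0.0, 0.0, 3.0, 0.0, 4.0])   # r0 = 0, v0 = (3, 0, 4)
dt, steps = 1e-3, 1000                              # integrate to t = 1
for _ in range(steps):
    k1 = deriv(state)
    k2 = deriv(state + dt / 2 * k1)
    k3 = deriv(state + dt / 2 * k2)
    k4 = deriv(state + dt * k3)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

analytic = np.array([3.0, 0.0, 4.0 - 9.81 / 2])     # r(1) = v0*t + g*t^2/2
print(np.max(np.abs(state[:3] - analytic)))         # ~0 (RK4 is exact here)
```

For constant acceleration the trajectory is quadratic in time, so RK4 reproduces it to roundoff; forces depending on position or velocity would show the usual truncation error instead.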
=== Lagrangian ===
Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but there is no single expression valid for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by
{\displaystyle L=T-V,}
where
{\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}v_{k}^{2}}
is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the
{\displaystyle N} particles. Each particle labeled {\displaystyle k} has mass {\displaystyle m_{k},}
and vk2 = vk · vk is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself.
Kinetic energy T is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so T = T(v1, v2, ...).
V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ...). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, r2, ..., v1, v2, ...). If there is some external field or external driving force changing with time, the potential changes with time, so most generally V = V(r1, r2, ..., v1, v2, ..., t).
As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g. friction), another function must be introduced alongside the Lagrangian, often referred to as a "Rayleigh dissipation function", to account for the loss of energy.
One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation f1(r, t) = 0, f2(r, t) = 0, ..., fC(r, t) = 0, each of which could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when the constraints involve complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods.
If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian L(r1, r2, ... v1, v2, ... t) is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian L(r1, r2, ... v1, v2, ...) is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates.
With these definitions, Lagrange's equations of the first kind are
where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and
{\displaystyle {\frac {\partial }{\partial \mathbf {r} _{k}}}\equiv \left({\frac {\partial }{\partial x_{k}}},{\frac {\partial }{\partial y_{k}}},{\frac {\partial }{\partial z_{k}}}\right),\quad {\frac {\partial }{\partial {\dot {\mathbf {r} }}_{k}}}\equiv \left({\frac {\partial }{\partial {\dot {x}}_{k}}},{\frac {\partial }{\partial {\dot {y}}_{k}}},{\frac {\partial }{\partial {\dot {z}}_{k}}}\right)}
are each shorthands for a vector of partial derivatives ∂/∂ with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations.
In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by vz,2 = dz2/dt, is just ∂L/∂vz,2; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2).
In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ... qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time:
{\displaystyle \mathbf {r} _{k}=\mathbf {r} _{k}(\mathbf {q} ,t)={\big (}x_{k}(\mathbf {q} ,t),y_{k}(\mathbf {q} ,t),z_{k}(\mathbf {q} ,t),t{\big )}.}
The vector q is a point in the configuration space of the system. The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is
{\displaystyle {\dot {q}}_{j}={\frac {\mathrm {d} q_{j}}{\mathrm {d} t}},\quad \mathbf {v} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}.}
Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so
{\displaystyle T=T(\mathbf {q} ,{\dot {\mathbf {q} }},t).}
With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind
are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, dq/dt, t) gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all; only non-constraint forces need to be accounted for.
Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates.
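As a concrete illustration, the Euler–Lagrange equation can be applied symbolically. The sketch below (using SymPy) derives the equation of motion of a plane pendulum, a standard textbook system not discussed above, from L = T − V with the single generalized coordinate θ:

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)                      # generalized coordinate

T = sp.Rational(1, 2) * m * (l * theta.diff(t))**2   # kinetic energy
V = -m * g * l * sp.cos(theta)                       # potential energy
L = T - V

# Euler-Lagrange equation: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
eom = sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta)
print(sp.simplify(eom))   # m*l**2*theta'' + m*g*l*sin(theta)
```

Setting the printed expression to zero gives the familiar pendulum equation θ'' + (g/l) sin θ = 0; the tension in the rod, a constraint force, never appears in the derivation.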
== From Newtonian to Lagrangian mechanics ==
=== Newton's laws ===
For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation
{\displaystyle \mathbf {F} =m\mathbf {a} ,}
where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when t = 0.
Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. In a set of curvilinear coordinates ξ = (ξ1, ξ2, ξ3), the law in tensor index notation is the "Lagrangian form"
{\displaystyle F^{a}=m\left({\frac {\mathrm {d} ^{2}\xi ^{a}}{\mathrm {d} t^{2}}}+\Gamma ^{a}{}_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}\right)=g^{ak}\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\xi }}^{k}}}-{\frac {\partial T}{\partial \xi ^{k}}}\right),\quad {\dot {\xi }}^{a}\equiv {\frac {\mathrm {d} \xi ^{a}}{\mathrm {d} t}},}
where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind,
{\displaystyle T={\frac {1}{2}}mg_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}}
is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates.
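The curvilinear form above can be checked symbolically in a simple case. The SymPy sketch below uses 2D polar coordinates (r, φ), where the metric is diag(1, r²); for the radial component g^{rr} = 1, so the bracketed expression alone should reproduce m(r″ − r φ′²), the radial acceleration with its centripetal term:

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
phi = sp.Function('phi')(t)

# Kinetic energy in polar coordinates: T = (m/2)(r'^2 + r^2 phi'^2)
T = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * phi.diff(t)**2)

# Radial component of d/dt(dT/dxi') - dT/dxi (here g^{rr} = 1)
Fr = sp.diff(T.diff(r.diff(t)), t) - T.diff(r)
print(sp.simplify(Fr))   # m*(r'' - r*phi'^2)
```

The Christoffel-symbol term Γ^r_{φφ} = −r appears automatically as the centripetal contribution, without ever being computed explicitly.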
It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, F = 0, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces F ≠ 0, the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense.
However, we still need to know the total resultant force F acting on the particle, which is the sum of the resultant non-constraint force N and the resultant constraint force C,
{\displaystyle \mathbf {F} =\mathbf {C} +\mathbf {N} .}
The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations.
The constraint forces can either be eliminated from the equations of motion, so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion.
=== D'Alembert's principle ===
A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts that for N particles the virtual work, i.e. the work along a virtual displacement δrk, is zero:
{\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}+\mathbf {C} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.}
The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint).
Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero:
{\displaystyle \sum _{k=1}^{N}\mathbf {C} _{k}\cdot \delta \mathbf {r} _{k}=0,}
so that
{\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.}
Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion.
=== Equations of motion from D'Alembert's principle ===
If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk = (δxk, δyk, δzk). Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential,
{\displaystyle \delta \mathbf {r} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}.}
There is no partial derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints at an instant of time.
The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces
{\displaystyle Q_{j}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}},}
so that
{\displaystyle \sum _{k=1}^{N}\mathbf {N} _{k}\cdot \delta \mathbf {r} _{k}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot \sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{n}Q_{j}\delta q_{j}.}
This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result:
{\displaystyle \sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}.}
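This identity can be verified symbolically for a concrete case; a minimal SymPy sketch (an illustrative check, not part of the original derivation) for a single particle in 2D polar coordinates (r, θ):

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

# Position, velocity, and acceleration of one particle in 2D polar coordinates
pos = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
vel = pos.diff(t)
acc = vel.diff(t)
T = sp.Rational(1, 2) * m * vel.dot(vel)  # kinetic energy

# Check m a . dr/dq_j = d/dt(dT/dq'_j) - dT/dq_j for each generalized coordinate
for q in (r, theta):
    qdot = sp.diff(q, t)
    lhs = (m * acc).dot(pos.diff(q))
    rhs = sp.diff(sp.diff(T, qdot), t) - sp.diff(T, q)
    assert sp.simplify(lhs - rhs) == 0
```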
Now D'Alembert's principle is in the generalized coordinates as required,
{\displaystyle \sum _{j=1}^{n}\left[Q_{j}-\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right)\right]\delta q_{j}=0,}
and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion,
{\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}}
These equations are equivalent to Newton's laws for the non-constraint forces. The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle.
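As an illustration (a hypothetical single-coordinate system, not taken from the text), for one generalized coordinate the right-hand side reduces to Newton's second law, so a constant generalized force Q = F₀ gives F₀ = m q̈; a SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')
m, F0 = sp.symbols('m F_0', positive=True)
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

# Kinetic energy in one generalized coordinate (illustrative system)
T = sp.Rational(1, 2) * m * qdot**2

# Q = d/dt(dT/dq') - dT/dq; here the right-hand side is just m q''
rhs = sp.diff(sp.diff(T, qdot), t) - sp.diff(T, q)
assert sp.simplify(rhs - m * sp.diff(q, t, 2)) == 0
# so a constant generalized force Q = F_0 yields F_0 = m q''
```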
=== Euler–Lagrange equations and Hamilton's principle ===
For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that
{\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial V}{\partial {\dot {q}}_{j}}}-{\frac {\partial V}{\partial q_{j}}},}
equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion
{\displaystyle {\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0.}
However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations.
The Euler–Lagrange equations also follow from the calculus of variations. The variation of the Lagrangian is
{\displaystyle \delta L=\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta {\dot {q}}_{j}\right),\quad \delta {\dot {q}}_{j}\equiv \delta {\frac {\mathrm {d} q_{j}}{\mathrm {d} t}}\equiv {\frac {\mathrm {d} (\delta q_{j})}{\mathrm {d} t}},}
which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian,
{\displaystyle {\begin{aligned}\int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t&=\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)\,\mathrm {d} t\\&=\sum _{j=1}^{n}\left[{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right]_{t_{1}}^{t_{2}}+\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)\delta q_{j}\,\mathrm {d} t.\end{aligned}}}
Now, if the condition δqj(t1) = δqj(t2) = 0 holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent and arbitrary, the only way for the definite integral to vanish is if the integrand equals zero; each of the coefficients of δqj must therefore be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle:
{\displaystyle \int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t=0.}
The time integral of the Lagrangian is another quantity called the action, defined as
{\displaystyle S=\int _{t_{1}}^{t_{2}}L\,\mathrm {d} t,}
which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as [angular momentum], [energy]·[time], or [length]·[momentum]. With this definition Hamilton's principle is
{\displaystyle \delta S=0.}
Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles.
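The stationary-action picture can be illustrated numerically (an illustrative sketch, not from the text): for a unit-mass, unit-frequency harmonic oscillator on an interval shorter than half a period, the extremal path is a true minimum, so any endpoint-preserving perturbation raises the discretized action.

```python
import numpy as np

def action(x, t):
    """Discretized action S = integral of L dt, with L = x'^2/2 - x^2/2."""
    dt = t[1] - t[0]
    v = np.gradient(x, dt)                        # finite-difference velocity
    L = 0.5 * v**2 - 0.5 * x**2
    return np.sum(0.5 * (L[1:] + L[:-1])) * dt    # trapezoidal rule

t = np.linspace(0.0, 1.0, 2001)
true_path = np.sin(t)                 # solves x'' = -x with these endpoints fixed
bump = np.sin(np.pi * t)              # vanishes at both endpoints
S_true = action(true_path, t)

# Endpoint-fixed perturbations increase the action near this extremal
for eps in (0.1, -0.1, 0.3):
    assert action(true_path + eps * bump, t) > S_true
```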
Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn lead to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others.
Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first order differentials in the coordinates. The resulting constraint equation can be rearranged into a first-order differential equation. This will not be given here.
=== Lagrange multipliers and constraints ===
The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles,
{\displaystyle \int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.}
Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed δrk(t1) = δrk(t2) = 0 for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation fi(rk, t) = 0 by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian
{\displaystyle L'=L(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,{\dot {\mathbf {r} }}_{1},{\dot {\mathbf {r} }}_{2},\ldots ,t)+\sum _{i=1}^{C}\lambda _{i}(t)f_{i}(\mathbf {r} _{k},t).}
The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives
{\displaystyle \int _{t_{1}}^{t_{2}}\delta L'\mathrm {d} t=\int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.}
The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement
{\displaystyle {\frac {\partial L'}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\mathbf {r} }}_{k}}}=0\quad \Rightarrow \quad {\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
which are Lagrange's equations of the first kind. Also, the Euler–Lagrange equations for the multipliers λi return the constraint equations
{\displaystyle {\frac {\partial L'}{\partial \lambda _{i}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\lambda }}_{i}}}=0\quad \Rightarrow \quad f_{i}(\mathbf {r} _{k},t)=0.}
For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian L = T − V gives
{\displaystyle \underbrace {{\frac {\partial T}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\mathbf {r} }}_{k}}}} _{-\mathbf {F} _{k}}+\underbrace {-{\frac {\partial V}{\partial \mathbf {r} _{k}}}} _{\mathbf {N} _{k}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
and identifying the derivatives of the kinetic energy as (the negative of) the resultant force, and the derivatives of the potential as the non-constraint force, it follows that the constraint forces are
{\displaystyle \mathbf {C} _{k}=\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}},}
thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers.
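For instance (an illustrative example, not from the text), a planar pendulum with holonomic constraint f(x, y) = x² + y² − ℓ² = 0 yields a constraint force along (x, y), i.e. radial, as expected of string tension, and it does no virtual work:

```python
import sympy as sp

x, y, ell, lam = sp.symbols('x y ell lambda', real=True)

# Holonomic constraint for a planar pendulum of length ell
f = x**2 + y**2 - ell**2

# Constraint force C = lambda * df/dr points along (x, y): radial, like tension
C = lam * sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
assert C == sp.Matrix([2 * lam * x, 2 * lam * y])

# C is orthogonal to displacements tangent to the constraint circle,
# e.g. (-y, x), so the constraint force does no virtual work
tangent = sp.Matrix([-y, x])
assert sp.simplify(C.dot(tangent)) == 0
```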
== Properties of the Lagrangian ==
=== Non-uniqueness ===
The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian L′ = aL + b will describe the same motion as L. If one restricts as above to trajectories q over a given time interval [tst, tfin] and fixed end points Pst = q(tst) and Pfin = q(tfin), then two Lagrangians describing the same system can differ by the "total time derivative" of a function f(q, t):
{\displaystyle L'(\mathbf {q} ,{\dot {\mathbf {q} }},t)=L(\mathbf {q} ,{\dot {\mathbf {q} }},t)+{\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}},}
where
{\textstyle {\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}}}
means
{\textstyle {\frac {\partial f(\mathbf {q} ,t)}{\partial t}}+\sum _{i}{\frac {\partial f(\mathbf {q} ,t)}{\partial q_{i}}}{\dot {q}}_{i}.}
Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via
{\displaystyle {\begin{aligned}S'[\mathbf {q} ]&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L'(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt\\&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt+\int _{t_{\text{st}}}^{t_{\text{fin}}}{\frac {\mathrm {d} f(\mathbf {q} (t),t)}{\mathrm {d} t}}\,dt\\&=S[\mathbf {q} ]+f(P_{\text{fin}},t_{\text{fin}})-f(P_{\text{st}},t_{\text{st}}),\end{aligned}}}
with the last two terms f(Pfin, tfin) and f(Pst, tst) independent of q.
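This non-uniqueness is easy to verify symbolically; a minimal SymPy sketch with a harmonic Lagrangian and an arbitrary choice f(q, t) = q²t (both illustrative assumptions):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

def euler_lagrange(L):
    """d/dt(dL/dq') - dL/dq for the single coordinate q."""
    return sp.simplify(sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q))

L1 = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2
L2 = L1 + sp.diff(q**2 * t, t)   # add the total time derivative of f(q, t) = q^2 t

# Both Lagrangians yield the same equation of motion
assert sp.simplify(euler_lagrange(L1) - euler_lagrange(L2)) == 0
```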
=== Invariance under point transformations ===
Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation Q = Q(q, t) which is invertible as q = q(Q, t), the new Lagrangian L′ is a function of the new coordinates and similarly for the constraints
{\displaystyle {\begin{aligned}L'(\mathbf {Q} ,{\dot {\mathbf {Q} }},t)&=L(\mathbf {q} (\mathbf {Q} ,t),{\dot {\mathbf {q} }}(\mathbf {Q} ,{\dot {\mathbf {Q} }},t),t),\\\phi _{j}'(\mathbf {Q} ,t)&=\phi _{j}(\mathbf {q} (\mathbf {Q} ,t),t)\end{aligned}}}
and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation;
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {Q}}_{i}}}={\frac {\partial L'}{\partial Q_{i}}}+\sum _{j}\lambda _{j}{\frac {\partial \phi '_{j}}{\partial Q_{i}}}.}
=== Cyclic coordinates and conserved momenta ===
An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by
{\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}.}
If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that
{\displaystyle {\dot {p}}_{i}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial L}{\partial q_{i}}}=0}
and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable".
For example, a system may have a Lagrangian
{\displaystyle L(r,\theta ,{\dot {s}},{\dot {z}},{\dot {r}},{\dot {\theta }},{\dot {\phi }},t),}
where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta
{\displaystyle p_{z}={\frac {\partial L}{\partial {\dot {z}}}},\quad p_{s}={\frac {\partial L}{\partial {\dot {s}}}},\quad p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}},}
are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is a translational momentum along the curve along which s is measured, and pφ is an angular momentum in the plane in which the angle φ is measured. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved.
=== Energy ===
Given a Lagrangian L, the Hamiltonian of the corresponding mechanical system is, by definition,
{\displaystyle H={\biggl (}\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}{\biggr )}-L.}
This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing the position vector:
{\displaystyle \mathbf {r} =\mathbf {r} (q_{1},\cdots ,q_{n}).}
From:
{\displaystyle T={\frac {m}{2}}v^{2}={\frac {m}{2}}\sum _{i,j}\left({\frac {\partial {\vec {r}}}{\partial q_{i}}}{\dot {q}}_{i}\right)\cdot \left({\frac {\partial {\vec {r}}}{\partial q_{j}}}{\dot {q}}_{j}\right)={\frac {m}{2}}\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}}
{\displaystyle \sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial L}{\partial {\dot {q}}_{k}}}=\sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial T}{\partial {\dot {q}}_{k}}}={\frac {m}{2}}\left(2\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}\right)=2T}
{\displaystyle H=\left(\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}\right)-L=2T-(T-V)=T+V=E}
where
{\displaystyle a_{ij}={\frac {\partial \mathbf {r} }{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} }{\partial q_{j}}}}
is a symmetric matrix that is defined for the derivation.
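The computation can be reproduced symbolically for a concrete case; a sketch for a one-dimensional harmonic oscillator in natural coordinates (an illustrative choice of system):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

T = sp.Rational(1, 2) * m * qdot**2   # kinetic energy
V = sp.Rational(1, 2) * k * q**2      # potential energy
L = T - V

# H = q' * dL/dq' - L; for natural coordinates this equals the energy T + V
H = qdot * sp.diff(L, qdot) - L
assert sp.simplify(H - (T + V)) == 0
```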
==== Invariance under coordinate transformations ====
At every time instant t, the energy is invariant under configuration space coordinate changes q → Q, i.e. (using natural coordinates)
{\displaystyle E(\mathbf {q} ,{\dot {\mathbf {q} }},t)=E(\mathbf {Q} ,{\dot {\mathbf {Q} }},t).}
Besides this result, the proof below shows that, under such change of coordinates, the derivatives {\displaystyle \partial L/\partial {\dot {q}}_{i}} change as coefficients of a linear form.
==== Conservation ====
In Lagrangian mechanics, the system is closed if and only if its Lagrangian L does not explicitly depend on time. The energy conservation law states that the energy E of a closed system is an integral of motion.
More precisely, let q = q(t) be an extremal. (In other words, q satisfies the Euler–Lagrange equations). Taking the total time-derivative of L along this extremal and using the EL equations leads to
{\displaystyle {\begin{aligned}{\frac {dL}{dt}}&={\dot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}+{\frac {\partial L}{\partial t}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right){\dot {\mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}-{\dot {L}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\mathbf {\dot {q}} -L\right)={\frac {dH}{dt}}\end{aligned}}}
If the Lagrangian L does not explicitly depend on time, then ∂L/∂t = 0 and H does not vary with the time evolution of the particle; it is indeed an integral of motion, meaning that
{\displaystyle H(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)={\text{constant of time}}.}
Hence, if the chosen coordinates were natural coordinates, the energy is conserved.
==== Kinetic and potential energies ====
Under all these circumstances, the constant
{\displaystyle E=T+V}
is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates.
=== Mechanical similarity ===
If the potential energy is a homogeneous function of the coordinates and independent of time, and all position vectors are scaled by the same nonzero constant α, rk′ = αrk, so that
{\displaystyle V(\alpha \mathbf {r} _{1},\alpha \mathbf {r} _{2},\ldots ,\alpha \mathbf {r} _{N})=\alpha ^{N}V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N})}
and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)2. The entire Lagrangian has been scaled by the same factor if
{\displaystyle {\frac {\alpha ^{2}}{\beta ^{2}}}=\alpha ^{N}\quad \Rightarrow \quad \beta =\alpha ^{1-{\frac {N}{2}}}.}
Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios
{\displaystyle {\frac {t'}{t}}=\left({\frac {l'}{l}}\right)^{1-{\frac {N}{2}}}.}
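For example, gravity has a potential homogeneous of degree N = −1, so β = α^{3/2}: geometrically similar orbits obey t′/t = (l′/l)^{3/2}, which is Kepler's third law. A short symbolic check:

```python
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
N = sp.Integer(-1)             # V ~ 1/r is homogeneous of degree -1
beta = alpha**(1 - N / 2)      # time-scaling factor from the relation above
# beta = alpha**(3/2): periods scale as the 3/2 power of orbit size
assert sp.simplify(beta - alpha**sp.Rational(3, 2)) == 0
```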
=== Interacting particles ===
For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems:
{\displaystyle L=L_{A}+L_{B}.}
If they do interact this is not possible. In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction,
{\displaystyle L=L_{A}+L_{B}+L_{AB}.}
This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above.
The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added.
=== Consequences of singular Lagrangians ===
From the Euler–Lagrange equations, it follows that:
{\displaystyle {\begin{aligned}&{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&\sum _{j}{\frac {\partial ^{2}L}{\partial q_{j}\partial {\dot {q}}_{i}}}{\dot {q}}_{j}+\sum _{j}{\frac {\partial ^{2}L}{\partial {\dot {q}}_{j}\partial {\dot {q}}_{i}}}{\ddot {q}}_{j}+{\frac {\partial ^{2}L}{\partial t\,\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&\sum _{j}W_{ij}(q,{\dot {q}},t)\,{\ddot {q}}_{j}={\frac {\partial L}{\partial q_{i}}}-{\frac {\partial ^{2}L}{\partial t\,\partial {\dot {q}}_{i}}}-\sum _{j}{\frac {\partial ^{2}L}{\partial q_{j}\partial {\dot {q}}_{i}}}{\dot {q}}_{j},\end{aligned}}}
where the matrix is defined as
{\displaystyle W_{ij}={\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial {\dot {q}}_{j}}}.}
If the matrix W is non-singular, the above equations can be solved to express {\displaystyle {\ddot {q}}} as a function of {\displaystyle ({\dot {q}},q,t)}. If the matrix is non-invertible, it is not possible to express all the {\displaystyle {\ddot {q}}}'s as functions of {\displaystyle ({\dot {q}},q,t)}; moreover, the Hamiltonian equations of motion will not take the standard form.
== Examples ==
The following examples apply Lagrange's equations of the second kind to mechanical problems.
=== Conservative force ===
A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential,
{\displaystyle \mathbf {F} =-{\boldsymbol {\nabla }}V(\mathbf {r} ).}
If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates.
==== Cartesian coordinates ====
The Lagrangian of the particle can be written
{\displaystyle L(x,y,z,{\dot {x}},{\dot {y}},{\dot {z}})={\frac {1}{2}}m({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})-V(x,y,z).}
The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)={\frac {\partial L}{\partial x}},}
with derivatives
{\displaystyle {\frac {\partial L}{\partial x}}=-{\frac {\partial V}{\partial x}},\quad {\frac {\partial L}{\partial {\dot {x}}}}=m{\dot {x}},\quad {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)=m{\ddot {x}},}
hence
{\displaystyle m{\ddot {x}}=-{\frac {\partial V}{\partial x}},}
and similarly for the y and z coordinates. Collecting the equations in vector form we find
{\displaystyle m{\ddot {\mathbf {r} }}=-{\boldsymbol {\nabla }}V}
which is Newton's second law of motion for a particle subject to a conservative force.
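The same steps can be carried out with a computer algebra system; a SymPy sketch for the x coordinate, with an illustrative potential V = ½kx² (the specific potential is an assumption, not from the text):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

V = sp.Rational(1, 2) * k * x**2                  # illustrative potential
L = sp.Rational(1, 2) * m * xdot**2 - V

# Euler-Lagrange: d/dt(dL/dx') - dL/dx = 0  =>  m x'' = -dV/dx = -k x
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x)
assert sp.simplify(eom - (m * sp.diff(x, t, 2) + k * x)) == 0
```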
==== Polar coordinates in 2D and 3D ====
Using the spherical coordinates (r, θ, φ) as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is
{\displaystyle L={\frac {m}{2}}({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}+r^{2}\sin ^{2}\theta \,{\dot {\varphi }}^{2})-V(r).}
So, in spherical coordinates, the Euler–Lagrange equations are
{\displaystyle m{\ddot {r}}-mr({\dot {\theta }}^{2}+\sin ^{2}\theta \,{\dot {\varphi }}^{2})+{\frac {\partial V}{\partial r}}=0,}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}{\dot {\theta }})-mr^{2}\sin \theta \cos \theta \,{\dot {\varphi }}^{2}=0,}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}\sin ^{2}\theta \,{\dot {\varphi }})=0.}
The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum
{\displaystyle p_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=mr^{2}\sin ^{2}\theta {\dot {\varphi }},}
in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant.
The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2.
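The conserved momentum conjugate to the cyclic angle can be checked numerically. The sketch below (not part of the original article; the potential V(r) = −k/r and all parameter values are illustrative choices) integrates the planar (θ = π/2) equations of motion that follow from this Lagrangian and verifies that p = m r² φ̇ stays constant along the trajectory:

```python
import math

def rk4_step(f, y, h):
    # One classical Runge-Kutta step for y' = f(y).
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def central_force_flow(m=1.0, k=1.0):
    # State y = (r, r_dot, phi, phi_dot); V(r) = -k/r so dV/dr = k/r^2.
    def f(y):
        r, rdot, ph, phdot = y
        return [rdot,
                r * phdot**2 - k / (m * r**2),   # radial EL equation
                phdot,
                -2.0 * rdot * phdot / r]          # from d/dt(m r^2 phdot) = 0
    return f

def angular_momentum(m, y):
    r, _, _, phdot = y
    return m * r**2 * phdot

m, k, h = 1.0, 1.0, 1e-3
f = central_force_flow(m, k)
y = [1.0, 0.1, 0.0, 0.9]          # a mildly eccentric bound orbit
p0 = angular_momentum(m, y)
for _ in range(20000):
    y = rk4_step(f, y, h)
drift = abs(angular_momentum(m, y) - p0)
print(drift < 1e-6)
```

The residual drift is set by the integrator's truncation error, not by the equations themselves.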
=== Pendulum on a movable support ===
Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ from the vertical. The coordinates and velocity components of the pendulum bob are
{\displaystyle {\begin{array}{rll}&x_{\mathrm {pend} }=x+\ell \sin \theta &\quad \Rightarrow \quad {\dot {x}}_{\mathrm {pend} }={\dot {x}}+\ell {\dot {\theta }}\cos \theta \\&y_{\mathrm {pend} }=-\ell \cos \theta &\quad \Rightarrow \quad {\dot {y}}_{\mathrm {pend} }=\ell {\dot {\theta }}\sin \theta .\end{array}}}
The generalized coordinates can be taken to be x and θ. The kinetic energy of the system is then
{\displaystyle T={\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left({\dot {x}}_{\mathrm {pend} }^{2}+{\dot {y}}_{\mathrm {pend} }^{2}\right)}
and the potential energy is
{\displaystyle V=mgy_{\mathrm {pend} }}
giving the Lagrangian
{\displaystyle {\begin{array}{rcl}L&=&T-V\\&=&{\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left[\left({\dot {x}}+\ell {\dot {\theta }}\cos \theta \right)^{2}+\left(\ell {\dot {\theta }}\sin \theta \right)^{2}\right]+mg\ell \cos \theta \\&=&{\frac {1}{2}}\left(M+m\right){\dot {x}}^{2}+m{\dot {x}}\ell {\dot {\theta }}\cos \theta +{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+mg\ell \cos \theta .\end{array}}}
Since x is absent from the Lagrangian, it is a cyclic coordinate. The conserved momentum is
{\displaystyle p_{x}={\frac {\partial L}{\partial {\dot {x}}}}=(M+m){\dot {x}}+m\ell {\dot {\theta }}\cos \theta ,}
and the Lagrange equation for the support coordinate x is
{\displaystyle (M+m){\ddot {x}}+m\ell {\ddot {\theta }}\cos \theta -m\ell {\dot {\theta }}^{2}\sin \theta =0.}
The Lagrange equation for the angle θ is
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[m({\dot {x}}\ell \cos \theta +\ell ^{2}{\dot {\theta }})\right]+m\ell ({\dot {x}}{\dot {\theta }}+g)\sin \theta =0;}
and simplifying
{\displaystyle {\ddot {\theta }}+{\frac {\ddot {x}}{\ell }}\cos \theta +{\frac {g}{\ell }}\sin \theta =0.}
These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: for example, {\displaystyle {\ddot {x}}\to 0} should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while {\displaystyle {\ddot {\theta }}\to 0} should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively.
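The iterative stepping mentioned above can be sketched as follows (not from the original article; the parameter values and the RK4 integrator are arbitrary illustrative choices). Solving the two Lagrange equations for ẍ and θ̈ at each step, the momentum p_x conjugate to the cyclic coordinate x should be conserved:

```python
import math

# Parameters (arbitrary): cart mass M, bob mass m, rod length l, gravity g.
M, m, l, g = 2.0, 1.0, 0.5, 9.81

def accels(th, thdot):
    # Solve the linear system formed by the two Lagrange equations
    # for xddot and thetaddot.
    den = M + m * math.sin(th)**2
    xdd = m * math.sin(th) * (l * thdot**2 + g * math.cos(th)) / den
    thdd = -((M + m) * g * math.sin(th)
             + m * l * thdot**2 * math.sin(th) * math.cos(th)) / (l * den)
    return xdd, thdd

def f(y):                      # y = (x, xdot, theta, thetadot)
    xdd, thdd = accels(y[2], y[3])
    return [y[1], xdd, y[3], thdd]

def rk4(y, h):
    k1 = f(y)
    k2 = f([y[i] + 0.5*h*k1[i] for i in range(4)])
    k3 = f([y[i] + 0.5*h*k2[i] for i in range(4)])
    k4 = f([y[i] + h*k3[i] for i in range(4)])
    return [y[i] + h*(k1[i]+2*k2[i]+2*k3[i]+k4[i])/6 for i in range(4)]

def p_x(y):
    # Conserved momentum conjugate to the cyclic coordinate x.
    return (M + m) * y[1] + m * l * y[3] * math.cos(y[2])

y, h = [0.0, 0.0, 0.6, 0.0], 1e-3   # released at rest, theta = 0.6 rad
p0 = p_x(y)
for _ in range(5000):
    y = rk4(y, h)
print(abs(p_x(y) - p0) < 1e-6)
```

Starting from rest, p_x = 0 throughout: the cart recoils so the total horizontal momentum stays zero.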
=== Two-body central force problem ===
Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates; the separation of the bodies r = r2 − r1 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then
{\displaystyle L=\underbrace {{\frac {1}{2}}M{\dot {\mathbf {R} }}^{2}} _{L_{\text{cm}}}+\underbrace {{\frac {1}{2}}\mu {\dot {\mathbf {r} }}^{2}-V(|\mathbf {r} |)} _{L_{\text{rel}}}}
where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation |r| = |r2 − r1|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel.
The Euler–Lagrange equation for R is simply
{\displaystyle M{\ddot {\mathbf {R} }}=0,}
which states the center of mass moves in a straight line at constant velocity.
Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take r = |r|,
{\displaystyle L_{\text{rel}}={\frac {1}{2}}\mu \left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)-V(r),}
so θ is a cyclic coordinate with the corresponding conserved (angular) momentum
{\displaystyle p_{\theta }={\frac {\partial L_{\text{rel}}}{\partial {\dot {\theta }}}}=\mu r^{2}{\dot {\theta }}=\ell .}
The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is
{\displaystyle \mu r{\dot {\theta }}^{2}-{\frac {dV}{dr}}=\mu {\ddot {r}}.}
This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation,
{\displaystyle \mu {\ddot {r}}=-{\frac {\mathrm {d} V}{\mathrm {d} r}}+{\frac {\ell ^{2}}{\mu r^{3}}}.}
which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dV/dr and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force#Other uses of the term):
{\displaystyle F_{\mathrm {cf} }=\mu r{\dot {\theta }}^{2}={\frac {\ell ^{2}}{\mu r^{3}}}.}
Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated.
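A quick numerical illustration of this one-dimensional picture (a sketch in arbitrary units, not from the original text): for V(r) = −k/r, the inward force −dV/dr and the outward centrifugal term ℓ²/(μr³) balance exactly at the circular-orbit radius r_c = ℓ²/(μk), so a trajectory started there with ṙ = 0 should stay at r_c:

```python
# Integrate the one-dimensional radial equation
#   mu * r'' = -dV/dr + l**2 / (mu * r**3)
# for V(r) = -k/r (illustrative values mu = k = l = 1).
mu, k, l = 1.0, 1.0, 1.0
r_c = l**2 / (mu * k)          # circular-orbit radius, here 1.0

def accel(r):
    # Inward gravitational pull plus outward centrifugal term.
    return (-k / r**2 + l**2 / (mu * r**3)) / mu

r, rdot, h = r_c, 0.0, 1e-3
for _ in range(10000):
    # Velocity Verlet step (symplectic, good for long integrations).
    a = accel(r)
    r += h * rdot + 0.5 * h * h * a
    rdot += 0.5 * h * (a + accel(r))
print(abs(r - r_c) < 1e-9)
```

Since the equilibrium is a minimum of the effective potential, small perturbations would merely produce bounded radial oscillations about r_c.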
If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates (r, θ) and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:
"Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion.
This viewpoint, that fictitious forces originate in the choice of coordinates, is often expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates; for example, Lagrangians in an inertial and in a noninertial frame of reference have been compared in the literature, as have "total" and "updated" Lagrangian formulations. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities, accelerations, and momenta. For brevity, the adjective 'generalized' will be omitted frequently."
It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system.
== Extensions to include non-conservative forces ==
=== Dissipative forces ===
Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom.
In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form:
{\displaystyle D={\frac {1}{2}}\sum _{j=1}^{m}\sum _{k=1}^{m}C_{jk}{\dot {q}}_{j}{\dot {q}}_{k},}
where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then
{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}}-{\frac {\partial D}{\partial {\dot {q}}_{j}}}}
and
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)-{\frac {\partial L}{\partial q_{j}}}+{\frac {\partial D}{\partial {\dot {q}}_{j}}}=0.}
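As a concrete check (illustrative parameters, not from the text), take L = ½mq̇² − ½kq² with D = ½cq̇². The modified Euler–Lagrange equation above then gives the familiar damped oscillator mq̈ + cq̇ + kq = 0, whose numerical solution can be compared with the closed-form underdamped solution:

```python
import math

# Damped oscillator from L = m*qdot^2/2 - k*q^2/2 and D = c*qdot^2/2:
#   m qddot + c qdot + k q = 0   (illustrative parameter values)
m, k, c = 1.0, 4.0, 0.4
gamma = c / (2 * m)
omega_d = math.sqrt(k / m - gamma**2)     # underdamped: k/m > gamma^2

def q_exact(t, q0=1.0):
    # Closed-form solution for q(0) = q0, qdot(0) = 0.
    return math.exp(-gamma * t) * (math.cos(omega_d * t)
           + (gamma / omega_d) * math.sin(omega_d * t)) * q0

def f(y):                                 # y = (q, qdot)
    q, qd = y
    return [qd, -(c * qd + k * q) / m]

def rk4(y, h):
    k1 = f(y)
    k2 = f([y[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = f([y[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = f([y[i] + h*k3[i] for i in range(2)])
    return [y[i] + h*(k1[i]+2*k2[i]+2*k3[i]+k4[i])/6 for i in range(2)]

y, h, t = [1.0, 0.0], 1e-3, 0.0
for _ in range(5000):
    y = rk4(y, h)
    t += h
print(abs(y[0] - q_exact(t)) < 1e-6)
```

The dissipation function removes energy monotonically, which is why the amplitude decays as e^(−γt).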
=== Electromagnetism ===
A test particle is a particle whose mass and charge are assumed to be so small that its effect on the external system is insignificant. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. Not only can the fields form non-conservative potentials, these potentials can also be velocity dependent.
The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential ϕ = ϕ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows:
{\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}\phi -{\frac {\partial \mathbf {A} }{\partial t}},\quad \mathbf {B} ={\boldsymbol {\nabla }}\times \mathbf {A} .}
The Lagrangian of a massive charged test particle in an electromagnetic field
{\displaystyle L={\tfrac {1}{2}}m{\dot {\mathbf {r} }}^{2}+q\,{\dot {\mathbf {r} }}\cdot \mathbf {A} -q\phi ,}
is called minimal coupling. This is a good example of when the common rule of thumb that the Lagrangian is the kinetic energy minus the potential energy is incorrect. Combined with the Euler–Lagrange equation, it produces the Lorentz force law
{\displaystyle m{\ddot {\mathbf {r} }}=q\mathbf {E} +q{\dot {\mathbf {r} }}\times \mathbf {B} }
Under a gauge transformation:
{\displaystyle \mathbf {A} \rightarrow \mathbf {A} +{\boldsymbol {\nabla }}f,\quad \phi \rightarrow \phi -{\dot {f}},}
where f(r,t) is any scalar function of space and time, the aforementioned Lagrangian transforms like:
{\displaystyle L\rightarrow L+q\left({\dot {\mathbf {r} }}\cdot {\boldsymbol {\nabla }}+{\frac {\partial }{\partial t}}\right)f=L+q{\frac {df}{dt}},}
which still produces the same Lorentz force law.
Note that the canonical momentum (conjugate to position r) is the kinetic momentum plus a contribution from the A field (known as the potential momentum):
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=m{\dot {\mathbf {r} }}+q\mathbf {A} .}
This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. From this expression, we can see that the canonical momentum p is not gauge invariant, and therefore not a measurable physical quantity. However, if r is cyclic (i.e. the Lagrangian is independent of the position r), which happens if the ϕ and A fields are uniform, then the canonical momentum p given here is the conserved momentum, while the measurable physical kinetic momentum mv is not.
== Other contexts and formulations ==
The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations.
=== Alternative formulations of classical mechanics ===
A closely related formulation of classical mechanics is Hamiltonian mechanics. The Hamiltonian is defined by
{\displaystyle H=\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-L}
and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta. This doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)).
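A minimal numerical illustration of the Legendre transform (toy potential and values, not from the text): for L = ½mq̇² − V(q), the conjugate momentum is p = ∂L/∂q̇ = mq̇, and H = q̇p − L reduces to kinetic plus potential energy:

```python
# For L(q, qdot) = (1/2) m qdot^2 - V(q), check numerically that
# H = qdot * (dL/dqdot) - L equals kinetic plus potential energy.
m = 2.0
V = lambda q: 0.5 * 3.0 * q**2          # arbitrary illustrative potential

def L(q, qdot):
    return 0.5 * m * qdot**2 - V(q)

def hamiltonian(q, qdot, eps=1e-6):
    # Central finite difference for p = dL/dqdot (exact here,
    # since L is quadratic in qdot).
    p = (L(q, qdot + eps) - L(q, qdot - eps)) / (2 * eps)
    return qdot * p - L(q, qdot)

q, qdot = 0.7, 1.3
expected = 0.5 * m * qdot**2 + V(q)      # T + V
print(abs(hamiltonian(q, qdot) - expected) < 1e-8)
```

For Lagrangians quadratic in the velocities, H coincides with the total energy; in general it need not.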
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, which is not often used in practice but is an efficient formulation for cyclic coordinates.
=== Momentum space formulation ===
The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized coordinate Lagrangian L(q, dq/dt, t) obtains the generalized momenta Lagrangian L′(p, dp/dt, t) in terms of the original Lagrangian, as well as the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system. In practice generalized coordinates are more convenient to use and interpret than generalized momenta.
=== Higher derivatives of generalized coordinates ===
There is no mathematical reason to restrict the derivatives of generalized coordinates to first order only. It is possible to derive modified EL equations for a Lagrangian containing higher-order derivatives; see Euler–Lagrange equation for details. However, from the physical point of view there is an obstacle to including time derivatives higher than first order, which is implied by Ostrogradsky's construction of a canonical formalism for nondegenerate higher-derivative Lagrangians; see Ostrogradsky instability.
=== Optics ===
Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow.
=== Relativistic formulation ===
Lagrangian mechanics can be formulated in special relativity and general relativity. Some features of Lagrangian mechanics are retained in the relativistic theories but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies; however, the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way; it may be possible if a particular frame of reference is singled out.
=== Quantum mechanics ===
In quantum mechanics, action and quantum-mechanical phase are related via the Planck constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions.
In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics.
=== Classical field theory ===
In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system. In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field ϕ(r, t) defined over a region of 3D space. Associated with the field is a Lagrangian density
{\displaystyle {\mathcal {L}}(\phi ,\nabla \phi ,{\dot {\phi }},\mathbf {r} ,t)}
defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"). The Lagrangian is then the volume integral of the Lagrangian density over 3D space
{\displaystyle L(t)=\int {\mathcal {L}}\,\mathrm {d} ^{3}\mathbf {r} }
where d3r is a 3D differential volume element. The Lagrangian is a function of time since the Lagrangian density has implicit space dependence via the fields, and may have explicit spatial dependence, but these are removed in the integral, leaving only time as the variable for the Lagrangian.
=== Noether's theorem ===
The action principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical conserved quantities to continuous symmetries of a physical system.
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity.
== See also ==
== Footnotes ==
== Notes ==
== References ==
== Further reading ==
Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988).
Cassel, Kevin (2013). Variational methods with applications in science and engineering. Cambridge: Cambridge University Press. ISBN 978-1-107-02258-4.
Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002.
== External links ==
David Tong. "Cambridge Lecture Notes on Classical Dynamics". DAMTP. Retrieved 2017-06-08.
Principle of least action interactive Excellent interactive explanation/webpage
Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math)
Constrained motion and generalized coordinates, page 4
Introduction to the Theory of Error-Correcting Codes is a textbook on error-correcting codes, by Vera Pless. It was published in 1982 by John Wiley & Sons, with a second edition in 1989 and a third in 1998. The Basic Library List Committee of the Mathematical Association of America has rated the book as essential for inclusion in undergraduate mathematics libraries.
== Topics ==
This book is mainly centered around algebraic and combinatorial techniques for designing and using error-correcting linear block codes. It differs from previous works in this area in its reduction of each result to its mathematical foundations, and its clear exposition of how the results follow from these foundations.
The first two of its ten chapters present background and introductory material, including Hamming distance, decoding methods including maximum likelihood and syndromes, sphere packing and the Hamming bound, the Singleton bound, and the Gilbert–Varshamov bound, and the Hamming(7,4) code. They also include brief discussions of additional material not covered in more detail later, including information theory, convolutional codes, and burst error-correcting codes. Chapter 3 presents the BCH code over the field {\displaystyle GF(2^{4})}, and Chapter 4 develops the theory of finite fields more generally.
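The Hamming(7,4) code mentioned among the introductory topics admits a compact syndrome decoder. The sketch below uses one conventional choice of parity-check matrix, with column i equal to the binary representation of i + 1 (not necessarily the book's presentation), under which any single-bit error is located directly by the 3-bit syndrome:

```python
# Parity-check matrix H: row b holds bit b of the column index + 1,
# so the syndrome of a single-bit error spells out its 1-based position.
H = [[(i + 1) >> b & 1 for i in range(7)] for b in range(3)]

def syndrome(word):
    # H * word over GF(2).
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]   # 1-based error position; 0 = no error
    if pos:
        word = word[:]
        word[pos - 1] ^= 1             # flip the erroneous bit back
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]       # the zero word is always a codeword
received = codeword[:]
received[4] ^= 1                        # corrupt bit 5
print(correct(received) == codeword)
```

Any of the seven single-bit errors yields a distinct nonzero syndrome, which is exactly the sphere-packing ("Hamming bound") argument for why (7,4) is a perfect code.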
Chapter 5 studies cyclic codes and Chapter 6 studies a special case of cyclic codes, the quadratic residue codes. Chapter 7 returns to BCH codes. After these discussions of specific codes, the next chapter concerns enumerator polynomials, including the MacWilliams identities, Pless's own power moment identities, and the Gleason polynomials.
The final two chapters connect this material to the theory of combinatorial designs and the design of experiments, and include material on the Assmus–Mattson theorem, the Witt design, the binary Golay codes, and the ternary Golay codes.
The second edition adds material on BCH codes, Reed–Solomon error correction, Reed–Muller codes, decoding Golay codes, and "a new, simple combinatorial proof of the MacWilliams identities".
As well as correcting some errors and adding more exercises, the third edition includes new material on connections between greedily constructed lexicographic codes and combinatorial game theory, the Griesmer bound, non-linear codes, and the Gray images of {\displaystyle \mathbb {Z} _{4}} codes.
== Audience and reception ==
This book is written as a textbook for advanced undergraduates; reviewer H. N. calls it "a leisurely introduction to the field which is at the same time mathematically rigorous". It includes over 250 problems, and can be read by mathematically-inclined students with only a background in linear algebra (provided in an appendix) and with no prior knowledge of coding theory.
Reviewer Ian F. Blake complained that the first edition omitted some topics necessary for engineers, including algebraic decoding, Goppa codes, Reed–Solomon error correction, and performance analysis, making this more appropriate for mathematics courses, but he suggests that it could still be used as the basis of an engineering course by replacing the last two chapters with this material, and overall he calls the book "a delightful little monograph". Reviewer John Baylis adds that "for clearly exhibiting coding theory as a showpiece of applied modern algebra I haven't seen any to beat this one".
== Related reading ==
Other books in this area include The Theory of Error-Correcting Codes (1977) by Jessie MacWilliams and Neil Sloane, and A First Course in Coding Theory (1988) by Raymond Hill.
== References ==
== External links ==
Introduction to the Theory of Error-Correcting Codes (2nd ed.) on the Internet Archive
In mathematics, geometric calculus extends geometric algebra to include differentiation and integration. The formalism is powerful and can be shown to reproduce other mathematical theories including vector calculus, differential geometry, and differential forms.
== Differentiation ==
With a geometric algebra given, let a and b be vectors and let F be a multivector-valued function of a vector. The directional derivative of F along b at a is defined as
{\displaystyle (\nabla _{b}F)(a)=\lim _{\epsilon \rightarrow 0}{\frac {F(a+\epsilon b)-F(a)}{\epsilon }},}
provided that the limit exists for all b, where the limit is taken for scalar ϵ. This is similar to the usual definition of a directional derivative but extends it to functions that are not necessarily scalar-valued.
Next, choose a set of basis vectors {\displaystyle \{e_{i}\}} and consider the operators, denoted {\displaystyle \partial _{i}}, that perform directional derivatives in the directions of {\displaystyle e_{i}}:
{\displaystyle \partial _{i}:F\mapsto (x\mapsto (\nabla _{e_{i}}F)(x)).}
Then, using the Einstein summation notation, consider the operator:
{\displaystyle e^{i}\partial _{i},}
which means
{\displaystyle F\mapsto e^{i}\partial _{i}F,}
where the geometric product is applied after the directional derivative. More verbosely:
{\displaystyle F\mapsto (x\mapsto e^{i}(\nabla _{e_{i}}F)(x)).}
This operator is independent of the choice of frame, and can thus be used to define what in geometric calculus is called the vector derivative:
{\displaystyle \nabla =e^{i}\partial _{i}.}
This is similar to the usual definition of the gradient, but it, too, extends to functions that are not necessarily scalar-valued.
The directional derivative is linear regarding its direction, that is:
{\displaystyle \nabla _{\alpha a+\beta b}=\alpha \nabla _{a}+\beta \nabla _{b}.}
From this it follows that the directional derivative is the inner product of its direction with the vector derivative. All that needs to be observed is that the direction a can be written {\displaystyle a=(a\cdot e^{i})e_{i}}, so that:
{\displaystyle \nabla _{a}=\nabla _{(a\cdot e^{i})e_{i}}=(a\cdot e^{i})\nabla _{e_{i}}=a\cdot (e^{i}\nabla _{e_{i}})=a\cdot \nabla .}
For this reason, {\displaystyle \nabla _{a}F(x)} is often written {\displaystyle a\cdot \nabla F(x)}.
The standard order of operations for the vector derivative is that it acts only on the function closest to its immediate right. Given two functions F and G, for example we have
{\displaystyle \nabla FG=(\nabla F)G.}
=== Product rule ===
Although the partial derivative exhibits a product rule, the vector derivative only partially inherits this property. Consider two functions F and G:
{\displaystyle {\begin{aligned}\nabla (FG)&=e^{i}\partial _{i}(FG)\\&=e^{i}((\partial _{i}F)G+F(\partial _{i}G))\\&=e^{i}(\partial _{i}F)G+e^{i}F(\partial _{i}G).\end{aligned}}}
Since the geometric product is not commutative ({\displaystyle e^{i}F\neq Fe^{i}} in general), we need a new notation to proceed. A solution is to adopt the overdot notation, in which the scope of a vector derivative with an overdot is the multivector-valued function sharing the same overdot. In this case, if we define
{\displaystyle {\dot {\nabla }}F{\dot {G}}=e^{i}F(\partial _{i}G),}
then the product rule for the vector derivative is
{\displaystyle \nabla (FG)=\nabla FG+{\dot {\nabla }}F{\dot {G}}.}
=== Interior and exterior derivative ===
Let F be an r-grade multivector. Then we can define an additional pair of operators, the interior and exterior derivatives,
{\displaystyle \nabla \cdot F=\langle \nabla F\rangle _{r-1}=e^{i}\cdot \partial _{i}F,}
{\displaystyle \nabla \wedge F=\langle \nabla F\rangle _{r+1}=e^{i}\wedge \partial _{i}F.}
In particular, if F is grade 1 (a vector-valued function), then we can write
{\displaystyle \nabla F=\nabla \cdot F+\nabla \wedge F}
and identify the divergence and curl as
{\displaystyle \nabla \cdot F=\operatorname {div} F,}
{\displaystyle \nabla \wedge F=I\,\operatorname {curl} F.}
Unlike the vector derivative, neither the interior derivative operator nor the exterior derivative operator is invertible.
=== Multivector derivative ===
The derivative with respect to a vector as discussed above can be generalized to a derivative with respect to a general multivector, called the multivector derivative.
Let F be a multivector-valued function of a multivector. The directional derivative of F with respect to X in the direction A, where X and A are multivectors, is defined as
{\displaystyle A*\partial _{X}F(X)=\lim _{\epsilon \to 0}{\frac {F(X+\epsilon A)-F(X)}{\epsilon }}\ ,}
where {\displaystyle A*B=\langle AB\rangle } is the scalar product. With {\displaystyle \{e_{i}\}} a vector basis and {\displaystyle \{e^{i}\}} the corresponding dual basis, the multivector derivative is defined in terms of the directional derivative as
{\displaystyle {\frac {\partial }{\partial X}}=\partial _{X}=\sum _{i<\dots <j}e^{i}\wedge \cdots \wedge e^{j}(e_{j}\wedge \cdots \wedge e_{i})*\partial _{X}\ .}
This equation is just expressing {\displaystyle \partial _{X}} in terms of components in a reciprocal basis of blades, as discussed in the article section Geometric algebra#Dual basis.
A key property of the multivector derivative is that
{\displaystyle \partial _{X}\langle XA\rangle =P_{X}(A)\ ,}
where {\displaystyle P_{X}(A)} is the projection of A onto the grades contained in X.
The multivector derivative finds applications in Lagrangian field theory.
== Integration ==
Let {\displaystyle \{e_{1},\ldots ,e_{n}\}} be a set of basis vectors that span an n-dimensional vector space. From geometric algebra, we interpret the pseudoscalar {\displaystyle e_{1}\wedge e_{2}\wedge \cdots \wedge e_{n}} to be the signed volume of the n-parallelotope subtended by these basis vectors. If the basis vectors are orthonormal, then this is the unit pseudoscalar.
More generally, we may restrict ourselves to a subset of k of the basis vectors, where {\displaystyle 1\leq k\leq n}, to treat the length, area, or other general k-volume of a subspace in the overall n-dimensional vector space. We denote these selected basis vectors by {\displaystyle \{e_{i_{1}},\ldots ,e_{i_{k}}\}}. A general k-volume of the k-parallelotope subtended by these basis vectors is the grade-k multivector {\displaystyle e_{i_{1}}\wedge e_{i_{2}}\wedge \cdots \wedge e_{i_{k}}}.
Even more generally, we may consider a new set of vectors {\displaystyle \{x^{i_{1}}e_{i_{1}},\ldots ,x^{i_{k}}e_{i_{k}}\}} proportional to the k basis vectors, where each of the {\displaystyle \{x^{i_{j}}\}} is a component that scales one of the basis vectors. We are free to choose components as infinitesimally small as we wish as long as they remain nonzero. Since the outer product of these terms can be interpreted as a k-volume, a natural way to define a measure is
{\displaystyle {\begin{aligned}d^{k}X&=\left(dx^{i_{1}}e_{i_{1}}\right)\wedge \left(dx^{i_{2}}e_{i_{2}}\right)\wedge \cdots \wedge \left(dx^{i_{k}}e_{i_{k}}\right)\\&=\left(e_{i_{1}}\wedge e_{i_{2}}\wedge \cdots \wedge e_{i_{k}}\right)dx^{i_{1}}dx^{i_{2}}\cdots dx^{i_{k}}.\end{aligned}}}
The measure is therefore always proportional to the unit pseudoscalar of a k-dimensional subspace of the vector space. Compare the Riemannian volume form in the theory of differential forms. The integral is taken with respect to this measure:

∫_V F(x) d^k X = ∫_V F(x) (e_{i_1} ∧ e_{i_2} ∧ ⋯ ∧ e_{i_k}) dx^{i_1} dx^{i_2} ⋯ dx^{i_k}.
More formally, consider some directed volume V of the subspace. We may divide this volume into a sum of simplices. Let {x_i} be the coordinates of the vertices. At each vertex we assign a measure ΔU_i(x) as the average measure of the simplices sharing the vertex. Then the integral of F(x) with respect to U(x) over this volume is obtained in the limit of finer partitioning of the volume into smaller simplices:

∫_V F dU = lim_{n→∞} Σ_{i=1}^{n} F(x_i) ΔU_i(x_i).
=== Fundamental theorem of geometric calculus ===
The reason for defining the vector derivative and integral as above is that they allow a strong generalization of Stokes' theorem. Let L(A; x) be a multivector-valued function of r-grade input A and general position x, linear in its first argument. Then the fundamental theorem of geometric calculus relates the integral of a derivative over the volume V to the integral over its boundary:

∫_V L̇(∇̇ dX; x) = ∮_{∂V} L(dS; x).
As an example, let L(A; x) = ⟨F(x) A I⁻¹⟩ for a vector-valued function F(x) and an (n − 1)-grade multivector A. We find that

∫_V L̇(∇̇ dX; x) = ∫_V ⟨Ḟ(x) ∇̇ dX I⁻¹⟩ = ∫_V ⟨Ḟ(x) ∇̇ |dX|⟩ = ∫_V ∇·F(x) |dX|.
Likewise,

∮_{∂V} L(dS; x) = ∮_{∂V} ⟨F(x) dS I⁻¹⟩ = ∮_{∂V} ⟨F(x) n̂ |dS|⟩ = ∮_{∂V} F(x)·n̂ |dS|.
Thus we recover the divergence theorem,

∫_V ∇·F(x) |dX| = ∮_{∂V} F(x)·n̂ |dS|.
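The recovered divergence theorem is easy to verify numerically on a simple case. The sketch below (the field and the midpoint-rule grid are choices made for this illustration, not taken from the text) compares both sides for F(x, y) = (x², y) on the unit square, where each side equals 2 analytically:

```python
# Numerical check of the divergence theorem on [0,1]^2 with
# F(x, y) = (x^2, y), for which div F = 2x + 1.

N = 400
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]   # midpoint-rule sample points

def div_F(x, y):              # divergence of F
    return 2 * x + 1

volume_integral = sum(div_F(x, y) * h * h for x in pts for y in pts)

def F(x, y):
    return (x ** 2, y)

# Outward flux through the four edges of the square.
flux = 0.0
for t in pts:
    flux += F(1.0, t)[0] * h    # right edge,  outward normal (+1, 0)
    flux += -F(0.0, t)[0] * h   # left edge,   outward normal (-1, 0)
    flux += F(t, 1.0)[1] * h    # top edge,    outward normal (0, +1)
    flux += -F(t, 0.0)[1] * h   # bottom edge, outward normal (0, -1)

print(volume_integral, flux)    # both approximately 2
```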
== Covariant derivative ==
A sufficiently smooth k-surface in an n-dimensional space is deemed a manifold. To each point on the manifold, we may attach a k-blade B that is tangent to the manifold. Locally, B acts as a pseudoscalar of the k-dimensional space. This blade defines a projection of vectors onto the manifold:

P_B(A) = (A · B⁻¹)B.
Just as the vector derivative ∇ is defined over the entire n-dimensional space, we may wish to define an intrinsic derivative ∂, locally defined on the manifold:

∂F = P_B(∇)F.
(Note: The right-hand side of the above may not lie in the tangent space to the manifold. Therefore, it is not the same as P_B(∇F), which necessarily does lie in the tangent space.)
If a is a vector tangent to the manifold, then indeed both the vector derivative and the intrinsic derivative give the same directional derivative:

a·∂F = a·∇F.
Although this operation is perfectly valid, it is not always useful because ∂F itself is not necessarily on the manifold. Therefore, we define the covariant derivative to be the forced projection of the intrinsic derivative back onto the manifold:

a·DF = P_B(a·∂F) = P_B(a·P_B(∇)F).
Since any general multivector can be expressed as a sum of a projection and a rejection, in this case

a·∂F = P_B(a·∂F) + P_B^⊥(a·∂F),
we introduce a new function, the shape tensor S(a), which satisfies

F × S(a) = P_B^⊥(a·∂F),
where × is the commutator product. In a local coordinate basis {e_i} spanning the tangent surface, the shape tensor is given by

S(a) = e^i ∧ P_B^⊥(a·∂e_i).
Importantly, on a general manifold, the covariant derivative does not commute. In particular, the commutator is related to the shape tensor by

[a·D, b·D]F = −(S(a) × S(b)) × F.
Clearly the term S(a) × S(b) is of interest. However it, like the intrinsic derivative, is not necessarily on the manifold. Therefore, we can define the Riemann tensor to be the projection back onto the manifold:

R(a ∧ b) = −P_B(S(a) × S(b)).
Lastly, if F is of grade r, then we can define interior and exterior covariant derivatives as

D · F = ⟨DF⟩_{r−1},
D ∧ F = ⟨DF⟩_{r+1},
and likewise for the intrinsic derivative.
== Relation to differential geometry ==
On a manifold, locally we may assign a tangent surface spanned by a set of basis vectors {e_i}. We can associate the components of a metric tensor, the Christoffel symbols, and the Riemann curvature tensor as follows:

g_{ij} = e_i · e_j,
Γ^k_{ij} = (e_i · D e_j) · e^k,
R_{ijkl} = (R(e_i ∧ e_j) · e_k) · e_l.
These relations embed the theory of differential geometry within geometric calculus.
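These component relations agree with the classical coordinate formula Γ^k_{ij} = ½ g^{kl}(∂_i g_{jl} + ∂_j g_{il} − ∂_l g_{ij}), which can be sanity-checked by finite differences. A pure-Python sketch for polar coordinates (an illustrative chart chosen for this example, with metric g = diag(1, r²), where Γ^r_{θθ} = −r and Γ^θ_{rθ} = 1/r):

```python
# Finite-difference Christoffel symbols from a metric, illustrated for
# polar coordinates q = (r, theta) with g = diag(1, r^2).

def metric(q):
    r = q[0]
    return [[1.0, 0.0], [0.0, r * r]]

def d_metric(q, a, eps=1e-6):
    """Partial derivative of g_ij along coordinate a, central differences."""
    qp, qm = list(q), list(q)
    qp[a] += eps
    qm[a] -= eps
    gp, gm = metric(qp), metric(qm)
    return [[(gp[i][j] - gm[i][j]) / (2 * eps) for j in range(2)]
            for i in range(2)]

def christoffel(q):
    g = metric(q)
    # inverse of this diagonal metric (an assumption of the example)
    ginv = [[1.0 / g[0][0], 0.0], [0.0, 1.0 / g[1][1]]]
    dg = [d_metric(q, a) for a in range(2)]     # dg[a][i][j] = d_a g_ij
    Gamma = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for k in range(2):
        for i in range(2):
            for j in range(2):
                Gamma[k][i][j] = 0.5 * sum(
                    ginv[k][l] * (dg[i][j][l] + dg[j][i][l] - dg[l][i][j])
                    for l in range(2))
    return Gamma

G = christoffel((2.0, 0.3))
print(G[0][1][1], G[1][0][1])   # approximately -2.0 (= -r) and 0.5 (= 1/r)
```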
== Relation to differential forms ==
In a local coordinate system (x¹, …, x^n), the coordinate differentials dx¹, …, dx^n form a basic set of one-forms within the coordinate chart. Given a multi-index I = (i_1, …, i_k) with 1 ≤ i_p ≤ n for 1 ≤ p ≤ k, we can define a k-form

ω = f_I dx^I = f_{i_1 i_2 ⋯ i_k} dx^{i_1} ∧ dx^{i_2} ∧ ⋯ ∧ dx^{i_k}.
We can alternatively introduce a k-grade multivector A as

A = f_{i_1 i_2 ⋯ i_k} e^{i_1} ∧ e^{i_2} ∧ ⋯ ∧ e^{i_k}

and a measure

d^k X = (dx^{i_1} e_{i_1}) ∧ (dx^{i_2} e_{i_2}) ∧ ⋯ ∧ (dx^{i_k} e_{i_k}) = (e_{i_1} ∧ e_{i_2} ∧ ⋯ ∧ e_{i_k}) dx^{i_1} dx^{i_2} ⋯ dx^{i_k}.
Apart from a subtle difference in meaning for the exterior product with respect to differential forms versus the exterior product with respect to vectors (in the former the increments are covectors, whereas in the latter they represent scalars), we see the correspondences of the differential form

ω ≅ A† · d^k X = A · (d^k X)†,

its derivative

dω ≅ (D ∧ A)† · d^{k+1} X = (D ∧ A) · (d^{k+1} X)†,

and its Hodge dual

⋆ω ≅ (I⁻¹A)† · d^k X,
embed the theory of differential forms within geometric calculus.
== History ==
Following is a diagram summarizing the history of geometric calculus.
== References and further reading ==
Macdonald, Alan (2012). Vector and Geometric Calculus. Charleston: CreateSpace. ISBN 9781480132450. OCLC 829395829.
In mathematics, a Poisson algebra is an associative algebra together with a Lie bracket that also satisfies Leibniz's law; that is, the bracket is also a derivation. Poisson algebras appear naturally in Hamiltonian mechanics, and are also central in the study of quantum groups. Manifolds with a Poisson algebra structure are known as Poisson manifolds, of which the symplectic manifolds and the Poisson–Lie groups are a special case. The algebra is named in honour of Siméon Denis Poisson.
== Definition ==
A Poisson algebra is a vector space over a field K equipped with two bilinear products, ⋅ and {, }, having the following properties:
The product ⋅ forms an associative K-algebra.
The product {, }, called the Poisson bracket, forms a Lie algebra, and so it is anti-symmetric, and obeys the Jacobi identity.
The Poisson bracket acts as a derivation of the associative product ⋅, so that for any three elements x, y and z in the algebra, one has {x, y ⋅ z} = {x, y} ⋅ z + y ⋅ {x, z}.
The last property often allows a variety of different formulations of the algebra to be given, as noted in the examples below.
== Examples ==
Poisson algebras occur in various settings.
=== Symplectic manifolds ===
The space of real-valued smooth functions over a symplectic manifold forms a Poisson algebra. On a symplectic manifold, every real-valued function H on the manifold induces a vector field XH, the Hamiltonian vector field. Then, given any two smooth functions F and G over the symplectic manifold, the Poisson bracket may be defined as:
{F, G} = dG(X_F) = X_F(G).
This definition is consistent in part because the Poisson bracket acts as a derivation. Equivalently, one may define the bracket {,} as
X_{{F, G}} = [X_F, X_G],
where [,] is the Lie derivative. When the symplectic manifold is R2n with the standard symplectic structure, then the Poisson bracket takes on the well-known form
{F, G} = Σ_{i=1}^{n} (∂F/∂q_i)(∂G/∂p_i) − (∂F/∂p_i)(∂G/∂q_i).
Similar considerations apply for Poisson manifolds, which generalize symplectic manifolds by allowing the symplectic bivector to be rank deficient.
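The canonical form of the bracket is easy to probe numerically. A finite-difference sketch for n = 1, a single (q, p) pair (the test functions below are arbitrary choices for this illustration), checking the canonical relation {q, p} = 1 and the antisymmetry {F, G} = −{G, F}:

```python
# Finite-difference Poisson bracket on R^2 with coordinates (q, p).
import math

def bracket(F, G, q, p, eps=1e-5):
    dFq = (F(q + eps, p) - F(q - eps, p)) / (2 * eps)
    dFp = (F(q, p + eps) - F(q, p - eps)) / (2 * eps)
    dGq = (G(q + eps, p) - G(q - eps, p)) / (2 * eps)
    dGp = (G(q, p + eps) - G(q, p - eps)) / (2 * eps)
    return dFq * dGp - dFp * dGq

q, p = 0.7, -1.2
# Canonical relation {q, p} = 1.
print(bracket(lambda q, p: q, lambda q, p: p, q, p))   # approximately 1.0
# Antisymmetry for two arbitrary smooth functions.
F = lambda q, p: q * q * p
G = lambda q, p: math.sin(q) + p
print(bracket(F, G, q, p) + bracket(G, F, q, p))       # approximately 0.0
```

For a Hamiltonian H, the same bracket {F, H} gives the time derivative of F along the Hamiltonian flow.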
=== Lie algebras ===
The tensor algebra of a Lie algebra has a Poisson algebra structure. A very explicit construction of this is given in the article on universal enveloping algebras.
The construction proceeds by first building the tensor algebra of the underlying vector space of the Lie algebra. The tensor algebra is simply the disjoint union (direct sum ⊕) of all tensor products of this vector space. One can then show that the Lie bracket can be consistently lifted to the entire tensor algebra: it obeys both the product rule, and the Jacobi identity of the Poisson bracket, and thus is the Poisson bracket, when lifted. The pair of products {,} and ⊗ then form a Poisson algebra. Observe that ⊗ is neither commutative nor is it anti-commutative: it is merely associative.
Thus, one has the general statement that the tensor algebra of any Lie algebra is a Poisson algebra. The universal enveloping algebra is obtained by modding out the Poisson algebra structure.
=== Associative algebras ===
If A is an associative algebra, then imposing the commutator [x, y] = xy − yx turns it into a Poisson algebra (and thus, also a Lie algebra) AL. Note that the resulting AL should not be confused with the tensor algebra construction described in the previous section. If one wished, one could also apply that construction as well, but that would give a different Poisson algebra, one that would be much larger.
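This construction can be spot-checked in a few lines: for 2×2 matrices under the commutator, the Jacobi identity and the Leibniz (derivation) property hold exactly in integer arithmetic. The matrices below are arbitrary choices for this illustration, not from the text:

```python
# The commutator [x, y] = xy - yx on 2x2 integer matrices satisfies the
# Jacobi identity and is a derivation of the matrix product.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def neg(a):
    return [[-a[i][j] for j in range(2)] for i in range(2)]

def br(a, b):                      # [a, b] = ab - ba
    return add(mul(a, b), neg(mul(b, a)))

x = [[1, 2], [3, 4]]
y = [[0, 1], [1, 0]]
z = [[2, 0], [1, 1]]

# Jacobi: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jac = add(br(x, br(y, z)), add(br(y, br(z, x)), br(z, br(x, y))))
# Leibniz: [x, yz] = [x, y]z + y[x, z]
lhs = br(x, mul(y, z))
rhs = add(mul(br(x, y), z), mul(y, br(x, z)))
print(jac == [[0, 0], [0, 0]], lhs == rhs)   # True True
```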
=== Vertex operator algebras ===
For a vertex operator algebra (V, Y, ω, 1), the space V/C₂(V) is a Poisson algebra with {a, b} = a₀b and a ⋅ b = a₋₁b. For certain vertex operator algebras, these Poisson algebras are finite-dimensional.
=== Z2 grading ===
Poisson algebras can be given a Z2-grading in one of two different ways. These two result in the Poisson superalgebra and the Gerstenhaber algebra. The difference between the two is in the grading of the product itself. For the Poisson superalgebra, the grading is given by
|{a, b}| = |a| + |b|,
whereas in the Gerstenhaber algebra, the bracket decreases the grading by one:
|{a, b}| = |a| + |b| − 1.
In both of these expressions, |a| = deg a denotes the grading of the element a; typically, it counts how a can be decomposed into an even or odd product of generating elements. Gerstenhaber algebras conventionally occur in BRST quantization.
== See also ==
Moyal bracket
Kontsevich quantization formula
== References ==
Y. Kosmann-Schwarzbach (2001) [1994], "Poisson algebra", Encyclopedia of Mathematics, EMS Press
Bhaskara, K. H.; Viswanath, K. (1988). Poisson algebras and Poisson manifolds. Longman. ISBN 0-582-01989-3.
A wide area network (WAN) is a telecommunications network that extends over a large geographic area. Wide area networks are often established with leased telecommunication circuits.
Businesses, as well as schools and government entities, use wide area networks to relay data to staff, students, clients, buyers and suppliers from various locations around the world. In essence, this mode of telecommunication allows a business to effectively carry out its daily function regardless of location. The Internet may be considered a WAN. Many WANs are, however, built for one particular organization and are private. WANs can be separated from local area networks (LANs) in that the latter refers to physically proximal networks.
== Design options ==
The textbook definition of a WAN is a computer network spanning regions, countries, or even the world. However, in terms of the application of communication protocols and concepts, it may be best to view WANs as computer networking technologies used to transmit data over long distances, and between different networks. This distinction stems from the fact that common local area network (LAN) technologies operating at lower layers of the OSI model (such as the forms of Ethernet or Wi-Fi) are often designed for physically proximal networks, and thus cannot transmit data over tens, hundreds, or even thousands of miles or kilometres.
WANs are used to connect LANs and other types of networks together so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet.
WANs are often built using leased lines. At each end of the leased line, a router connects the LAN on one side with a second router within the LAN on the other. Because leased lines can be very expensive, instead of using leased lines, WANs can also be built using less costly circuit switching or packet switching methods. Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, Multiprotocol Label Switching (MPLS), Asynchronous Transfer Mode (ATM) and Frame Relay are often used by service providers to deliver the links that are used in WANs. It is also possible to build a WAN with Ethernet.
Academic research into wide area networks can be broken down into three areas: mathematical models, network emulation, and network simulation.
Performance improvements are sometimes delivered via wide area file services or WAN optimization.
== Private networks ==
Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets addressed in these ranges are not routable on the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose.
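The three reserved ranges are those of RFC 1918, and both the "about 18 million" count and the private/public distinction can be checked with Python's standard-library ipaddress module (a sketch; the sample addresses are arbitrary):

```python
# Count the RFC 1918 private IPv4 addresses and test routability class.
import ipaddress

private = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

total = sum(net.num_addresses for net in private)
print(total)   # 17891328, i.e. about 18 million

# Addresses in these ranges are flagged private (not publicly routable):
print(ipaddress.ip_address("192.168.1.10").is_private)   # True
print(ipaddress.ip_address("93.184.216.34").is_private)  # False
```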
Since two private networks, e.g., two branch offices, cannot directly communicate via the public Internet, the two networks must be bridged across the Internet via a virtual private network (VPN) or other form of IP tunnel that encapsulates packets, including their headers containing the private addresses, for transmission across the public network. Additionally, encapsulated packets may be encrypted to secure their data.
== Connection technology ==
Many technologies are available for wide area network links. Examples include circuit-switched telephone lines, radio wave transmission, and optical fiber. New developments have successively increased transmission rates. Around 1960, a 110 bit/s line was normal on the edge of the WAN, while core links of 56 or 64 kbit/s were considered fast. Today, households are connected to the Internet with dial-up, asymmetric digital subscriber line (ADSL), cable, WiMAX, cellular network or fiber. The speeds currently in use range from 28.8 kbit/s through a dial-up modem over a telephone connection to speeds as high as 100 Gbit/s using 100 Gigabit Ethernet.
The following communication and networking technologies have been used to implement WANs.
AT&T conducted trials in 2017 for business use of 400-gigabit Ethernet. Researchers Robert Maher, Alex Alvarado, Domaniç Lavery, and Polina Bayvel of University College London were able to increase networking speeds to 1.125 terabits per second. Christos Santis, graduate student Scott Steger, Amnon Yariv, Martin and Eileen Summerfield developed a new laser that potentially quadruples transfer speeds with fiber optics.
== See also ==
Cell relay
Internet area network (IAN)
Label switching
Low-power wide-area network (LPWAN)
Wide area application services
Wireless WAN
== References ==
== External links ==
Cisco - Introduction to WAN Technologies
"What is WAN (wide area network)? - Definition from WhatIs.com", SearchEnterpriseWAN, archived from the original on 2017-04-29, retrieved 2017-04-21
What is a software-defined wide area network?
Supersymmetric theory of stochastic dynamics (STS) is a multidisciplinary approach to stochastic dynamics on the intersection of dynamical systems theory,
statistical physics,
stochastic differential equations (SDE),
topological field theories,
and the theory of pseudo-Hermitian operators. It can be seen as an algebraic dual to the traditional set-theoretic framework of the dynamical systems theory, with its added algebraic structure and an inherent topological supersymmetry (TS) enabling the generalization of certain concepts from deterministic to stochastic models. At the same time, it can be looked upon as a topological field theory of stochastic dynamics that reveals various topological aspects.
STS seeks to give a rigorous mathematical derivation to several universal phenomena of stochastic dynamical systems. It identifies spontaneous breakdown of TS, present in all stochastic models, as the stochastic generalization of chaos. In this view, STS reveals that dynamical chaos is a form of long-range topological order. The theory also provides the lowest level classification of stochastic chaos which has a potential to explain self-organized criticality.
== Overview ==
The traditional approach to stochastic dynamics focuses on the temporal evolution of probability distributions. At any moment, the distribution encodes the information or the memory of the system's past, much like wavefunctions in quantum theory. STS uses generalized probability distributions, or "wavefunctions", that depend not only on the original variables of the model but also on their "superpartners", whose evolution determines Lyapunov exponents. This structure enables an extended form of memory that includes also the memory of initial conditions/perturbations known in the context of dynamical chaos as the butterfly effect.
From an algebraic topology perspective, the wavefunctions are differential forms and dynamical systems theory defines their dynamics by the generalized transfer operator (GTO) -- the pullback averaged over noise. GTO commutes with the exterior derivative, which is the topological supersymmetry (TS) of STS.
The presence of TS arises from the fact that continuous-time dynamics preserves the topology of the phase/state space: trajectories originating from close initial conditions remain close over time for any noise configuration. If TS is spontaneously broken, this property no longer holds on average in the limit of infinitely long evolution, meaning the system exhibits a stochastic variant of the butterfly effect. In other words, STS reveals that chaos is a spontaneous long-range order -- a perspective long anticipated within the concept of complexity, as has been pointed out in the context of STS:
... chaos is counter-intuitively the "ordered" phase of dynamical systems. Moreover, a pioneer of complexity, Prigogine, would define chaos as a spatiotemporally complex form of order...
The Goldstone theorem necessitates the long-range response, which may account for 1/f noise. The Edge of Chaos is interpreted as noise-induced chaos -- a distinct phase where TS is broken in a specific manner and dynamics is dominated by noise-induced instantons. In the deterministic limit, this phase collapses onto the critical boundary of conventional chaos.
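The butterfly effect that TS breaking generalizes is quantified, in the deterministic setting, by a positive largest Lyapunov exponent. As a toy illustration (the logistic map is a standard example chosen here, not a system discussed in the article), the exponent λ = ⟨ln |f′(x)|⟩ of x ↦ 4x(1 − x) equals ln 2 and can be estimated in a few lines:

```python
# Estimate the Lyapunov exponent of the chaotic logistic map f(x) = 4x(1-x);
# a positive value means exponential separation of nearby trajectories.
import math

def lyapunov(x0, n=200_000, burn=1_000):
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            # |f'(x)| = |4(1 - 2x)|; tiny offset guards log(0)
            acc += math.log(abs(4.0 * (1.0 - 2.0 * x)) + 1e-300)
        x = 4.0 * x * (1.0 - x)
    return acc / n

print(lyapunov(0.2))   # approximately 0.693 = ln 2
```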
== History and relation to other theories ==
The first relation between supersymmetry and stochastic dynamics was established in two papers in 1979 and 1982 by Giorgio Parisi and Nicolas Sourlas, where Langevin SDEs -- SDEs with linear phase spaces, gradient flow vector fields, and additive noises -- were given supersymmetric representation with the help of the BRST gauge fixing procedure. While the original goal of their work was dimensional reduction, the so-emerged supersymmetry of Langevin SDEs has since been addressed from a few different angles including the fluctuation-dissipation theorems, Jarzynski equality, Onsager principle of microscopic reversibility, solutions of Fokker–Planck equations, self-organization, etc.
The Parisi-Sourlas method has been extended to several other classes of dynamical systems, including classical mechanics, its stochastic generalization, and higher-order Langevin SDEs.
The theory of pseudo-Hermitian supersymmetric operators
and the relation between the Parisi-Sourlas method and Lyapunov exponents
further enabled the extension of the theory to SDEs of arbitrary form and the identification of the spontaneous BRST supersymmetry breaking as a stochastic generalization of chaos.
In parallel, the concept of the generalized transfer operator has been introduced in dynamical systems theory. This concept underlies the stochastic evolution operator of STS and provides it with a solid mathematical meaning. Similar constructions were studied in the theory of SDEs.
The Parisi-Sourlas method has been recognized as a member of Witten-type or cohomological topological field theory, a class of models to which STS also belongs.
== Dynamical systems theory perspective ==
=== Generalized transfer operator ===
From the physicist's point of view, a stochastic differential equation is essentially a continuous-time non-autonomous dynamical system that can be defined as:
ẋ(t) = F(x(t)) + (2Θ)^{1/2} G_a(x(t)) ξ^a(t) ≡ 𝓕(ξ(t)),
where x ∈ X is a point in a closed smooth manifold X, called in dynamical systems theory a state space, while in physics, where X is often a symplectic manifold with half of the variables having the meaning of momenta, it is called the phase space. Further, F(x) ∈ TX_x is a sufficiently smooth flow vector field from the tangent space of X having the meaning of the deterministic law of evolution, and G_a ∈ TX, a = 1, …, D_ξ, is a set of sufficiently smooth vector fields that specify how the system is coupled to the time-dependent noise, ξ(t) ∈ ℝ^{D_ξ}, which is called additive/multiplicative depending on whether the G_a's are independent of/dependent on the position on X.
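A minimal numerical realization of an SDE of this form is Euler–Maruyama integration. The sketch below picks the one-dimensional Ornstein–Uhlenbeck case F(x) = −x with a single additive-noise field G_1(x) = 1 (choices made for this illustration, not from the text); its stationary distribution has variance Θ:

```python
# Euler-Maruyama integration of dx = -x dt + sqrt(2*Theta) dW.
import math
import random

def simulate(theta=0.5, dt=0.01, steps=200_000, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for i in range(steps):
        # discretized Gaussian white noise: xi*dt has variance dt
        xi = rng.gauss(0.0, 1.0) / math.sqrt(dt)
        x += (-x + math.sqrt(2.0 * theta) * xi) * dt
        if i > steps // 10:            # discard the initial transient
            samples.append(x)
    return sum(s * s for s in samples) / len(samples)

print(simulate())   # approximately 0.5, the stationary variance Theta
```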
The randomness of the noise will be introduced later. For now, the noise is a deterministic function of time and the equation above is an ordinary differential equation (ODE) with a time-dependent flow vector field, 𝓕. The solutions/trajectories of this ODE are differentiable with respect to initial conditions even for non-differentiable ξ(t)'s. In other words, there exists a two-parameter family of noise-configuration-dependent diffeomorphisms:
M(ξ)_{tt'}: X → X,  M(ξ)_{tt'} ∘ M(ξ)_{t't''} = M(ξ)_{tt''},  M(ξ)_{tt'}|_{t=t'} = Id_X,
such that the solution of the ODE with initial condition x(t') = x' can be expressed as x(t) = M(ξ)_{tt'}(x').
The dynamics can now be defined as follows: if at time t' the system is described by the probability distribution P(x), then the average value of some function f: X → ℝ at a later time t is given by:
f̄(t) = ∫_X f(M(ξ)_{tt'}(x)) P(x) dx¹∧…∧dx^D = ∫_X f(x) M̂(ξ)*_{t't}(P(x) dx¹∧…∧dx^D).
Here M̂(ξ)*_{t't} is the action, or pullback, induced by the inverse map, M(ξ)_{tt'}⁻¹ = M(ξ)_{t't}, on the probability distribution, understood in a coordinate-free setting as a top-degree differential form.
Pullbacks are a wider concept, defined also for k-forms, i.e., differential forms of the other possible degrees k, 0 ≤ k ≤ D = dim X: ψ(x) = ψ_{i_1…i_k}(x) dx^{i_1}∧…∧dx^{i_k} ∈ Ω^{(k)}(x), where Ω^{(k)}(x) is the space of all k-forms at the point x.
According to the example above, the temporal evolution of k-forms is given by

|ψ(t)⟩ = M̂(ξ)*_{t't} |ψ(t')⟩,
where |ψ⟩ ∈ Ω(X) = ⊕_{k=0}^{D} Ω^{(k)}(X) is a time-dependent "wavefunction", adopting the terminology of quantum theory.
Unlike, say, trajectories or positions in X, pullbacks are linear objects even for nonlinear X. As a linear object, the pullback can be averaged over the noise configurations, leading to the generalized transfer operator (GTO) -- the dynamical systems theory counterpart of the stochastic evolution operator of the theory of SDEs and/or the Parisi-Sourlas approach. For Gaussian white noise,
⟨ξ^a(t)⟩_noise = 0, ⟨ξ^a(t) ξ^b(t')⟩_noise = δ^{ab} δ(t − t'), etc., the GTO is
𝓜̂_{tt'} = ⟨M̂(ξ)*_{t't}⟩_noise = e^{−(t−t')Ĥ},
with the infinitesimal GTO, or evolution operator,

Ĥ = L̂_F − Θ L̂_{G_a} L̂_{G_a},
where L̂_F is the Lie derivative along the vector field specified in the subscript. Its fundamental mathematical meaning -- the pullback averaged over noise -- ensures that the GTO is unique. It corresponds to the Stratonovich interpretation in the traditional approach to SDEs.
=== Topological supersymmetry ===
With the help of the Cartan formula, saying that a Lie derivative is "d-exact", i.e., can be given as, e.g., L̂_A = [d̂, ı̂_A], where the square brackets denote the bi-graded commutator and d̂ and ı̂_A are, respectively, the exterior derivative and the interior multiplication along A, the following explicit form can be obtained:

Ĥ = [d̂, d̄̂],

where d̄̂ = ı̂_F − Θ ı̂_{G_a} L̂_{G_a}. This form of the evolution operator is similar to that of supersymmetric quantum mechanics, and it is a central feature of topological field theories of Witten type. It means that the GTO commutes with d̂, which is a (super)symmetry of the model. This symmetry is referred to as topological supersymmetry (TS), particularly because the exterior derivative plays a fundamental role in algebraic topology. TS pairs up eigenstates of the GTO into doublets.
=== Eigensystem of GTO ===
The GTO is a pseudo-Hermitian operator. It has a complete bi-orthogonal eigensystem whose left and right eigenvectors, or bras and kets, are related nontrivially. The eigensystems of the GTO have a set of universal properties that limit the possible spectra of physically meaningful models -- those with discrete spectra and with real parts of eigenvalues bounded from below -- to the three major types presented in the figure on the right. These properties include:
The eigenvalues are either real or come in complex conjugate pairs, called Ruelle–Pollicott resonances in dynamical systems theory. This form of spectrum implies the presence of a pseudo-time-reversal symmetry.
Each eigenstate has a well-defined degree.
{\displaystyle {\hat {H}}^{(0,D)}}
do not break TS,
{\displaystyle {\text{min Re}}(\operatorname {spec} {\hat {H}}^{(0,D)})=0}
.
Each de Rham cohomology class provides one zero-eigenvalue supersymmetric "singlet" such that
{\displaystyle {\hat {d}}|\theta \rangle =0,\langle \theta |{\hat {d}}=0}
. The singlet from
{\displaystyle {\hat {H}}^{(D)}}
is the stationary probability distribution known as "ergodic zero".
All the other eigenstates are non-supersymmetric "doublets" related by TS:
{\displaystyle {\hat {H}}|\alpha \rangle =H_{\alpha }|\alpha \rangle ,\;{\hat {H}}|\alpha '\rangle =H_{\alpha }|\alpha '\rangle }
and
{\displaystyle \langle \alpha |{\hat {H}}=\langle \alpha |H_{\alpha },\langle \alpha '|{\hat {H}}=\langle \alpha '|H_{\alpha }}
, where
{\displaystyle H_{\alpha }}
is the corresponding eigenvalue, and
{\displaystyle |\alpha '\rangle ={\hat {d}}|\alpha \rangle ,\;\langle \alpha |=\langle \alpha '|{\hat {d}}}
.
=== Stochastic chaos ===
In dynamical systems theory, a system can be characterized as chaotic if the spectral radius of the finite-time GTO is larger than unity. Under this condition, the partition function,
{\displaystyle Z_{tt'}=Tr{\hat {\mathcal {M}}}_{tt'}=\sum \nolimits _{\alpha }e^{-(t-t')H_{\alpha }},}
grows exponentially in the limit of infinitely long evolution, signaling the exponential growth of the number of closed solutions -- the hallmark of chaotic dynamics. In terms of the infinitesimal GTO, this condition reads,
{\displaystyle \Delta =-\min _{\alpha }{\text{Re }}H_{\alpha }>0,}
where
{\displaystyle \Delta }
is the rate of the exponential growth which is known as "pressure", a member of the family of dynamical entropies such as topological entropy. Spectra b and c in the figure satisfy this condition.
One notable advantage of defining stochastic chaos in this way, compared to other possible approaches, is its equivalence to the spontaneous breakdown of topological supersymmetry (see below). Consequently, through the Goldstone theorem, it has the potential to explain the experimental signature of chaotic behavior, commonly known as 1/f noise.
==== Stochastic Poincaré–Bendixson theorem ====
Due to the spectral property of the GTO that
{\displaystyle {\hat {H}}^{(0,D)}}
never break TS, i.e.,
{\displaystyle {\text{min Re}}(\operatorname {spec} {\hat {H}}^{(0,D)})=0}
, a model must have at least two degrees other than 0 and D in order to accommodate a non-supersymmetric doublet with a negative real part of its eigenvalue and, consequently, be chaotic. This implies
{\displaystyle D={\text{dim }}X\geq 3}
, which can be viewed as a stochastic generalization of the Poincaré–Bendixson theorem.
=== Sharp trace and Witten Index ===
Another object of interest is the sharp trace of the GTO,
{\displaystyle W=Tr(-1)^{\hat {k}}{\hat {\mathcal {M}}}_{tt'}=\sum \nolimits _{\alpha }(-1)^{k_{\alpha }}e^{-(t-t')H_{\alpha }},}
where
{\displaystyle {\hat {k}}|\psi _{\alpha }\rangle =k_{\alpha }|\psi _{\alpha }\rangle }
with
{\displaystyle {\hat {k}}}
being the operator of the degree of the differential form. This is a fundamental object of topological nature known in physics as the Witten index. From the properties of the eigensystem of GTO, only supersymmetric singlets contribute to the Witten index,
{\displaystyle W=\sum \nolimits _{k=0}^{D}(-1)^{k}B_{k}=Eu.Ch(X)}
, where
{\displaystyle Eu.Ch.}
is the Euler characteristic and the B's are the numbers of supersymmetric singlets of the corresponding degree. These numbers equal the Betti numbers, as follows from the property of the GTO that each de Rham cohomology class provides one supersymmetric singlet.
== Physical Perspective ==
=== Parisi–Sourlas method as a BRST gauge-fixing procedure ===
The idea of the Parisi–Sourlas method is to rewrite the partition function of the noise in terms of the dynamical variables of the model using the BRST gauge-fixing procedure. The resulting expression is the Witten index, whose physical meaning is (up to a topological factor) the partition function of the noise.
The path-integral representation of the Witten index can be achieved in three steps: (i) introduction of the dynamical variables into the partition function of the noise; (ii) BRST gauge-fixing of the integration over paths to the trajectories of the SDE, which can be looked upon as Gribov copies; and (iii) integrating out the noise. This can be expressed as follows.
Here, the noise is assumed Gaussian white, p.b.c. signifies periodic boundary conditions,
{\displaystyle \textstyle J(\xi )}
is the Jacobian compensating (up to a sign) the Jacobian from the
{\displaystyle \delta }
-functional,
{\displaystyle \Phi }
is the collection of fields that includes, besides the original field
{\displaystyle x}
, the Faddeev–Popov ghosts
{\displaystyle \chi ,{\bar {\chi }}}
and the Lagrange multiplier,
{\displaystyle B}
, the topological and/or BRST supersymmetry is,
{\displaystyle Q=\textstyle \int d\tau (\chi ^{i}(\tau )\delta /\delta x^{i}(\tau )+B_{i}(\tau )\delta /\delta {\bar {\chi }}_{i}(\tau )),}
that can be looked upon as a path-integral version of the exterior derivative, and the gauge fermion
{\textstyle \textstyle {\bar {d}}=\textstyle \imath _{F}-\Theta \imath _{G_{a}}L_{G_{a}},{\text{ with }}L_{G_{a}}=(Q,\imath _{G_{a}})}
being the path-integral version of the Lie derivative.
=== STS as a topological field theory ===
The Parisi–Sourlas method is peculiar in the sense that it looks like a gauge-fixing of an empty theory -- the gauge-fixing term is the only part of the action. This is a definitive feature of Witten-type topological field theories. Therefore, the Parisi–Sourlas method is a TFT, and as a TFT it possesses objects that are topological invariants.
The Parisi–Sourlas functional is one of them. It is essentially a path-integral representation of the Witten index. The topological character of
{\displaystyle W}
is seen by noting that the gauge-fixing character of the functional ensures that only solutions of the SDE contribute. Each solution provides either positive or negative unity:
{\displaystyle W=\langle \iint _{p.b.c}J(\xi )\left(\prod \nolimits _{\tau }\delta ^{D}({\dot {x}}(\tau )-{\mathcal {F}}(x(\tau ),\xi (\tau )))\right){\mathcal {D}}x\rangle _{\text{noise}}=\textstyle \left\langle I_{N}(\xi )\right\rangle _{\text{noise}},}
with
{\displaystyle I_{N}(\xi )=\sum \nolimits _{\text{solutions}}\operatorname {sign} J(\xi )}
being the index of the so-called Nicolai map, the map from the space of closed paths to the noise configurations making these closed paths solutions of the SDE,
{\textstyle \xi ^{a}(x)=G_{i}^{a}({\dot {x}}^{i}-F^{i})/(2\Theta )^{1/2}}
. The index of the map can be viewed as a realization of the Poincaré–Hopf theorem on the infinite-dimensional space of closed paths, with the SDE playing the role of the vector field and the solutions of the SDE playing the role of the critical points with index
{\displaystyle \operatorname {sign} J(\xi )=\operatorname {sign} {\text{Det }}\delta \xi /\delta x.}
{\textstyle I_{N}(\xi )}
is a topological object independent of the noise configuration. It equals its own stochastic average which, in turn, equals the Witten index.
==== Instantons ====
There are other classes of topological objects in TFTs, including instantons, i.e., the matrix elements between states of the Witten–Morse–Smale–Bott complex, which is the algebraic representation of the Morse–Smale complex. In fact, cohomological TFTs are often called intersection theory on instantons. From the STS viewpoint, instantons refer to quanta of transient dynamics, such as neuronal avalanches or solar flares, and complex or composite instantons represent nonlinear dynamical processes that occur in response to quenches -- external changes in parameters -- such as paper crumpling, protein folding, etc. The TFT aspect of STS in instantons remains largely unexplored.
=== Operator representation ===
Just like the partition function of the noise that it represents, the Witten index contains no information about the system's dynamics and cannot be used directly to investigate the dynamics in the system. The information on the dynamics is contained in the stochastic evolution operator (SEO) -- the Parisi-Sourlas path integral with open boundary conditions. Using the explicit form of the action
{\displaystyle (Q,\Psi (\Phi ))=\int _{t'}^{t}d\tau (iB{\dot {x}}+i{\dot {\chi }}{\bar {\chi }}-H)}
, where
{\displaystyle H=(Q,{\bar {d}})}
, the operator representation of the SEO can be derived as
{\displaystyle \iint _{{x\chi (t')=x_{i}\chi _{i}} \atop {x\chi (t)=x_{f}\chi _{f}}}e^{\int _{t'}^{t}d\tau (iB{\dot {x}}+i{\dot {\chi }}{\bar {\chi }}-H)}{\mathcal {D}}\Phi =\langle x_{f}\chi _{f}|e^{-(t-t'){\hat {H}}}|x_{i}\chi _{i}\rangle ,}
where the infinitesimal SEO
{\displaystyle {\hat {H}}=\left.H(xB\chi {\bar {\chi }})\right|_{B,{\bar {\chi }}\to {\hat {B}},{\hat {\bar {\chi }}}}}
, with
{\displaystyle i{\hat {B}}_{i}=\partial /\partial x^{i},i{\hat {\bar {\chi }}}_{i}=\partial /\partial \chi ^{i}}
. The explicit form of the SEO contains an ambiguity arising from the non-commutativity of momentum and position operators:
{\displaystyle Bx}
in the path integral representation admits an entire
{\displaystyle \alpha }
-family of interpretations in the operator representation:
{\displaystyle \alpha {\hat {B}}{\hat {x}}+(1-\alpha ){\hat {x}}{\hat {B}}.}
The same ambiguity arises in the theory of SDEs, where different choices of
{\displaystyle \alpha }
are referred to as different interpretations of SDEs with
{\displaystyle \alpha =1{\text{ and }}1/2}
being respectively the Itô and Stratonovich interpretations.
This ambiguity can be removed by additional conditions. In quantum theory, the condition is Hermiticity of Hamiltonian, which is satisfied by the Weyl symmetrization rule corresponding to
{\displaystyle \alpha =1/2}
. In STS, the condition is that the SEO equals the GTO, which is also achieved at
{\displaystyle \alpha =1/2}
. In other words, only the Stratonovich interpretation of SDEs is consistent with the dynamical systems theory approach. Other interpretations differ by the shifted flow vector field in the corresponding SEO,
{\displaystyle F_{\alpha }=F-\Theta (2\alpha -1)(G_{a}\cdot \partial )G_{a}}
.
=== Effective field theory ===
The fermions of STS represent the differentials of the wavefunctions understood as differential forms. These differentials, and hence the fermions, are intrinsically linked to the stochastic Lyapunov exponents that define the butterfly effect, so the effective field theory for these fermions -- referred to as goldstinos in the context of spontaneous TS breaking -- is a theory of the butterfly effect. Moreover, due to the gaplessness of the goldstinos, this theory is a conformal field theory, and some of its correlators are long-ranged.
This qualitatively explains the widespread occurrence of long-range behavior in chaotic dynamics known as 1/f noise. A more rigorous theoretical explanation of 1/f noise remains an open problem.
== Applications ==
=== Self-organized criticality and instantonic chaos ===
Since the late 1980s, the concept of the Edge of chaos has emerged -- a finite-width phase at the boundary of conventional chaos, where dynamics is often dominated by power-law-distributed instantonic processes such as solar flares, earthquakes, and neuronal avalanches.
This phase has also been recognized as potentially significant for information processing.
Its phenomenological understanding is largely based on the concepts of self-adaptation and self-organization.
STS offers the following explanation for the Edge of chaos (see figure on the right). In the presence of noise, TS can be spontaneously broken not only by the non-integrability of the flow vector field, as in deterministic chaos, but also by noise-induced instantons.
Under this condition, the dynamics must be dominated by instantons with power-law distributions, as dictated by the Goldstone theorem. In the deterministic limit, the noise-induced instantons vanish, causing the phase hosting this type of noise-induced dynamics to collapse onto the boundary of the deterministic chaos (see figure on top of the page).
== See also ==
Stochastic quantization
Supersymmetric quantum mechanics
Topological quantum field theory
Stochastic differential equation
Dynamical systems theory
Chaos theory
Self-Organized Criticality
== References == | Wikipedia/Supersymmetric_theory_of_stochastic_dynamics |
In formal language theory, an alphabet, sometimes called a vocabulary, is a non-empty set of indivisible symbols/characters/glyphs, typically thought of as representing letters, characters, digits, phonemes, or even words. The definition is used in a diverse range of fields including logic, mathematics, computer science, and linguistics. An alphabet may have any cardinality ("size") and, depending on its purpose, may be finite (e.g., the alphabet of letters "a" through "z"), countable (e.g.,
{\displaystyle \{v_{1},v_{2},\ldots \}}
), or even uncountable (e.g.,
{\displaystyle \{v_{x}:x\in \mathbb {R} \}}
).
Strings, also known as "words" or "sentences", over an alphabet are defined as sequences of symbols from the alphabet. For example, the alphabet of lowercase letters "a" through "z" can be used to form English words like "iceberg", while the alphabet of both upper- and lowercase letters can also be used to form proper names like "Wikipedia". A common alphabet is {0,1}, the binary alphabet; "00101111" is an example of a binary string. Infinite sequences of symbols may be considered as well (see Omega language).
It is often necessary for practical purposes to restrict the symbols in an alphabet so that they are unambiguous when interpreted. For instance, if the two-member alphabet is {00,0}, a string written on paper as "000" is ambiguous because it is unclear if it is a sequence of three "0" symbols, a "00" followed by a "0", or a "0" followed by a "00".
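The ambiguity can be made concrete by counting tokenizations. The short sketch below (function name is ours, for illustration only) enumerates the parses of "000" over the two-member alphabet {"00", "0"}:

```python
def count_parses(s, alphabet):
    """Count the distinct ways to split s into symbols drawn from alphabet."""
    if s == "":
        return 1  # one way: the empty sequence of symbols
    return sum(count_parses(s[len(a):], alphabet)
               for a in alphabet if s.startswith(a))

# "000" over {"00", "0"} is ambiguous: 0|0|0, 00|0, and 0|00.
print(count_parses("000", ["00", "0"]))  # → 3
# Over the single-character alphabet {"0"} there is exactly one parse.
print(count_parses("000", ["0"]))        # → 1
```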
== Notation ==
By definition, the alphabet of a formal language
{\displaystyle L}
over
{\displaystyle \Sigma }
is the set
{\displaystyle \Sigma }
, which can be any non-empty set of symbols from which every string in
{\displaystyle L}
is built. For example, the set
{\displaystyle \Sigma =\{\_,\mathrm {a} ,\dots ,\mathrm {z} ,\mathrm {A} ,\dots ,\mathrm {Z} ,0,\mathrm {1} ,\dots ,\mathrm {9} \}}
can be the alphabet of the formal language
{\displaystyle L}
that means "all variable identifiers in the C programming language". Notice that it is not required to use every symbol in the alphabet of
{\displaystyle L}
for its strings.
Given an alphabet
{\displaystyle \Sigma }
, the set of all strings of length
{\displaystyle n}
over the alphabet
{\displaystyle \Sigma }
is indicated by
{\displaystyle \Sigma ^{n}}
. The set
{\textstyle \bigcup _{i\in \mathbb {N} }\Sigma ^{i}}
of all finite strings (regardless of their length) is indicated by the Kleene star operator as
{\displaystyle \Sigma ^{*}}
, and is also called the Kleene closure of
{\displaystyle \Sigma }
. The notation
{\displaystyle \Sigma ^{\omega }}
indicates the set of all infinite sequences over the alphabet
{\displaystyle \Sigma }
, and
{\displaystyle \Sigma ^{\infty }}
indicates the set
{\displaystyle \Sigma ^{\ast }\cup \Sigma ^{\omega }}
of all finite or infinite sequences.
For example, using the binary alphabet {0,1}, the strings ε, 0, 1, 00, 01, 10, 11, 000, etc. are all in the Kleene closure of the alphabet (where ε represents the empty string).
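For small alphabets, the finite layers Σ^n, and hence any initial segment of the Kleene closure, can be enumerated directly. A minimal sketch in Python (function names are ours; `itertools.product` is the standard tool for this):

```python
from itertools import product

def sigma_n(alphabet, n):
    """All strings of length n over the alphabet: the set Sigma^n."""
    return ["".join(p) for p in product(alphabet, repeat=n)]

def kleene_prefix(alphabet, max_len):
    """Initial segment of Sigma^* -- all strings of length 0..max_len."""
    out = []
    for n in range(max_len + 1):
        out.extend(sigma_n(alphabet, n))
    return out

print(kleene_prefix("01", 2))  # → ['', '0', '1', '00', '01', '10', '11']
```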
== Applications ==
Alphabets are important in the use of formal languages, automata and semiautomata. In most cases, for defining instances of automata, such as deterministic finite automata (DFAs), it is required to specify an alphabet from which the input strings for the automaton are built. In these applications, an alphabet is usually required to be a finite set, but is not otherwise restricted.
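As an illustration of an automaton over a finite alphabet, the hypothetical DFA below (states and names are ours, not from the source) accepts exactly the binary strings containing an even number of 1s:

```python
def accepts_even_ones(s):
    """DFA over the alphabet {0,1}: states 'even' (accepting) and 'odd'."""
    delta = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"  # start state
    for ch in s:
        state = delta[(state, ch)]  # one transition per input symbol
    return state == "even"

print(accepts_even_ones("1010"))  # → True  (two 1s)
print(accepts_even_ones("0010"))  # → False (one 1)
```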
When using automata, regular expressions, or formal grammars as part of string-processing algorithms, the alphabet may be assumed to be the character set of the text to be processed by these algorithms, or a subset of allowable characters from the character set.
== See also ==
Combinatorics on words
Terminal and nonterminal symbols
== References ==
== Literature ==
John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley Publishing, Reading Massachusetts, 1979. ISBN 0-201-02988-X. | Wikipedia/Alphabet_(computer_science) |
Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals.
Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.
== The problem ==
A first-order differential equation is an initial value problem (IVP) of the form,
where
{\displaystyle f}
is a function
{\displaystyle f:[t_{0},\infty )\times \mathbb {R} ^{d}\to \mathbb {R} ^{d}}
, and the initial condition
{\displaystyle y_{0}\in \mathbb {R} ^{d}}
is a given vector. First-order means that only the first derivative of y appears in the equation, and higher derivatives are absent.
Without loss of generality to higher-order systems, we restrict ourselves to first-order differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation y′′ = −y can be rewritten as two first-order equations: y′ = z and z′ = −y.
In this section, we describe numerical methods for IVPs, and remark that boundary value problems (BVPs) require a different set of tools. In a BVP, one defines values, or components of the solution y at more than one point. Because of this, different methods need to be used to solve BVPs. For example, the shooting method (and its variants) or global methods like finite differences, Galerkin methods, or collocation methods are appropriate for that class of problems.
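The shooting method mentioned above can be sketched in a few lines (our own minimal example, not from the source): solve the BVP y″ = −y, y(0) = 0, y(π/2) = 1 by guessing the unknown slope y′(0) = s, integrating the equivalent first-order system, and bisecting on s. The exact answer is s = 1, since y = sin t.

```python
import math

def integrate(s, t_end=math.pi / 2, h=1e-4):
    """Integrate y' = z, z' = -y from t=0 with y(0)=0, z(0)=s (explicit Euler)."""
    y, z = 0.0, s
    for _ in range(int(round(t_end / h))):
        y, z = y + h * z, z - h * y  # simultaneous update
    return y  # value of y at t_end

def shoot(target=1.0, lo=0.0, hi=2.0):
    """Bisect on the initial slope s until y(pi/2) hits the target."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(shoot(), 3))  # → 1.0
```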
The Picard–Lindelöf theorem states that there is a unique solution, provided f is Lipschitz-continuous.
== Methods ==
Numerical methods for solving first-order IVPs often fall into one of two large categories: linear multistep methods, or Runge–Kutta methods. A further division can be realized by dividing methods into those that are explicit and those that are implicit. For example, implicit linear multistep methods include Adams-Moulton methods, and backward differentiation methods (BDF), whereas implicit Runge–Kutta methods include diagonally implicit Runge–Kutta (DIRK), singly diagonally implicit Runge–Kutta (SDIRK), and Gauss–Radau (based on Gaussian quadrature) numerical methods. Explicit examples from the linear multistep family include the Adams–Bashforth methods, and any Runge–Kutta method with a lower diagonal Butcher tableau is explicit. A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes.
The so-called general linear methods (GLMs) are a generalization of the above two large classes of methods.
=== Euler method ===
From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line tangent to the curve.
Starting with the differential equation (1), we replace the derivative y′ by the finite difference approximation
which when re-arranged yields the following formula
{\displaystyle y(t+h)\approx y(t)+hy'(t)}
and using (1) gives:
This formula is usually applied in the following way. We choose a step size h, and we construct the sequence
{\displaystyle t_{0},t_{1}=t_{0}+h,t_{2}=t_{0}+2h,...}
We denote by
{\displaystyle y_{n}}
a numerical estimate of the exact solution
{\displaystyle y(t_{n})}
. Motivated by (3), we compute these estimates by the following recursive scheme
This is the Euler method (or forward Euler method, in contrast with the backward Euler method, to be described below). The method is named after Leonhard Euler who described it in 1768.
The Euler method is an example of an explicit method. This means that the new value yn+1 is defined in terms of things that are already known, like yn.
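A minimal Python version of the forward Euler scheme (4), tested on y′ = −y, y(0) = 1, whose exact solution is e^{−t} (function names are ours):

```python
import math

def euler(f, t0, y0, h, n):
    """Forward Euler: y_{n+1} = y_n + h*f(t_n, y_n), taking n steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = -y, y(0) = 1; exact solution y(1) = exp(-1) ≈ 0.3679
approx = euler(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)
print(approx, math.exp(-1))  # the two agree to about three decimal places
```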
=== Backward Euler method ===
If, instead of (2), we use the approximation
we get the backward Euler method:
The backward Euler method is an implicit method, meaning that we have to solve an equation to find yn+1. One often uses fixed-point iteration or (some modification of) the Newton–Raphson method to achieve this.
It costs more time to solve this equation than explicit methods; this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as (6) is that they are usually more stable for solving a stiff equation, meaning that a larger step size h can be used.
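A sketch of the backward Euler scheme (6), resolving the implicit equation y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}) by the fixed-point iteration described above (our names; for this linear test equation the iteration converges quickly):

```python
import math

def backward_euler(f, t0, y0, h, n, sweeps=50):
    """Backward Euler; the implicit equation is solved by fixed-point iteration."""
    t, y = t0, y0
    for _ in range(n):
        t_next = t + h
        y_next = y  # initial guess: the previous value
        for _ in range(sweeps):
            y_next = y + h * f(t_next, y_next)  # fixed-point sweep
        t, y = t_next, y_next
    return y

# Same test problem: y' = -y, y(0) = 1, exact y(1) = exp(-1)
approx = backward_euler(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)
print(approx, math.exp(-1))
```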
=== First-order exponential integrator method ===
Exponential integrators describe a large class of integrators that have recently seen a lot of development. They date back to at least the 1960s.
In place of (1), we assume the differential equation is either of the form
or it has been locally linearized about a background state to produce a linear term
{\displaystyle -Ay}
and a nonlinear term
{\displaystyle {\mathcal {N}}(y)}
.
Exponential integrators are constructed by multiplying (7) by
{\textstyle e^{At}}
, and exactly integrating the result over
a time interval
{\displaystyle [t_{n},t_{n+1}=t_{n}+h]}
:
{\displaystyle y_{n+1}=e^{-Ah}y_{n}+\int _{0}^{h}e^{-(h-\tau )A}{\mathcal {N}}\left(y\left(t_{n}+\tau \right)\right)\,d\tau .}
This integral equation is exact, but the integral cannot, in general, be evaluated analytically.
The first-order exponential integrator can be realized by holding
{\displaystyle {\mathcal {N}}(y(t_{n}+\tau ))}
constant over the full interval:
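With the nonlinearity held constant, the integral evaluates in closed form; for a scalar linear part A this gives the standard first-order scheme y_{n+1} = e^{−Ah} y_n + (1 − e^{−Ah}) 𝒩(y_n)/A. A minimal Python sketch (the test problem and names are ours):

```python
import math

def etd1(A, N, y0, h, n):
    """First-order exponential integrator for y' = -A*y + N(y), scalar A."""
    phi = (1.0 - math.exp(-A * h)) / A  # closed form of the integral weight
    y = y0
    for _ in range(n):
        y = math.exp(-A * h) * y + phi * N(y)
    return y

# y' = -y + 1, y(0) = 0; exact solution y(t) = 1 - exp(-t).
# Because N is constant here, the scheme reproduces it to rounding error.
approx = etd1(1.0, lambda y: 1.0, 0.0, 0.1, 10)
print(approx, 1.0 - math.exp(-1.0))
```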
=== Generalizations ===
The Euler method is often not accurate enough. In more precise terms, it only has order one (the concept of order is explained below). This caused mathematicians to look for higher-order methods.
One possibility is to use not only the previously computed value yn to determine yn+1, but to make the solution depend on more past values. This yields a so-called multistep method. Perhaps the simplest is the leapfrog method which is second order and (roughly speaking) relies on two time values.
Almost all practical multistep methods fall within the family of linear multistep methods, which have the form
{\displaystyle {\begin{aligned}&{}\alpha _{k}y_{n+k}+\alpha _{k-1}y_{n+k-1}+\cdots +\alpha _{0}y_{n}\\&{}\quad =h\left[\beta _{k}f(t_{n+k},y_{n+k})+\beta _{k-1}f(t_{n+k-1},y_{n+k-1})+\cdots +\beta _{0}f(t_{n},y_{n})\right].\end{aligned}}}
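As a concrete member of this family, a sketch of the explicit two-step Adams–Bashforth method, y_{n+2} = y_{n+1} + h(3/2 f_{n+1} − 1/2 f_n); the second starting value, which a multistep method needs, is produced here by one forward Euler step (names and test problem are ours):

```python
import math

def adams_bashforth2(f, t0, y0, h, n):
    """Two-step Adams–Bashforth; starts with one forward Euler step."""
    t_prev, y_prev = t0, y0
    t_cur, y_cur = t0 + h, y0 + h * f(t0, y0)  # Euler starting step
    for _ in range(n - 1):
        y_next = y_cur + h * (1.5 * f(t_cur, y_cur) - 0.5 * f(t_prev, y_prev))
        t_prev, y_prev = t_cur, y_cur
        t_cur, y_cur = t_cur + h, y_next
    return y_cur

# y' = -y, y(0) = 1, exact y(1) = exp(-1); being second order, AB2 is far
# more accurate than forward Euler at the same step size.
print(adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.001, 1000))
```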
Another possibility is to use more points in the interval
{\displaystyle [t_{n},t_{n+1}]}
. This leads to the family of Runge–Kutta methods, named after Carl Runge and Martin Kutta. One of their fourth-order methods is especially popular.
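That especially popular fourth-order method, the classical RK4 scheme, can be sketched as follows (names and test problem are ours):

```python
import math

def rk4(f, t0, y0, h, n):
    """Classical fourth-order Runge–Kutta method."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return y

# y' = -y, y(0) = 1: even with a coarse step h = 0.1, the value at t = 1
# matches exp(-1) to about six decimal places.
print(rk4(lambda t, y: -y, 0.0, 1.0, 0.1, 10), math.exp(-1))
```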
=== Advanced features ===
A good implementation of one of these methods for solving an ODE entails more than the time-stepping formula.
It is often inefficient to use the same step size all the time, so variable step-size methods have been developed. Usually, the step size is chosen such that the (local) error per step is below some tolerance level. This means that the methods must also compute an error indicator, an estimate of the local error.
An extension of this idea is to choose dynamically between different methods of different orders (this is called a variable order method). Methods based on Richardson extrapolation, such as the Bulirsch–Stoer algorithm, are often used to construct various methods of different orders.
Other desirable features include:
dense output: cheap numerical approximations for the whole integration interval, and not only at the points t0, t1, t2, ...
event location: finding the times where, say, a particular function vanishes. This typically requires the use of a root-finding algorithm.
support for parallel computing.
when used for integrating with respect to time, time reversibility
=== Alternative methods ===
Many methods do not fall within the framework discussed here. Some classes of alternative methods are:
multiderivative methods, which use not only the function f but also its derivatives. This class includes Hermite–Obreschkoff methods and Fehlberg methods, as well as methods like the Parker–Sochacki method or Bychkov–Scherbakov method, which compute the coefficients of the Taylor series of the solution y recursively.
methods for second order ODEs. We said that all higher-order ODEs can be transformed to first-order ODEs of the form (1). While this is certainly true, it may not be the best way to proceed. In particular, Nyström methods work directly with second-order equations.
geometric integration methods are especially designed for special classes of ODEs (for example, symplectic integrators for the solution of Hamiltonian equations). They take care that the numerical solution respects the underlying structure or geometry of these classes.
Quantized state systems methods are a family of ODE integration methods based on the idea of state quantization. They are efficient when simulating sparse systems with frequent discontinuities.
=== Parallel-in-time methods ===
Some IVPs require integration at such high temporal resolution and/or over such long time intervals that classical serial time-stepping methods become computationally infeasible to run in real-time (e.g. IVPs in numerical weather prediction, plasma modelling, and molecular dynamics). Parallel-in-time (PinT) methods have been developed in response to these issues in order to reduce simulation runtimes through the use of parallel computing.
Early PinT methods (the earliest being proposed in the 1960s) were initially overlooked by researchers due to the fact that the parallel computing architectures that they required were not yet widely available. With more computing power available, interest was renewed in the early 2000s with the development of Parareal, a flexible, easy-to-use PinT algorithm that is suitable for solving a wide variety of IVPs. The advent of exascale computing has meant that PinT algorithms are attracting increasing research attention and are being developed in such a way that they can harness the world's most powerful supercomputers. The most popular methods as of 2023 include Parareal, PFASST, ParaDiag, and MGRIT.
== Analysis ==
Numerical analysis is not only the design of numerical methods, but also their analysis. Three central concepts in this analysis are:
convergence: whether the method approximates the solution,
order: how well it approximates the solution, and
stability: whether errors are damped out.
=== Convergence ===
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE (1) with a Lipschitz function f and every t* > 0,
{\displaystyle \lim _{h\to 0^{+}}\max _{n=0,1,\dots ,\lfloor t^{*}/h\rfloor }\left\|y_{n,h}-y(t_{n})\right\|=0.}
All the methods mentioned above are convergent.
=== Consistency and order ===
Suppose the numerical method is
{\displaystyle y_{n+k}=\Psi (t_{n+k};y_{n},y_{n+1},\dots ,y_{n+k-1};h).\,}
The local (truncation) error of the method is the error committed by one step of the method. That is, it is the difference between the result given by the method, assuming that no error was made in earlier steps, and the exact solution:
{\displaystyle \delta _{n+k}^{h}=\Psi \left(t_{n+k};y(t_{n}),y(t_{n+1}),\dots ,y(t_{n+k-1});h\right)-y(t_{n+k}).}
The method is said to be consistent if
{\displaystyle \lim _{h\to 0}{\frac {\delta _{n+k}^{h}}{h}}=0.}
The method has order
{\displaystyle p}
if
{\displaystyle \delta _{n+k}^{h}=O(h^{p+1})\quad {\mbox{as }}h\to 0.}
Hence a method is consistent if it has an order greater than 0. The (forward) Euler method (4) and the backward Euler method (6) introduced above both have order 1, so they are consistent. Most methods being used in practice attain higher order. Consistency is a necessary condition for convergence, but not sufficient; for a method to be convergent, it must be both consistent and zero-stable.
A related concept is the global (truncation) error, the error sustained in all the steps one needs to reach a fixed time
{\displaystyle t}
. Explicitly, the global error at time
t
{\displaystyle t}
is
y
N
−
y
(
t
)
{\displaystyle y_{N}-y(t)}
where
N
=
(
t
−
t
0
)
/
h
{\displaystyle N=(t-t_{0})/h}
. The global error of a
p
{\displaystyle p}
th order one-step method is
O
(
h
p
)
{\displaystyle O(h^{p})}
; in particular, such a method is convergent. This statement is not necessarily true for multi-step methods.
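This O(h^p) scaling can be measured empirically: for an order-1 method, halving h should roughly halve the global error. A small sketch (hypothetical code, again using the test problem y′ = −y, y(0) = 1):

```python
import math

def euler_final(h, t=1.0):
    """Forward Euler for y' = -y, y(0) = 1; returns y_N at time t."""
    y = 1.0
    for _ in range(round(t / h)):
        y -= h * y
    return y

exact = math.exp(-1.0)
e1 = abs(euler_final(0.01) - exact)   # global error with step h
e2 = abs(euler_final(0.005) - exact)  # global error with step h/2
observed_order = math.log2(e1 / e2)   # estimates p in error = O(h^p)
print(round(observed_order, 2))       # close to 1 for Euler's method
```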
=== Stability and stiffness ===
For some differential equations, application of standard methods—such as the Euler method, explicit Runge–Kutta methods, or multistep methods (for example, Adams–Bashforth methods)—exhibits instability in the solutions, though other methods may produce stable solutions. This "difficult behaviour" in the equation (which may not necessarily be complex itself) is described as stiffness, and is often caused by the presence of different time scales in the underlying problem. For example, a collision in a mechanical system, as in an impact oscillator, typically occurs on a much smaller time scale than the motion of the objects; this discrepancy makes for very "sharp turns" in the curves of the state parameters.
Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather forecasting, biology, plasma physics, and electronics. One way to overcome stiffness is to extend the notion of differential equation to that of differential inclusion, which allows for and models non-smoothness.
== History ==
Below is a timeline of some important developments in this field.
1768 - Leonhard Euler publishes his method.
1824 - Augustin Louis Cauchy proves convergence of the Euler method. In this proof, Cauchy uses the implicit Euler method.
1855 - First mention of the multistep methods of John Couch Adams in a letter written by Francis Bashforth.
1895 - Carl Runge publishes the first Runge–Kutta method.
1901 - Martin Kutta describes the popular fourth-order Runge–Kutta method.
1910 - Lewis Fry Richardson announces his extrapolation method, Richardson extrapolation.
1952 - Charles F. Curtiss and Joseph Oakland Hirschfelder coin the term stiff equations.
1963 - Germund Dahlquist introduces A-stability of integration methods.
== Numerical solutions to second-order one-dimensional boundary value problems ==
Boundary value problems (BVPs) are usually solved numerically by solving an approximately equivalent matrix problem obtained by discretizing the original BVP. The most commonly used method for numerically solving BVPs in one dimension is called the Finite Difference Method. This method takes advantage of linear combinations of point values to construct finite difference coefficients that describe derivatives of the function. For example, the second-order central difference approximation to the first derivative is given by:
{\displaystyle {\frac {u_{i+1}-u_{i-1}}{2h}}=u'(x_{i})+{\mathcal {O}}(h^{2}),}
and the second-order central difference for the second derivative is given by:
{\displaystyle {\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}=u''(x_{i})+{\mathcal {O}}(h^{2}).}
In both of these formulae, {\displaystyle h=x_{i}-x_{i-1}} is the distance between neighbouring x values on the discretized domain. One then constructs a linear system that can be solved by standard matrix methods. For example, suppose the equation to be solved is:
{\displaystyle {\begin{aligned}&{}{\frac {d^{2}u}{dx^{2}}}-u=0,\\&{}u(0)=0,\\&{}u(1)=1.\end{aligned}}}
The next step would be to discretize the problem and use linear derivative approximations such as
{\displaystyle u''_{i}={\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}}
and solve the resulting system of linear equations. This would lead to equations such as:
{\displaystyle {\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}-u_{i}=0,\quad \forall i={1,2,3,...,n-1}.}
At first glance, this system of equations appears problematic in that it seems to contain no inhomogeneous terms (terms not multiplied by an unknown), but in fact this is not the case. At i = 1 and i = n − 1 there is a term involving the boundary values {\displaystyle u(0)=u_{0}} and {\displaystyle u(1)=u_{n}}, and since these two values are known, one can simply substitute them into the equations and as a result obtain a non-homogeneous system of linear equations that has non-trivial solutions.
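The whole procedure for this example can be sketched in a few lines (hypothetical code, not from the article): build the tridiagonal system for the interior unknowns, move the known boundary values to the right-hand side, and solve with the Thomas algorithm. The exact solution u(x) = sinh(x)/sinh(1) serves as a check.

```python
import math

n = 100                      # number of subintervals
h = 1.0 / n
u0, un = 0.0, 1.0            # boundary values u(0) and u(1)

# Interior unknowns u_1..u_{n-1}; each equation is
#   (u_{i+1} - 2 u_i + u_{i-1}) / h^2 - u_i = 0,
# i.e. a tridiagonal system a*u_{i-1} + b*u_i + c*u_{i+1} = d_i.
a = c = 1.0 / h**2
b = -2.0 / h**2 - 1.0
d = [0.0] * (n - 1)
d[0] -= a * u0               # move known boundary values to the right side
d[-1] -= c * un

# Thomas algorithm (forward sweep, then back substitution).
cp = [0.0] * (n - 1)
dp = [0.0] * (n - 1)
cp[0], dp[0] = c / b, d[0] / b
for i in range(1, n - 1):
    m = b - a * cp[i - 1]
    cp[i] = c / m
    dp[i] = (d[i] - a * dp[i - 1]) / m
u = [0.0] * (n - 1)
u[-1] = dp[-1]
for i in range(n - 3, -1, -1):
    u[i] = dp[i] - cp[i] * u[i + 1]

# Compare with the exact solution u(x) = sinh(x)/sinh(1).
err = max(abs(u[i] - math.sinh((i + 1) * h) / math.sinh(1.0))
          for i in range(n - 1))
print(err)  # small, reflecting the second-order accuracy of the scheme
```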
== See also ==
Courant–Friedrichs–Lewy condition
Energy drift
General linear methods
List of numerical analysis topics#Numerical methods for ordinary differential equations
Reversible reference system propagation algorithm
Modelica Language and OpenModelica software
== Notes ==
== References ==
Bradie, Brian (2006). A Friendly Introduction to Numerical Analysis. Upper Saddle River, New Jersey: Pearson Prentice Hall. ISBN 978-0-13-013054-9.
J. C. Butcher, Numerical methods for ordinary differential equations, ISBN 0-471-96758-0
Hairer, E.; Nørsett, S. P.; Wanner, G. (1993). Solving Ordinary Differential Equations. I. Nonstiff Problems. Springer Series in Computational Mathematics. Vol. 8 (2nd ed.). Springer-Verlag, Berlin. ISBN 3-540-56670-8. MR 1227985.
Ernst Hairer and Gerhard Wanner, Solving ordinary differential equations II: Stiff and differential-algebraic problems, second edition, Springer Verlag, Berlin, 1996. ISBN 3-540-60452-9. (This two-volume monograph systematically covers all aspects of the field.)
Hochbruck, Marlis; Ostermann, Alexander (May 2010). "Exponential integrators". Acta Numerica. 19: 209–286. Bibcode:2010AcNum..19..209H. CiteSeerX 10.1.1.187.6794. doi:10.1017/S0962492910000048. S2CID 4841957.
Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, 1996. ISBN 0-521-55376-8 (hardback), ISBN 0-521-55655-4 (paperback). (Textbook, targeting advanced undergraduate and postgraduate students in mathematics, which also discusses numerical partial differential equations.)
John Denholm Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, Chichester, 1991. ISBN 0-471-92990-5. (Textbook, slightly more demanding than the book by Iserles.)
== External links ==
Joseph W. Rudmin, Application of the Parker–Sochacki Method to Celestial Mechanics Archived 2016-05-16 at the Portuguese Web Archive, 1998.
Dominique Tournès, L'intégration approchée des équations différentielles ordinaires (1671–1914), thèse de doctorat de l'université Paris 7 - Denis Diderot, juin 1996. Réimp. Villeneuve d'Ascq : Presses universitaires du Septentrion, 1997, 468 p. (Extensive online material on ODE numerical analysis history, for English-language material on the history of ODE numerical analysis, see, for example, the paper books by Chabert and Goldstine quoted by him.)
Pchelintsev, A.N. (2020). "An accurate numerical method and algorithm for constructing solutions of chaotic systems". Journal of Applied Nonlinear Dynamics. 9 (2): 207–221. arXiv:2011.10664. doi:10.5890/JAND.2020.06.004. S2CID 225853788.
kv on GitHub (C++ library with rigorous ODE solvers)
INTLAB (A library for MATLAB/GNU Octave which includes rigorous ODE solvers)
The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events. This is done especially in the context of Markov information sources and hidden Markov models (HMM).
The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, speech synthesis, diarization, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
== History ==
The Viterbi algorithm is named after Andrew Viterbi, who proposed it in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links. It has, however, a history of multiple invention, with at least seven independent discoveries, including those by Viterbi, Needleman and Wunsch, and Wagner and Fischer. It was introduced to natural language processing as a method of part-of-speech tagging as early as 1987.
Viterbi path and Viterbi algorithm have become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.
For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse". Another application is in target tracking, where the track is computed that assigns a maximum likelihood to a sequence of observations.
== Algorithm ==
Given a hidden Markov model with a set of hidden states {\displaystyle S} and a sequence of {\displaystyle T} observations {\displaystyle o_{0},o_{1},\dots ,o_{T-1}}, the Viterbi algorithm finds the most likely sequence of states that could have produced those observations. At each time step {\displaystyle t}, the algorithm solves the subproblem where only the observations up to {\displaystyle o_{t}} are considered.
Two matrices of size {\displaystyle T\times \left|{S}\right|} are constructed:
{\displaystyle P_{t,s}} contains the maximum probability of ending up at state {\displaystyle s} at observation {\displaystyle t}, out of all possible sequences of states leading up to it.
{\displaystyle Q_{t,s}} tracks the previous state that was used before {\displaystyle s} in this maximum probability state sequence.
Let {\displaystyle \pi _{s}} and {\displaystyle a_{r,s}} be the initial and transition probabilities respectively, and let {\displaystyle b_{s,o}} be the probability of observing {\displaystyle o} at state {\displaystyle s}. Then the values of {\displaystyle P} are given by the recurrence relation
{\displaystyle P_{t,s}={\begin{cases}\pi _{s}\cdot b_{s,o_{t}}&{\text{if }}t=0,\\\max _{r\in S}\left(P_{t-1,r}\cdot a_{r,s}\cdot b_{s,o_{t}}\right)&{\text{if }}t>0.\end{cases}}}
The formula for {\displaystyle Q_{t,s}} is identical for {\displaystyle t>0}, except that {\displaystyle \max } is replaced with {\displaystyle \arg \max }, and {\displaystyle Q_{0,s}=0}.
The Viterbi path can be found by selecting the maximum of {\displaystyle P} at the final timestep, and following {\displaystyle Q} in reverse.
== Pseudocode ==
function Viterbi(states, init, trans, emit, obs) is
    input states: S hidden states
    input init: initial probabilities of each state
    input trans: S × S transition matrix
    input emit: S × O emission matrix
    input obs: sequence of T observations

    prob ← T × S matrix of zeroes
    prev ← empty T × S matrix
    for each state s in states do
        prob[0][s] = init[s] * emit[s][obs[0]]
    for t = 1 to T - 1 inclusive do  // t = 0 has been dealt with already
        for each state s in states do
            for each state r in states do
                new_prob ← prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s] then
                    prob[t][s] ← new_prob
                    prev[t][s] ← r
    path ← empty array of length T
    path[T - 1] ← the state s with maximum prob[T - 1][s]
    for t = T - 2 to 0 inclusive do
        path[t] ← prev[t + 1][path[t + 1]]
    return path
end
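The pseudocode translates nearly line-for-line into Python. The sketch below is an illustration, not part of the article; it stores prob and prev as lists of dictionaries keyed by state, and checks the result on a trivial chain that starts in state "A" and can never leave it.

```python
def viterbi(states, init, trans, emit, obs):
    T = len(obs)
    prob = [{s: 0.0 for s in states} for _ in range(T)]
    prev = [{} for _ in range(T)]
    for s in states:
        prob[0][s] = init[s] * emit[s][obs[0]]
    for t in range(1, T):
        for s in states:
            for r in states:
                new_prob = prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s]:
                    prob[t][s] = new_prob
                    prev[t][s] = r
    # Backtrack from the most probable final state.
    path = [None] * T
    path[T - 1] = max(states, key=lambda s: prob[T - 1][s])
    for t in range(T - 2, -1, -1):
        path[t] = prev[t + 1][path[t + 1]]
    return path

# A trivial check: a chain that starts in "A" and never leaves it.
states = ["A", "B"]
init = {"A": 1.0, "B": 0.0}
trans = {"A": {"A": 1.0, "B": 0.0}, "B": {"A": 0.0, "B": 1.0}}
emit = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.5, "y": 0.5}}
print(viterbi(states, init, trans, emit, ["x", "y", "x"]))  # ['A', 'A', 'A']
```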
The time complexity of the algorithm is {\displaystyle O(T\times \left|{S}\right|^{2})}. If it is known which state transitions have non-zero probability, an improved bound can be found by iterating over only those {\displaystyle r} which link to {\displaystyle s} in the inner loop. Then using amortized analysis one can show that the complexity is {\displaystyle O(T\times (\left|{S}\right|+\left|{E}\right|))}, where {\displaystyle E} is the number of edges in the graph, i.e. the number of non-zero entries in the transition matrix.
== Example ==
A doctor wishes to determine whether patients are healthy or have a fever. The only information the doctor can obtain is by asking patients how they feel. The patients may report that they either feel normal, dizzy, or cold.
It is believed that the health condition of the patients operates as a discrete Markov chain. There are two states, "healthy" and "fever", but the doctor cannot observe them directly; they are hidden from the doctor. On each day, the chance that a patient tells the doctor "I feel normal", "I feel cold", or "I feel dizzy", depends only on the patient's health condition on that day.
The observations (normal, cold, dizzy) along with the hidden states (healthy, fever) form a hidden Markov model (HMM). From past experience, the probabilities of this model have been estimated as:
init = {"Healthy": 0.6, "Fever": 0.4}
trans = {
"Healthy": {"Healthy": 0.7, "Fever": 0.3},
"Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit = {
"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
"Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
In this code, init represents the doctor's belief about how likely the patient is to be healthy initially. Note that the particular probability distribution used here is not the equilibrium one, which would be {'Healthy': 0.57, 'Fever': 0.43} according to the transition probabilities. The transition probabilities trans represent the change of health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emission probabilities emit represent how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy.
A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day.
Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is {\displaystyle 0.6\times 0.5=0.3}. Similarly, the probability that a patient will have a fever on the first day and report feeling normal is {\displaystyle 0.4\times 0.1=0.04}.
The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting to be cold, following reporting being normal on the first day, is the maximum of {\displaystyle 0.3\times 0.7\times 0.4=0.084} and {\displaystyle 0.04\times 0.4\times 0.4=0.0064}. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering.
The rest of the probabilities are summarised in the following table (each entry is the highest probability of any state sequence ending in that state on that day):

Day (observation)    Healthy    Fever
1 (normal)           0.30000    0.04000
2 (cold)             0.08400    0.02700
3 (dizzy)            0.00588    0.01512
From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending on "fever", of which the probability of producing the given observations is 0.01512. This sequence is precisely (healthy, healthy, fever), which can be found by tracing back which states were used when calculating the maxima (which happens to be the best guess from each day but will not always be). In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and only to have contracted a fever on the third day.
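These numbers can be reproduced by running the recurrence from the Algorithm section directly on this model; the sketch below is hypothetical code, not part of the article.

```python
obs = ["normal", "cold", "dizzy"]
states = ["Healthy", "Fever"]
init = {"Healthy": 0.6, "Fever": 0.4}
trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
         "Fever": {"Healthy": 0.4, "Fever": 0.6}}
emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
        "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

# P[t][s]: maximum probability of any state sequence ending in s at time t.
# Q[t][s]: the predecessor state achieving that maximum.
P = [{s: init[s] * emit[s][obs[0]] for s in states}]
Q = [{}]
for t in range(1, len(obs)):
    P.append({})
    Q.append({})
    for s in states:
        r_best = max(states, key=lambda r: P[t - 1][r] * trans[r][s])
        P[t][s] = P[t - 1][r_best] * trans[r_best][s] * emit[s][obs[t]]
        Q[t][s] = r_best

# Backtrack from the most probable final state.
path = [max(states, key=lambda s: P[-1][s])]
for t in range(len(obs) - 1, 0, -1):
    path.insert(0, Q[t][path[0]])
print(P[-1]["Fever"], path)  # ≈ 0.01512, ['Healthy', 'Healthy', 'Fever']
```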
The operation of Viterbi's algorithm can be visualized by means of a trellis diagram. The Viterbi path is essentially the shortest path through this trellis.
== Extensions ==
A generalization of the Viterbi algorithm, termed the max-sum algorithm (or max-product algorithm) can be used to find the most likely assignment of all or some subset of latent variables in a large number of graphical models, e.g. Bayesian networks, Markov random fields and conditional random fields. The latent variables need, in general, to be connected in a way somewhat similar to a hidden Markov model (HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves message passing and is substantially similar to the belief propagation algorithm (which is the generalization of the forward-backward algorithm).
With an algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that matches best (on average) to a given hidden Markov model. This algorithm is proposed by Qi Wang et al. to deal with turbo code. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence.
An alternative algorithm, the Lazy Viterbi algorithm, has been proposed. For many applications of practical interest, under reasonable noise conditions, the lazy decoder (using Lazy Viterbi algorithm) is much faster than the original Viterbi decoder (using Viterbi algorithm). While the original Viterbi algorithm calculates every node in the trellis of possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than the ordinary Viterbi algorithm for the same result. However, it is not so easy to parallelize in hardware.
== Soft output Viterbi algorithm ==
The soft output Viterbi algorithm (SOVA) is a variant of the classical Viterbi algorithm.
SOVA differs from the classical Viterbi algorithm in that it uses a modified path metric which takes into account the a priori probabilities of the input symbols, and produces a soft output indicating the reliability of the decision.
The first step in the SOVA is the selection of the survivor path, passing through one unique node at each time instant, t. Since each node has 2 branches converging at it (with one branch being chosen to form the Survivor Path, and the other being discarded), the difference in the branch metrics (or cost) between the chosen and discarded branches indicate the amount of error in the choice.
This cost is accumulated over the entire sliding window (usually equals at least five constraint lengths), to indicate the soft output measure of reliability of the hard bit decision of the Viterbi algorithm.
== See also ==
Expectation–maximization algorithm
Baum–Welch algorithm
Forward-backward algorithm
Forward algorithm
Error-correcting code
Viterbi decoder
Hidden Markov model
Part-of-speech tagging
A* search algorithm
== References ==
== General references ==
Viterbi AJ (April 1967). "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm". IEEE Transactions on Information Theory. 13 (2): 260–269. doi:10.1109/TIT.1967.1054010. (note: the Viterbi decoding algorithm is described in section IV.) Subscription required.
Feldman J, Abou-Faycal I, Frigo M (2002). "A fast maximum-likelihood decoder for convolutional codes". Proceedings IEEE 56th Vehicular Technology Conference. Vol. 1. pp. 371–375. CiteSeerX 10.1.1.114.1314. doi:10.1109/VETECF.2002.1040367. ISBN 978-0-7803-7467-6. S2CID 9783963.
Forney GD (March 1973). "The Viterbi algorithm". Proceedings of the IEEE. 61 (3): 268–278. doi:10.1109/PROC.1973.9030. Subscription required.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.2. Viterbi Decoding". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17.
Rabiner LR (February 1989). "A tutorial on hidden Markov models and selected applications in speech recognition". Proceedings of the IEEE. 77 (2): 257–286. CiteSeerX 10.1.1.381.3454. doi:10.1109/5.18626. S2CID 13618539. (Describes the forward algorithm and Viterbi algorithm for HMMs).
Shinghal, R. and Godfried T. Toussaint, "Experiments in text recognition with the modified Viterbi algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-l, April 1979, pp. 184–193.
Shinghal, R. and Godfried T. Toussaint, "The sensitivity of the modified Viterbi algorithm to the source statistics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-2, March 1980, pp. 181–185.
== External links ==
Implementations in Java, F#, Clojure, C# on Wikibooks
Tutorial on convolutional coding with viterbi decoding, by Chip Fleming
A tutorial for a Hidden Markov Model toolkit (implemented in C) that contains a description of the Viterbi algorithm
Viterbi algorithm by Dr. Andrew J. Viterbi (scholarpedia.org).
=== Implementations ===
Mathematica has an implementation as part of its support for stochastic processes
Susa signal processing framework provides the C++ implementation for Forward error correction codes and channel equalization here.
C++
C#
Java Archived 2014-05-04 at the Wayback Machine
Java 8
Julia (HMMBase.jl)
Perl
Prolog Archived 2012-05-02 at the Wayback Machine
Haskell
Go
SFIHMM includes code for Viterbi decoding.
In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable {\displaystyle X}, which may be any member {\displaystyle x} within the set {\displaystyle {\mathcal {X}}} and is distributed according to {\displaystyle p\colon {\mathcal {X}}\to [0,1]}, the entropy is
{\displaystyle \mathrm {H} (X):=-\sum _{x\in {\mathcal {X}}}p(x)\log p(x),}
where {\displaystyle \Sigma } denotes the sum over the variable's possible values. The choice of base for {\displaystyle \log }, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable.
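The choice of base only rescales the result, since log_b x = log_2 x / log_2 b. A quick sketch (hypothetical code, not from the article):

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution, in the given base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
bits = entropy(fair_coin, 2)        # 1 bit (shannon)
nats = entropy(fair_coin, math.e)   # ln 2 ≈ 0.693 nats
bans = entropy(fair_coin, 10)       # log10 2 ≈ 0.301 hartleys (bans)
print(bits, nats, bans)
```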
The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.
Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition
{\displaystyle \mathbb {E} [-\log p(X)]} generalizes the above.
== Introduction ==
The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the occurrence of a very low probability event.
The information content, also called the surprisal or self-information, of an event {\displaystyle E} is a function that increases as the probability {\displaystyle p(E)} of an event decreases. When {\displaystyle p(E)} is close to 1, the surprisal of the event is low, but if {\displaystyle p(E)} is close to 0, the surprisal of the event is high. This relationship is described by the function
{\displaystyle \log \left({\frac {1}{p(E)}}\right),}
where {\displaystyle \log } is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies a specific set of conditions defined in section § Characterization.
Hence, we can define the information, or surprisal, of an event {\displaystyle E} by
{\displaystyle I(E)=\log \left({\frac {1}{p(E)}}\right),}
or equivalently,
{\displaystyle I(E)=-\log(p(E)).}
Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die roll has smaller probability ({\displaystyle p=1/6}) than each outcome of a coin toss ({\displaystyle p=1/2}).
Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit (similarly, one trit with equiprobable values contains {\displaystyle \log _{2}3} (about 1.58496) bits of information because it can have one of three values). The minimum surprise is when p = 0 (impossibility) or p = 1 (certainty) and the entropy is zero bits. When the entropy is zero, sometimes referred to as unity, there is no uncertainty at all – no freedom of choice – no information. Other values of p give entropies between zero and one bits.
=== Example ===
Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect.
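The figures in this example can be checked directly. The sketch below (hypothetical code, not from the article) computes the entropy of the skewed distribution and the expected length of the variable-length code given above:

```python
import math

probs = {"A": 0.70, "B": 0.26, "C": 0.02, "D": 0.02}
code = {"A": "0", "B": "10", "C": "110", "D": "111"}

# Entropy: the sum of probability-weighted log-probabilities.
H = -sum(p * math.log2(p) for p in probs.values())

# Expected number of bits per symbol under the variable-length code.
avg_len = sum(probs[s] * len(code[s]) for s in probs)

print(round(H, 3), avg_len)  # entropy ≈ 1.09 bits; average length ≈ 1.34 bits
```

Both values lie below the 2 bits per symbol needed when all four letters are equally likely, and the average code length stays above the entropy, as the source coding theorem requires.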
English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.
== Definition ==
Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable {\textstyle X}, which takes values in the set {\displaystyle {\mathcal {X}}} and is distributed according to {\displaystyle p:{\mathcal {X}}\to [0,1]} such that {\displaystyle p(x):=\mathbb {P} [X=x]}:
{\displaystyle \mathrm {H} (X)=\mathbb {E} [\operatorname {I} (X)]=\mathbb {E} [-\log p(X)].}
Here {\displaystyle \mathbb {E} } is the expected value operator, and I is the information content of X. {\displaystyle \operatorname {I} (X)} is itself a random variable.
The entropy can explicitly be written as:
{\displaystyle \mathrm {H} (X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{b}p(x),}
where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.
In the case of {\displaystyle p(x)=0} for some {\displaystyle x\in {\mathcal {X}}}, the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the limit:
{\displaystyle \lim _{p\to 0^{+}}p\log(p)=0.}
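In code, this convention means zero-probability outcomes must be skipped rather than passed to the logarithm. A small sketch (hypothetical, not from the article):

```python
import math

def H(probs):
    """Base-2 entropy with the convention 0 * log(0) = 0."""
    s = -sum(p * math.log2(p) for p in probs if p > 0)
    return s + 0.0  # maps IEEE -0.0 to 0.0 for the all-certain case

# Adding an impossible outcome changes nothing, and certainty gives zero:
print(H([0.5, 0.5]), H([0.5, 0.5, 0.0]), H([1.0]))  # 1.0 1.0 0.0

# The limit p*log(p) -> 0 as p -> 0+ can be seen numerically:
for p in (1e-2, 1e-4, 1e-8):
    print(p * math.log2(p))  # tends to 0
```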
One may also define the conditional entropy of two variables {\displaystyle X} and {\displaystyle Y} taking values from sets {\displaystyle {\mathcal {X}}} and {\displaystyle {\mathcal {Y}}} respectively, as:
{\displaystyle \mathrm {H} (X|Y)=-\sum _{x,y\in {\mathcal {X}}\times {\mathcal {Y}}}p_{X,Y}(x,y)\log {\frac {p_{X,Y}(x,y)}{p_{Y}(y)}},}
where {\displaystyle p_{X,Y}(x,y):=\mathbb {P} [X=x,Y=y]} and {\displaystyle p_{Y}(y)=\mathbb {P} [Y=y]}. This quantity should be understood as the remaining randomness in the random variable {\displaystyle X} given the random variable {\displaystyle Y}.
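The definition can be computed directly from a joint probability table. The following sketch uses a hypothetical two-variable example (not from the article) and also checks the chain rule H(X|Y) = H(X,Y) − H(Y):

```python
import math

# A hypothetical joint distribution p_{X,Y} over X in {0,1}, Y in {0,1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_y = {y: sum(p for (x, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# H(X|Y) = -sum p(x,y) * log2( p(x,y) / p(y) )
H_cond = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

# Chain rule check: H(X|Y) = H(X,Y) - H(Y).
H_joint = -sum(p * math.log2(p) for p in p_xy.values() if p > 0)
H_y = -sum(p * math.log2(p) for p in p_y.values() if p > 0)
print(round(H_cond, 6), round(H_joint - H_y, 6))  # both ≈ 0.721928
```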
=== Measure theory ===
Entropy can be formally defined in the language of measure theory as follows: Let {\displaystyle (X,\Sigma ,\mu )} be a probability space. Let {\displaystyle A\in \Sigma } be an event. The surprisal of {\displaystyle A} is
{\displaystyle \sigma _{\mu }(A)=-\ln \mu (A).}
The expected surprisal of {\displaystyle A} is
{\displaystyle h_{\mu }(A)=\mu (A)\sigma _{\mu }(A).}
A {\displaystyle \mu }-almost partition is a set family {\displaystyle P\subseteq {\mathcal {P}}(X)} such that {\displaystyle \mu (\mathop {\cup } P)=1} and {\displaystyle \mu (A\cap B)=0} for all distinct {\displaystyle A,B\in P}. (This is a relaxation of the usual conditions for a partition.) The entropy of {\displaystyle P} is
{\displaystyle \mathrm {H} _{\mu }(P)=\sum _{A\in P}h_{\mu }(A).}
Let
M
{\displaystyle M}
be a sigma-algebra on
X
{\displaystyle X}
. The entropy of
M
{\displaystyle M}
is
H
μ
(
M
)
=
sup
P
⊆
M
H
μ
(
P
)
.
{\displaystyle \mathrm {H} _{\mu }(M)=\sup _{P\subseteq M}\mathrm {H} _{\mu }(P).}
Finally, the entropy of the probability space is
H
μ
(
Σ
)
{\displaystyle \mathrm {H} _{\mu }(\Sigma )}
, that is, the entropy with respect to
μ
{\displaystyle \mu }
of the sigma-algebra of all measurable subsets of
X
{\displaystyle X}
.
== Example ==
Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because
{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-\sum _{i=1}^{n}{p(x_{i})\log _{b}p(x_{i})}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\log _{2}{\frac {1}{2}}}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\cdot (-1)}=1.\end{aligned}}}
However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then
{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log _{2}p-q\log _{2}q\\[1ex]&=-0.7\log _{2}(0.7)-0.3\log _{2}(0.3)\\[1ex]&\approx -0.7\cdot (-0.515)-0.3\cdot (-1.737)\\[1ex]&=0.8816<1.\end{aligned}}}
Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.: 14–15
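The coin calculations above can be reproduced in a few lines. This is a minimal sketch; binary_entropy is an illustrative helper name, not a library function:

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy in bits of a coin with P(heads) = p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty: the outcome is certain
    q = 1.0 - p
    return -p * math.log2(p) - q * math.log2(q)

print(binary_entropy(0.5))   # fair coin: exactly 1 bit
print(binary_entropy(0.7))   # biased coin: less than 1 bit
print(binary_entropy(1.0))   # double-headed coin: 0 bits
```

Note that the full-precision value for p = 0.7 is about 0.8813 bits; the 0.8816 in the text comes from rounding the intermediate logarithms to three decimal places.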
== Characterization ==
To understand the meaning of −Σ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:
I(p) is monotonically decreasing in p: an increase in the probability of an event decreases the information from an observed event, and vice versa.
I(1) = 0: events that always occur do not communicate information.
I(p1·p2) = I(p1) + I(p2): the information learned from independent events is the sum of the information learned from each event.
I(p) is a twice continuously differentiable function of p.
Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both.
Shannon discovered that a suitable choice of {\displaystyle \operatorname {I} } is given by:
{\displaystyle \operatorname {I} (p)=\log \left({\tfrac {1}{p}}\right)=-\log(p).}
In fact, the only possible values of {\displaystyle \operatorname {I} } are {\displaystyle \operatorname {I} (u)=k\log u} for {\displaystyle k<0}. Additionally, choosing a value for k is equivalent to choosing a value {\displaystyle x>1} for {\displaystyle k=-1/\log x}, so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties.
The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits.
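These unit conversions can be illustrated in a few lines. This is a sketch; the info_* helpers are hypothetical names, not library functions:

```python
import math

# Self-information I(p) = -log(p), in units set by the logarithm base.
def info_bits(p): return -math.log2(p)    # base 2: bits
def info_nats(p): return -math.log(p)     # base e: nats
def info_bans(p): return -math.log10(p)   # base 10: bans

p = 0.5  # a fair coin coming up heads
print(info_bits(p))   # 1 bit
print(info_nats(p))   # ~0.693 nats
print(info_bans(p))   # ~0.301 bans (decimal digits)

# The units are constant multiples of each other: 1 bit = ln(2) nats.
assert abs(info_nats(p) - info_bits(p) * math.log(2)) < 1e-12
```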
The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
=== Alternative characterization ===
Another characterization of entropy uses the following properties. We denote pi = Pr(X = xi) and Ηn(p1, ..., pn) = Η(X).
Continuity: H should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
Symmetry: H should be unchanged if the outcomes xi are re-ordered. That is,
{\displaystyle \mathrm {H} _{n}\left(p_{1},p_{2},\ldots ,p_{n}\right)=\mathrm {H} _{n}\left(p_{i_{1}},p_{i_{2}},\ldots ,p_{i_{n}}\right)}
for any permutation {\displaystyle \{i_{1},...,i_{n}\}} of {\displaystyle \{1,...,n\}}.
Maximum: {\displaystyle \mathrm {H} _{n}} should be maximal if all the outcomes are equally likely, i.e.
{\displaystyle \mathrm {H} _{n}(p_{1},\ldots ,p_{n})\leq \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)}.
Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes, i.e.
{\displaystyle \mathrm {H} _{n}{\bigg (}\underbrace {{\frac {1}{n}},\ldots ,{\frac {1}{n}}} _{n}{\bigg )}<\mathrm {H} _{n+1}{\bigg (}\underbrace {{\frac {1}{n+1}},\ldots ,{\frac {1}{n+1}}} _{n+1}{\bigg )}.}
Additivity: given an ensemble of n uniformly distributed elements that are partitioned into k boxes (sub-systems) with b1, ..., bk elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.
==== Discussion ====
The rule of additivity has the following consequences: for positive integers bi where b1 + ... + bk = n,
{\displaystyle \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)=\mathrm {H} _{k}\left({\frac {b_{1}}{n}},\ldots ,{\frac {b_{k}}{n}}\right)+\sum _{i=1}^{k}{\frac {b_{i}}{n}}\,\mathrm {H} _{b_{i}}\left({\frac {1}{b_{i}}},\ldots ,{\frac {1}{b_{i}}}\right).}
Choosing k = n, b1 = ... = bn = 1 this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source set with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).
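The grouping identity above is easy to verify numerically. A minimal sketch; the choice of n = 8 elements split into boxes of 3 and 5 is arbitrary:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a probability vector, in the given base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

n, boxes = 8, [3, 5]  # 8 equiprobable elements grouped into boxes of 3 and 5

lhs = entropy([1 / n] * n)  # entropy of the full uniform ensemble: log2(8)
rhs = entropy([b / n for b in boxes]) + sum(
    (b / n) * entropy([1 / b] * b) for b in boxes
)
assert abs(lhs - rhs) < 1e-9  # box entropy + weighted within-box entropies
print(round(lhs, 4))
```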
The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property,
{\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)}. Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit correspondingly simple relations in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals {\displaystyle \mu (A)\cdot \ln \mu (A)} for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, {\displaystyle \log _{2}} lends itself to practical interpretations.
Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on.
=== Alternative characterization via additivity and subadditivity ===
Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties:
Subadditivity: {\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)} for jointly distributed random variables {\displaystyle X,Y}.
Additivity: {\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X)+\mathrm {H} (Y)} when the random variables {\displaystyle X,Y} are independent.
Expansibility: {\displaystyle \mathrm {H} _{n+1}(p_{1},\ldots ,p_{n},0)=\mathrm {H} _{n}(p_{1},\ldots ,p_{n})}, i.e., adding an outcome with probability zero does not change the entropy.
Symmetry: {\displaystyle \mathrm {H} _{n}(p_{1},\ldots ,p_{n})} is invariant under permutation of {\displaystyle p_{1},\ldots ,p_{n}}.
Small for small probabilities: {\displaystyle \lim _{q\to 0^{+}}\mathrm {H} _{2}(1-q,q)=0}.
==== Discussion ====
It was shown that any function {\displaystyle \mathrm {H} } satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector {\displaystyle p_{1},\ldots ,p_{n}}.
It is worth noting that if we drop the "small for small probabilities" property, then {\displaystyle \mathrm {H} } must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.
== Further properties ==
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:
Adding or removing an event with probability zero does not contribute to the entropy:
{\displaystyle \mathrm {H} _{n+1}(p_{1},\ldots ,p_{n},0)=\mathrm {H} _{n}(p_{1},\ldots ,p_{n}).}
The maximal entropy of an event with n different outcomes is logb(n): it is attained by the uniform probability distribution. That is, uncertainty is maximal when all possible events are equiprobable: 29
{\displaystyle \mathrm {H} (p_{1},\dots ,p_{n})\leq \log _{b}n.}
The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as: 16
{\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X|Y)+\mathrm {H} (Y)=\mathrm {H} (Y|X)+\mathrm {H} (X).}
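This chain rule can be verified numerically on any small joint distribution. An illustrative Python sketch; the numbers in p_xy and the helper names are hypothetical:

```python
import math

# A small joint distribution p_{X,Y} (hypothetical numbers).
p_xy = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def marginal(joint, axis):
    """Sum the joint distribution over the other coordinate."""
    m = {}
    for key, p in joint.items():
        m[key[axis]] = m.get(key[axis], 0.0) + p
    return m

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

p_y = marginal(p_xy, 1)
h_xy = entropy(p_xy)  # joint entropy H(X, Y)

# H(X|Y) computed directly from its definition ...
h_x_given_y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items())

# ... matches the chain rule rearranged: H(X|Y) = H(X, Y) - H(Y).
assert abs(h_x_given_y - (h_xy - entropy(p_y))) < 1e-12
print(round(h_xy, 4))
```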
If {\displaystyle Y=f(X)} where {\displaystyle f} is a function, then {\displaystyle \mathrm {H} (f(X)|X)=0}. Applying the previous formula to {\displaystyle \mathrm {H} (X,f(X))} yields
{\displaystyle \mathrm {H} (X)+\mathrm {H} (f(X)|X)=\mathrm {H} (f(X))+\mathrm {H} (X|f(X)),}
so {\displaystyle \mathrm {H} (f(X))\leq \mathrm {H} (X)}, the entropy of a variable can only decrease when the latter is passed through a function.
If X and Y are two independent random variables, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence):
{\displaystyle \mathrm {H} (X|Y)=\mathrm {H} (X).}
More generally, for any random variables X and Y, we have: 29
{\displaystyle \mathrm {H} (X|Y)\leq \mathrm {H} (X).}
The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, i.e., {\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)}, with equality if and only if the two events are independent.: 28
The entropy {\displaystyle \mathrm {H} (p)} is concave in the probability mass function {\displaystyle p}, i.e.: 30
{\displaystyle \mathrm {H} (\lambda p_{1}+(1-\lambda )p_{2})\geq \lambda \mathrm {H} (p_{1})+(1-\lambda )\mathrm {H} (p_{2})}
for all probability mass functions {\displaystyle p_{1},p_{2}} and {\displaystyle 0\leq \lambda \leq 1}.: 32
Accordingly, the negative entropy (negentropy) function is convex, and its convex conjugate is LogSumExp.
== Aspects ==
=== Relationship to thermodynamic entropy ===
The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.
In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy
{\displaystyle S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i}\,,}
where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Ludwig Boltzmann (1872).
The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927:
{\displaystyle S=-k_{\text{B}}\,{\rm {Tr}}(\rho \ln \rho )\,,}
where ρ is the density matrix of the quantum mechanical system and Tr is the trace.
At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.
The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by his equation:
{\displaystyle S=k_{\text{B}}\ln W,}
where {\displaystyle S} is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.
In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.
=== Data compression ===
Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.
If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.
A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.: 60–65
The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories: information stored on a medium, information received through one-way broadcast networks, and information exchanged through two-way telecommunications networks.
=== Entropy as a measure of diversity ===
Entropy is one of several ways to measure biodiversity and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of 1D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types.
=== Entropy of a sequence ===
There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:
the self-information of an individual message or symbol taken from a given probability distribution (message or sequence seen as an individual event),
the joint entropy of the symbols forming the message or sequence (seen as a set of events),
the entropy rate of a stochastic process (message or sequence is seen as a succession of events).
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.
It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way.
If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.
The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1], and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
=== Limitations of entropy in cryptography ===
In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average)
{\displaystyle 2^{127}}
guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.
Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
=== Data as a Markov process ===
A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is:
{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\log p_{i},}
where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:
{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}\ p_{i}(j)\log p_{i}(j),}
where i is a state (certain preceding characters) and {\displaystyle p_{i}(j)} is the probability of j given i as the previous character.
For a second order Markov source, the entropy rate is
{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}p_{i}(j)\sum _{k}p_{i,j}(k)\ \log p_{i,j}(k).}
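As a sketch of the first-order formula, the entropy rate of a small two-symbol Markov source can be computed directly. The transition probabilities here are hypothetical; p is the stationary distribution of the chain given by trans:

```python
import math

# First-order Markov source over a two-character alphabet
# (hypothetical transition probabilities, for illustration).
p = {"a": 0.75, "b": 0.25}                   # stationary distribution
trans = {"a": {"a": 0.8, "b": 0.2},          # p_i(j): next char given current
         "b": {"a": 0.6, "b": 0.4}}

# H(S) = -sum_i p_i * sum_j p_i(j) log2 p_i(j)
rate = -sum(
    p[i] * sum(pj * math.log2(pj) for pj in trans[i].values() if pj > 0)
    for i in p
)
print(round(rate, 4))
```

Because the transitions are biased, the rate comes out below the 1 bit/character of a memoryless uniform binary source.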
== Efficiency (normalized entropy) ==
A source set {\displaystyle {\mathcal {X}}} with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:
{\displaystyle \eta (X)={\frac {H}{H_{\text{max}}}}=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}.}
Applying the basic properties of the logarithm, this quantity can also be expressed as:
{\displaystyle {\begin{aligned}\eta (X)&=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}=\sum _{i=1}^{n}{\frac {\log _{b}\left(p(x_{i})^{-p(x_{i})}\right)}{\log _{b}(n)}}\\[1ex]&=\sum _{i=1}^{n}\log _{n}\left(p(x_{i})^{-p(x_{i})}\right)=\log _{n}\left(\prod _{i=1}^{n}p(x_{i})^{-p(x_{i})}\right).\end{aligned}}}
Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy {\displaystyle {\log _{b}(n)}}. Furthermore, the efficiency is indifferent to the choice of (positive) base b, since the base cancels in the final logarithm above.
== Entropy for continuous random variables ==
=== Differential entropy ===
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support {\displaystyle \mathbb {X} } on the real line is defined by analogy, using the above form of the entropy as an expectation: 224
{\displaystyle \mathrm {H} (X)=\mathbb {E} [-\log f(X)]=-\int _{\mathbb {X} }f(x)\log f(x)\,\mathrm {d} x.}
This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann.
Although the analogy between both functions is suggestive, the following question must be posed: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably the limiting density of discrete points.
To answer this question, a connection must be established between the two functions. The goal is to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As the domain is generalized to the continuum, the width must be made explicit.
To do this, start with a continuous function f discretized into bins of size {\displaystyle \Delta }.
By the mean-value theorem there exists a value xi in each bin such that
{\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\,dx}
the integral of the function f can be approximated (in the Riemannian sense) by
{\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{\Delta \to 0}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta ,}
where this limit and "bin size goes to zero" are equivalent.
We will denote
{\displaystyle \mathrm {H} ^{\Delta }:=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log \left(f(x_{i})\Delta \right)}
and expanding the logarithm, we have
{\displaystyle \mathrm {H} ^{\Delta }=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(\Delta ).}
As Δ → 0, we have
{\displaystyle {\begin{aligned}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta &\to \int _{-\infty }^{\infty }f(x)\,dx=1\\\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))&\to \int _{-\infty }^{\infty }f(x)\log f(x)\,dx.\end{aligned}}}
Since log(Δ) → −∞ as Δ → 0, a special definition of the differential or continuous entropy is required:
{\displaystyle h[f]=\lim _{\Delta \to 0}\left(\mathrm {H} ^{\Delta }+\log \Delta \right)=-\int _{-\infty }^{\infty }f(x)\log f(x)\,dx,}
which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
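This limiting relation can be checked numerically. The sketch below (an illustration; the function name and the choice of a standard normal density are ours, not from the article) computes H^Δ + log Δ on a midpoint grid and compares it with the closed-form differential entropy (1/2) log(2πe) of a standard normal distribution:

```python
import math

def discretized_entropy_plus_log_delta(f, lo, hi, delta):
    """H^Delta + log(Delta), where H^Delta is the Shannon entropy of the binned density."""
    n = int(round((hi - lo) / delta))
    h = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * delta        # midpoint x_i of bin i
        p = f(x) * delta                  # probability mass f(x_i) * Delta of bin i
        if p > 0:
            h -= p * math.log(p)
    return h + math.log(delta)            # cancel the diverging -log(Delta) offset

# Standard normal density; its differential entropy is (1/2) log(2*pi*e).
f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
approx = discretized_entropy_plus_log_delta(f, -10.0, 10.0, 0.01)
exact = 0.5 * math.log(2 * math.pi * math.e)
```

As Δ shrinks, the raw binned entropy H^Δ diverges like −log Δ, but the combination H^Δ + log Δ converges to h[f], as the derivation above indicates.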
=== Limiting density of discrete points ===
It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable. f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If Δ is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:
{\displaystyle \mathrm {H} =\int _{-\infty }^{\infty }f(x)\log(f(x)\,\Delta )\,dx,}
and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as {\displaystyle N\rightarrow \infty } would also include a term of {\displaystyle \log(N)}, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.
=== Relative entropy ===
Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as
{\displaystyle D_{\mathrm {KL} }(p\|m)=\int \log(f(x))p(dx)=\int f(x)\log(f(x))m(dx).}
In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m.
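As a small sanity check of the relation between relative entropy and discrete entropy, the following sketch (helper names are ours, chosen for illustration) verifies that with a uniform reference measure on n points, the KL divergence equals log n − H(p), and that it is non-negative and vanishes when p = m:

```python
import math

def entropy_nats(p):
    """Shannon entropy in nats of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, m):
    """D_KL(p || m) = sum_x p(x) log(p(x)/m(x)) for discrete distributions."""
    return sum(pi * math.log(pi / mi) for pi, mi in zip(p, m) if pi > 0)

p = [0.5, 0.25, 0.25]
uniform = [1 / 3, 1 / 3, 1 / 3]
d = kl_divergence(p, uniform)
# With a uniform reference measure on n points, D_KL(p || m) = log n - H(p),
# recovering the discrete entropy up to sign and an additive constant.
```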
== Use in number theory ==
Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem.
Intuitively, the idea behind the proof was that if there is low information, in terms of Shannon entropy, between consecutive random variables {\displaystyle X_{H}=\lambda (n+H)}, defined using the Liouville function (a function useful for studying the distribution of primes), then the sum over an interval [n, n+H] can become arbitrarily large. For example, a sequence of +1's (values that XH could take) has trivially low entropy, and its sum would become big. The key insight was showing that a non-negligible reduction in entropy as H expands leads in turn to unbounded growth of a mathematical object built from this random variable, which is equivalent to the unbounded growth required by the Erdős discrepancy problem.
The proof is quite involved; it brought together breakthroughs not only in the novel use of Shannon entropy, but also in the study of the Liouville function and of averages of modulated multiplicative functions over short intervals. Proving it also broke the "parity barrier" for this specific problem.
While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.
== Use in combinatorics ==
Entropy has become a useful quantity in combinatorics.
=== Loomis–Whitney inequality ===
A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A ⊆ Zd, we have
{\displaystyle |A|^{d-1}\leq \prod _{i=1}^{d}|P_{i}(A)|}
where Pi is the orthogonal projection in the ith coordinate:
{\displaystyle P_{i}(A)=\{(x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{d}):(x_{1},\ldots ,x_{d})\in A\}.}
The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then
{\displaystyle \mathrm {H} [(X_{1},\ldots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}\mathrm {H} [(X_{j})_{j\in S_{i}}]}
where
{\displaystyle (X_{j})_{j\in S_{i}}}
is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si).
We sketch how Loomis–Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of
{\displaystyle (X_{j})_{j\in S_{i}}}
is contained in Pi(A) and hence
{\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|}
Now use this to bound the right-hand side of Shearer's inequality and exponentiate both sides of the resulting inequality.
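The inequality can be verified by brute force on a small example. The sketch below (a hypothetical helper, not part of any proof) checks |A|^(d−1) ≤ ∏|Pi(A)| for d = 3 and a five-point set:

```python
def projections(A, d=3):
    """Orthogonal projections P_i(A): drop the i-th coordinate of every point."""
    return [{tuple(x[j] for j in range(d) if j != i) for x in A} for i in range(d)]

A = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}
lhs = len(A) ** 2                 # |A|^(d-1) with d = 3
rhs = 1
for Pi in projections(A):
    rhs *= len(Pi)                # product of the projection sizes
```

Here each of the three projections has four points, so the bound reads 25 ≤ 64.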
=== Approximation to binomial coefficient ===
For integers 0 < k < n let q = k/n. Then
{\displaystyle {\frac {2^{n\mathrm {H} (q)}}{n+1}}\leq {\tbinom {n}{k}}\leq 2^{n\mathrm {H} (q)},}
where
{\displaystyle \mathrm {H} (q)=-q\log _{2}(q)-(1-q)\log _{2}(1-q).}
A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately
{\displaystyle 2^{n\mathrm {H} (k/n)}}.
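These bounds can be confirmed exhaustively for small n; the following sketch (function names are ours) checks both inequalities for all 0 < k < n up to n = 30:

```python
import math

def binary_entropy(q):
    """H(q) = -q log2(q) - (1-q) log2(1-q) for 0 < q < 1."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def bounds_hold(n, k):
    """Check 2^(nH(q))/(n+1) <= C(n, k) <= 2^(nH(q)) with q = k/n."""
    q = k / n
    c = math.comb(n, k)
    upper = 2.0 ** (n * binary_entropy(q))
    return upper / (n + 1) <= c <= upper

ok = all(bounds_hold(n, k) for n in range(2, 31) for k in range(1, n))
```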
== Use in machine learning ==
Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty.
Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees {\displaystyle IG(Y,X)}, which is equal to the difference between the entropy of Y and the conditional entropy of Y given X, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally.
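A minimal sketch of this computation (the toy dataset and helper names are invented for illustration) estimates IG(Y, X) = H(Y) − H(Y | X) from a small attribute/label table:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of a list of labels."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def information_gain(y, x):
    """IG(Y, X) = H(Y) - H(Y | X), estimated from paired samples."""
    n = len(y)
    h_cond = 0.0
    for xv in set(x):
        subset = [yi for yi, xi in zip(y, x) if xi == xv]
        h_cond += len(subset) / n * entropy(subset)
    return entropy(y) - h_cond

# Toy attribute/label data, invented for illustration.
outlook = ["sunny", "sunny", "rain", "rain", "overcast", "overcast"]
play = ["no", "no", "yes", "yes", "yes", "yes"]
ig = information_gain(play, outlook)
```

In this toy table the attribute determines the label exactly, so the conditional entropy is zero and the information gain equals H(Y); a less informative attribute would yield a smaller gain.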
Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior.
Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between the ground truth and the predicted distributions. In general, cross entropy is a measure of the difference between two distributions, similar to the KL divergence (also known as relative entropy).
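A minimal sketch (helper names are ours) of cross entropy as a loss between a one-hot ground truth and a predicted distribution; it also checks the standard identity that cross entropy equals the entropy of the true distribution plus the KL divergence:

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) log q(x); equals H(p) + D_KL(p || q)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

truth = [1.0, 0.0, 0.0]            # one-hot ground-truth label
pred = [0.7, 0.2, 0.1]             # model's predicted distribution
loss = cross_entropy(truth, pred)  # the usual log loss, -log q(true class)
```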
== See also ==
== Notes ==
== References ==
This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== Further reading ==
=== Textbooks on information theory ===
Cover, T.M., Thomas, J.A. (2006), Elements of Information Theory – 2nd Ed., Wiley-Interscience, ISBN 978-0-471-24195-9
MacKay, D.J.C. (2003), Information Theory, Inference and Learning Algorithms, Cambridge University Press, ISBN 978-0-521-64298-9
Arndt, C. (2004), Information Measures: Information and its Description in Science and Engineering, Springer, ISBN 978-3-540-40855-0
Gray, R. M. (2011), Entropy and Information Theory, Springer.
Martin, Nathaniel F.G.; England, James W. (2011). Mathematical Theory of Entropy. Cambridge University Press. ISBN 978-0-521-17738-2.
Shannon, C.E., Weaver, W. (1949) The Mathematical Theory of Communication, Univ of Illinois Press. ISBN 0-252-72548-4
Stone, J. V. (2014), Chapter 1 of Information Theory: A Tutorial Introduction Archived 3 June 2016 at the Wayback Machine, University of Sheffield, England. ISBN 978-0956372857.
== External links ==
"Entropy", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Entropy" Archived 4 June 2016 at the Wayback Machine at Rosetta Code—repository of implementations of Shannon entropy in different programming languages.
Entropy Archived 31 May 2016 at the Wayback Machine an interdisciplinary journal on all aspects of the entropy concept. Open access.
In science, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. An example of a scalar field is a weather map, with the surface temperature described by assigning a number to each point on the map. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single rank-2 tensor field.
In the modern framework of the quantum field theory, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. Richard Feynman said, "The fact that the electromagnetic field can possess momentum and energy makes it very real, and [...] a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e. they follow Gauss's law).
A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively. A field has a consistent tensorial character wherever it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In quantum field theory, an equivalent representation of a field is a field particle, for instance a boson.
== History ==
To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object. Newton's idea in Opticks, that optical reflection and refraction arise from interactions across the entire surface, is arguably the beginning of the field theory of electric force.
The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1845 Michael Faraday became the first to coin the term "magnetic field". And Lord Kelvin provided a formal definition for a field in 1851.
The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields, called electromagnetic waves, propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past.
Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to each other. They became related to each other in such a way that velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities.
In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the status of the most fundamental objects in nature. That said, John Wheeler and Richard Feynman seriously considered Newton's pre-field concept of action at a distance (although they set it aside because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics).
== Classical fields ==
There are several examples of classical fields. Classical field theories remain useful wherever quantum properties do not arise, and can be active areas of research. Elasticity of materials, fluid dynamics and Maxwell's equations are cases in point.
Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described.
=== Newtonian gravitation ===
A classical field theory describing gravity is Newtonian gravitation, which describes the gravitational force as a mutual interaction between two masses.
Any body with mass M is associated with a gravitational field g which describes its influence on other bodies with mass. The gravitational field of M at a point r in space corresponds to the ratio between force F that M exerts on a small or negligible test mass m located at r and the test mass itself:
{\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}.}
Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M.
According to Newton's law of universal gravitation, F(r) is given by
{\displaystyle \mathbf {F} (\mathbf {r} )=-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }},}
where
{\displaystyle {\hat {\mathbf {r} }}}
is a unit vector lying along the line joining M and m and pointing from M to m. Therefore, the gravitational field of M is
{\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}=-{\frac {GM}{r^{2}}}{\hat {\mathbf {r} }}.}
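As a numerical illustration (using rounded reference values for G, the Earth's mass, and its radius, which are not part of this article), the field strength GM/r² at the Earth's surface comes out near the familiar 9.8 m/s²:

```python
# Illustrative, rounded reference values (not from this article).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
r = 6.371e6          # mean radius of the Earth, m

g = G * M / r**2     # field strength |g(r)| = GM / r^2, directed toward M
```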
The experimental observation that inertial mass and gravitational mass are equal to an unprecedented level of accuracy leads to the identity that gravitational field strength is identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity.
Because the gravitational force F is conservative, the gravitational field g can be rewritten in terms of the gradient of a scalar function, the gravitational potential Φ(r):
{\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla \Phi (\mathbf {r} ).}
=== Electromagnetism ===
Michael Faraday first realized the importance of a field as a physical quantity, during his investigations into magnetism. He realized that electric and magnetic fields are not only fields of force which dictate the motion of particles, but also have an independent physical reality because they carry energy.
These ideas eventually led to the creation, by James Clerk Maxwell, of the first unified field theory in physics with the introduction of equations for the electromagnetic field. The modern versions of these equations are called Maxwell's equations.
==== Electrostatics ====
A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that F = qE. Using this and Coulomb's law tells us that the electric field due to a single charged particle is
{\displaystyle \mathbf {E} ={\frac {1}{4\pi \epsilon _{0}}}{\frac {q}{r^{2}}}{\hat {\mathbf {r} }}.}
The electric field is conservative, and hence can be described by a scalar potential, V(r):
{\displaystyle \mathbf {E} (\mathbf {r} )=-\nabla V(\mathbf {r} ).}
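The relation E = −∇V can be checked numerically for a point charge by differentiating the Coulomb potential with a central difference (the charge value and step size here are illustrative choices, not from the article):

```python
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m (rounded)
q = 1e-9                  # a 1 nC source charge, chosen for illustration

def V(r):
    """Coulomb potential of a point charge at distance r."""
    return q / (4 * math.pi * EPS0 * r)

def E_analytic(r):
    """Radial Coulomb field, q / (4 pi eps0 r^2)."""
    return q / (4 * math.pi * EPS0 * r ** 2)

r, h = 0.5, 1e-6
E_numeric = -(V(r + h) - V(r - h)) / (2 * h)   # central difference for -dV/dr
```

The numerically differentiated potential reproduces the Coulomb field to high accuracy, illustrating that the conservative electric field is the negative gradient of its scalar potential.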
==== Magnetostatics ====
A steady current I flowing along a path ℓ will create a field B, that exerts a force on nearby moving charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is
{\displaystyle \mathbf {F} (\mathbf {r} )=q\mathbf {v} \times \mathbf {B} (\mathbf {r} ),}
where B(r) is the magnetic field, which is determined from I by the Biot–Savart law:
{\displaystyle \mathbf {B} (\mathbf {r} )={\frac {\mu _{0}}{4\pi }}\int {\frac {Id{\boldsymbol {\ell }}\times {\hat {\mathbf {r} }}}{r^{2}}}.}
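The Biot–Savart integral can be evaluated numerically. The sketch below (discretization choices and helper names are ours) sums the contributions of a circular loop's current elements at the loop's center and compares the result with the known closed form B = μ0 I / (2R):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T m/A

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def loop_field_at_center(I, R, N=20000):
    """Discretized Biot-Savart integral for a circular loop in the xy-plane,
    evaluated at the loop's center (the origin)."""
    B = [0.0, 0.0, 0.0]
    dl_len = 2 * math.pi * R / N
    for k in range(N):
        phi = 2 * math.pi * (k + 0.5) / N
        pos = (R * math.cos(phi), R * math.sin(phi), 0.0)  # current element
        dl = (-math.sin(phi) * dl_len, math.cos(phi) * dl_len, 0.0)
        r = tuple(-c for c in pos)          # vector from the element to the origin
        rmag = math.sqrt(sum(c * c for c in r))
        rhat = tuple(c / rmag for c in r)
        dB = cross(dl, rhat)
        for j in range(3):
            B[j] += MU0 / (4 * math.pi) * I * dB[j] / rmag ** 2
    return B

B = loop_field_at_center(I=1.0, R=0.1)
exact = MU0 * 1.0 / (2 * 0.1)   # closed form at the loop center: mu0 I / (2R)
```

By symmetry the in-plane components cancel and only the axial component survives, matching the closed form.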
The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r):
{\displaystyle \mathbf {B} (\mathbf {r} )={\boldsymbol {\nabla }}\times \mathbf {A} (\mathbf {r} )}
==== Electrodynamics ====
In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to ρ and J.
Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations
{\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}V-{\frac {\partial \mathbf {A} }{\partial t}}}
{\displaystyle \mathbf {B} ={\boldsymbol {\nabla }}\times \mathbf {A} .}
At the end of the 19th century, the electromagnetic field was understood as a collection of two vector fields in space. Nowadays, one recognizes this as a single antisymmetric 2nd-rank tensor field in spacetime.
=== Gravitation in general relativity ===
Einstein's theory of gravity, called general relativity, is another example of a field theory. Here the principal field is the metric tensor, a symmetric 2nd-rank tensor field in spacetime. This replaces Newton's law of universal gravitation.
=== Waves as fields ===
Waves can be constructed as physical fields, due to their finite propagation speed and causal nature, when a simplified physical model of an isolated closed system is set up. They are also subject to the inverse-square law.
For electromagnetic waves, there are optical fields, and terms such as near- and far-field limits for diffraction. In practice though, the field theories of optics are superseded by the electromagnetic field theory of Maxwell.
Gravity waves are waves in the surface of water, defined by a height field.
=== Fluid dynamics ===
Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid,
{\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} )=\nabla \cdot {\boldsymbol {\tau }}+\rho \mathbf {b} }
if the density ρ, pressure p, deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The flow velocity u is the vector field to solve for.
=== Elasticity ===
Linear elasticity is defined in terms of constitutive equations between tensor fields,
{\displaystyle \sigma _{ij}=L_{ijkl}\varepsilon _{kl}}
where {\displaystyle \sigma _{ij}} are the components of the 3x3 Cauchy stress tensor, {\displaystyle \varepsilon _{ij}} the components of the 3x3 infinitesimal strain, and {\displaystyle L_{ijkl}} is the elasticity tensor, a fourth-rank tensor with 81 components (usually 21 independent components).
=== Thermodynamics and transport equations ===
Assuming that the temperature T is an intensive quantity, i.e., a single-valued, differentiable function of three-dimensional space (a scalar field) {\displaystyle T=T(\mathbf {r} )}, the temperature gradient is a vector field defined as {\displaystyle \nabla T}. In thermal conduction, the temperature field appears in Fourier's law,
{\displaystyle \mathbf {q} =-k\nabla T}
where q is the heat flux field and k the thermal conductivity.
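A one-dimensional sketch of Fourier's law (the material value and geometry are illustrative, not from the article): for a linear temperature profile along a rod, the heat flux is constant and directed from the hot end to the cold end:

```python
# Illustrative values: a copper rod with a linear temperature profile.
k = 401.0                      # thermal conductivity of copper, W/(m K) (rounded)
T_hot, T_cold = 373.0, 293.0   # temperatures at the two ends of the rod, K
L = 0.2                        # rod length, m

grad_T = (T_cold - T_hot) / L  # 1-D temperature gradient, K/m (negative: T falls)
q = -k * grad_T                # Fourier's law: heat flux, W/m^2
```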
Temperature and pressure gradients are also important for meteorology.
== Quantum fields ==
It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory.
In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks), strengthening the color force and confining the quarks within hadrons. As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges.
These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory.
In BRST theory one deals with odd fields, e.g. Faddeev–Popov ghosts. There are different descriptions of odd classical fields both on graded manifolds and supermanifolds.
As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (specifically, relativistic wave equations (RWEs)). Thus one can speak of Yang–Mills, Dirac, Klein–Gordon and Schrödinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so may need calculus for spinor fields), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization.
== Field theory ==
Field theory usually refers to a construction of the dynamics of a field, i.e. a specification of how a field changes with time or with respect to other independent physical variables on which the field depends. Usually this is done by writing a Lagrangian or a Hamiltonian of the field, and treating it as a classical or quantum mechanical system with an infinite number of degrees of freedom. The resulting field theories are referred to as classical or quantum field theories.
The dynamics of a classical field are usually specified by the Lagrangian density in terms of the field components; the dynamics can be obtained by using the action principle.
It is possible to construct simple fields without any prior knowledge of physics using only mathematics from multivariable calculus, potential theory and partial differential equations (PDEs). For example, scalar PDEs might consider quantities such as amplitude, density and pressure fields for the wave equation and fluid dynamics; temperature/concentration fields for the heat/diffusion equations. Outside of physics proper (e.g., radiometry and computer graphics), there are even light fields. All these previous examples are scalar fields. Similarly for vectors, there are vector PDEs for displacement, velocity and vorticity fields in (applied mathematical) fluid dynamics, but vector calculus may now be needed in addition, being calculus for vector fields (as are these three quantities, and those for vector PDEs in general). More generally problems in continuum mechanics may involve for example, directional elasticity (from which comes the term tensor, derived from the Latin word for stretch), complex fluid flows or anisotropic diffusion, which are framed as matrix-tensor PDEs, and then require matrices or tensor fields, hence matrix or tensor calculus. The scalars (and hence the vectors, matrices and tensors) can be real or complex as both are fields in the abstract-algebraic/ring-theoretic sense.
In a general setting, classical fields are described by sections of fiber bundles and their dynamics is formulated in the terms of jet manifolds (covariant classical field theory).
In modern physics, the most often studied fields are those that model the four fundamental forces which one day may lead to the Unified Field Theory.
=== Symmetries of fields ===
A convenient way of classifying a field (classical or quantum) is by the symmetries it possesses. Physical symmetries are usually of two types:
==== Spacetime symmetries ====
Fields are often classified by their behaviour under transformations of spacetime. The terms used in this classification are:
scalar fields (such as temperature) whose values are given by a single variable at each point of space. This value does not change under transformations of space.
vector fields (such as the magnitude and direction of the force at each point in a magnetic field) which are specified by attaching a vector to each point of space. The components of this vector transform between themselves contravariantly under rotations in space. Similarly, a dual (or co-) vector field attaches a dual vector to each point of space, and the components of each dual vector transform covariantly.
tensor fields, (such as the stress tensor of a crystal) specified by a tensor at each point of space. Under rotations in space, the components of the tensor transform in a more general way which depends on the number of covariant indices and contravariant indices.
spinor fields (such as the Dirac spinor) arise in quantum field theory to describe particles with spin which transform like vectors except for one of their components; in other words, when one rotates a vector field 360 degrees around a specific axis, the vector field turns to itself; however, spinors would turn to their negatives in the same case.
==== Internal symmetries ====
Fields may have internal symmetries in addition to spacetime symmetries. In many situations, one needs fields which are a list of spacetime scalars: (φ1, φ2, ... φN). For example, in weather prediction these may be temperature, pressure, humidity, etc. In particle physics, the color symmetry of the interaction of quarks is an example of an internal symmetry, that of the strong interaction. Other examples are isospin, weak isospin, strangeness and any other flavour symmetry.
If there is a symmetry of the problem, not involving spacetime, under which these components transform into each other, then this set of symmetries is called an internal symmetry. One may also make a classification of the charges of the fields under internal symmetries.
=== Statistical field theory ===
Statistical field theory attempts to extend the field-theoretic paradigm toward many-body systems and statistical mechanics. As above, it can be approached by the usual infinite number of degrees of freedom argument.
Much as statistical mechanics overlaps with both quantum and classical mechanics, statistical field theory has links to both quantum and classical field theories, especially the former, with which it shares many methods. One important example is mean field theory.
=== Continuous random fields ===
Classical fields as above, such as the electromagnetic field, are usually infinitely differentiable functions, and in any case almost always twice differentiable. In contrast, generalized functions are not continuous. When dealing carefully with classical fields at finite temperature, the mathematical methods of continuous random fields are used, because thermally fluctuating classical fields are nowhere differentiable. Random fields are indexed sets of random variables; a continuous random field is a random field that has a set of functions as its index set. In particular, it is often mathematically convenient to take a continuous random field to have a Schwartz space of functions as its index set, in which case the continuous random field is a tempered distribution.
We can think about a continuous random field, in a (very) rough way, as an ordinary function that is ±∞ almost everywhere, but such that when we take a weighted average of all the infinities over any finite region, we get a finite result. The infinities are not well-defined, but the finite values can be associated with the weight functions used to obtain them, and that association can be made well-defined. Accordingly, we can define a continuous random field well enough as a linear map from a space of functions into the real numbers.
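This "linear map from functions to numbers" picture can be sketched numerically: discretized white noise takes wildly large pointwise values as the grid is refined, yet its pairing with smooth weight functions is finite and linear. The grid spacing, seed, and test functions below are illustrative assumptions, not from the article:

```python
import numpy as np

# Discretized "white noise" on [0, 1): sample variance grows like 1/dx,
# so pointwise values diverge as the grid is refined (illustrative choice).
rng = np.random.default_rng(0)
dx = 1e-3
grid = np.arange(0.0, 1.0, dx)
W = rng.standard_normal(grid.size) / np.sqrt(dx)

def pair(f):
    """Weighted average <W, f> = integral of W(t) f(t) dt, approximated on the grid."""
    return float(np.sum(W * f(grid)) * dx)

f = lambda t: np.exp(-t)            # hypothetical test functions
g = lambda t: np.sin(2 * np.pi * t)

# The pairing is a linear map from functions to real numbers:
lhs = pair(lambda t: 2.0 * f(t) + g(t))
rhs = 2.0 * pair(f) + pair(g)
assert abs(lhs - rhs) < 1e-9
```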
== Further reading ==
"Fields". Principles of Physical Science. Vol. 25 (15th ed.). 1994. p. 815 – via Encyclopædia Britannica (Macropaedia).
Landau, Lev D. and Lifshitz, Evgeny M. (1971). Classical Theory of Fields (3rd ed.). London: Pergamon. ISBN 0-08-016019-0. Vol. 2 of the Course of Theoretical Physics.
Jepsen, Kathryn (July 18, 2013). "Real talk: Everything is made of fields" (PDF). Symmetry Magazine. Archived from the original (PDF) on March 4, 2016. Retrieved June 9, 2015.
== External links ==
Particle and Polymer Field Theories | Wikipedia/Field_theory_(physics) |
A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error correcting codes and related subjects.
1872 – Ludwig Boltzmann presents his H-theorem, and with it the formula Σ pᵢ log pᵢ for the entropy of a single gas particle
1878 – J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the whole system
1924 – Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system
1927 – John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics
1928 – Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning)
1929 – Leó Szilárd analyses Maxwell's demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work
1940 – Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process
1944 – Claude Shannon's theory of information is substantially complete
1947 – Richard W. Hamming invents Hamming codes for error detection and correction (to protect patent rights, the result is not published until 1950)
1948 – Claude E. Shannon publishes A Mathematical Theory of Communication
1949 – Claude E. Shannon publishes Communication in the Presence of Noise – Nyquist–Shannon sampling theorem and Shannon–Hartley law
1949 – Claude E. Shannon's Communication Theory of Secrecy Systems is declassified
1949 – Robert M. Fano publishes Transmission of Information. M.I.T. Press, Cambridge, Massachusetts – Shannon–Fano coding
1949 – Leon G. Kraft discovers Kraft's inequality, which shows the limits of prefix codes
1949 – Marcel J. E. Golay introduces Golay codes for forward error correction
1951 – Solomon Kullback and Richard Leibler introduce the Kullback–Leibler divergence
1951 – David A. Huffman invents Huffman encoding, a method of finding optimal prefix codes for lossless data compression
1953 – August Albert Sardinas and George W. Patterson devise the Sardinas–Patterson algorithm, a procedure to decide whether a given variable-length code is uniquely decodable
1954 – Irving S. Reed and David E. Muller propose Reed–Muller codes
1955 – Peter Elias introduces convolutional codes
1957 – Eugene Prange first discusses cyclic codes
1959 – Alexis Hocquenghem, and independently the next year Raj Chandra Bose and Dwijendra Kumar Ray-Chaudhuri, discover BCH codes
1960 – Irving S. Reed and Gustave Solomon propose Reed–Solomon codes
1962 – Robert G. Gallager proposes low-density parity-check codes; they are unused for 30 years due to technical limitations
1965 – Dave Forney discusses concatenated codes
1966 – Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) develop linear predictive coding (LPC), a form of speech coding
1967 – Andrew Viterbi reveals the Viterbi algorithm, making decoding of convolutional codes practicable
1968 – Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by James L. Massey the following year
1968 – Chris Wallace and David M. Boulton publish the first of many papers on Minimum Message Length (MML) statistical and inductive inference
1970 – Valerii Denisovich Goppa introduces Goppa codes
1972 – Jørn Justesen proposes Justesen codes, an improvement of Reed–Solomon codes
1972 – Nasir Ahmed proposes the discrete cosine transform (DCT), which he develops with T. Natarajan and K. R. Rao in 1973; the DCT later became the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3
1973 – David Slepian and Jack Wolf discover and prove the Slepian–Wolf coding limits for distributed source coding
1976 – Gottfried Ungerboeck gives the first paper on trellis modulation; a more detailed exposition in 1982 leads to a raising of analogue modem POTS speeds from 9.6 kbit/s to 33.6 kbit/s
1976 – Richard Pasco and Jorma J. Rissanen develop effective arithmetic coding techniques
1977 – Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77)
1982 – Valerii Denisovich Goppa introduces algebraic geometry codes
1989 – Phil Katz publishes the .zip format including DEFLATE (LZ77 + Huffman coding); later to become the most widely used archive container
1993 – Claude Berrou, Alain Glavieux and Punya Thitimajshima introduce Turbo codes
1994 – Michael Burrows and David Wheeler publish the Burrows–Wheeler transform, later to find use in bzip2
1995 – Benjamin Schumacher coins the term qubit and proves the quantum noiseless coding theorem
2003 – David J. C. MacKay shows the connection between information theory, inference and machine learning in his book Information Theory, Inference, and Learning Algorithms
2006 – Jarosław Duda introduces the first asymmetric numeral systems (ANS) entropy coding; since 2014 it has been a popular replacement for Huffman and arithmetic coding in compressors such as Facebook Zstandard, Apple LZFSE, CRAM and JPEG XL
2008 – Erdal Arıkan introduces polar codes, the first practical construction of codes that achieves capacity for a wide array of channels
| Wikipedia/Timeline_of_information_theory |
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
A DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample.
There are eight standard DCT variants, of which four are common.
The most common variant of discrete cosine transform is the type-II DCT, which is often called simply the DCT. This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) have been developed to extend the concept of the DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT used in several ISO/IEC and ITU-T international standards.
DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks. DCT block sizes include 8 × 8 pixels for the standard DCT, and varied integer DCT sizes between 4 × 4 and 32 × 32 pixels. The DCT has a strong energy compaction property, capable of achieving high quality at high data compression ratios. However, blocky compression artifacts can appear when heavy DCT compression is applied.
== History ==
The DCT was first conceived by Nasir Ahmed while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended for image compression. Ahmed developed a practical DCT algorithm with his PhD student T. Raj Natarajan and with K. R. Rao at the University of Texas at Arlington in 1973. They presented their results in a January 1974 paper, titled Discrete Cosine Transform. It described what is now called the type-II DCT (DCT-II), as well as the type-III inverse DCT (IDCT).
Since its introduction in 1974, there has been significant research on the DCT. In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm. Further developments include a 1978 paper by M. J. Narasimha and A. M. Peterson, and a 1984 paper by B. G. Lee. These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by the Joint Photographic Experts Group as the basis for JPEG's lossy image compression algorithm in 1992.
The discrete sine transform (DST) was derived from the DCT by replacing the Neumann condition at x = 0 with a Dirichlet condition. The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao. A type-I DST (DST-I) was later described by Anil K. Jain in 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.
In 1975, John A. Roese and Guner S. Robinson adapted the DCT for inter-frame motion-compensated video coding. They experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found the DCT to be the more efficient of the two due to its reduced complexity, capable of compressing image data down to 0.25 bits per pixel for a videotelephone scene with image quality comparable to that of an intra-frame coder requiring 2 bits per pixel. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, also called block motion compensation. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards.
A DCT variant, the modified discrete cosine transform (MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used in most modern audio compression formats, such as Dolby Digital (AC-3), MP3 (which uses a hybrid DCT-FFT algorithm), Advanced Audio Coding (AAC), and Vorbis (Ogg).
Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at the University of New Mexico in 1995. This allows the DCT technique to be used for lossless compression of images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT and delta modulation. It is a more effective lossless compression algorithm than entropy coding. Lossless DCT is also known as LDCT.
== Applications ==
The DCT is the most widely used transformation technique in signal processing, and by far the most widely used linear transform in data compression. Uncompressed digital media as well as lossless compression have high memory and bandwidth requirements, which are significantly reduced by the DCT lossy compression technique, capable of achieving data compression ratios from 8:1 to 14:1 for near-studio-quality content, and up to 100:1 for acceptable-quality content. DCT compression standards are used in digital media technologies, such as digital images, digital photos, digital video, streaming media, digital television, streaming television, video on demand (VOD), digital cinema, high-definition video (HD video), and high-definition television (HDTV).
The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong energy compaction property. In typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions.
DCTs are widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even and odd boundary conditions at the two ends of the array.
DCTs are closely related to Chebyshev polynomials, and fast DCT algorithms (below) are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature.
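As an illustration of this connection, the sketch below (NumPy; the function exp and the degree N = 20 are illustrative choices, not from the article) computes Chebyshev series coefficients by a DCT-I-style cosine sum over samples at the Chebyshev extreme points and evaluates the resulting approximation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev approximation of f(x) = exp(x) on [-1, 1] via a DCT-I-style sum
# (a sketch; the function and the degree N are illustrative assumptions).
N = 20
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)           # Chebyshev extreme points
fj = np.exp(x)

# Coefficients a_k = (2/N) * sum'' f_j cos(pi j k / N), where the double
# prime halves the first and last terms -- exactly a DCT-I-type cosine sum.
a = np.empty(N + 1)
for k in range(N + 1):
    terms = fj * np.cos(np.pi * j * k / N)
    a[k] = (2.0 / N) * (terms.sum() - 0.5 * terms[0] - 0.5 * terms[-1])

coef = a.copy()
coef[0] /= 2.0                      # series convention: halve a_0 and a_N
coef[-1] /= 2.0

t = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(C.chebval(t, coef) - np.exp(t)))
assert err < 1e-10                  # 21 terms already reach near machine precision
```

The rapid decay of the coefficients here is the same smoothness-driven decay that underlies Clenshaw–Curtis quadrature.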
=== General applications ===
The DCT is widely used across a broad range of applications in media compression, signal processing and scientific computing.
=== Visual media standards ===
The DCT-II is an important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as H.26x, MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of N × N blocks is computed and the results are quantized and entropy coded. In this case, N is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the (0, 0) element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
The integer DCT, an integer approximation of the DCT, is used in Advanced Video Coding (AVC), introduced in 2003, and High Efficiency Video Coding (HEVC), introduced in 2013. The integer DCT is also used in the High Efficiency Image Format (HEIF), which uses a subset of the HEVC video coding format for coding still images. AVC uses 4 × 4 and 8 × 8 blocks. HEVC and HEIF use varied block sizes between 4 × 4 and 32 × 32 pixels. As of 2019, AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.
=== Multidimensional DCT ===
Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications such as hyperspectral imaging coding systems, variable temporal length 3-D DCT coding, video coding algorithms, adaptive video coding and 3-D compression. Owing to improvements in hardware and software and the introduction of several fast algorithms, the need for MD DCTs is rapidly increasing. The DCT-IV has gained popularity for its applications in fast implementation of real-valued polyphase filter banks, the lapped orthogonal transform and cosine-modulated wavelet bases.
=== Digital signal processing ===
The DCT plays an important role in digital signal processing, specifically in data compression. The DCT is widely implemented in digital signal processors (DSPs), as well as in digital signal processing software. Many companies have developed DSPs based on DCT technology. DCTs are widely used for applications such as encoding, decoding, video, audio, multiplexing, control signals, signaling, and analog-to-digital conversion. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.
=== Compression artifacts ===
A common issue with DCT compression in digital media is blocky compression artifacts, caused by DCT blocks. In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other; the DCT is then taken within each block and the resulting DCT coefficients are quantized. This process can cause blocking artifacts, primarily at high data compression ratios. This can also cause the mosquito noise effect, commonly found in digital video.
DCT blocks are often used in glitch art. The artist Rosa Menkman makes use of DCT-based compression artifacts in her glitch art, particularly the DCT blocks found in most digital media formats such as JPEG digital images and MP3 audio. Another example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.
== Informal overview ==
Like any Fourier-related transform, DCTs express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the DFT, a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms.
The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function f(x) as a sum of sinusoids, you can evaluate that sum at any x, even for x where the original f(x) was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function.
However, because DCTs operate on finite, discrete sequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd at both the left and right boundaries of the domain (i.e. the min-n and max-n boundaries in the definitions below, respectively). Second, one has to specify around what point the function is even or odd. In particular, consider a sequence abcd of four equally spaced data points, and say that we specify an even left boundary. There are two sensible possibilities: either the data are even about the sample a, in which case the even extension is dcbabcd, or the data are even about the point halfway between a and the previous point, in which case the even extension is dcbaabcd (a is repeated).
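The two even extensions described above can be spelled out in a few lines (a minimal symbolic sketch):

```python
# The two even left-boundary extensions of the sequence a b c d described
# above, built symbolically (a minimal sketch).
x = ["a", "b", "c", "d"]

# Even about the sample a itself (whole-sample symmetry): d c b a b c d
whole_sample = x[:0:-1] + x
assert "".join(whole_sample) == "dcbabcd"

# Even about the point halfway before a (half-sample symmetry): d c b a a b c d
half_sample = x[::-1] + x
assert "".join(half_sample) == "dcbaabcd"
```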
Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. These choices lead to all the standard variations of DCTs and also discrete sine transforms (DSTs). Half of these possibilities, those where the left boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST.
These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for the energy compactification properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series.
In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed. However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries. In contrast, a DCT where both boundaries are even always yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries) generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience.
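This compaction advantage can be illustrated numerically. The sketch below (NumPy; the ramp signal and length are illustrative choices, not from the article) compares the fraction of signal energy captured by the three lowest-index coefficients of an orthonormal DCT-II with the corresponding DFT fraction:

```python
import numpy as np

# Energy-compaction sketch: for a ramp (a segment whose periodic extension
# has a jump but whose even extension is continuous), an orthonormal DCT-II
# packs more energy into its lowest-index coefficients than the DFT does.
N = 16
x = np.arange(N, dtype=float)

# Orthonormal DCT-II matrix: rows sqrt(2/N) cos(pi (n+1/2) k / N), k = 0 row / sqrt(2)
n = np.arange(N)
M = np.sqrt(2.0 / N) * np.cos(np.pi / N * (n[None, :] + 0.5) * n[:, None])
M[0, :] /= np.sqrt(2.0)

X_dct = M @ x
X_dft = np.fft.fft(x)

total = np.sum(x ** 2)
frac_dct = np.sum(X_dct[:3] ** 2) / total                 # Parseval: sum X^2 = sum x^2
frac_dft = np.sum(np.abs(X_dft[:3]) ** 2) / (N * total)   # Parseval: sum |F|^2 = N sum x^2
assert frac_dct > frac_dft
```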
== Formal definition ==
Formally, the discrete cosine transform is a linear, invertible function f : R^N → R^N (where R denotes the set of real numbers), or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, …, x_{N−1} are transformed into the N real numbers X_0, …, X_{N−1} according to one of the formulas:
=== DCT-I ===
{\displaystyle X_{k}={\frac {1}{2}}(x_{0}+(-1)^{k}x_{N-1})+\sum _{n=1}^{N-2}x_{n}\cos \left[\,{\tfrac {\ \pi }{\,N-1\,}}\,n\,k\,\right]\qquad {\text{ for }}~k=0,\ \ldots \ N-1~.}
Some authors further multiply the x_0 and x_{N−1} terms by √2 and correspondingly multiply the X_0 and X_{N−1} terms by 1/√2, which, if one further multiplies by an overall scale factor of √(2/(N−1)), makes the DCT-I matrix orthogonal but breaks the direct correspondence with a real-even DFT.
The DCT-I is exactly equivalent (up to an overall scale factor of 2) to a DFT of 2(N−1) real numbers with even symmetry. For example, a DCT-I of N = 5 real numbers a b c d e is exactly equivalent to a DFT of eight real numbers a b c d e d c b (even symmetry), divided by two. (In contrast, DCT types II–IV involve a half-sample shift in the equivalent DFT.)
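This equivalence can be checked numerically. The sketch below implements the DCT-I formula above directly in NumPy and compares it against the real DFT of the mirrored sequence (the input values are illustrative):

```python
import numpy as np

# Check the stated equivalence: the DCT-I of a b c d e (N = 5) equals the
# real DFT of a b c d e d c b, divided by two.  Input values are illustrative.
def dct1(x):
    """DCT-I as defined above (no orthonormal scaling)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    k = np.arange(N)
    X = 0.5 * (x[0] + (-1.0) ** k * x[-1])
    for n in range(1, N - 1):
        X = X + x[n] * np.cos(np.pi * n * k / (N - 1))
    return X

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])          # a b c d e
mirrored = np.concatenate([x, x[-2:0:-1]])       # a b c d e d c b
assert np.allclose(dct1(x), np.fft.fft(mirrored).real[:5] / 2)
```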
Note, however, that the DCT-I is not defined for N less than 2, while all other DCT types are defined for any positive N.
Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N − 1; similarly for X_k.
=== DCT-II ===
{\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}\cos \left[\,{\tfrac {\,\pi \,}{N}}\left(n+{\tfrac {1}{2}}\right)k\,\right]\qquad {\text{ for }}~k=0,\ \dots \ N-1~.}
The DCT-II is probably the most commonly used form, and is often simply referred to as the DCT.
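The defining sum can be implemented directly. The sketch below (NumPy, with an illustrative test vector) also verifies the equivalence, discussed below, to half the DFT of 4N even-symmetric real inputs whose even-indexed elements are zero:

```python
import numpy as np

# Direct implementation of the DCT-II formula above, plus a numerical check
# of its equivalence to half the DFT of 4N real inputs with even symmetry
# and zero even-indexed elements.  The test vector is illustrative.
def dct2(x):
    x = np.asarray(x, dtype=float)
    N = len(x)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    return (x * np.cos(np.pi / N * (n + 0.5) * k)).sum(axis=1)

x = np.array([4.0, 2.0, 5.0, 1.0])
N = len(x)

y = np.zeros(4 * N)
y[1:2 * N:2] = x                  # y[2n+1] = x[n] for 0 <= n < N
y[2 * N + 1::2] = x[::-1]         # y[4N-n] = y[n] (even symmetry)
assert np.allclose(dct2(x), np.fft.fft(y).real[:N] / 2)
```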
This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of 4N real inputs of even symmetry, where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, y_{2N} = 0, and y_{4N−n} = y_n for 0 < n < 2N. A DCT-II can also be obtained from a DFT of a 2N-point even extension of the signal followed by multiplication by a half-sample-shift phase factor; this is demonstrated by Makhoul.
Some authors further multiply the X_0 term by 1/√N and multiply the rest of the matrix by an overall scale factor of √(2/N) (see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. This is the normalization used by Matlab. In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications.
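A quick numerical check of this normalization (a sketch; N = 8 is the illustrative JPEG-style block length):

```python
import numpy as np

# Check that the normalization just described (X_0 row times 1/sqrt(N),
# remaining rows times sqrt(2/N)) yields an orthogonal DCT-II matrix.
N = 8
n = np.arange(N)
M = np.cos(np.pi / N * (n[None, :] + 0.5) * n[:, None])  # rows indexed by k
M[0, :] *= 1.0 / np.sqrt(N)
M[1:, :] *= np.sqrt(2.0 / N)
assert np.allclose(M @ M.T, np.eye(N))                   # orthogonal: M Mᵀ = I
```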
The DCT-II implies the boundary conditions: x_n is even around n = −1/2 and even around n = N − 1/2; X_k is even around k = 0 and odd around k = N.
=== DCT-III ===
{\displaystyle X_{k}={\tfrac {1}{2}}x_{0}+\sum _{n=1}^{N-1}x_{n}\cos \left[\,{\tfrac {\,\pi \,}{N}}\left(k+{\tfrac {1}{2}}\right)n\,\right]\qquad {\text{ for }}~k=0,\ \ldots \ N-1~.}
Because it is the inverse of DCT-II up to a scale factor (see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT").
Some authors divide the x_0 term by √2 instead of by 2 (resulting in an overall x_0/√2 term) and multiply the resulting matrix by an overall scale factor of √(2/N) (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output.
The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = −1/2 and even around k = N − 1/2.
=== DCT-IV ===
{\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}\cos \left[\,{\tfrac {\,\pi \,}{N}}\,\left(n+{\tfrac {1}{2}}\right)\left(k+{\tfrac {1}{2}}\right)\,\right]\qquad {\text{ for }}k=0,\ \ldots \ N-1~.}
The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of √(2/N).
A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT).
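The symmetry and orthogonality can be checked numerically; in the sketch below (NumPy, illustrative N = 8) the scaled DCT-IV matrix is its own inverse:

```python
import numpy as np

# With the sqrt(2/N) scale factor, the DCT-IV matrix is symmetric and
# orthogonal, hence its own inverse (M @ M = I).  N = 8 is illustrative.
N = 8
idx = np.arange(N) + 0.5
M = np.sqrt(2.0 / N) * np.cos(np.pi / N * np.outer(idx, idx))
assert np.allclose(M, M.T)               # symmetric in n and k
assert np.allclose(M @ M, np.eye(N))     # involution: applying it twice returns the input
```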
The DCT-IV implies the boundary conditions: x_n is even around n = −1/2 and odd around n = N − 1/2; similarly for X_k.
=== DCT V-VIII ===
DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary.
In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether N is even or odd), since the corresponding DFT is of length 2(N − 1) (for DCT-I) or 4N (for DCT-II & III) or 8N (for DCT-IV). The four additional types of discrete cosine transform correspond essentially to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments.
However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.
(The trivial real-even array, a length-one DFT (odd length) of a single number a, corresponds to a DCT-V of length N = 1.)
== Inverse transforms ==
Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N and vice versa.
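These inverse relations can be verified numerically. The sketch below implements the unnormalized DCT-II and DCT-III definitions from above in NumPy and checks that each inverts the other up to the factor 2/N (the test vector is illustrative):

```python
import numpy as np

# Numerical check of the inverse relation: the DCT-III of the DCT-II of x,
# multiplied by 2/N, recovers x (using the unnormalized definitions above).
def dct2(x):
    x = np.asarray(x, dtype=float)
    N = len(x)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    return (x * np.cos(np.pi / N * (n + 0.5) * k)).sum(axis=1)

def dct3(X):
    X = np.asarray(X, dtype=float)
    N = len(X)
    k = np.arange(N)[:, None]
    n = np.arange(1, N)[None, :]
    return 0.5 * X[0] + (X[1:] * np.cos(np.pi / N * (k + 0.5) * n)).sum(axis=1)

x = np.array([2.0, 7.0, 1.0, 8.0, 2.0, 8.0])   # illustrative test vector
N = len(x)
assert np.allclose(dct3(dct2(x)) * 2.0 / N, x)  # DCT-III inverts DCT-II up to 2/N
assert np.allclose(dct2(dct3(x)) * 2.0 / N, x)  # and vice versa
```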
Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal.
== Multidimensional DCTs ==
Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension.
=== M-D DCT-II ===
For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above):
{\displaystyle {\begin{aligned}X_{k_{1},k_{2}}&=\sum _{n_{1}=0}^{N_{1}-1}\left(\sum _{n_{2}=0}^{N_{2}-1}x_{n_{1},n_{2}}\cos \left[{\frac {\pi }{N_{2}}}\left(n_{2}+{\frac {1}{2}}\right)k_{2}\right]\right)\cos \left[{\frac {\pi }{N_{1}}}\left(n_{1}+{\frac {1}{2}}\right)k_{1}\right]\\&=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}x_{n_{1},n_{2}}\cos \left[{\frac {\pi }{N_{1}}}\left(n_{1}+{\frac {1}{2}}\right)k_{1}\right]\cos \left[{\frac {\pi }{N_{2}}}\left(n_{2}+{\frac {1}{2}}\right)k_{2}\right].\end{aligned}}}
The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs (see above), e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.
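To make the separability concrete, here is a small unoptimized numerical check (our own sketch, assuming NumPy and the unnormalized DCT-II convention above) that the row-column procedure reproduces the direct double sum:

```python
import numpy as np

def dct_ii(v):
    # 1-D unnormalized DCT-II
    N = len(v)
    n = np.arange(N)
    return np.array([np.sum(v * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

def dct2_rowcol(x):
    # Separable evaluation: 1-D DCT-II along each row, then along each column
    step1 = np.apply_along_axis(dct_ii, 1, x)
    return np.apply_along_axis(dct_ii, 0, step1)

def dct2_direct(x):
    # Direct evaluation of the 2-D double-sum formula
    N1, N2 = x.shape
    X = np.zeros((N1, N2))
    for k1 in range(N1):
        for k2 in range(N2):
            for n1 in range(N1):
                for n2 in range(N2):
                    X[k1, k2] += (x[n1, n2]
                                  * np.cos(np.pi / N1 * (n1 + 0.5) * k1)
                                  * np.cos(np.pi / N2 * (n2 + 0.5) * k2))
    return X

x = np.arange(12.0).reshape(3, 4)
assert np.allclose(dct2_rowcol(x), dct2_direct(x))
```

The row-column version costs two batches of 1-D transforms instead of a full quadruple loop, which is the practical payoff of separability.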
The 3-D DCT-II is simply the extension of the 2-D DCT-II to three-dimensional space, and mathematically it can be calculated by the formula
{\displaystyle X_{k_{1},k_{2},k_{3}}=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}\sum _{n_{3}=0}^{N_{3}-1}x_{n_{1},n_{2},n_{3}}\cos \left[{\frac {\pi }{N_{1}}}\left(n_{1}+{\frac {1}{2}}\right)k_{1}\right]\cos \left[{\frac {\pi }{N_{2}}}\left(n_{2}+{\frac {1}{2}}\right)k_{2}\right]\cos \left[{\frac {\pi }{N_{3}}}\left(n_{3}+{\frac {1}{2}}\right)k_{3}\right],\quad {\text{for }}k_{i}=0,1,2,\dots ,N_{i}-1.}
The inverse of the 3-D DCT-II is the 3-D DCT-III, and can be computed from the formula
{\displaystyle x_{n_{1},n_{2},n_{3}}=\sum _{k_{1}=0}^{N_{1}-1}\sum _{k_{2}=0}^{N_{2}-1}\sum _{k_{3}=0}^{N_{3}-1}X_{k_{1},k_{2},k_{3}}\cos \left[{\frac {\pi }{N_{1}}}\left(n_{1}+{\frac {1}{2}}\right)k_{1}\right]\cos \left[{\frac {\pi }{N_{2}}}\left(n_{2}+{\frac {1}{2}}\right)k_{2}\right]\cos \left[{\frac {\pi }{N_{3}}}\left(n_{3}+{\frac {1}{2}}\right)k_{3}\right],\quad {\text{for }}n_{i}=0,1,2,\dots ,N_{i}-1.}
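As a sketch (our own, assuming NumPy and the unnormalized conventions above), the 3-D DCT-III inverts the 3-D DCT-II up to a factor of 2/Nᵢ per dimension, which follows from applying the 1-D inverse relation along each axis:

```python
import numpy as np

def dct_ii(v):
    # 1-D unnormalized DCT-II
    N = len(v)
    n = np.arange(N)
    return np.array([np.sum(v * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

def dct_iii(V):
    # 1-D unnormalized DCT-III (inverse of DCT-II up to a factor 2/N)
    N = len(V)
    k = np.arange(1, N)
    return np.array([V[0] / 2 + np.sum(V[1:] * np.cos(np.pi / N * k * (n + 0.5)))
                     for n in range(N)])

def apply_all_axes(f, a):
    # Separable multidimensional transform: apply f along every axis in turn
    for axis in range(a.ndim):
        a = np.apply_along_axis(f, axis, a)
    return a

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 4))
X = apply_all_axes(dct_ii, x)                 # 3-D DCT-II
scale = np.prod([2.0 / s for s in x.shape])   # one factor 2/N_i per axis
x_rec = apply_all_axes(dct_iii, X) * scale    # 3-D DCT-III as the inverse
assert np.allclose(x_rec, x)
```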
Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in applications based on the 3-D DCT, several fast algorithms have been developed for the computation of the 3-D DCT-II. Vector-radix algorithms are applied for computing the M-D DCT to reduce the computational complexity and to increase the computational speed. To compute the 3-D DCT-II efficiently, a fast algorithm, the Vector-Radix Decimation-in-Frequency (VR DIF) algorithm, was developed.
==== 3-D DCT-II VR DIF ====
In order to apply the VR DIF algorithm, the input data must be formulated and rearranged as follows. The transform size N × N × N is assumed to be a power of 2.
{\displaystyle {\begin{array}{lcl}{\tilde {x}}(n_{1},n_{2},n_{3})=x(2n_{1},2n_{2},2n_{3})\\{\tilde {x}}(n_{1},n_{2},N-n_{3}-1)=x(2n_{1},2n_{2},2n_{3}+1)\\{\tilde {x}}(n_{1},N-n_{2}-1,n_{3})=x(2n_{1},2n_{2}+1,2n_{3})\\{\tilde {x}}(n_{1},N-n_{2}-1,N-n_{3}-1)=x(2n_{1},2n_{2}+1,2n_{3}+1)\\{\tilde {x}}(N-n_{1}-1,n_{2},n_{3})=x(2n_{1}+1,2n_{2},2n_{3})\\{\tilde {x}}(N-n_{1}-1,n_{2},N-n_{3}-1)=x(2n_{1}+1,2n_{2},2n_{3}+1)\\{\tilde {x}}(N-n_{1}-1,N-n_{2}-1,n_{3})=x(2n_{1}+1,2n_{2}+1,2n_{3})\\{\tilde {x}}(N-n_{1}-1,N-n_{2}-1,N-n_{3}-1)=x(2n_{1}+1,2n_{2}+1,2n_{3}+1)\\\end{array}}}
where {\displaystyle 0\leq n_{1},n_{2},n_{3}\leq {\frac {N}{2}}-1}.
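The eight index-mapping equations are separable: along each axis, even-indexed samples fill the first half of the rearranged array in order, and odd-indexed samples fill the second half in reverse. A hypothetical NumPy sketch of this reordering (our own, not from the source):

```python
import numpy as np

def vr_dif_reorder(x):
    # 3-D input rearrangement for the VR DIF algorithm: along every axis,
    # pack even-indexed samples into the first half in order and
    # odd-indexed samples into the second half in reverse order.
    y = x
    for axis in range(3):
        even = np.take(y, np.arange(0, y.shape[axis], 2), axis=axis)
        odd = np.take(y, np.arange(1, y.shape[axis], 2), axis=axis)
        y = np.concatenate([even, np.flip(odd, axis=axis)], axis=axis)
    return y

x = np.arange(64.0).reshape(4, 4, 4)   # N = 4
y = vr_dif_reorder(x)
# x~(n1, n2, n3) = x(2 n1, 2 n2, 2 n3):
assert y[0, 0, 1] == x[0, 0, 2]
# x~(n1, n2, N - n3 - 1) = x(2 n1, 2 n2, 2 n3 + 1):
assert y[0, 0, 3] == x[0, 0, 1]
# last equation, applied in all three axes:
assert y[3, 3, 3] == x[1, 1, 1]
```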
The adjacent figure shows the four stages involved in calculating the 3-D DCT-II using the VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together as shown in the figure just below, where {\displaystyle c(\varphi _{i})=\cos(\varphi _{i})}.
The original 3-D DCT-II can now be written as
{\displaystyle X(k_{1},k_{2},k_{3})=\sum _{n_{1}=1}^{N-1}\sum _{n_{2}=1}^{N-1}\sum _{n_{3}=1}^{N-1}{\tilde {x}}(n_{1},n_{2},n_{3})\cos(\varphi k_{1})\cos(\varphi k_{2})\cos(\varphi k_{3})}
where {\displaystyle \varphi _{i}={\frac {\pi }{2N}}(4N_{i}+1),{\text{ and }}i=1,2,3.}
If the even and the odd parts of {\displaystyle k_{1},k_{2}} and {\displaystyle k_{3}} are considered, the general formula for the calculation of the 3-D DCT-II can be expressed as
{\displaystyle X(k_{1},k_{2},k_{3})=\sum _{n_{1}=1}^{{\tfrac {N}{2}}-1}\sum _{n_{2}=1}^{{\tfrac {N}{2}}-1}\sum _{n_{1}=1}^{{\tfrac {N}{2}}-1}{\tilde {x}}_{ijl}(n_{1},n_{2},n_{3})\cos(\varphi (2k_{1}+i)\cos(\varphi (2k_{2}+j)\cos(\varphi (2k_{3}+l))}
where
{\displaystyle {\tilde {x}}_{ijl}(n_{1},n_{2},n_{3})={\tilde {x}}(n_{1},n_{2},n_{3})+(-1)^{l}{\tilde {x}}\left(n_{1},n_{2},n_{3}+{\frac {n}{2}}\right)}
{\displaystyle +(-1)^{j}{\tilde {x}}\left(n_{1},n_{2}+{\frac {n}{2}},n_{3}\right)+(-1)^{j+l}{\tilde {x}}\left(n_{1},n_{2}+{\frac {n}{2}},n_{3}+{\frac {n}{2}}\right)}
{\displaystyle +(-1)^{i}{\tilde {x}}\left(n_{1}+{\frac {n}{2}},n_{2},n_{3}\right)+(-1)^{i+j}{\tilde {x}}\left(n_{1}+{\frac {n}{2}}+{\frac {n}{2}},n_{2},n_{3}\right)}
{\displaystyle +(-1)^{i+l}{\tilde {x}}\left(n_{1}+{\frac {n}{2}},n_{2},n_{3}+{\frac {n}{3}}\right)}
{\displaystyle +(-1)^{i+j+l}{\tilde {x}}\left(n_{1}+{\frac {n}{2}},n_{2}+{\frac {n}{2}},n_{3}+{\frac {n}{2}}\right){\text{ where }}i,j,l=0{\text{ or }}1.}
===== Arithmetic complexity =====
The whole 3-D DCT calculation needs {\displaystyle ~[\log _{2}N]~} stages, and each stage involves {\displaystyle ~{\tfrac {1}{8}}\ N^{3}~} butterflies. The whole 3-D DCT requires {\displaystyle ~\left[{\tfrac {1}{8}}\ N^{3}\log _{2}N\right]~} butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed for this stage is
{\displaystyle ~\left[{\tfrac {7}{8}}\ N^{3}\ \log _{2}N\right]~,}
and the total number of real additions, including the post-additions (recursive additions) which can be calculated directly after the butterfly stage or after the bit-reverse stage, is given by
{\displaystyle ~\underbrace {\left[{\frac {3}{2}}N^{3}\log _{2}N\right]} _{\text{Real}}+\underbrace {\left[{\frac {3}{2}}N^{3}\log _{2}N-3N^{3}+3N^{2}\right]} _{\text{Recursive}}=\left[{\frac {9}{2}}N^{3}\log _{2}N-3N^{3}+3N^{2}\right]~.}
The conventional method to calculate MD-DCT-II is the Row-Column-Frame (RCF) approach, which is computationally complex and less efficient on most advanced recent hardware platforms. The VR DIF algorithm requires far fewer multiplications than the RCF algorithm. The numbers of multiplications and additions involved in the RCF approach are given by
{\displaystyle ~\left[{\frac {3}{2}}N^{3}\log _{2}N\right]~} and {\displaystyle ~\left[{\frac {9}{2}}N^{3}\log _{2}N-3N^{3}+3N^{2}\right]~,}
respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transpose and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II, such as video compression and other 3-D image processing applications.
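The quoted counts imply a fixed ratio: (7/8)/(3/2) = 7/12, i.e. roughly 42% fewer multiplications for the VR algorithm, independent of N. A quick arithmetic check (the function names are ours, not from the source):

```python
import math

def vr_mults(N):
    # real multiplications for the 3-D DCT VR algorithm, per the text
    return 7 / 8 * N**3 * math.log2(N)

def rcf_mults(N):
    # real multiplications for the Row-Column-Frame approach, per the text
    return 3 / 2 * N**3 * math.log2(N)

for N in (8, 16, 32):
    reduction = 1 - vr_mults(N) / rcf_mults(N)
    assert reduction > 0.40   # "less than ... by more than 40%"
```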
The main consideration in choosing a fast algorithm is to avoid computational and structural complexities. As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and regular computational structure becomes the most important factor. Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications, it has a simpler computational structure compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR algorithm presents a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterizes butterfly-style Cooley–Tukey FFT algorithms.
The image to the right shows a combination of horizontal and vertical frequencies for an 8 × 8 {\displaystyle (~N_{1}=N_{2}=8~)} two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle.
For example, moving right one square from the top-left yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8 × 8) is transformed to a linear combination of these 64 frequency squares.
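These 64 frequency squares are just outer products of 1-D cosine basis vectors; a short sketch (our own illustration, assuming NumPy) generates them:

```python
import numpy as np

# Generate the 64 basis patterns of the 8 x 8 two-dimensional DCT-II.
N = 8
n = np.arange(N)
basis = np.array([[np.outer(np.cos(np.pi / N * (n + 0.5) * k1),
                            np.cos(np.pi / N * (n + 0.5) * k2))
                   for k2 in range(N)]
                  for k1 in range(N)])
# basis[k1, k2] is the 8 x 8 pattern with k1 half-cycles vertically
# and k2 half-cycles horizontally; basis[0, 0] is the constant (DC) pattern.
```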
=== MD-DCT-IV ===
The M-D DCT-IV is an extension of the 1-D DCT-IV to an M-dimensional domain. The 2-D DCT-IV of a matrix or an image is given by
{\displaystyle X_{k,\ell }=\sum _{n=0}^{N-1}\;\sum _{m=0}^{M-1}\ x_{n,m}\cos \left(\ {\frac {\,(2m+1)(2k+1)\ \pi \,}{4N}}\ \right)\cos \left(\ {\frac {\,(2n+1)(2\ell +1)\ \pi \,}{4M}}\ \right)~,}
for {\displaystyle ~~k=0,\ 1,\ 2\ \ldots \ N-1~~} and {\displaystyle ~~\ell =0,\ 1,\ 2,\ \ldots \ M-1~.}
We can compute the M-D DCT-IV using the regular row-column method, or we can use the polynomial transform method for fast and efficient computation. The main idea of this algorithm is to use the polynomial transform to convert the multidimensional DCT into a series of 1-D DCTs directly. The M-D DCT-IV also has several applications in various fields.
== Computation ==
Although the direct application of these formulas would require {\displaystyle ~{\mathcal {O}}(N^{2})~} operations, it is possible to compute the same thing with only {\displaystyle ~{\mathcal {O}}(N\log N)~} complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with {\displaystyle ~{\mathcal {O}}(N)~} pre- and post-processing steps. In general, {\displaystyle ~{\mathcal {O}}(N\log N)~} methods to compute DCTs are known as fast cosine transform (FCT) algorithms.
The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus {\displaystyle ~{\mathcal {O}}(N)~} extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically (Frigo & Johnson 2005). Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by (Feig & Winograd 1992a) for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well (Duhamel & Vetterli 1990).
While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: Highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms.
Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the 8 × 8 DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.)
In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size {\displaystyle ~4N~} with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) was described by Narasimha & Peterson (1978) and Makhoul (1980), and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II.
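The 4N-DFT embedding can be spelled out concretely. The sketch below (our own illustration, assuming NumPy) places x into the odd-indexed slots of a length-4N real-even sequence, takes an FFT, and recovers the unnormalized DCT-II from the first N outputs. This demonstrates only the underlying identity, not the optimized Makhoul/Narasimha–Peterson method itself:

```python
import numpy as np

def dct_ii_naive(x):
    # Unnormalized DCT-II by direct summation
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

def dct_ii_via_fft(x):
    # Embed x into a length-4N real-even sequence whose even-indexed
    # entries are zero: y[2n+1] = y[4N-2n-1] = x[n].  The first N outputs
    # of its DFT equal twice the DCT-II of x.
    N = len(x)
    y = np.zeros(4 * N)
    y[1:2 * N:2] = x            # y[2n+1] = x[n]
    y[2 * N + 1::2] = x[::-1]   # y[4N-2n-1] = x[n]
    return np.fft.fft(y).real[:N] / 2

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(dct_ii_via_fft(x), dct_ii_naive(x))
```

The symmetry pairs each term e^(−2πi(2n+1)k/4N) with its conjugate, so the DFT output is real and equals 2·Σ xₙ cos[π/N (n + 1/2) k].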
Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. If the subsequent size-{\displaystyle N} real-data FFT is also performed by a real-data split-radix algorithm (as in Sorensen et al. (1987)), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II ({\displaystyle ~2N\log _{2}N-N+2~} real-arithmetic operations).
A recent reduction in the operation count to {\displaystyle ~{\tfrac {17}{9}}N\log _{2}N+{\mathcal {O}}(N)} also uses a real-data FFT. So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small {\displaystyle ~N~,} but this is an implementation rather than an algorithmic question, since it can be solved by unrolling or inlining.)
== Example of IDCT ==
Consider this 8 × 8 grayscale image of the capital letter A.
Each basis function is multiplied by its coefficient and then this product is added to the final image.
== See also ==
Discrete wavelet transform
JPEG § Discrete cosine transform – contains a potentially easier-to-understand example of DCT transformation
List of Fourier-related transforms
Modified discrete cosine transform
== Notes ==
== References ==
== Further reading ==
Narasimha, M.; Peterson, A. (June 1978). "On the Computation of the Discrete Cosine Transform". IEEE Transactions on Communications. 26 (6): 934–936. doi:10.1109/TCOM.1978.1094144.
Makhoul, J. (February 1980). "A fast cosine transform in one and two dimensions". IEEE Transactions on Acoustics, Speech, and Signal Processing. 28 (1): 27–34. doi:10.1109/TASSP.1980.1163351.
Sorensen, H.; Jones, D.; Heideman, M.; Burrus, C. (June 1987). "Real-valued fast Fourier transform algorithms". IEEE Transactions on Acoustics, Speech, and Signal Processing. 35 (6): 849–863. CiteSeerX 10.1.1.205.4523. doi:10.1109/TASSP.1987.1165220.
Plonka, G.; Tasche, M. (January 2005). "Fast and numerically stable algorithms for discrete cosine transforms". Linear Algebra and Its Applications. 394 (1): 309–345. doi:10.1016/j.laa.2004.07.015.
Duhamel, P.; Vetterli, M. (April 1990). "Fast fourier transforms: A tutorial review and a state of the art". Signal Processing (Submitted manuscript). 19 (4): 259–299. Bibcode:1990SigPr..19..259D. doi:10.1016/0165-1684(90)90158-U.
Ahmed, N. (January 1991). "How I came up with the discrete cosine transform". Digital Signal Processing. 1 (1): 4–9. Bibcode:1991DSP.....1....4A. doi:10.1016/1051-2004(91)90086-Z.
Feig, E.; Winograd, S. (September 1992b). "Fast algorithms for the discrete cosine transform". IEEE Transactions on Signal Processing. 40 (9): 2174–2193. Bibcode:1992ITSP...40.2174F. doi:10.1109/78.157218.
Malvar, Henrique (1992), Signal Processing with Lapped Transforms, Boston: Artech House, ISBN 978-0-89006-467-2
Martucci, S. A. (May 1994). "Symmetric convolution and the discrete sine and cosine transforms". IEEE Transactions on Signal Processing. 42 (5): 1038–1051. Bibcode:1994ITSP...42.1038M. doi:10.1109/78.295213.
Oppenheim, Alan; Schafer, Ronald; Buck, John (1999), Discrete-Time Signal Processing (2nd ed.), Upper Saddle River, N.J: Prentice Hall, ISBN 978-0-13-754920-7
Frigo, M.; Johnson, S. G. (February 2005). "The Design and Implementation of FFTW3" (PDF). Proceedings of the IEEE. 93 (2): 216–231. Bibcode:2005IEEEP..93..216F. CiteSeerX 10.1.1.66.3097. doi:10.1109/JPROC.2004.840301. S2CID 6644892.
Boussakta, Said.; Alshibami, Hamoud O. (April 2004). "Fast Algorithm for the 3-D DCT-II" (PDF). IEEE Transactions on Signal Processing. 52 (4): 992–1000. Bibcode:2004ITSP...52..992B. doi:10.1109/TSP.2004.823472. S2CID 3385296.
Cheng, L. Z.; Zeng, Y. H. (2003). "New fast algorithm for multidimensional type-IV DCT". IEEE Transactions on Signal Processing. 51 (1): 213–220. doi:10.1109/TSP.2002.806558.
Wen-Hsiung Chen; Smith, C.; Fralick, S. (September 1977). "A Fast Computational Algorithm for the Discrete Cosine Transform". IEEE Transactions on Communications. 25 (9): 1004–1009. doi:10.1109/TCOM.1977.1093941.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 12.4.2. Cosine Transform", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2011-08-11, retrieved 2011-08-13
== External links ==
Syed Ali Khayam: The Discrete Cosine Transform (DCT): Theory and Application
Implementation of MPEG integer approximation of 8x8 IDCT (ISO/IEC 23002-2)
Matteo Frigo and Steven G. Johnson: FFTW, FFTW Home Page. A free (GPL) C library that can compute fast DCTs (types I-IV) in one or more dimensions, of arbitrary size.
Takuya Ooura: General Purpose FFT Package, FFT Package 1-dim / 2-dim. Free C & FORTRAN libraries for computing fast DCTs (types II–III) in one, two or three dimensions, power of 2 sizes.
Tim Kientzle: Fast algorithms for computing the 8-point DCT and IDCT, Algorithm Alley.
LTFAT is a free Matlab/Octave toolbox with interfaces to the FFTW implementation of the DCTs and DSTs of type I-IV. | Wikipedia/Discrete_cosine_transform |
Vector calculus or vector analysis is a branch of mathematics concerned with the differentiation and integration of vector fields, primarily in three-dimensional Euclidean space, {\displaystyle \mathbb {R} ^{3}.}
The term vector calculus is sometimes used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, gravitational fields, and fluid flow.
Vector calculus was developed from the theory of quaternions by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis, though earlier mathematicians such as Isaac Newton pioneered the field. In its standard form using the cross product, vector calculus does not generalize to higher dimensions, but the alternative approach of geometric algebra, which uses the exterior product, does (see § Generalizations below for more).
== Basic objects ==
=== Scalar fields ===
A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory.
=== Vector fields ===
A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line.
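For instance, the work done by a force field along a curve is the line integral W = ∫ F · dr. A hypothetical midpoint-rule sketch (our own, assuming NumPy) for a constant downward force along a vertical drop:

```python
import numpy as np

def work_along_path(F, path, t0, t1, steps=1000):
    # Numerically approximate the line integral W = integral of F(r) . dr
    # along r(t), t in [t0, t1], using a midpoint rule.
    t = np.linspace(t0, t1, steps + 1)
    pts = np.array([path(ti) for ti in t])
    mid = (pts[:-1] + pts[1:]) / 2          # midpoints of each segment
    dr = np.diff(pts, axis=0)               # displacement per segment
    Fm = np.array([F(p) for p in mid])      # force sampled at midpoints
    return np.sum(Fm * dr)                  # sum of F . dr over segments

# Constant force F = (0, -1) along the straight path from (0, 1) to (0, 0):
F = lambda p: np.array([0.0, -1.0])
path = lambda t: np.array([0.0, 1.0 - t])
W = work_along_path(F, path, 0.0, 1.0, steps=100)
assert abs(W - 1.0) < 1e-9   # force and displacement are aligned, so W = 1
```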
=== Vectors and pseudovectors ===
In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below.
== Vector algebra ==
The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field. The basic algebraic operations consist of:
Also commonly used are the two triple products:
== Operators and theorems ==
=== Differential operators ===
Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator (
∇
{\displaystyle \nabla }
), also known as "nabla". The three basic vector operators are:
Also commonly used are the two Laplace operators:
A quantity called the Jacobian matrix is useful for studying functions when both the domain and range of the function are multivariable, such as a change of variables during integration.
=== Integral theorems ===
The three basic vector operators have corresponding theorems which generalize the fundamental theorem of calculus to higher dimensions:
In two dimensions, the divergence and curl theorems reduce to Green's theorem:
== Applications ==
=== Linear approximations ===
Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable function f(x, y) with real values, one can approximate f(x, y) for (x, y) close to (a, b) by the formula
{\displaystyle f(x,y)\ \approx \ f(a,b)+{\tfrac {\partial f}{\partial x}}(a,b)\,(x-a)+{\tfrac {\partial f}{\partial y}}(a,b)\,(y-b).}
The right-hand side is the equation of the plane tangent to the graph of z = f(x, y) at (a, b).
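A minimal numeric illustration of the tangent-plane formula (our own example, not from the article), using f(x, y) = x²y near (a, b) = (1, 2):

```python
def linear_approx(f, fx, fy, a, b, x, y):
    # Tangent-plane approximation of f near (a, b)
    return f(a, b) + fx(a, b) * (x - a) + fy(a, b) * (y - b)

# f(x, y) = x**2 * y and its partial derivatives
f = lambda x, y: x**2 * y
fx = lambda x, y: 2 * x * y     # df/dx
fy = lambda x, y: x**2          # df/dy

approx = linear_approx(f, fx, fy, 1.0, 2.0, 1.1, 2.05)
exact = f(1.1, 2.05)
# approx = 2 + 4*(0.1) + 1*(0.05) = 2.45, close to the exact value 2.4805
```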
=== Optimization ===
For a continuously differentiable function of several real variables, a point P (that is, a set of values for the input variables, which is viewed as a point in Rn) is critical if all of the partial derivatives of the function are zero at P, or, equivalently, if its gradient is zero. The critical values are the values of the function at the critical points.
If the function is smooth, or, at least twice continuously differentiable, a critical point may be either a local maximum, a local minimum or a saddle point. The different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives.
By Fermat's theorem, all local maxima and minima of a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros.
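The eigenvalue test for critical points can be sketched in a few lines (a hypothetical helper, assuming NumPy). For f(x, y) = x² − y² the Hessian at the origin is diag(2, −2), a saddle:

```python
import numpy as np

def classify_critical_point(hessian):
    # Classify a critical point by the signs of the Hessian's eigenvalues
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"
    return "degenerate"   # a zero eigenvalue: the test is inconclusive

# f(x, y) = x**2 - y**2 has Hessian [[2, 0], [0, -2]] at its critical point (0, 0)
assert classify_critical_point(np.array([[2.0, 0.0], [0.0, -2.0]])) == "saddle point"
assert classify_critical_point(np.array([[2.0, 0.0], [0.0, 2.0]])) == "local minimum"
```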
== Generalizations ==
Vector calculus can also be generalized to other 3-manifolds and higher-dimensional spaces.
=== Different 3-manifolds ===
Vector calculus is initially defined for Euclidean 3-space, {\displaystyle \mathbb {R} ^{3},} which has additional structure beyond simply being a 3-dimensional real vector space, namely: a norm (giving a notion of length) defined via an inner product (the dot product), which in turn gives a notion of angle, and an orientation, which gives a notion of left-handed and right-handed. These structures give rise to a volume form, and also the cross product, which is used pervasively in vector calculus.
The gradient and divergence require only the inner product, while the curl and the cross product also require the handedness of the coordinate system to be taken into account (see Cross product § Handedness for more detail).
Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetric nondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (the special orthogonal group SO(3)).
More generally, vector calculus can be defined on any 3-dimensional oriented Riemannian manifold, or more generally pseudo-Riemannian manifold. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point.
=== Other dimensions ===
Most of the analytic results are easily understood, in a more general form, using the machinery of differential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yielding harmonic analysis), while curl and cross product do not generalize as directly.
From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being k-vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields (scalar, vector, pseudovector or pseudoscalar corresponding to 0, 1, n − 1 or n dimensions, which is exhaustive in dimension 3), so one cannot only work with (pseudo)scalars and (pseudo)vectors.
In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7 (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require {\displaystyle n-1} vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized, is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ – there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally {\displaystyle \textstyle {{\binom {n}{2}}={\frac {1}{2}}n(n-1)}} dimensions of rotations in n dimensions).
There are two important alternative generalizations of vector calculus. The first, geometric algebra, uses k-vector fields instead of vector fields (in 3 or fewer dimensions, every k-vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with the exterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yields Clifford algebras as the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions.
The second generalization uses differential forms (k-covector fields) instead of vector fields or k-vector fields, and is widely used in mathematics, particularly in differential geometry, geometric topology, and harmonic analysis, in particular yielding Hodge theory on oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to the exterior derivative of 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form of Stokes' theorem.
From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear.
From the point of view of geometric algebra, vector calculus implicitly identifies k-vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifies k-forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields. Thus for example the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field.
== External links ==
The Feynman Lectures on Physics Vol. II Ch. 2: Differential Calculus of Vector Fields
"Vector analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Vector algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
A survey of the improper use of ∇ in vector analysis (1994) Tai, Chen-To
Vector Analysis: A Text-book for the Use of Students of Mathematics and Physics, (based upon the lectures of Willard Gibbs) by Edwin Bidwell Wilson, published 1902.
A neuron (American English), neurone (British English), or nerve cell is an excitable cell that fires electric signals called action potentials across a neural network in the nervous system. Neurons communicate with other cells via synapses, which are specialized connections that commonly use minute amounts of chemical neurotransmitter to pass the electric signal from the presynaptic neuron to the target cell across the synaptic gap.
Neurons are the main components of nervous tissue in all animals except sponges and placozoans. Plants and fungi do not have nerve cells. Molecular evidence suggests that the ability to generate electric signals first appeared in evolution some 700 to 800 million years ago, during the Tonian period. Predecessors of neurons were the peptidergic secretory cells. They eventually gained new gene modules which enabled cells to create post-synaptic scaffolds and ion channels that generate fast electrical signals. The ability to generate electric signals was a key innovation in the evolution of the nervous system.
Neurons are typically classified into three types based on their function. Sensory neurons respond to stimuli such as touch, sound, or light that affect the cells of the sensory organs, and they send signals to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to control everything from muscle contractions to glandular output. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord. When multiple neurons are functionally connected together, they form what is called a neural circuit.
A neuron contains all the structures of other cells, such as a nucleus, mitochondria, and Golgi bodies, but has additional unique structures such as an axon and dendrites. The soma is a compact structure, and the axon and dendrites are filaments extruding from it. Dendrites typically branch profusely and extend a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock and travels for as far as 1 meter in humans or more in other species. It branches but usually maintains a constant diameter. At the farthest tip of the axon's branches are axon terminals, where the neuron can transmit a signal across the synapse to another cell. Neurons may lack dendrites or have no axon. The term neurite is used to describe either a dendrite or an axon, particularly when the cell is undifferentiated.
Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to the dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite. The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to the maintenance of voltage gradients across their membranes. If the voltage changes by a large enough amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential. This potential travels rapidly along the axon and activates synaptic connections as it reaches them. Synaptic signals may be excitatory or inhibitory, increasing or reducing the net voltage that reaches the soma.
In most cases, neurons are generated by neural stem cells during brain development and childhood. Neurogenesis largely ceases during adulthood in most areas of the brain.
== Nervous system ==
Neurons are the primary components of the nervous system, along with the glial cells that give them structural and metabolic support. The nervous system is made up of the central nervous system, which includes the brain and spinal cord, and the peripheral nervous system, which includes the autonomic, enteric and somatic nervous systems. In vertebrates, the majority of neurons belong to the central nervous system, but some reside in peripheral ganglia, and many sensory neurons are situated in sensory organs such as the retina and cochlea.
Axons may bundle into nerve fascicles that make up the nerves in the peripheral nervous system (like strands of wire that make up a cable). In the central nervous system bundles of axons are called nerve tracts.
== Anatomy and histology ==
Neurons are highly specialized for the processing and transmission of cellular signals. Given the diversity of functions performed in different parts of the nervous system, there is a wide variety in their shape, size, and electrochemical properties. For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter.
The soma is the body of the neuron. As it contains the nucleus, most protein synthesis occurs here. The nucleus can range from 3 to 18 micrometers in diameter.
The dendrites of a neuron are cellular extensions with many branches. This overall shape and structure are referred to metaphorically as a dendritic tree. This is where the majority of input to the neuron occurs via the dendritic spine.
The axon is a finer, cable-like projection that can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon primarily carries nerve signals away from the soma and carries some types of information back to it. Many neurons have only one axon, but this axon may—and usually will—undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock also has the greatest density of voltage-dependent sodium channels. This makes it the most easily excited part of the neuron and the spike initiation zone for the axon. In electrophysiological terms, it has the most negative threshold potential.
While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons.
The axon terminal is found at the end of the axon farthest from the soma and contains synapses. Synaptic boutons are specialized structures where neurotransmitter chemicals are released to communicate with target neurons. In addition to synaptic boutons at the axon terminal, a neuron may have en passant boutons, which are located along the length of the axon.
The accepted view of the neuron attributes dedicated functions to its various anatomical components; however, dendrites and axons often act in ways contrary to their so-called main function.
Axons and dendrites in the central nervous system are typically only about one micrometer thick, while some in the peripheral nervous system are much thicker. The soma is usually about 10–25 micrometers in diameter and often is not much larger than the cell nucleus it contains. The longest axon of a human motor neuron can be over a meter long, reaching from the base of the spine to the toes.
Sensory neurons can have axons that run from the toes to the posterior column of the spinal cord, over 1.5 meters in adults. Giraffes have single axons several meters in length running along the entire length of their necks. Much of what is known about axonal function comes from studying the squid giant axon, an ideal experimental preparation because of its relatively immense size (0.5–1 millimeter thick, several centimeters long).
Fully differentiated neurons are permanently postmitotic; however, stem cells present in the adult brain may regenerate functional neurons throughout the life of an organism (see neurogenesis). Astrocytes are star-shaped glial cells that have been observed to turn into neurons, by virtue of their stem cell-like characteristic of pluripotency.
=== Membrane ===
Like all animal cells, the cell body of every neuron is enclosed by a plasma membrane, a bilayer of lipid molecules with many types of protein structures embedded in it. A lipid bilayer is a powerful electrical insulator, but in neurons, many of the protein structures embedded in the membrane are electrically active. These include ion channels that permit electrically charged ions to flow across the membrane and ion pumps that chemically transport ions from one side of the membrane to the other. Most ion channels are permeable only to specific types of ions. Some ion channels are voltage gated, meaning that they can be switched between open and closed states by altering the voltage difference across the membrane. Others are chemically gated, meaning that they can be switched between open and closed states by interactions with chemicals that diffuse through the extracellular fluid. The ions involved include sodium, potassium, chloride, and calcium. The interactions between ion channels and ion pumps produce a voltage difference across the membrane, typically a bit less than 1/10 of a volt at baseline. This voltage has two functions: first, it provides a power source for an assortment of voltage-dependent protein machinery that is embedded in the membrane; second, it provides a basis for electrical signal transmission between different parts of the membrane.
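The contribution of a single ion species to this membrane voltage can be illustrated with the Nernst equation, which gives the equilibrium potential set by an ion's concentration gradient. The sketch below uses typical textbook concentration values for a mammalian neuron; the numbers are illustrative assumptions, not figures from this article.

```python
import math

def nernst(conc_out, conc_in, z, T=310.0):
    """Nernst equilibrium potential (volts) for an ion of valence z
    at body temperature T (kelvin), given outside/inside concentrations."""
    R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian concentrations (mM): K+ is concentrated inside the
# cell and Na+ outside, so their equilibrium potentials have opposite signs.
e_k = nernst(5.0, 140.0, +1)    # ≈ -0.089 V
e_na = nernst(145.0, 12.0, +1)  # ≈ +0.067 V
```

Both magnitudes come out a little under a tenth of a volt, consistent with the baseline membrane voltage described above.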
=== Histology and internal structure ===
Numerous microscopic clumps called Nissl bodies (or Nissl substance) are seen when nerve cell bodies are stained with a basophilic ("base-loving") dye. These structures consist of rough endoplasmic reticulum and associated ribosomal RNA. Named after German psychiatrist and neuropathologist Franz Nissl (1860–1919), they are involved in protein synthesis and their prominence can be explained by the fact that nerve cells are very metabolically active. Basophilic dyes such as aniline or (weakly) hematoxylin highlight negatively charged components, and so bind to the phosphate backbone of the ribosomal RNA.
The cell body of a neuron is supported by a complex mesh of structural proteins called neurofilaments, which together with neurotubules (neuronal microtubules) are assembled into larger neurofibrils. Some neurons also contain pigment granules, such as neuromelanin (a brownish-black pigment that is a byproduct of the synthesis of catecholamines) and lipofuscin (a yellowish-brown pigment), both of which accumulate with age. Other structural proteins that are important for neuronal function are actin and the tubulin of microtubules. Class III β-tubulin is found almost exclusively in neurons. Actin is predominantly found at the tips of axons and dendrites during neuronal development. There the actin dynamics can be modulated via an interplay with microtubules.
There are different internal structural characteristics between axons and dendrites. Typical axons seldom contain ribosomes, except some in the initial segment. Dendrites contain granular endoplasmic reticulum or ribosomes, in diminishing amounts as the distance from the cell body increases.
== Classification ==
Neurons vary in shape and size and can be classified by their morphology and function. The anatomist Camillo Golgi grouped neurons into two types: type I, with long axons used to move signals over long distances, and type II, with short axons, which can often be confused with dendrites. Type I cells can be further classified by the location of the soma. The basic morphology of type I neurons, represented by spinal motor neurons, consists of a cell body called the soma and a long thin axon covered by a myelin sheath. The dendritic tree wraps around the cell body and receives signals from other neurons. The end of the axon has branching axon terminals that release neurotransmitters into a gap called the synaptic cleft between the terminals and the dendrites of the next neuron.
=== Structural classification ===
==== Polarity ====
Most neurons can be anatomically characterized as:
Unipolar: single process. Unipolar cells are exclusively sensory neurons. Their dendrites receive sensory information, sometimes directly from the stimulus itself. The cell bodies of unipolar neurons are always found in ganglia. Sensory reception is a peripheral function, so the cell body is in the periphery, though closer to the CNS in a ganglion. The axon projects from the dendrite endings, past the cell body in a ganglion, and into the central nervous system.
Bipolar: 1 axon and 1 dendrite. They are found mainly in the olfactory epithelium, and as part of the retina.
Multipolar: 1 axon and 2 or more dendrites
Golgi I: neurons with long-projecting axonal processes; examples are pyramidal cells, Purkinje cells, and anterior horn cells
Golgi II: neurons whose axonal process projects locally; the best example is the granule cell
Anaxonic: where the axon cannot be distinguished from the dendrite(s)
Pseudounipolar: 1 process which then serves as both an axon and a dendrite
==== Other ====
Some unique neuronal types can be identified according to their location in the nervous system and distinct shape. Some examples are:
Basket cells, interneurons that form a dense plexus of terminals around the soma of target cells, found in the cortex and cerebellum
Betz cells, large motor neurons in primary motor cortex
Lugaro cells, interneurons of the cerebellum
Medium spiny neurons, most neurons in the corpus striatum
Purkinje cells, huge neurons in the cerebellum, a type of Golgi I multipolar neuron
Pyramidal cells, neurons with triangular soma, a type of Golgi I
Rosehip cells, unique human inhibitory neurons that interconnect with pyramidal cells
Renshaw cells, neurons with both ends linked to alpha motor neurons
Unipolar brush cells, interneurons with unique dendrite ending in a brush-like tuft
Granule cells, a type of Golgi II neuron
Anterior horn cells, motoneurons located in the spinal cord
Spindle cells, interneurons that connect widely separated areas of the brain
=== Functional classification ===
==== Direction ====
Afferent neurons convey information from tissues and organs into the central nervous system and are also called sensory neurons.
Efferent neurons (motor neurons) transmit signals from the central nervous system to the effector cells.
Interneurons connect neurons within specific regions of the central nervous system.
Afferent and efferent also refer generally to neurons that, respectively, bring information to or send information from the brain.
==== Action on other neurons ====
A neuron affects other neurons by releasing a neurotransmitter that binds to chemical receptors. The effect on the postsynaptic neuron is determined by the type of receptor that is activated, not by the presynaptic neuron or by the neurotransmitter. Receptors are classified broadly as excitatory (causing an increase in firing rate), inhibitory (causing a decrease in firing rate), or modulatory (causing long-lasting effects not directly related to firing rate).
The two most common (90%+) neurotransmitters in the brain, glutamate and GABA, have largely consistent actions. Glutamate acts on several types of receptors and has effects that are excitatory at ionotropic receptors and a modulatory effect at metabotropic receptors. Similarly, GABA acts on several types of receptors, but all of them have inhibitory effects (in adult animals, at least). Because of this consistency, it is common for neuroscientists to refer to cells that release glutamate as "excitatory neurons", and cells that release GABA as "inhibitory neurons". Some other types of neurons have consistent effects, for example, "excitatory" motor neurons in the spinal cord that release acetylcholine, and "inhibitory" spinal neurons that release glycine.
The distinction between excitatory and inhibitory neurotransmitters is not absolute. Rather, it depends on the class of chemical receptors present on the postsynaptic neuron. In principle, a single neuron, releasing a single neurotransmitter, can have excitatory effects on some targets, inhibitory effects on others, and modulatory effects on others still. For example, photoreceptor cells in the retina constantly release the neurotransmitter glutamate in the absence of light. So-called OFF bipolar cells are, like most neurons, excited by the released glutamate. However, neighboring target neurons called ON bipolar cells are instead inhibited by glutamate, because they lack typical ionotropic glutamate receptors and instead express a class of inhibitory metabotropic glutamate receptors. When light is present, the photoreceptors cease releasing glutamate, which relieves the ON bipolar cells from inhibition, activating them; this simultaneously removes the excitation from the OFF bipolar cells, silencing them.
It is possible to identify the type of inhibitory effect a presynaptic neuron will have on a postsynaptic neuron, based on the proteins the presynaptic neuron expresses. Parvalbumin-expressing neurons typically dampen the output signal of the postsynaptic neuron in the visual cortex, whereas somatostatin-expressing neurons typically block dendritic inputs to the postsynaptic neuron.
==== Discharge patterns ====
Neurons have intrinsic electroresponsive properties, such as intrinsic transmembrane voltage oscillatory patterns, and so can be classified according to their electrophysiological characteristics:
Tonic or regular spiking. Some neurons are constantly (tonically) active, typically firing at a constant frequency. Example: interneurons in the neostriatum.
Phasic or bursting. Neurons that fire in bursts are called phasic.
Fast-spiking. Some neurons are notable for their high firing rates, for example some types of cortical inhibitory interneurons, cells in the globus pallidus, and retinal ganglion cells.
==== Neurotransmitter ====
Neurotransmitters are chemical messengers passed from one neuron to another neuron or to a muscle cell or gland cell.
Cholinergic neurons – acetylcholine. Acetylcholine is released from presynaptic neurons into the synaptic cleft. It acts as a ligand for both ligand-gated ion channels and metabotropic (GPCR) muscarinic receptors. Nicotinic receptors are pentameric ligand-gated ion channels composed of alpha and beta subunits that bind nicotine. Ligand binding opens the channel, causing an influx of Na+ and depolarization, and increases the probability of presynaptic neurotransmitter release. Acetylcholine is synthesized from choline and acetyl coenzyme A.
Adrenergic neurons – noradrenaline. Noradrenaline (norepinephrine) is released from most postganglionic neurons in the sympathetic nervous system onto two sets of GPCRs: alpha adrenoceptors and beta adrenoceptors. Noradrenaline is one of the three common catecholamine neurotransmitters, and the most prevalent of them in the peripheral nervous system; as with other catecholamines, it is synthesized from tyrosine.
GABAergic neurons – gamma aminobutyric acid. GABA is one of two main inhibitory neurotransmitters in the central nervous system (CNS), the other being glycine. GABA has a function homologous to that of ACh, gating anion channels that allow Cl− ions to enter the postsynaptic neuron. Cl− causes hyperpolarization within the neuron, decreasing the probability of an action potential firing as the voltage becomes more negative (for an action potential to fire, a positive voltage threshold must be reached). GABA is synthesized from glutamate by the enzyme glutamate decarboxylase.
Glutamatergic neurons – glutamate. Glutamate is one of two primary excitatory amino acid neurotransmitters, along with aspartate. Glutamate receptors fall into one of four categories: three are ligand-gated ion channels, and one is a G-protein coupled receptor (GPCR).
AMPA and kainate receptors function as cation channels permeable to Na+, mediating fast excitatory synaptic transmission.
NMDA receptors are another type of cation channel, more permeable to Ca2+. Their function depends on the binding of glycine as a co-agonist; NMDA receptors do not function unless both ligands are present.
Metabotropic receptors, GPCRs modulate synaptic transmission and postsynaptic excitability.
Glutamate can cause excitotoxicity when blood flow to the brain is interrupted, resulting in brain damage. When blood flow is suppressed, glutamate is released from presynaptic neurons, causing greater NMDA and AMPA receptor activation than normal, leading to elevated Ca2+ and Na+ entry into the postsynaptic neuron and cell damage. Glutamate is synthesized from the amino acid glutamine by the enzyme glutaminase.
Dopaminergic neurons—dopamine. Dopamine is a neurotransmitter that acts on D1-type (D1 and D5) Gs-coupled receptors, which increase cAMP and PKA activity, and D2-type (D2, D3, and D4) Gi-coupled receptors, which decrease cAMP and PKA activity. Dopamine is connected to mood and behavior and modulates both pre- and postsynaptic neurotransmission. Loss of dopamine neurons in the substantia nigra has been linked to Parkinson's disease. Dopamine is synthesized from the amino acid tyrosine. Tyrosine is catalyzed into levodopa (or L-DOPA) by tyrosine hydroxylase, and levodopa is then converted into dopamine by aromatic amino acid decarboxylase.
Serotonergic neurons—serotonin. Serotonin (5-hydroxytryptamine, 5-HT) can act as an excitatory or inhibitory neurotransmitter. Most of its receptor classes are GPCRs; the 5-HT3 receptor is instead a ligand-gated cation channel. Serotonin is synthesized from tryptophan by tryptophan hydroxylase and then decarboxylated. A lack of 5-HT at postsynaptic neurons has been linked to depression. Drugs that block the presynaptic serotonin transporter are used for treatment, such as Prozac and Zoloft.
Purinergic neurons—ATP. ATP is a neurotransmitter acting at both ligand-gated ion channels (P2X receptors) and GPCRs (P2Y receptors). ATP is, however, best known as a cotransmitter. Such purinergic signaling can also be mediated by other purines such as adenosine, which acts at its own class of adenosine (P1) receptors.
Histaminergic neurons—histamine. Histamine is a monoamine neurotransmitter and neuromodulator. Histamine-producing neurons are found in the tuberomammillary nucleus of the hypothalamus. Histamine is involved in arousal and regulating sleep/wake behaviors.
==== Multimodal classification ====
Since 2012 there has been a push from the cellular and computational neuroscience community to come up with a universal classification of neurons that will apply to all neurons in the brain as well as across species. This is done by considering the three essential qualities of all neurons: electrophysiology, morphology, and the individual transcriptome of the cells. Besides being universal, this classification has the advantage of being able to classify astrocytes as well. A method called patch-sequencing, in which all three qualities can be measured at once, is used extensively by the Allen Institute for Brain Science. In 2023, a comprehensive cell atlas of the adult and developing human brain at the transcriptional, epigenetic, and functional levels was created through an international collaboration of researchers using cutting-edge molecular biology approaches.
== Connectivity ==
Neurons communicate with each other via synapses, where either the axon terminal of one cell contacts another neuron's dendrite, soma, or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses.
Synapses can be excitatory or inhibitory, either increasing or decreasing activity in the target neuron, respectively. Some neurons also communicate via electrical synapses, which are direct, electrically conductive junctions between cells.
When an action potential reaches the axon terminal, it opens voltage-gated calcium channels, allowing calcium ions to enter the terminal. Calcium causes synaptic vesicles filled with neurotransmitter molecules to fuse with the membrane, releasing their contents into the synaptic cleft. The neurotransmitters diffuse across the synaptic cleft and activate receptors on the postsynaptic neuron. High cytosolic calcium in the axon terminal triggers mitochondrial calcium uptake, which, in turn, activates mitochondrial energy metabolism to produce ATP to support continuous neurotransmission.
An autapse is a synapse in which a neuron's axon connects to its own dendrites.
The human brain has some 8.6×10¹⁰ (eighty-six billion) neurons. Each neuron has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10¹⁵ synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10¹⁴ to 5×10¹⁴ synapses (100 to 500 trillion).
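These figures can be cross-checked with one line of arithmetic: multiplying the neuron count by the average number of connections per neuron gives a synapse total on the same order of magnitude as the adult estimates.

```python
neurons = 8.6e10      # ~86 billion neurons in the human brain
per_neuron = 7_000    # average synaptic connections per neuron
total = neurons * per_neuron
print(f"{total:.1e}")  # prints 6.0e+14, the same order as the 1e14-5e14 range
```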
=== Nonelectrochemical signaling ===
Beyond electrical and chemical signaling, studies suggest neurons in healthy human brains can also communicate through:
force generated by the enlargement of dendritic spines
the transfer of proteins – transneuronally transported proteins (TNTPs)
Neurons can also be modulated by input from the environment and by hormones released from other parts of the organism, which could be influenced more or less directly by neurons. This also applies to neurotrophins such as BDNF. The gut microbiome is also connected with the brain.
Neurons also communicate with microglia, the brain's main immune cells, via specialized contact sites called "somatic junctions". These connections enable microglia to constantly monitor and regulate neuronal functions, and to exert neuroprotection when needed.
== Mechanisms for propagating action potentials ==
In 1937 John Zachary Young suggested that the squid giant axon could be used to study neuronal electrical properties. It is larger than but similar to human neurons, making it easier to study. By inserting electrodes into the squid giant axons, accurate measurements were made of the membrane potential.
The cell membranes of the axon and soma contain voltage-gated ion channels that allow the neuron to generate and propagate an electrical signal (an action potential). Some neurons also generate subthreshold membrane potential oscillations. These signals are generated and propagated by charge-carrying ions including sodium (Na+), potassium (K+), chloride (Cl−), and calcium (Ca2+).
Several stimuli can activate a neuron leading to electrical activity, including pressure, stretch, chemical transmitters, and changes in the electric potential across the cell membrane. Stimuli cause specific ion-channels within the cell membrane to open, leading to a flow of ions through the cell membrane, changing the membrane potential. Neurons must maintain the specific electrical properties that define their neuron type.
Thin neurons and axons require less metabolic expense to produce and carry action potentials, but thicker axons convey impulses more rapidly. To minimize metabolic expense while maintaining rapid conduction, many neurons have insulating sheaths of myelin around their axons. The sheaths are formed by glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral nervous system. The sheath enables action potentials to travel faster than in unmyelinated axons of the same diameter, whilst using less energy. The myelin sheath in peripheral nerves normally runs along the axon in sections about 1 mm long, punctuated by unsheathed nodes of Ranvier, which contain a high density of voltage-gated ion channels. Multiple sclerosis is a neurological disorder that results from the demyelination of axons in the central nervous system.
Some neurons do not generate action potentials but instead generate a graded electrical signal, which in turn causes graded neurotransmitter release. Such non-spiking neurons tend to be sensory neurons or interneurons, because they cannot carry signals long distances.
== Neural coding ==
Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationships among the electrical activities of the neurons within the ensemble. It is thought that neurons can encode both digital and analog information.
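As a toy illustration of the digital/analog point, a rate code represents a continuous stimulus intensity as a discrete spike count over a time window. This is a generic textbook scheme, not a model from this article, and the gain and window values below are arbitrary.

```python
def rate_encode(intensity, window=1.0, gain=50.0):
    """Map a stimulus intensity in [0, 1] to a spike count over `window` seconds."""
    return int(round(gain * intensity * window))

def rate_decode(count, window=1.0, gain=50.0):
    """Recover an intensity estimate from an observed spike count."""
    return count / (gain * window)
```

Encoding an intensity of 0.5 gives 25 spikes per second, and decoding recovers 0.5; the quantization error of such a code is bounded by half a spike per window.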
== All-or-none principle ==
The conduction of nerve impulses is an example of an all-or-none response. In other words, if a neuron responds at all, then it must respond completely. Greater intensity of stimulation, such as a brighter image or louder sound, does not produce a stronger signal but can increase firing frequency. Receptors respond in different ways to stimuli. Slowly adapting or tonic receptors respond to a steady stimulus and produce a steady rate of firing. Tonic receptors most often respond to increased stimulus intensity by increasing their firing frequency, usually as a power function of stimulus plotted against impulses per second. This can be likened to an intrinsic property of light, where greater intensity of a specific frequency (color) requires more photons, as the photons cannot become "stronger" for a specific frequency.
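Both properties (no output at all below threshold, and a firing rate that grows with stimulus intensity) appear in a minimal leaky integrate-and-fire model. This is a standard textbook abstraction rather than anything specific to this article, and the membrane parameters chosen below are arbitrary but physiologically plausible.

```python
def spike_count(current, t_max=1.0, dt=1e-4, tau=0.02, resistance=1e7,
                v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070):
    """Count spikes of a leaky integrate-and-fire neuron driven by a
    constant input current (amps) for t_max seconds."""
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        # leak toward rest plus input drive, forward-Euler step
        v += dt * (-(v - v_rest) + resistance * current) / tau
        if v >= v_thresh:   # all-or-none: crossing threshold emits a spike
            spikes += 1
            v = v_reset     # and resets the membrane
    return spikes
```

A subthreshold current produces no spikes at all, while stronger suprathreshold currents produce spikes at correspondingly higher rates.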
Other receptor types include quickly adapting or phasic receptors, where firing decreases or stops with a steady stimulus; examples include skin which, when touched causes neurons to fire, but if the object maintains even pressure, the neurons stop firing. The neurons of the skin and muscles that are responsive to pressure and vibration have filtering accessory structures that aid their function.
The pacinian corpuscle is one such structure. It has concentric layers like an onion, which form around the axon terminal. When pressure is applied and the corpuscle is deformed, mechanical stimulus is transferred to the axon, which fires. If the pressure is steady, the stimulus ends; thus, these neurons typically respond with a transient depolarization during the initial deformation and again when the pressure is removed, which causes the corpuscle to change shape again. Other types of adaptation are important in extending the function of several other neurons.
== Etymology and spelling ==
The German anatomist Heinrich Wilhelm Waldeyer introduced the term neuron in 1891, based on the ancient Greek νεῦρον neuron 'sinew, cord, nerve'.
The word was adopted in French with the spelling neurone. That spelling was also used by many writers in English, but has now become rare in American usage and uncommon in British usage.
Some previous works used nerve cell (cellule nervose), as adopted in Camillo Golgi's 1873 paper on the discovery of the silver staining technique used to visualize nervous tissue under light microscopy.
== History ==
The neuron's place as the primary functional unit of the nervous system was first recognized in the late 19th century through the work of the Spanish anatomist Santiago Ramón y Cajal.
To make the structure of individual neurons visible, Ramón y Cajal improved a silver staining process that had been developed by Camillo Golgi. The improved process involves a technique called "double impregnation" and is still in use.
In 1888 Ramón y Cajal published a paper about the bird cerebellum. In this paper, he stated that he could not find evidence for anastomosis between axons and dendrites and called each nervous element "an autonomous canton." This became known as the neuron doctrine, one of the central tenets of modern neuroscience.
In 1891, the German anatomist Heinrich Wilhelm Waldeyer wrote a highly influential review of the neuron doctrine in which he introduced the term neuron to describe the anatomical and physiological unit of the nervous system.
The silver impregnation stains are a useful method for neuroanatomical investigations because, for reasons unknown, they stain only a small percentage of cells in a tissue, exposing the complete microstructure of individual neurons without much overlap from other cells.
=== Neuron doctrine ===
The neuron doctrine is the now fundamental idea that neurons are the basic structural and functional units of the nervous system. The theory was put forward by Santiago Ramón y Cajal in the late 19th century. It held that neurons are discrete cells (not connected in a meshwork), acting as metabolically distinct units.
Later discoveries yielded refinements to the doctrine. For example, glial cells, which are non-neuronal, play an essential role in information processing. Also, electrical synapses are more common than previously thought, comprising direct, cytoplasmic connections between neurons; in fact, neurons can form even tighter couplings: the squid giant axon arises from the fusion of multiple axons.
Ramón y Cajal also postulated the Law of Dynamic Polarization, which states that a neuron receives signals at its dendrites and cell body and transmits them, as action potentials, along the axon in one direction: away from the cell body. The Law of Dynamic Polarization has important exceptions; dendrites can serve as synaptic output sites of neurons and axons can receive synaptic inputs.
=== Compartmental modelling of neurons ===
Although neurons are often described as "fundamental units" of the brain, they perform internal computations. Neurons integrate input within dendrites, and this complexity is lost in models that assume neurons to be a fundamental unit. Dendritic branches can be modeled as spatial compartments, whose activity is related to passive membrane properties, but may also be different depending on input from synapses. Compartmental modelling of dendrites is especially helpful for understanding the behavior of neurons that are too small to record with electrodes, as is the case for Drosophila melanogaster.
== Neurons in the brain ==
The number of neurons in the brain varies dramatically from species to species. In a human, there are an estimated 10–20 billion neurons in the cerebral cortex and 55–70 billion neurons in the cerebellum. By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal model organism as scientists have been able to map all of its neurons. The fruit fly Drosophila melanogaster, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors. Many properties of neurons, from the type of neurotransmitters used to ion channel composition, are maintained across species, allowing scientists to study processes occurring in more complex organisms in much simpler experimental systems.
== Neurological disorders ==
Charcot–Marie–Tooth disease (CMT) is a heterogeneous inherited disorder of nerves (neuropathy) that is characterized by loss of muscle tissue and touch sensation, predominantly in the feet and legs extending to the hands and arms in advanced stages. Presently incurable, this disease is one of the most common inherited neurological disorders, affecting 36 in 100,000 people.
Alzheimer's disease (AD), also known simply as Alzheimer's, is a neurodegenerative disease characterized by progressive cognitive deterioration, together with declining activities of daily living and neuropsychiatric symptoms or behavioral changes. The most striking early symptom is loss of short-term memory (amnesia), which usually manifests as minor forgetfulness that becomes steadily more pronounced with illness progression, with relative preservation of older memories. As the disorder progresses, cognitive (intellectual) impairment extends to the domains of language (aphasia), skilled movements (apraxia), and recognition (agnosia), and functions such as decision-making and planning become impaired.
Parkinson's disease (PD), also known as Parkinson's, is a degenerative disorder of the central nervous system that often impairs motor skills and speech. Parkinson's disease belongs to a group of conditions called movement disorders. It is characterized by muscle rigidity, tremor, a slowing of physical movement (bradykinesia), and in extreme cases, a loss of physical movement (akinesia). The primary symptoms are the results of decreased stimulation of the motor cortex by the basal ganglia, normally caused by the insufficient formation and action of dopamine, which is produced in the dopaminergic neurons of the brain. Secondary symptoms may include high-level cognitive dysfunction and subtle language problems. PD is both chronic and progressive.
Myasthenia gravis is a neuromuscular disease leading to fluctuating muscle weakness and fatigability during simple activities. Weakness is typically caused by circulating antibodies that block acetylcholine receptors at the postsynaptic neuromuscular junction, inhibiting the stimulative effect of the neurotransmitter acetylcholine. Myasthenia is treated with immunosuppressants, cholinesterase inhibitors and, in selected cases, thymectomy.
=== Demyelination ===
Demyelination is a process characterized by the gradual loss of the myelin sheath enveloping nerve fibers. When myelin deteriorates, signal conduction along nerves can be significantly impaired or lost, and the nerve eventually withers. Demyelination may affect both central and peripheral nervous systems, contributing to various neurological disorders such as multiple sclerosis, Guillain-Barré syndrome, and chronic inflammatory demyelinating polyneuropathy. Although demyelination is often caused by an autoimmune reaction, it may also be caused by viral infections, metabolic disorders, trauma, and some medications.
=== Axonal degeneration ===
Although most injury responses include a calcium influx signaling to promote resealing of severed parts, axonal injuries initially lead to acute axonal degeneration, which is the rapid separation of the proximal and distal ends, occurring within 30 minutes of injury. Degeneration follows with swelling of the axolemma, and eventually leads to bead-like formation. Granular disintegration of the axonal cytoskeleton and inner organelles occurs after axolemma degradation. Early changes include accumulation of mitochondria in the paranodal regions at the site of injury. The endoplasmic reticulum degrades and mitochondria swell up and eventually disintegrate. The disintegration is dependent on ubiquitin and calpain proteases (caused by the influx of calcium ions), suggesting that axonal degeneration is an active process that produces complete fragmentation. The process takes roughly 24 hours in the PNS and longer in the CNS. The signaling pathways leading to axolemma degeneration are unknown.
== Development ==
Neurons develop through the process of neurogenesis, in which neural stem cells divide to produce differentiated neurons. Once fully differentiated they are no longer capable of undergoing mitosis. Neurogenesis primarily occurs during embryonic development.
Neurons initially develop from the neural tube in the embryo. The neural tube has three layers – a ventricular zone, an intermediate zone, and a marginal zone. The ventricular zone surrounds the tube's central canal and becomes the ependyma. Dividing cells of the ventricular zone form the intermediate zone, which stretches to the outermost layer of the neural tube, called the pial layer. The gray matter of the brain is derived from the intermediate zone. The extensions of the neurons in the intermediate zone make up the marginal zone, which, when myelinated, becomes the brain's white matter.
Differentiation of the neurons is ordered by their size: large motor neurons differentiate first, while smaller sensory neurons, together with glial cells, differentiate around birth.
Adult neurogenesis can occur and studies of the age of human neurons suggest that this process occurs only for a minority of cells and that the vast majority of neurons in the neocortex form before birth and persist without replacement. The extent to which adult neurogenesis exists in humans, and its contribution to cognition are controversial, with conflicting reports published in 2018.
The body contains a variety of stem cell types that can differentiate into neurons. Researchers found a way to transform human skin cells into nerve cells using transdifferentiation, in which "cells are forced to adopt new identities".
During neurogenesis in the mammalian brain, progenitor and stem cells progress from proliferative divisions to differentiative divisions. This progression leads to the neurons and glia that populate cortical layers. Epigenetic modifications play a key role in regulating gene expression in differentiating neural stem cells, and are critical for cell fate determination in the developing and adult mammalian brain. Epigenetic modifications include DNA cytosine methylation to form 5-methylcytosine and 5-methylcytosine demethylation. DNA cytosine methylation is catalyzed by DNA methyltransferases (DNMTs). Methylcytosine demethylation is catalyzed in several stages by TET enzymes that carry out oxidative reactions (e.g. 5-methylcytosine to 5-hydroxymethylcytosine) and enzymes of the DNA base excision repair (BER) pathway.
At different stages of mammalian nervous system development, two DNA repair processes are employed in the repair of DNA double-strand breaks. These pathways are homologous recombinational repair, used in proliferating neural precursor cells, and non-homologous end joining, used mainly at later developmental stages.
Intercellular communication between developing neurons and microglia is also indispensable for proper neurogenesis and brain development.
== Nerve regeneration ==
Peripheral axons can regrow if they are severed, but one neuron cannot be functionally replaced by one of another type (Llinás' law).
== See also ==
== References ==
== Further reading ==
== External links ==
IBRO (International Brain Research Organization). Fostering neuroscience research especially in less well-funded countries.
NeuronBank an online neuromics tool for cataloging neuronal types and synaptic connectivity.
High Resolution Neuroanatomical Images of Primate and Non-Primate Brains.
The Department of Neuroscience at Wikiversity, which presently offers two courses: Fundamentals of Neuroscience and Comparative Neuroscience.
NIF Search – Neuron Archived 2015-01-22 at the Wayback Machine via the Neuroscience Information Framework
Cell Centered Database – Neuron
Complete list of neuron types according to the Petilla convention, at NeuroLex.
NeuroMorpho.Org an online database of digital reconstructions of neuronal morphology.
Immunohistochemistry Image Gallery: Neuron
Khan Academy: Anatomy of a neuron
Neuron images
In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have an expected code length greater than or equal to the entropy of the source.
More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies
{\displaystyle \operatorname {E} _{x\sim P}[\ell (d(x))]\geq \operatorname {E} _{x\sim P}[-\log _{b}(P(x))],}
where {\displaystyle \ell } is the function specifying the number of symbols in a code word, {\displaystyle d} is the coding function, {\displaystyle b} is the number of symbols used to make output codes, and {\displaystyle P} is the probability of the source symbol. An entropy coding attempts to approach this lower bound.
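The lower bound is the Shannon entropy of the source. A minimal sketch (the source string and its probabilities are illustrative, not from any particular coder) computes that bound in bits per symbol:

```python
import math
from collections import Counter

def entropy_bits(data):
    """Shannon entropy H = -sum p*log2(p): the lower bound, in bits per
    symbol, on the expected code length of any lossless code."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A skewed source: p(a)=1/2, p(b)=3/8, p(c)=1/8.
h = entropy_bits("aaaabbbc")
# H = 0.5*1 + 0.375*log2(8/3) + 0.125*3 ≈ 1.406 bits/symbol
assert 1.40 < h < 1.41
```

A fixed-length code for three symbols needs 2 bits per symbol, so an entropy coder can do noticeably better here.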
Two of the most common entropy coding techniques are Huffman coding and arithmetic coding.
If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful.
These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding).
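As an example of such a static universal code, a short sketch of Elias gamma coding, which writes a positive integer as a unary-coded length prefix followed by its binary digits:

```python
def elias_gamma(n):
    """Elias gamma code: floor(log2 n) zeros, then the binary form of n.
    Small integers get short codes, so it suits sources where small
    values are more probable."""
    assert n >= 1
    b = bin(n)[2:]                    # binary digits without the '0b' prefix
    return "0" * (len(b) - 1) + b     # unary length prefix + binary value

assert elias_gamma(1) == "1"
assert elias_gamma(2) == "010"
assert elias_gamma(9) == "0001001"
```

The zero prefix tells the decoder how many bits of the binary part follow, so the code is self-delimiting without knowing the source statistics in advance.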
Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding techniques, which allows combination of the compression ratio of arithmetic coding with a processing cost similar to Huffman coding.
== Entropy as a measure of similarity ==
Besides using entropy coding as a way to compress digital data, an entropy encoder can also be used to measure the amount of similarity between streams of data and already existing classes of data. This is done by generating an entropy coder/compressor for each class of data; unknown data is then classified by feeding the uncompressed data to each compressor and seeing which compressor yields the highest compression. The coder with the best compression is probably the coder trained on the data that was most similar to the unknown data.
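This classification-by-compression idea can be sketched with an off-the-shelf compressor standing in for a per-class entropy coder (zlib here is an illustrative stand-in, and the two tiny corpora are invented for the example): a sample is assigned to the class whose corpus makes it cheapest to encode.

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def classify(sample: bytes, classes: dict) -> str:
    """Assign the sample to the class whose corpus makes it cheapest to
    encode: the extra bytes needed to compress corpus+sample compared
    with compressing the corpus alone."""
    def extra_cost(corpus):
        return compressed_size(corpus + sample) - compressed_size(corpus)
    return min(classes, key=lambda name: extra_cost(classes[name]))

classes = {
    "english": b"the quick brown fox jumps over the lazy dog " * 20,
    "digits": b"3141592653589793238462643383279502884197 " * 20,
}
# Words drawn from the English corpus compress far better against it.
assert classify(b"the lazy dog jumps over the quick brown fox", classes) == "english"
```

A corpus similar to the sample already contains its substrings and symbol statistics, so the marginal cost of appending the sample is low, which is exactly the "highest compression" criterion described above.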
== See also ==
Arithmetic coding
Asymmetric numeral systems (ANS)
Context-adaptive binary arithmetic coding (CABAC)
Huffman coding
Range coding
== References ==
== External links ==
Information Theory, Inference, and Learning Algorithms, by David MacKay (2003), gives an introduction to Shannon theory and data compression, including the Huffman coding and arithmetic coding.
Source Coding, by T. Wiegand and H. Schwarz (2011).
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause or prevent a tornado in Texas.: 181–184
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:
Chaos: When the present determines the future but the approximate present does not approximately determine the future.
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
== Introduction ==
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
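The exponential growth of forecast uncertainty can be seen in a one-line chaotic system, the logistic map x → 4x(1 − x) (the initial condition and error size here are arbitrary choices for illustration):

```python
# Two states of the logistic map x -> 4x(1-x) differing by 10^-12
# separate roughly exponentially (Lyapunov exponent ln 2 per step),
# so the error becomes macroscopic after a few dozen iterations --
# a finite predictability horizon.
def logistic(x):
    return 4 * x * (1 - x)

x, y = 0.3, 0.3 + 1e-12
step = 0
for step in range(60):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:          # the forecast error is now macroscopic
        break
# With error doubling each step, 1e-12 * 2^n > 0.1 needs n ≈ 37 steps.
assert 15 < step < 60
```

Pushing the initial error down to 10⁻¹⁵ buys only about ten more reliable steps, which is the sense in which prediction beyond a few Lyapunov times is impractical.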
== Chaotic dynamics ==
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
=== Sensitivity to initial conditions ===
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993,: 8 "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration.": 23 The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions.: 189–204 A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation
{\displaystyle \delta \mathbf {Z} _{0}}, the two trajectories end up diverging at a rate given by
{\displaystyle |\delta \mathbf {Z} (t)|\approx e^{\lambda t}|\delta \mathbf {Z} _{0}|,}
where {\displaystyle t} is the time and {\displaystyle \lambda } is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE, coupled with the solution's boundedness, is usually taken as an indication that the system is chaotic.
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
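For a one-dimensional map the MLE can be estimated by averaging ln|f′(x)| along an orbit. A sketch for the logistic map (initial point and iteration counts are arbitrary choices; the known exact value is ln 2):

```python
import math

# Estimate the maximal Lyapunov exponent of x -> 4x(1-x) as the orbit
# average of ln|f'(x)|, with f'(x) = 4 - 8x. The exact value is ln 2.
def lyapunov_logistic(x0=0.3, n=100_000, burn=1_000):
    x = x0
    for _ in range(burn):                 # discard the transient
        x = 4 * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4 - 8 * x))  # local stretching rate
        x = 4 * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
assert abs(lam - math.log(2)) < 0.05       # ≈ 0.693, positive: chaos
```

A positive estimate like this, together with the boundedness of the orbit in [0, 1], is the numerical signature of chaos described above.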
=== Non-periodicity ===
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
=== Topological mixing ===
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
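The doubling example is easy to check directly: nearby points separate exponentially, yet every orbit just escapes monotonically, so there is no mixing and no chaos.

```python
# x -> 2x is sensitive to initial conditions but not chaotic:
# nearby points separate exponentially, yet the behavior is trivial --
# every nonzero orbit runs off monotonically to infinity.
def double(x):
    return 2 * x

a, b = 1.0, 1.000001          # initial gap of 10^-6
for _ in range(30):
    a, b = double(a), double(b)

assert b - a > 1000           # the gap grew by a factor of 2^30
assert a > 1e8 and b > a      # but both orbits simply escape: no mixing
```
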
=== Topological transitivity ===
A map {\displaystyle f:X\to X} is said to be topologically transitive if for any pair of non-empty open sets {\displaystyle U,V\subset X}, there exists {\displaystyle k>0} such that {\displaystyle f^{k}(U)\cap V\neq \emptyset }. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
=== Density of periodic orbits ===
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example,
{\displaystyle {\tfrac {5-{\sqrt {5}}}{8}}} → {\displaystyle {\tfrac {5+{\sqrt {5}}}{8}}} → {\displaystyle {\tfrac {5-{\sqrt {5}}}{8}}} (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
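The period-2 orbit quoted above can be verified in a few lines:

```python
import math

# The logistic map x -> 4x(1-x) swaps the two points of its period-2
# orbit: (5 - sqrt 5)/8 and (5 + sqrt 5)/8 map onto each other.
f = lambda x: 4 * x * (1 - x)
p = (5 - math.sqrt(5)) / 8     # ≈ 0.3454915
q = (5 + math.sqrt(5)) / 8     # ≈ 0.9045085

assert abs(f(p) - q) < 1e-12
assert abs(f(q) - p) < 1e-12
```
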
=== Strange attractors ===
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
=== Coexisting attractors ===
In contrast to single-type chaotic solutions, studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
=== Minimum complexity of a chaotic system ===
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} x}{\mathrm {d} t}}&=\sigma y-\sigma x,\\{\frac {\mathrm {d} y}{\mathrm {d} t}}&=\rho x-xz-y,\\{\frac {\mathrm {d} z}{\mathrm {d} t}}&=xy-\beta z,\end{aligned}}}
where {\displaystyle x}, {\displaystyle y}, and {\displaystyle z} make up the system state, {\displaystyle t} is time, and {\displaystyle \sigma }, {\displaystyle \rho }, and {\displaystyle \beta } are the system parameters. Five of the terms on the right-hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
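A minimal numerical sketch of the Lorenz system (the classic parameter values σ = 10, ρ = 28, β = 8/3 and a simple fixed-step RK4 integrator are illustrative choices): the orbit remains bounded while two nearby starting points separate rapidly.

```python
# Integrate the three Lorenz equations with fixed-step RK4 and the
# classic parameters; the orbit stays bounded on the attractor, but
# two starting points 1e-8 apart diverge to a macroscopic distance.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), rho * x - x * z - y, x * y - beta * z)

def rk4_step(s, dt=0.01):
    add = lambda u, v, c: tuple(a + c * b for a, b in zip(u, v))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt / 2))
    k3 = lorenz(add(s, k2, dt / 2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-8)
for _ in range(3000):                      # 30 time units
    a, b = rk4_step(a), rk4_step(b)

assert all(abs(c) < 100 for c in a)        # bounded: the orbit stays on the attractor
gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
assert gap > 1.0                           # yet nearby orbits have diverged
```

Plotting the sequence of states produced by such an integration is exactly the "plot its subsequent orbit" procedure described in the strange-attractor section below.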
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
=== Infinite dimensional maps ===
The straightforward generalization of coupled discrete maps is based on a convolution integral that mediates the interaction between spatially distributed maps:

\psi_{n+1}(\vec{r}, t) = \int K(\vec{r} - \vec{r}\,', t)\, f[\psi_{n}(\vec{r}\,', t)]\, d\vec{r}\,',

where the kernel K(\vec{r} - \vec{r}\,', t) is a propagator derived as the Green's function of a relevant physical system, and f[\psi_{n}(\vec{r}, t)] might be a logistic-like map, \psi \rightarrow G\psi[1 - \tanh(\psi)], or a complex map. As examples of complex maps, the Julia set f[\psi] = \psi^{2} or the Ikeda map \psi_{n+1} = A + B\psi_{n} e^{i(|\psi_{n}|^{2} + C)} may serve. When wave propagation problems at distance L = ct with wavelength \lambda = 2\pi/k are considered, the kernel K may take the form of the Green's function for the Schrödinger equation:

K(\vec{r} - \vec{r}\,', L) = \frac{ik\exp[ikL]}{2\pi L}\exp\left[\frac{ik|\vec{r} - \vec{r}\,'|^{2}}{2L}\right].
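The convolution form can be sketched numerically. The following is a minimal illustration, assuming a normalized Gaussian kernel on a periodic one-dimensional grid and the logistic-like local map ψ → Gψ[1 − tanh(ψ)] named above; the grid size, kernel width, and gain G are illustrative choices, not values from the literature.

```python
import numpy as np

def iterate_lattice(psi, kernel, G=3.0):
    """One step of psi_{n+1} = K * f[psi_n]: apply the local nonlinearity
    f(psi) = G*psi*(1 - tanh(psi)), then convolve with the kernel."""
    local = G * psi * (1.0 - np.tanh(psi))
    # Circular (periodic) convolution via FFT plays the role of the integral.
    return np.real(np.fft.ifft(np.fft.fft(local) * np.fft.fft(kernel)))

n = 256
x = np.linspace(0.0, 2.0, n, endpoint=False)
dist = np.minimum(x, 2.0 - x)            # periodic distance from the origin
kernel = np.exp(-(dist / 0.05) ** 2)     # illustrative Gaussian propagator
kernel /= kernel.sum()                   # normalize so the kernel conserves mass
psi = 0.1 + 0.01 * np.random.default_rng(0).standard_normal(n)
for _ in range(100):
    psi = iterate_lattice(psi, kernel)
```

Replacing the Gaussian with the Schrödinger Green's function turns the same loop into a model of nonlinear wave propagation.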
== Spontaneous order ==
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
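The synchronization transition can be illustrated with the Kuramoto model itself, in which N phase oscillators with natural frequencies ω_i obey dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i). The sketch below uses forward Euler integration with illustrative parameters (oscillator count, coupling strengths, and step size are arbitrary choices); the order parameter r = |⟨e^{iθ}⟩| approaches 1 when the population locks into step.

```python
import numpy as np

def kuramoto_order(N=50, K=4.0, dt=0.025, steps=4000, seed=1):
    """Integrate the Kuramoto model with forward Euler and return the
    final order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal(N)            # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases
    for _ in range(steps):
        # coupling[i] = (K/N) * sum_j sin(theta_j - theta_i)
        coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    return abs(np.exp(1j * theta).mean())

r_sync = kuramoto_order(K=4.0)    # above the critical coupling: r near 1
r_async = kuramoto_order(K=0.1)   # far below it: r stays small
```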
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
== History ==
James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered as a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
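Lorenz's rounding accident is easy to reproduce in spirit. The sketch below integrates the Lorenz 1963 system (standard parameters σ = 10, ρ = 28, β = 8/3) from two initial states that differ only in the rounded digits of one coordinate, mirroring the 0.506127 vs. 0.506 printout; the integration scheme and step size are illustrative choices.

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations."""
    x, y, z = s
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 0.506127])   # "full precision" initial condition
b = np.array([1.0, 1.0, 0.506])      # the same state, rounded as on the printout
sep = 0.0
for _ in range(6000):                # 30 time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep = max(sep, float(np.linalg.norm(a - b)))
# The initial 1.27e-4 difference grows until it is as large as the attractor itself.
```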
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.
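The Koch curve's dimension of circa 1.2619 follows directly from its self-similarity: each refinement replaces every segment with four segments one-third as long, so the measured length grows without bound as the ruler shrinks, and the similarity dimension is log 4 / log 3. A short check:

```python
import math

# A ruler of size 3**-n measures length (4/3)**n: unbounded as the ruler shrinks.
lengths = [(4.0 / 3.0) ** n for n in range(8)]

# Similarity dimension: N = 4 self-similar copies, each scaled by 1/3.
D = math.log(4) / math.log(3)   # approximately 1.2619
```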
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research involving many different disciplines, such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, and pandemic crisis management.
== A popular but inaccurate analogy for chaos ==
The sensitive dependence on initial conditions (i.e., the butterfly effect) has been illustrated using the following folklore:

For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. The characteristic of the aforementioned verse was described as "finite-time sensitive dependence".
== Applications ==
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
=== Cryptography ===
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives, including image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of them use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret-key or symmetric-key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA–chaos cryptographic algorithms, however, have been shown to be insecure, or the technique applied has been suggested to be inefficient.
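As a toy illustration of the "initial condition as key" idea, the sketch below derives a keystream from logistic-map iterates and XORs it with the message. The parameter values and byte-extraction rule are illustrative, and schemes this simple are known to be breakable; the point is only the mechanism.

```python
def logistic_keystream(x0, length, r=3.99, burn_in=100):
    """Toy keystream from x -> r*x*(1-x); x0 and r act as the secret key.
    Illustrative only: real chaos-based ciphers need far more care."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(length):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, x0):
    """Encrypt or decrypt (XOR is its own inverse) under key x0."""
    return bytes(d ^ k for d, k in zip(data, logistic_keystream(x0, len(data))))

msg = b"attack at dawn"
ct = xor_cipher(msg, 0.123456789)
pt = xor_cipher(ct, 0.123456789)      # the same key recovers the message
```

Because of sensitive dependence, a key differing in the ninth decimal place produces an unrelated keystream after the burn-in, which is exactly the property these designs exploit.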
=== Robotics ===
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
=== Biology ===
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
As Perry points out, modeling of chaotic time series in ecology is helped by constraint: there is always potential difficulty in distinguishing real chaos from chaos that exists only in the model, so both constraint in the model and duplicate time-series data for comparison help to confine the model to something close to reality, as in Perry & Wall (1984). Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies, and adding variables exaggerates this: chaos is more common in models incorporating additional variables that reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, which in turn helped shape the entire field. Even in a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in pathogen populations.
=== Economics ===
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos could be found in economics by the means of recurrence quantification analysis. In fact, Orlando et al. by the means of the so-called recurrence quantification correlation index were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases as well as differences between macroeconomic variables and highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19.
=== Finite predictability in weather and climate ===
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
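The arithmetic behind the two-week limit is simple: if the error doubles every five days, the lead time until an initial error reaches the saturation level grows only logarithmically with the accuracy of the initial state, so even halving the initial error buys just five more days. A sketch (the error sizes below are illustrative, not measured values):

```python
import math

def lead_time(e0, e_sat, doubling_days=5.0):
    """Days until an initial error e0, doubling every `doubling_days` days,
    grows to the saturation level e_sat."""
    return doubling_days * math.log2(e_sat / e0)

t14 = lead_time(0.14, 1.0)            # an error ~14% of saturation: about two weeks
gain = lead_time(0.07, 1.0) - t14     # halving the error adds only one doubling time
```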
=== AI-extended modeling framework ===
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
=== Other areas ===
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
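The chaos-enhanced PSO idea can be sketched by replacing the usual uniform random factors in the velocity update with logistic-map iterates, which wander over (0, 1) without settling into a fixed point. All parameters below are illustrative, not taken from any specific published variant.

```python
import numpy as np

def chaotic_pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimizer whose stochastic factors come from
    the logistic map at r = 4 instead of a uniform random generator."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    # Chaotic states, started away from the map's fixed points.
    c1 = rng.uniform(0.1, 0.9, (n_particles, dim))
    c2 = rng.uniform(0.1, 0.9, (n_particles, dim))
    w, a1, a2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        c1, c2 = 4.0 * c1 * (1.0 - c1), 4.0 * c2 * (1.0 - c2)
        v = w * v + a1 * c1 * (pbest - x) + a2 * c2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best_x, best_val = chaotic_pso(lambda p: float((p ** 2).sum()))  # sphere function
```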
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first-delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match the other graphs or the overall theory well enough to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when a congestion will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic model at right).
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
== See also ==
Examples of chaotic systems
Other related topics
People
== References ==
Attribution
This article incorporates text from a free content work. Licensed under CC-BY (license statement/permission). Text taken from Three Kinds of Butterfly Effects within Lorenz Models, Bo-Wen Shen, Roger A. Pielke, Sr., Xubin Zeng, Jialin Cui, Sara Faghih-Naini, Wei Paxson, and Robert Atlas, MDPI. Encyclopedia.
== Further reading ==
=== Articles ===
Sharkovskii, A.N. (1964). "Co-existence of cycles of a continuous mapping of the line into itself". Ukrainian Math. J. 16: 61–71.
Li, T.Y.; Yorke, J.A. (1975). "Period Three Implies Chaos" (PDF). American Mathematical Monthly. 82 (10): 985–92. Bibcode:1975AmMM...82..985L. CiteSeerX 10.1.1.329.5038. doi:10.2307/2318254. JSTOR 2318254. Archived from the original (PDF) on 2009-12-29. Retrieved 2009-08-12.
Alemansour, Hamed; Miandoab, Ehsan Maani; Pishkenari, Hossein Nejat (March 2017). "Effect of size on the chaotic behavior of nano resonators". Communications in Nonlinear Science and Numerical Simulation. 44: 495–505. Bibcode:2017CNSNS..44..495A. doi:10.1016/j.cnsns.2016.09.010.
Crutchfield, J.P.; Farmer, J.D.; Packard, N.H.; Shaw, R.S. (December 1986). "Chaos". Scientific American. 255 (6): 38–49 (bibliography p. 136). Bibcode:1986SciAm.255d..38T. doi:10.1038/scientificamerican1286-46. Online version. (Note: the volume and page cited for the online text differ from those cited here; the citation here is from a photocopy, consistent with other citations found online that don't provide article views. The online content is identical to the hardcopy text; citation variations are related to country of publication.)
Kolyada, S.F. (2004). "Li-Yorke sensitivity and other concepts of chaos". Ukrainian Math. J. 56 (8): 1242–57. doi:10.1007/s11253-005-0055-4. S2CID 207251437.
Day, R.H.; Pavlov, O.V. (2004). "Computing Economic Chaos". Computational Economics. 23 (4): 289–301. arXiv:2211.02441. doi:10.1023/B:CSEM.0000026787.81469.1f. S2CID 119972392. SSRN 806124.
Strelioff, C.; Hübler, A. (2006). "Medium-Term Prediction of Chaos" (PDF). Phys. Rev. Lett. 96 (4): 044101. Bibcode:2006PhRvL..96d4101S. doi:10.1103/PhysRevLett.96.044101. PMID 16486826. 044101. Archived from the original (PDF) on 2013-04-26.
Hübler, A.; Foster, G.; Phelps, K. (2007). "Managing Chaos: Thinking out of the Box" (PDF). Complexity. 12 (3): 10–13. Bibcode:2007Cmplx..12c..10H. doi:10.1002/cplx.20159. Archived from the original (PDF) on 2012-10-30. Retrieved 2011-07-17.
Motter, Adilson E.; Campbell, David K. (2013). "Chaos at 50". Physics Today. 66 (5): 27. arXiv:1306.5777. Bibcode:2013PhT....66e..27M. doi:10.1063/PT.3.1977. S2CID 54005470.
=== Textbooks ===
Alligood, K.T.; Sauer, T.; Yorke, J.A. (1997). Chaos: an introduction to dynamical systems. Springer-Verlag. ISBN 978-0-387-94677-1.
Baker, G. L. (1996). Chaos, Scattering and Statistical Mechanics. Cambridge University Press. ISBN 978-0-521-39511-3.
Badii, R.; Politi A. (1997). Complexity: hierarchical structures and scaling in physics. Cambridge University Press. ISBN 978-0-521-66385-4.
Collet, Pierre; Eckmann, Jean-Pierre (1980). Iterated Maps on the Interval as Dynamical Systems. Birkhauser. ISBN 978-0-8176-4926-5.
Devaney, Robert L. (2003). An Introduction to Chaotic Dynamical Systems (2nd ed.). Westview Press. ISBN 978-0-8133-4085-2.
Robinson, Clark (1995). Dynamical systems: Stability, symbolic dynamics, and chaos. CRC Press. ISBN 0-8493-8493-1.
Feldman, D. P. (2012). Chaos and Fractals: An Elementary Introduction. Oxford University Press. ISBN 978-0-19-956644-0. Archived from the original on 2019-12-31. Retrieved 2016-12-29.
Gollub, J. P.; Baker, G. L. (1996). Chaotic dynamics. Cambridge University Press. ISBN 978-0-521-47685-0.
Guckenheimer, John; Holmes, Philip (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer-Verlag. ISBN 978-0-387-90819-9.
Gulick, Denny (1992). Encounters with Chaos. McGraw-Hill. ISBN 978-0-07-025203-5.
Gutzwiller, Martin (1990). Chaos in Classical and Quantum Mechanics. Springer-Verlag. ISBN 978-0-387-97173-5.
Hoover, William Graham (2001) [1999]. Time Reversibility, Computer Simulation, and Chaos. World Scientific. ISBN 978-981-02-4073-8.
Kautz, Richard (2011). Chaos: The Science of Predictable Random Motion. Oxford University Press. ISBN 978-0-19-959458-0.
Kiel, L. Douglas; Elliott, Euel W. (1997). Chaos Theory in the Social Sciences. Perseus Publishing. ISBN 978-0-472-08472-2.
Moon, Francis (1990). Chaotic and Fractal Dynamics. Springer-Verlag. ISBN 978-0-471-54571-2.
Orlando, Giuseppe; Pisarchick, Alexander; Stoop, Ruedi (2021). Nonlinearities in Economics. Dynamic Modeling and Econometrics in Economics and Finance. Vol. 29. doi:10.1007/978-3-030-70982-2. ISBN 978-3-030-70981-5. S2CID 239756912.
Ott, Edward (2002). Chaos in Dynamical Systems. Cambridge University Press. ISBN 978-0-521-01084-9.
Strogatz, Steven (2000). Nonlinear Dynamics and Chaos. Perseus Publishing. ISBN 978-0-7382-0453-6.
Sprott, Julien Clinton (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 978-0-19-850840-3.
Tél, Tamás; Gruiz, Márton (2006). Chaotic dynamics: An introduction based on classical mechanics. Cambridge University Press. ISBN 978-0-521-83912-9.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Thompson JM, Stewart HB (2001). Nonlinear Dynamics And Chaos. John Wiley and Sons Ltd. ISBN 978-0-471-87645-8.
Tufillaro; Reilly (1992). An experimental approach to nonlinear dynamics and chaos. American Journal of Physics. Vol. 61. Addison-Wesley. p. 958. Bibcode:1993AmJPh..61..958T. doi:10.1119/1.17380. ISBN 978-0-201-55441-0.
Wiggins, Stephen (2003). Introduction to Applied Dynamical Systems and Chaos. Springer. ISBN 978-0-387-00177-7.
Zaslavsky, George M. (2005). Hamiltonian Chaos and Fractional Dynamics. Oxford University Press. ISBN 978-0-19-852604-9.
=== Semitechnical and popular works ===
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012, ISBN 978-981-4374-42-2.
Abraham, Ralph H.; Ueda, Yoshisuke, eds. (2000). The Chaos Avant-Garde: Memoirs of the Early Days of Chaos Theory. World Scientific Series on Nonlinear Science Series A. Vol. 39. World Scientific. Bibcode:2000cagm.book.....A. doi:10.1142/4510. ISBN 978-981-238-647-2.
Barnsley, Michael F. (2000). Fractals Everywhere. Morgan Kaufmann. ISBN 978-0-12-079069-2.
Bird, Richard J. (2003). Chaos and Life: Complexity and Order in Evolution and Thought. Columbia University Press. ISBN 978-0-231-12662-5.
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Cunningham, Lawrence A. (1994). "From Random Walks to Chaotic Crashes: The Linear Genealogy of the Efficient Capital Market Hypothesis". George Washington Law Review. 62: 546.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
John Gribbin. Deep Simplicity. Penguin Press Science. Penguin Books.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
Marshall, Alan (2002). The Unity of Nature - Wholeness and Disintegration in Ecology and Science. doi:10.1142/9781860949548. ISBN 9781860949548.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St. Martin's Press, 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St. Martin's Press, 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
Peitgen, Heinz-Otto; Richter, Peter H. (1986). The Beauty of Fractals. doi:10.1007/978-3-642-61717-1. ISBN 978-3-642-61719-5.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Ian Roulstone; John Norbury (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton University Press. ISBN 978-0691152721.
Ruelle, D. (1989). Chaotic Evolution and Strange Attractors. doi:10.1017/CBO9780511608773. ISBN 9780521362726.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Smith, Peter (1998). Explaining Chaos. doi:10.1017/CBO9780511554544. ISBN 9780511554544.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Press, 1993.
M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis: Chaos and Neurodynamics Approach, Lambert, 2012.
== External links ==
"Chaos", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt) Archived 2007-02-02 at the Wayback Machine
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in New Scientist discussing similarities between evolution and non-linear systems, including the fractal nature of life and chaos.
Jos Leys, Étienne Ghys and Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller | Wikipedia/Chaos_theory |
In theoretical physics, a supersymmetry algebra (or SUSY algebra) is a mathematical formalism for describing the relation between bosons and fermions. The supersymmetry algebra contains not only the Poincaré algebra and a compact subalgebra of internal symmetries, but also some fermionic supercharges, transforming as a sum of N real spinor representations of the Poincaré group. Such symmetries are allowed by the Haag–Łopuszański–Sohnius theorem. When N > 1 the algebra is said to have extended supersymmetry. The supersymmetry algebra is a semidirect sum of a central extension of the super-Poincaré algebra by a compact Lie algebra B of internal symmetries.
Bosonic fields commute while fermionic fields anticommute. In order to have a transformation that relates the two kinds of fields, the introduction of a Z2-grading under which the even elements are bosonic and the odd elements are fermionic is required. Such an algebra is called a Lie superalgebra.
Just as one can have representations of a Lie algebra, one can also have representations of a Lie superalgebra, called supermultiplets. For each Lie algebra, there exists an associated Lie group which is connected and simply connected, unique up to isomorphism, and the representations of the algebra can be extended to create group representations. In the same way, representations of a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.
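The Z2-grading and bracket relations described above can be made concrete with a deliberately tiny toy model. The 2×2 supercharge below is a hypothetical minimal example (essentially supersymmetric quantum mechanics on a single pair of states), not a construction taken from this article:

```python
import numpy as np

# Toy model: one fermionic supercharge Q acting on a two-state space.
# Even (bosonic) elements are diagonal; odd (fermionic) ones are off-diagonal.
Q = np.array([[0.0, 0.0],
              [1.0, 0.0]])          # odd element: a supercharge
Qd = Q.T                            # its conjugate supercharge

def anticomm(a, b):
    return a @ b + b @ a

H = anticomm(Q, Qd)                 # bracket of two odd elements: an even element
F = np.diag([1.0, -1.0])            # grading operator (-1)^F

assert np.allclose(anticomm(Q, Q), 0)   # supercharges square to zero
assert np.allclose(H, np.eye(2))        # {Q, Qd} is bosonic (here proportional to the identity)
assert np.allclose(H @ F, F @ H)        # even elements commute with the grading
assert np.allclose(anticomm(Q, F), 0)   # odd elements anticommute with it
```

That the anticommutator of two odd elements lands in the even part mirrors the statement below that the Lie bracket of two supercharges lies in the bosonic subalgebra.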
== Structure of a supersymmetry algebra ==
The general supersymmetry algebra for spacetime dimension d, and with the fermionic piece consisting of a sum of N irreducible real spinor representations, has a structure of the form
(P×Z).Q.(L×B)
where
P is a bosonic abelian vector normal subalgebra of dimension d, normally identified with translations of spacetime. It is a vector representation of L.
Z is a scalar bosonic algebra in the center whose elements are called central charges.
Q is an abelian fermionic spinor subquotient algebra, and is a sum of N real spinor representations of L. (When the signature of spacetime is divisible by 4 there are two different spinor representations of L, so there is some ambiguity about the structure of Q as a representation of L.) The elements of Q, or rather their inverse images in the supersymmetry algebra, are called supercharges. The subalgebra (P×Z).Q is sometimes also called the supersymmetry algebra and is nilpotent of length at most 2, with the Lie bracket of two supercharges lying in P×Z.
L is a bosonic subalgebra, isomorphic to the Lorentz algebra in d dimensions, of dimension d(d–1)/2
B is a scalar bosonic subalgebra, given by the Lie algebra of some compact group, called the group of internal symmetries. It commutes with P, Z, and L, but may act non-trivially on the supercharges Q.
The terms "bosonic" and "fermionic" refer to even and odd subspaces of the superalgebra.
The terms "scalar", "spinor", "vector", refer to the behavior of subalgebras under the action of the Lorentz algebra L.
The number N is the number of irreducible real spin representations. When the signature of spacetime is divisible by 4 this is ambiguous as in this case there are two different irreducible real spinor representations, and the number N is sometimes replaced by a pair of integers (N1, N2).
The supersymmetry algebra is sometimes regarded as a real super algebra, and sometimes as a complex algebra with a hermitian conjugation. These two views are essentially equivalent, as the real algebra can be constructed from the complex algebra by taking the skew-Hermitian elements, and the complex algebra can be constructed from the real one by taking tensor product with the complex numbers.
The bosonic part of the superalgebra is isomorphic to the product of the Poincaré algebra P.L with the algebra Z×B of internal symmetries.
When N>1 the algebra is said to have extended supersymmetry.
When Z is trivial, the subalgebra P.Q.L is the super-Poincaré algebra.
== See also ==
Adinkra symbols
Super-Poincaré algebra
Superconformal algebra
Supersymmetry algebras in 1 + 1 dimensions
N = 2 superconformal algebra
== References ==
Bagger, Jonathan; Wess, Julius (1992), Supersymmetry and supergravity, Princeton Series in Physics (2nd ed.), Princeton University Press, ISBN 0-691-02530-4, MR 1152804
Haag, Rudolf; Sohnius, Martin; Łopuszański, Jan T. (1975), "All possible generators of supersymmetries of the S-matrix", Nuclear Physics B, 88 (2): 257–274, Bibcode:1975NuPhB..88..257H, doi:10.1016/0550-3213(75)90279-5, MR 0411396 | Wikipedia/Supersymmetry_algebra |
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives.
The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x2 − 3x + 2 = 0. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000.
Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics (Schrödinger equation, Pauli equation etc.). They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology.
Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, where the meaning of a solution depends on the context of the problem, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "universal theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.
Ordinary differential equations can be viewed as a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations.
== Introduction ==
A function u(x, y, z) of three variables is "harmonic" or "a solution of the Laplace equation" if it satisfies the condition
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=0.}
Such functions were widely studied in the 19th century due to their relevance for classical mechanics, for example the equilibrium temperature distribution of a homogeneous solid is a harmonic function. If explicitly given a function, it is usually a matter of straightforward computation to check whether or not it is harmonic. For instance
{\displaystyle u(x,y,z)={\frac {1}{\sqrt {x^{2}-2x+y^{2}+z^{2}+1}}}}
and
{\displaystyle u(x,y,z)=2x^{2}-y^{2}-z^{2}}
are both harmonic while
{\displaystyle u(x,y,z)=\sin(xy)+z}
is not. It may be surprising that the two examples of harmonic functions are of such strikingly different form. This is a reflection of the fact that they are not, in any immediate way, special cases of a "general solution formula" of the Laplace equation. This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist.
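As the passage notes, verifying harmonicity is a straightforward computation. A short symbolic sketch (using sympy, an assumed tool choice) checks all three examples above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(u):
    """Sum of the second partials with respect to x, y and z."""
    return sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

u1 = 1 / sp.sqrt(x**2 - 2*x + y**2 + z**2 + 1)   # a shifted 1/r potential
u2 = 2*x**2 - y**2 - z**2
u3 = sp.sin(x*y) + z

assert sp.simplify(laplacian(u1)) == 0   # harmonic
assert sp.simplify(laplacian(u2)) == 0   # harmonic
assert sp.simplify(laplacian(u3)) != 0   # not harmonic
```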
The nature of this failure can be seen more concretely in the case of the following PDE: for a function v(x, y) of two variables, consider the equation
{\displaystyle {\frac {\partial ^{2}v}{\partial x\partial y}}=0.}
It can be directly checked that any function v of the form v(x, y) = f(x) + g(y), for any single-variable functions f and g whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDEs, one generally has the free choice of functions.
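This free choice of functions can itself be verified symbolically; the sketch below (sympy assumed) confirms that f(x) + g(y) satisfies the mixed-derivative equation for arbitrary f and g:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')   # arbitrary single-variable functions
g = sp.Function('g')

v = f(x) + g(y)
# The mixed partial v_xy vanishes identically, whatever f and g are.
assert sp.diff(v, x, y) == 0
```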
The nature of this choice varies from PDE to PDE. To understand it for any given equation, existence and uniqueness theorems are usually important organizational principles. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate.
To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function". Otherwise, speaking only in terms such as "a function of two variables", it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself.
The following provides two classic examples of such existence and uniqueness theorems. Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions.
Let B denote the unit-radius disk around the origin in the plane. For any continuous function U on the unit circle, there is exactly one function u on B such that
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0}
and whose restriction to the unit circle is given by U.
For any functions f and g on the real line R, there is exactly one function u on R × (−1, 1) such that
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}-{\frac {\partial ^{2}u}{\partial y^{2}}}=0}
and with u(x, 0) = f(x) and ∂u/∂y(x, 0) = g(x) for all values of x.
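For the second of these theorems, the solution is exhibited by d'Alembert's classical formula. The sketch below verifies it symbolically (sympy assumed; G is introduced here as an antiderivative of the prescribed g, so that g = G′):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')
G = sp.Function('G')   # antiderivative of the prescribed g, i.e. g = G'

# d'Alembert's formula, with y playing the role of time:
u = (f(x + y) + f(x - y)) / 2 + (G(x + y) - G(x - y)) / 2

# Solves the wave equation u_xx - u_yy = 0 ...
assert sp.simplify(sp.diff(u, x, 2) - sp.diff(u, y, 2)) == 0
# ... with the prescribed data on the line y = 0.
assert sp.simplify(u.subs(y, 0) - f(x)) == 0
assert sp.simplify(sp.diff(u, y).subs(y, 0).doit() - sp.diff(G(x), x)) == 0
```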
Even more phenomena are possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function.
If u is a function on R2 with
{\displaystyle {\frac {\partial }{\partial x}}{\frac {\frac {\partial u}{\partial x}}{\sqrt {1+\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial u}{\partial y}}\right)^{2}}}}+{\frac {\partial }{\partial y}}{\frac {\frac {\partial u}{\partial y}}{\sqrt {1+\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial u}{\partial y}}\right)^{2}}}}=0,}
then there are numbers a, b, and c with u(x, y) = ax + by + c.
In contrast to the earlier examples, this PDE is nonlinear, owing to the square roots and the squares. A linear PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and any constant multiple of any solution is also a solution.
== Definition ==
A partial differential equation is an equation that involves an unknown function of n ≥ 2 variables and (some of) its partial derivatives. That is, for the unknown function u : U → R of variables x = (x1, …, xn) belonging to the open subset U of Rn, the kth-order partial differential equation is defined as
{\displaystyle F[D^{k}u,D^{k-1}u,\dots ,Du,u,x]=0,}
where
{\displaystyle F:\mathbb {R} ^{n^{k}}\times \mathbb {R} ^{n^{k-1}}\dots \times \mathbb {R} ^{n}\times \mathbb {R} \times U\rightarrow \mathbb {R} ,}
and D is the partial derivative operator.
=== Notation ===
When writing PDEs, it is common to denote partial derivatives using subscripts. For example:
{\displaystyle u_{x}={\frac {\partial u}{\partial x}},\quad u_{xx}={\frac {\partial ^{2}u}{\partial x^{2}}},\quad u_{xy}={\frac {\partial ^{2}u}{\partial y\,\partial x}}={\frac {\partial }{\partial y}}\left({\frac {\partial u}{\partial x}}\right).}
In the general situation that u is a function of n variables, then ui denotes the first partial derivative relative to the i-th input, uij denotes the second partial derivative relative to the i-th and j-th inputs, and so on.
The Greek letter Δ denotes the Laplace operator; if u is a function of n variables, then
{\displaystyle \Delta u=u_{11}+u_{22}+\cdots +u_{nn}.}
In the physics literature, the Laplace operator is often denoted by ∇2; in the mathematics literature, ∇2u may also denote the Hessian matrix of u.
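Numerically, Δ is often approximated by the standard five-point finite-difference stencil. The sketch below (numpy assumed) applies it to the harmonic polynomial x² − y², for which the stencil is exact up to rounding:

```python
import numpy as np

h = 0.01
xs = np.arange(0.0, 1.0 + h, h)
X, Y = np.meshgrid(xs, xs, indexing='ij')
U = X**2 - Y**2                      # harmonic: u_xx + u_yy = 2 - 2 = 0

# Five-point stencil approximation of the Laplacian at interior grid points.
lap = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
       - 4 * U[1:-1, 1:-1]) / h**2

assert np.max(np.abs(lap)) < 1e-8    # exact for quadratics, up to rounding
```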
== Classification ==
=== Linear and nonlinear equations ===
A PDE is called linear if it is linear in the unknown and its derivatives. For example, for a function u of x and y, a second order linear PDE is of the form
{\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+a_{5}(x,y)u_{x}+a_{6}(x,y)u_{y}+a_{7}(x,y)u=f(x,y)}
where ai and f are functions of the independent variables x and y only. (Often the mixed-partial derivatives uxy and uyx will be equated, but this is not required for the discussion of linearity.)
If the ai are constants (independent of x and y) then the PDE is called linear with constant coefficients. If f is zero everywhere then the linear PDE is homogeneous, otherwise it is inhomogeneous. (This is separate from asymptotic homogenization, which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.)
Nearest to linear PDEs are semi-linear PDEs, where only the highest order derivatives appear as linear terms, with coefficients that are functions of the independent variables. The lower order derivatives and the unknown function may appear arbitrarily. For example, a general second order semi-linear PDE in two variables is
{\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0}
In a quasilinear PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives:
{\displaystyle a_{1}(u_{x},u_{y},u,x,y)u_{xx}+a_{2}(u_{x},u_{y},u,x,y)u_{xy}+a_{3}(u_{x},u_{y},u,x,y)u_{yx}+a_{4}(u_{x},u_{y},u,x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0}
Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion.
A PDE without any linearity properties is called fully nonlinear, and possesses nonlinearities on one or more of the highest-order derivatives. An example is the Monge–Ampère equation, which arises in differential geometry.
=== Second order equations ===
The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial- and boundary conditions and to the smoothness of the solutions. Assuming uxy = uyx, the general linear second-order PDE in two independent variables has the form
{\displaystyle Au_{xx}+2Bu_{xy}+Cu_{yy}+\cdots {\mbox{(lower order terms)}}=0,}
where the coefficients A, B, C... may depend upon x and y. If A2 + B2 + C2 > 0 over a region of the xy-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:
{\displaystyle Ax^{2}+2Bxy+Cy^{2}+\cdots =0.}
More precisely, replacing ∂x by X, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification.
Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B2 − 4AC, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B2 − AC due to the convention of the xy term being 2B rather than B; formally, the discriminant (of the associated quadratic form) is (2B)2 − 4AC = 4(B2 − AC), with the factor of 4 dropped for simplicity.
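A minimal helper makes this pointwise classification concrete (an illustrative sketch, with the PDE written as A u_xx + 2B u_xy + C u_yy + … = 0):

```python
def classify_point(A, B, C):
    """Type of A*u_xx + 2*B*u_xy + C*u_yy + ... = 0 at a point,
    using the discriminant B**2 - A*C."""
    disc = B * B - A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

assert classify_point(1, 0, 1) == "elliptic"      # Laplace equation
assert classify_point(1, 0, 0) == "parabolic"     # heat equation (spatial part)
assert classify_point(1, 0, -1) == "hyperbolic"   # wave equation
# Euler-Tricomi u_xx - x*u_yy = 0 has C = -x: elliptic for x < 0, hyperbolic for x > 0.
assert classify_point(1, 0, -3) == "hyperbolic"   # at x = 3
```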
B2 − AC < 0 (elliptic partial differential equation): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where x < 0. By change of variables, the equation can always be expressed in the form:
{\displaystyle u_{xx}+u_{yy}+\cdots =0,}
where x and y correspond to the changed variables. This justifies the Laplace equation as an example of this type.
B2 − AC = 0 (parabolic partial differential equation): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x = 0. By change of variables, the equation can always be expressed in the form:
{\displaystyle u_{xx}+\cdots =0,}
where x corresponds to the changed variables. This justifies the heat equation, which is of the form {\textstyle u_{t}-u_{xx}+\cdots =0}, as an example of this type.
B2 − AC > 0 (hyperbolic partial differential equation): hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x > 0. By change of variables, the equation can always be expressed in the form:
{\displaystyle u_{xx}-u_{yy}+\cdots =0,}
where x and y correspond to the changed variables. This justifies the wave equation as an example of this type.
If there are n independent variables x1, x2 , …, xn, a general linear partial differential equation of second order has the form
{\displaystyle Lu=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{i,j}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}\quad +{\text{lower-order terms}}=0.}
The classification depends upon the signature of the eigenvalues of the coefficient matrix ai,j.
Elliptic: the eigenvalues are all positive or all negative.
Parabolic: the eigenvalues are all positive or all negative, except one that is zero.
Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues.
The theories of elliptic, parabolic, and hyperbolic equations have been studied for centuries, largely centered around or based upon the standard examples of the Laplace equation, the heat equation, and the wave equation.
However, the classification only depends on linearity of the second-order terms and is therefore applicable to semi- and quasilinear PDEs as well. The basic types also extend to hybrids such as the Euler–Tricomi equation; varying from elliptic to hyperbolic for different regions of the domain, as well as higher-order PDEs, but such knowledge is more specialized.
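The eigenvalue-based classification above translates directly into code; the sketch below (numpy assumed) inspects the signs of the eigenvalues of a symmetric coefficient matrix a_ij:

```python
import numpy as np

def classify(a, tol=1e-12):
    """Classify a second-order operator by the eigenvalue signs of its
    symmetric coefficient matrix (illustrative helper)."""
    ev = np.linalg.eigvalsh(np.asarray(a, dtype=float))
    n = len(ev)
    pos = int((ev > tol).sum())
    neg = int((ev < -tol).sum())
    zero = n - pos - neg
    if zero == 0 and (pos == n or neg == n):
        return "elliptic"
    if zero == 1 and (pos == n - 1 or neg == n - 1):
        return "parabolic"
    if zero == 0 and (pos == 1 or neg == 1):
        return "hyperbolic"
    if zero == 0 and pos > 1 and neg > 1:
        return "ultrahyperbolic"
    return "other"

assert classify(np.eye(3)) == "elliptic"               # Laplace operator
assert classify(np.diag([1, 1, 0])) == "parabolic"     # heat operator
assert classify(np.diag([1, -1, -1])) == "hyperbolic"  # wave operator
assert classify(np.diag([1, 1, -1, -1])) == "ultrahyperbolic"
```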
=== Systems of first-order equations and characteristic surfaces ===
The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices Aν are m by m matrices for ν = 1, 2, …, n. The partial differential equation takes the form
{\displaystyle Lu=\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial u}{\partial x_{\nu }}}+B=0,}
where the coefficient matrices Aν and the vector B may depend upon x and u. If a hypersurface S is given in the implicit form
{\displaystyle \varphi (x_{1},x_{2},\ldots ,x_{n})=0,}
where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes:
{\displaystyle Q\left({\frac {\partial \varphi }{\partial x_{1}}},\ldots ,{\frac {\partial \varphi }{\partial x_{n}}}\right)=\det \left[\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial \varphi }{\partial x_{\nu }}}\right]=0.}
The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S, then it may be possible to determine the normal derivative of u on S from the differential equation. If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. If the data on S and the differential equation do not determine the normal derivative of u on S, then the surface is characteristic, and the differential equation restricts the data on S: the differential equation is internal to S.
A first-order system Lu = 0 is elliptic if no surface is characteristic for L: the values of u on S and the differential equation always determine the normal derivative of u on S.
A first-order system is hyperbolic at a point if there is a spacelike surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation Q(λξ + η) = 0 has m real roots λ1, λ2, …, λm. The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form Q(ζ) = 0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has nm sheets, and the axis ζ = λξ runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.
== Analytical solutions ==
=== Separation of variables ===
Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on the parameters space and time can be written as a product of terms that each depend on a single parameter, and then see if this can be made to solve the problem.
In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve.
This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately.
This generalizes to the method of characteristics, and is also used in integral transforms.
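For the heat equation u_t = u_xx on (0, π) with zero boundary values, the product ansatz u = X(x)T(t) leads to the familiar decaying sine modes. A symbolic sketch (sympy assumed) checks one such separated solution:

```python
import sympy as sp

x, t = sp.symbols('x t')
n = sp.symbols('n', integer=True, positive=True)

# Separated mode T(t) * X(x) produced by the ansatz u = X(x) T(t).
u = sp.exp(-n**2 * t) * sp.sin(n * x)

assert sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)) == 0  # solves u_t = u_xx
assert u.subs(x, 0) == 0                                   # u(0, t) = 0
assert sp.simplify(u.subs(x, sp.pi)) == 0                  # sin(n*pi) = 0 for integer n
```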
=== Method of characteristics ===
The characteristic surface in n = 2-dimensional space is called a characteristic curve.
In special cases, one can find characteristic curves on which the first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics.
More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces.
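The simplest instance is the linear transport equation u_t + c u_x = 0: its characteristic curves are the lines x − ct = const, along which the PDE reduces to the ODE du/dt = 0, so every solution is of the form f(x − ct). A symbolic check (sympy assumed):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f = sp.Function('f')   # arbitrary profile, constant along characteristics

u = f(x - c * t)
# u_t + c u_x = -c f' + c f' = 0
assert sp.simplify(sp.diff(u, t) + c * sp.diff(u, x)) == 0
```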
=== Integral transform ===
An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator.
An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves.
If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral.
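On a periodic domain this diagonalization is easy to see numerically: the discrete Fourier transform turns the heat equation u_t = u_xx into independent decay of each mode by exp(−k²t). A numpy sketch (the initial data are an assumed example):

```python
import numpy as np

N, T = 256, 0.1
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers

u0 = np.sin(3 * x) + 0.5 * np.cos(5 * x)    # initial data (assumed example)

# Each Fourier mode decays independently: the transform diagonalizes u_t = u_xx.
u_hat = np.fft.fft(u0) * np.exp(-k**2 * T)
u = np.real(np.fft.ifft(u_hat))

exact = np.exp(-9 * T) * np.sin(3 * x) + 0.5 * np.exp(-25 * T) * np.cos(5 * x)
assert np.allclose(u, exact)
```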
=== Change of variables ===
Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example, the Black–Scholes equation
{\displaystyle {\frac {\partial V}{\partial t}}+{\tfrac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0}
is reducible to the heat equation
{\displaystyle {\frac {\partial u}{\partial \tau }}={\frac {\partial ^{2}u}{\partial x^{2}}}}
by the change of variables
{\displaystyle {\begin{aligned}V(S,t)&=v(x,\tau ),\\[5px]x&=\ln \left(S\right),\\[5px]\tau &={\tfrac {1}{2}}\sigma ^{2}(T-t),\\[5px]v(x,\tau )&=e^{-\alpha x-\beta \tau }u(x,\tau ).\end{aligned}}}
=== Fundamental solution ===
Inhomogeneous equations can often be solved (for constant coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source, {\displaystyle P(D)u=\delta }), then taking the convolution with the boundary conditions to get the solution.
This is analogous in signal processing to understanding a filter by its impulse response.
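For the heat equation the fundamental solution is the Gaussian heat kernel, and convolving it with Gaussian initial data must return a Gaussian whose variance has grown by 2t. A numerical sketch (numpy assumed; the time and variance values are arbitrary choices):

```python
import numpy as np

t, s2 = 0.25, 0.5                       # diffusion time and initial variance (assumed)
x = np.linspace(-20.0, 20.0, 4001)      # odd length keeps the convolution centred
dx = x[1] - x[0]

u0 = np.exp(-x**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)      # initial data
kernel = np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)    # heat kernel at time t

u = np.convolve(u0, kernel, mode='same') * dx                # u(., t) = kernel * u0

v = s2 + 2 * t                                               # variance after time t
exact = np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
assert np.max(np.abs(u - exact)) < 1e-6
```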
=== Superposition principle ===
The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin x + sin x = 2 sin x. The same principle can be observed in PDEs where the solutions may be real or complex and additive. If u1 and u2 are solutions of linear PDE in some function space R, then u = c1u1 + c2u2 with any constants c1 and c2 are also a solution of that PDE in the same function space.
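A direct symbolic check of the principle for Laplace's equation (sympy assumed; the two harmonic functions are arbitrary examples):

```python
import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2')

u1 = x**2 - y**2            # solves u_xx + u_yy = 0
u2 = sp.exp(x) * sp.sin(y)  # also harmonic

u = c1 * u1 + c2 * u2       # arbitrary linear combination
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)) == 0
```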
=== Methods for non-linear equations ===
There are no generally applicable analytical methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis).
Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems.
The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations.
In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high performance supercomputers.
=== Lie group method ===
From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source, and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of contact transformations.
A general approach to solving PDEs uses the symmetry property of differential equations: the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, finding their Lax pairs, recursion operators, and Bäcklund transforms, and finally finding exact analytic solutions to the PDE.
Symmetry methods have become a recognized tool for studying differential equations arising in mathematics, physics, engineering, and many other disciplines.
=== Semi-analytical methods ===
The Adomian decomposition method, the Lyapunov artificial small parameter method, and He's homotopy perturbation method are all special cases of the more general homotopy analysis method. These are series expansion methods and, except for the Lyapunov method, are independent of small physical parameters, in contrast to well-known perturbation theory; this gives these methods greater flexibility and generality of solution.
== Numerical solutions ==
The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM), and finite difference methods (FDM), as well as other kinds of methods called meshfree methods, which were developed to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods, especially its exceptionally efficient higher-order version, hp-FEM. Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), and interpolating element-free Galerkin method (IEFGM).
=== Finite element method ===
The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc.
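A minimal one-dimensional sketch of the method (an illustration, not from the article): piecewise-linear "hat" elements for −u″ = 1 on (0, 1) with zero boundary values yield a tridiagonal linear system, solved here by the Thomas algorithm. For this particular problem the nodal FEM values happen to coincide with the exact solution u(x) = x(1 − x)/2.

```python
# Piecewise-linear finite elements for -u''(x) = 1 on (0,1), u(0) = u(1) = 0.
n = 10                      # number of elements
h = 1.0 / n
# Stiffness matrix (tridiagonal: 2/h on the diagonal, -1/h off the diagonal)
# and load vector (integral of f = 1 against each hat function = h),
# for the n-1 interior nodes.
a = [2.0 / h] * (n - 1)     # diagonal
b = [-1.0 / h] * (n - 2)    # sub/super-diagonal (symmetric)
F = [h] * (n - 1)

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, n - 1):
    m = b[i - 1] / a[i - 1]
    a[i] -= m * b[i - 1]
    F[i] -= m * F[i - 1]
u = [0.0] * (n - 1)
u[-1] = F[-1] / a[-1]
for i in range(n - 3, -1, -1):
    u[i] = (F[i] - b[i] * u[i + 1]) / a[i]

exact = [(i + 1) * h * (1 - (i + 1) * h) / 2 for i in range(n - 1)]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
print(err)  # nodal values are exact for this problem, up to rounding
```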
=== Finite difference method ===
Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.
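For example, a forward-in-time, centered-in-space scheme for the heat equation (a minimal sketch, with illustrative parameter values) replaces the time derivative by a forward difference and the spatial derivative by a central second difference:

```python
import math

# Explicit finite-difference (FTCS) scheme for the heat equation u_t = u_xx
# on [0, pi] with u(x, 0) = sin(x) and u = 0 at both endpoints.
# Exact solution: u(x, t) = exp(-t) sin(x).
nx, dt, steps = 51, 1e-3, 100
dx = math.pi / (nx - 1)
assert dt <= dx * dx / 2          # stability condition for the explicit scheme

u = [math.sin(i * dx) for i in range(nx)]
for _ in range(steps):
    u = [0.0] + [u[i] + dt * (u[i-1] - 2*u[i] + u[i+1]) / dx**2
                 for i in range(1, nx - 1)] + [0.0]

t = steps * dt
err = max(abs(u[i] - math.exp(-t) * math.sin(i * dx)) for i in range(nx))
print(err)  # discretization error, O(dt + dx^2)
```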
=== Finite volume method ===
Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design.
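The built-in conservation property can be seen in a small sketch (illustrative values, not from the article): a first-order upwind finite-volume scheme for 1D advection updates each cell by the difference of its two face fluxes, and because every face flux is shared by exactly two cells, the total mass is conserved to rounding error.

```python
# Finite-volume upwind scheme for the advection equation u_t + a u_x = 0
# with periodic boundaries. Each cell average is updated by the net flux
# through its faces; shared face fluxes telescope, conserving total mass.
n, a, dx, dt = 100, 1.0, 0.01, 0.005   # CFL number a*dt/dx = 0.5
u = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]
mass0 = sum(u) * dx

for _ in range(200):
    # Upwind flux through the left face of each cell (a > 0)
    flux = [a * u[i - 1] for i in range(n)]
    u = [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

drift = abs(sum(u) * dx - mass0)
print(drift)  # ~0: mass is conserved by construction
```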
=== Neural networks ===
== Weak solutions ==
Weak solutions are functions that satisfy the PDE in a sense other than the classical one. The meaning of the term may differ with context; one of the most commonly used definitions is based on the notion of distributions.
An example of the definition of a weak solution is as follows:
Consider the boundary-value problem given by:
{\displaystyle {\begin{aligned}Lu&=f\quad {\text{in }}U,\\u&=0\quad {\text{on }}\partial U,\end{aligned}}}
where
{\displaystyle Lu=-\sum _{i,j}\partial _{j}(a^{ij}\partial _{i}u)+\sum _{i}b^{i}\partial _{i}u+cu}
denotes a second-order partial differential operator in divergence form.
We say that
{\displaystyle u\in H_{0}^{1}(U)}
is a weak solution if
{\displaystyle \int _{U}[\sum _{i,j}a^{ij}(\partial _{i}u)(\partial _{j}v)+\sum _{i}b^{i}(\partial _{i}u)v+cuv]dx=\int _{U}fvdx}
for every
{\displaystyle v\in H_{0}^{1}(U)}
, which can be derived by a formal integration by parts.
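A one-dimensional analogue makes the identity concrete. The following sketch (illustrative, not from the article) takes Lu = −u″ (that is, a¹¹ = 1 and b = c = 0), for which the weak form reads ∫ u′v′ dx = ∫ fv dx, and checks it by quadrature for f = 1 with exact solution u(x) = x(1 − x)/2:

```python
import math

# Weak form of -u'' = f on (0,1) with u(0) = u(1) = 0:
#   integral of u'(x) v'(x) dx  ==  integral of f(x) v(x) dx
# for every test function v vanishing at the endpoints.
def simpson(g, n=200):
    # Composite Simpson's rule on [0, 1]
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    s += 4 * sum(g((2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(2 * i * h) for i in range(1, n // 2))
    return s * h / 3

du = lambda x: 0.5 - x                  # u'(x) for u(x) = x(1-x)/2
v  = lambda x: math.sin(math.pi * x)    # a test function with v(0) = v(1) = 0
dv = lambda x: math.pi * math.cos(math.pi * x)

lhs = simpson(lambda x: du(x) * dv(x))
rhs = simpson(lambda x: 1.0 * v(x))     # f = 1
err = abs(lhs - rhs)
print(err)  # ~0 up to quadrature error
```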
An example of a weak solution is as follows:
{\displaystyle \phi (x)=-{\frac {1}{4\pi }}{\frac {1}{|x|}}}
is a weak solution satisfying
{\displaystyle \nabla ^{2}\phi =\delta {\text{ in }}R^{3}}
in the distributional sense, as formally,
{\displaystyle \int _{R^{3}}\nabla ^{2}\phi (x)\psi (x)dx=\int _{R^{3}}\phi (x)\nabla ^{2}\psi (x)dx=\psi (0){\text{ for }}\psi \in C_{c}^{\infty }(R^{3}).}
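For a radial test function the three-dimensional integral reduces to a one-dimensional radial integral, which can be checked numerically. This sketch (not from the article) assumes the standard sign convention, under which φ = −1/(4π|x|) satisfies ∇²φ = δ, and takes ψ(r) = exp(−r²) so that the identity should return ψ(0) = 1:

```python
import math

# For radial psi,  integral of phi * laplacian(psi) over R^3 with
# phi = -1/(4*pi*|x|)  reduces (using dV = 4*pi*r^2 dr and the radial
# Laplacian psi'' + 2 psi'/r) to  -integral_0^inf (r psi'' + 2 psi') dr.
def integrand(r):
    psi1 = -2 * r * math.exp(-r * r)            # psi'(r)
    psi2 = (4 * r * r - 2) * math.exp(-r * r)   # psi''(r)
    return -(r * psi2 + 2 * psi1)

# Composite Simpson's rule on [0, 8]; the integrand is negligible beyond r = 8
n, R = 2000, 8.0
h = R / n
s = integrand(0.0) + integrand(R)
s += 4 * sum(integrand((2*i - 1) * h) for i in range(1, n // 2 + 1))
s += 2 * sum(integrand(2 * i * h) for i in range(1, n // 2))
value = s * h / 3
err = abs(value - 1.0)
print(err)  # integral equals psi(0) = 1 up to quadrature error
```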
== Theoretical studies ==
As a branch of pure mathematics, the theoretical study of PDEs focuses on the criteria for a solution to exist and on the properties of solutions; finding an explicit formula for a solution is often of secondary importance.
=== Well-posedness ===
Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have:
an existence and uniqueness theorem, asserting that by the prescription of some freely chosen functions, one can single out one specific solution of the PDE
by continuously changing the free choices, one continuously changes the corresponding solution
This is, by the necessity of being applicable to several different PDEs, somewhat vague. The requirement of "continuity", in particular, is ambiguous, since there are usually many inequivalent ways in which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed.
=== Regularity ===
Regularity refers to the integrability and differentiability of weak solutions, which can often be represented by Sobolev spaces.
This problem arises due to the difficulty of finding classical solutions. Researchers often first find a weak solution and then determine whether it is smooth enough to qualify as a classical solution.
Results from functional analysis are often used in this field of study.
== See also ==
Some common PDEs
Acoustic wave equation
Burgers' equation
Continuity equation
Heat equation
Helmholtz equation
Klein–Gordon equation
Jacobi equation
Lagrange equation
Lorenz equation
Laplace's equation
Maxwell's equations
Navier–Stokes equations
Poisson's equation
Reaction–diffusion system
Schrödinger equation
Wave equation
Types of boundary conditions
Dirichlet boundary condition
Neumann boundary condition
Robin boundary condition
Cauchy problem
Various topics
Jet bundle
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Numerical partial differential equations
Partial differential algebraic equation
Recurrence relation
Stochastic processes and boundary value problems
== Notes ==
== References ==
== Further reading ==
Cajori, Florian (1928). "The Early History of Partial Differential Equations and of Partial Differentiation and Integration" (PDF). The American Mathematical Monthly. 35 (9): 459–467. doi:10.2307/2298771. JSTOR 2298771. Archived from the original (PDF) on 2018-11-23. Retrieved 2016-05-15.
Nirenberg, Louis (1994). "Partial differential equations in the first half of the century." Development of mathematics 1900–1950 (Luxembourg, 1992), 479–515, Birkhäuser, Basel.
Brezis, Haïm; Browder, Felix (1998). "Partial Differential Equations in the 20th Century". Advances in Mathematics. 135 (1): 76–144. doi:10.1006/aima.1997.1713.
== External links ==
"Differential equation, partial", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Partial Differential Equations: Exact Solutions at EqWorld: The World of Mathematical Equations.
Partial Differential Equations: Index at EqWorld: The World of Mathematical Equations.
Partial Differential Equations: Methods at EqWorld: The World of Mathematical Equations.
Example problems with solutions at exampleproblems.com
Partial Differential Equations at mathworld.wolfram.com
Partial Differential Equations with Mathematica
Partial Differential Equations in Cleve Moler: Numerical Computing with MATLAB
Partial Differential Equations at nag.com
Sanderson, Grant (April 21, 2019). "But what is a partial differential equation?". 3Blue1Brown. Archived from the original on 2021-11-02 – via YouTube.
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string acts like a particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force. Thus, string theory is a theory of quantum gravity.
String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. String theory has contributed a number of advances to mathematical physics, which have been applied to a variety of problems in black hole physics, early universe cosmology, nuclear physics, and condensed matter physics, and it has stimulated a number of major developments in pure mathematics. Because string theory potentially provides a unified description of gravity and particle physics, it is a candidate for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details.
String theory was first studied in the late 1960s as a theory of the strong nuclear force, before being abandoned in favor of quantum chromodynamics. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. Five consistent versions of superstring theory were developed before it was conjectured in the mid-1990s that they were all different limiting cases of a single theory in eleven dimensions known as M-theory. In late 1997, theorists discovered an important relationship called the anti-de Sitter/conformal field theory correspondence (AdS/CFT correspondence), which relates string theory to another type of physical theory called a quantum field theory.
One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes, which has complicated efforts to develop theories of particle physics based on string theory. These issues have led some in the community to criticize these approaches to physics, and to question the value of continued research on string theory unification.
== Fundamentals ==
=== Overview ===
In the 20th century, two theoretical frameworks emerged for formulating the laws of physics. The first is Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of spacetime at the macro-level. The other is quantum mechanics, a completely different formulation, which uses probabilistic principles to describe physical phenomena at the micro-level. By the late 1970s, these two frameworks had proven to be sufficient to explain most of the observed features of the universe, from elementary particles to atoms to the evolution of stars and the universe as a whole.
In spite of these successes, there are still many problems that remain to be solved. One of the deepest problems in modern physics is the problem of quantum gravity. The general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. A quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, but difficulties arise when one attempts to apply the usual prescriptions of quantum theory to the force of gravity.
String theory is a theoretical framework that attempts to address these questions.
The starting point for string theory is the idea that the point-like particles of particle physics can also be modeled as one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle consistent with non-string models of elementary particles, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory's application to quantum gravity, one of these vibrational states corresponds to the graviton, a hypothetical quantum particle that carries the gravitational force.
One of the main developments of the past several decades in string theory was the discovery of certain 'dualities', mathematical transformations that identify one physical theory with another. Physicists studying string theory have discovered a number of these dualities between different versions of string theory, and this has led to the conjecture that all consistent versions of string theory are subsumed in a single framework known as M-theory.
Studies of string theory have also yielded a number of results on the nature of black holes and the gravitational interaction. There are certain paradoxes that arise when one attempts to understand the quantum aspects of black holes, and work on string theory has attempted to clarify these issues. In late 1997 this line of work culminated in the discovery of the anti-de Sitter/conformal field theory correspondence or AdS/CFT. This is a theoretical result that relates string theory to other physical theories which are better understood theoretically. The AdS/CFT correspondence has implications for the study of black holes and quantum gravity, and it has been applied to other subjects, including nuclear and condensed matter physics.
Since string theory incorporates all of the fundamental interactions, including gravity, many physicists hope that it will eventually be developed to the point where it fully describes our universe, making it a theory of everything. One of the goals of current research in string theory is to find a solution of the theory that reproduces the observed spectrum of elementary particles, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. While there has been progress toward these goals, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of details.
One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. The scattering of strings is most straightforwardly defined using the techniques of perturbation theory, but it is not known in general how to define string theory nonperturbatively. It is also not clear whether there is any principle by which string theory selects its vacuum state, the physical state that determines the properties of our universe. These problems have led some in the community to criticize these approaches to the unification of physics and question the value of continued research on these problems.
=== Strings ===
The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields.
In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions.
The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional (2D) surface representing the motion of a string. Unlike in quantum field theory, string theory does not have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach.
In theories of particle physics based on string theory, the characteristic length scale of strings is assumed to be on the order of the Planck length, or 10⁻³⁵ meters, the scale at which the effects of quantum gravity are believed to become significant. On much larger length scales, such as the scales visible in physics laboratories, such objects would be indistinguishable from zero-dimensional point particles, and the vibrational state of the string would determine the type of particle. One of the vibrational states of a string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force.
The original version of string theory was bosonic string theory, but this version described only bosons, a class of particles that transmit forces between the matter particles, or fermions. Bosonic string theory was eventually superseded by theories called superstring theories. These theories describe both bosons and fermions, and they incorporate a theoretical idea called supersymmetry. In theories with supersymmetry, each boson has a counterpart which is a fermion, and vice versa.
There are several versions of superstring theory: type I, type IIA, type IIB, and two flavors of heterotic string theory (SO(32) and E8×E8). The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings (which are segments with endpoints) and closed strings (which form closed loops), while types IIA, IIB and heterotic include only closed strings.
=== Extra dimensions ===
In everyday life, there are three familiar dimensions (3D) of space: height, width and length. Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modeled as separate entities but are instead unified to a four-dimensional (4D) spacetime. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime.
In spite of the fact that the Universe is well described by 4D spacetime, there are several reasons why physicists consider theories in other dimensions. In some cases, by modeling spacetime in a different number of dimensions, a theory becomes more mathematically tractable, and one can perform calculations and gain general insights more easily. There are also situations where theories in two or three spacetime dimensions are useful for describing phenomena in condensed matter physics. Finally, there exist scenarios in which there could actually be more than 4D of spacetime which have nonetheless managed to escape detection.
String theories require extra dimensions of spacetime for their mathematical consistency. In bosonic string theory, spacetime is 26-dimensional, while in superstring theory it is 10-dimensional, and in M-theory it is 11-dimensional. In order to describe real physical phenomena using string theory, one must therefore imagine scenarios in which these extra dimensions would not be observed in experiments.
Compactification is one way of modifying the number of dimensions in a physical theory. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions.
Compactification can be used to construct models in which spacetime is effectively four-dimensional. However, not every way of compactifying the extra dimensions produces a model with the right properties to describe nature. In a viable model of particle physics, the compact extra dimensions must be shaped like a Calabi–Yau manifold. A Calabi–Yau manifold is a special space which is typically taken to be six-dimensional in applications to string theory. It is named after mathematicians Eugenio Calabi and Shing-Tung Yau.
Another approach to reducing the number of dimensions is the so-called brane-world scenario. In this approach, physicists assume that the observable universe is a four-dimensional subspace of a higher dimensional space. In such models, the force-carrying bosons of particle physics arise from open strings with endpoints attached to the four-dimensional subspace, while gravity arises from closed strings propagating through the larger ambient space. This idea plays an important role in attempts to develop models of real-world physics based on string theory, and it provides a natural explanation for the weakness of gravity compared to the other fundamental forces.
=== Dualities ===
A notable fact about string theory is that the different versions of the theory all turn out to be related in highly nontrivial ways. One of the relationships that can exist between different string theories is called S-duality. This is a relationship that says that a collection of strongly interacting particles in one theory can, in some cases, be viewed as a collection of weakly interacting particles in a completely different theory. Roughly speaking, a collection of particles is said to be strongly interacting if they combine and decay often and weakly interacting if they do so infrequently. Type I string theory turns out to be equivalent by S-duality to the SO(32) heterotic string theory. Similarly, type IIB string theory is related to itself in a nontrivial way by S-duality.
Another relationship between different string theories is T-duality. Here one considers strings propagating around a circular extra dimension. T-duality states that a string propagating around a circle of radius R is equivalent to a string propagating around a circle of radius 1/R in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum p and winding number n in one description, it will have momentum n and winding number p in the dual description. For example, type IIA string theory is equivalent to type IIB string theory via T-duality, and the two versions of heterotic string theory are also related by T-duality.
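The momentum/winding exchange can be illustrated with the standard textbook contribution of a compact dimension to the closed-string mass squared, M² = (n/R)² + (wR)² in string units (α′ = 1). This formula is not given in the passage above; the sketch simply shows that swapping n and w while sending R → 1/R leaves every mass unchanged:

```python
# Compactification contribution to the closed-string mass squared,
#   M^2 = (n/R)^2 + (w*R)^2   (string units, alpha' = 1),
# where n is the momentum number and w the winding number.
def mass_squared(n, w, R):
    return (n / R) ** 2 + (w * R) ** 2

R = 2.0
spectrum      = sorted(mass_squared(n, w, R)     for n in range(-3, 4)
                                                 for w in range(-3, 4))
dual_spectrum = sorted(mass_squared(w, n, 1 / R) for n in range(-3, 4)
                                                 for w in range(-3, 4))
# T-duality: exchanging momentum and winding while R -> 1/R
# reproduces the same set of masses.
gap = max(abs(a - b) for a, b in zip(spectrum, dual_spectrum))
print(gap)  # 0.0 up to rounding
```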
In general, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. Two theories related by a duality need not be string theories. For example, Montonen–Olive duality is an example of an S-duality relationship between quantum field theories. The AdS/CFT correspondence is an example of a duality that relates string theory to a quantum field theory. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena.
=== Branes ===
In string theory and other related theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For instance, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension p, these are called p-branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane.
Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They have mass and can have other attributes such as charge. A p-brane sweeps out a (p+1)-dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane.
In string theory, D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to a certain mathematical condition on the system known as the Dirichlet boundary condition. The study of D-branes in string theory has led to important results such as the AdS/CFT correspondence, which has shed light on many problems in quantum field theory.
Branes are frequently studied from a purely mathematical point of view, and they are described as objects of certain categories, such as the derived category of coherent sheaves on a complex algebraic variety, or the Fukaya category of a symplectic manifold. The connection between the physical notion of a brane and the mathematical notion of a category has led to important mathematical insights in the fields of algebraic and symplectic geometry and representation theory.
== M-theory ==
Prior to 1995, theorists believed that there were five consistent versions of superstring theory (type I, type IIA, type IIB, and two versions of heterotic string theory). This understanding changed in 1995 when Edward Witten suggested that the five theories were just special limiting cases of an eleven-dimensional theory called M-theory. Witten's conjecture was based on the work of a number of other physicists, including Ashoke Sen, Chris Hull, Paul Townsend, and Michael Duff. His announcement led to a flurry of research activity now known as the second superstring revolution.
=== Unification of superstring theories ===
In the 1970s, many physicists became interested in supergravity theories, which combine general relativity with supersymmetry. Whereas general relativity makes sense in any number of dimensions, supergravity places an upper limit on the number of dimensions. In 1978, work by Werner Nahm showed that the maximum spacetime dimension in which one can formulate a consistent supersymmetric theory is eleven. In the same year, Eugene Cremmer, Bernard Julia, and Joël Scherk of the École Normale Supérieure showed that supergravity not only permits up to eleven dimensions but is in fact most elegant in this maximal number of dimensions.
Initially, many physicists hoped that by compactifying eleven-dimensional supergravity, it might be possible to construct realistic models of our four-dimensional world. The hope was that such models would provide a unified description of the four fundamental forces of nature: electromagnetism, the strong and weak nuclear forces, and gravity. Interest in eleven-dimensional supergravity soon waned as various flaws in this scheme were discovered. One of the problems was that the laws of physics appear to distinguish between clockwise and counterclockwise, a phenomenon known as chirality. Edward Witten and others observed that this chirality property cannot be readily derived by compactifying from eleven dimensions.
In the first superstring revolution in 1984, many physicists turned to string theory as a unified theory of particle physics and quantum gravity. Unlike supergravity theory, string theory was able to accommodate the chirality of the standard model, and it provided a theory of gravity consistent with quantum effects. Another feature of string theory that many physicists were drawn to in the 1980s and 1990s was its high degree of uniqueness. In ordinary particle theories, one can consider any collection of elementary particles whose classical behavior is described by an arbitrary Lagrangian. In string theory, the possibilities are much more constrained: by the 1990s, physicists had argued that there were only five consistent supersymmetric versions of the theory.
Although there were only a handful of consistent superstring theories, it remained a mystery why there was not just one consistent formulation. However, as physicists began to examine string theory more closely, they realized that these theories are related in intricate and nontrivial ways. They found that a system of strongly interacting strings can, in some cases, be viewed as a system of weakly interacting strings. This phenomenon is known as S-duality. It was studied by Ashoke Sen in the context of heterotic strings in four dimensions and by Chris Hull and Paul Townsend in the context of the type IIB theory. Theorists also found that different string theories may be related by T-duality. This duality implies that strings propagating on completely different spacetime geometries may be physically equivalent.
At around the same time, as many physicists were studying the properties of strings, a small group of physicists was examining the possible applications of higher-dimensional objects. In 1987, Eric Bergshoeff, Ergin Sezgin, and Paul Townsend showed that eleven-dimensional supergravity includes two-dimensional branes. Intuitively, these objects look like sheets or membranes propagating through the eleven-dimensional spacetime. Shortly after this discovery, Michael Duff, Paul Howe, Takeo Inami, and Kellogg Stelle considered a particular compactification of eleven-dimensional supergravity with one of the dimensions curled up into a circle. In this setting, one can imagine the membrane wrapping around the circular dimension. If the radius of the circle is sufficiently small, then this membrane looks just like a string in ten-dimensional spacetime. Duff and his collaborators showed that this construction reproduces exactly the strings appearing in type IIA superstring theory.
Speaking at a string theory conference in 1995, Edward Witten made the surprising suggestion that all five superstring theories were in fact just different limiting cases of a single theory in eleven spacetime dimensions. Witten's announcement drew together all of the previous results on S- and T-duality and the appearance of higher-dimensional branes in string theory. In the months following Witten's announcement, hundreds of new papers appeared on the Internet confirming different parts of his proposal. Today this flurry of work is known as the second superstring revolution.
Initially, some physicists suggested that the new theory was a fundamental theory of membranes, but Witten was skeptical of the role of membranes in the theory. In a paper from 1996, Hořava and Witten wrote "As it has been proposed that the eleven-dimensional theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committally call it the M-theory, leaving to the future the relation of M to membranes." In the absence of an understanding of the true meaning and structure of M-theory, Witten has suggested that the M should stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known.
=== Matrix theory ===
In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics.
One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting.
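Schematically, and omitting the fermionic superpartners (conventions vary between references), the bosonic part of the BFSS model is the quantum mechanics of nine N × N Hermitian matrices X^i(t) with Hamiltonian

{\displaystyle H=\mathrm {Tr} \left({\tfrac {1}{2}}P^{i}P^{i}-{\tfrac {1}{4}}[X^{i},X^{j}][X^{i},X^{j}]\right)}

where P^i is the momentum conjugate to X^i and the indices i and j run from 1 to 9. The conjectured equivalence with M-theory holds in the limit of infinitely large matrices.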
The development of the matrix model formulation of M-theory has led physicists to consider various connections between string theory and a branch of mathematics called noncommutative geometry. This subject is a generalization of ordinary geometry in which mathematicians define new geometric notions using tools from noncommutative algebra. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which spacetime is described mathematically using noncommutative geometry. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories.
== Black holes ==
In general relativity, a black hole is defined as a region of spacetime in which the gravitational field is so strong that no particle or radiation can escape. In the currently accepted models of stellar evolution, black holes are thought to arise when massive stars undergo gravitational collapse, and many galaxies are thought to contain supermassive black holes at their centers. Black holes are also important for theoretical reasons, as they present profound challenges for theorists attempting to understand the quantum aspects of gravity. String theory has proved to be an important tool for investigating the theoretical properties of black holes because it provides a framework in which theorists can study their thermodynamics.
=== Bekenstein–Hawking formula ===
In the branch of physics called statistical mechanics, entropy is a measure of the randomness or disorder of a physical system. This concept was studied in the 1870s by the Austrian physicist Ludwig Boltzmann, who showed that the thermodynamic properties of a gas could be derived from the combined properties of its many constituent molecules. Boltzmann argued that by averaging the behaviors of all the different molecules in a gas, one can understand macroscopic properties such as volume, temperature, and pressure. In addition, this perspective led him to give a precise definition of entropy as the natural logarithm of the number of different states of the molecules (also called microstates) that give rise to the same macroscopic features.
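In modern notation, Boltzmann's definition is written

{\displaystyle S=k\ln \Omega }

where S is the entropy, k is the Boltzmann constant, and Ω is the number of microstates compatible with the observed macroscopic state.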
In the twentieth century, physicists began to apply the same concepts to black holes. In most systems such as gases, the entropy scales with the volume. In the 1970s, the physicist Jacob Bekenstein suggested that the entropy of a black hole is instead proportional to the surface area of its event horizon, the boundary beyond which matter and radiation cannot escape its gravitational attraction. When combined with ideas of the physicist Stephen Hawking, Bekenstein's work yielded a precise formula for the entropy of a black hole. The Bekenstein–Hawking formula expresses the entropy S as
{\displaystyle S={\frac {c^{3}kA}{4\hbar G}}}
where c is the speed of light, k is the Boltzmann constant, ħ is the reduced Planck constant, G is Newton's constant, and A is the surface area of the event horizon.
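As a rough numeric illustration of the formula (a sketch using rounded constants; the choice of one solar mass is purely illustrative), even an astrophysical black hole carries an enormous entropy:

```python
import math

# Physical constants, rounded to four significant figures (SI units)
c = 2.998e8        # speed of light (m/s)
G = 6.674e-11      # Newton's constant (m^3 kg^-1 s^-2)
hbar = 1.055e-34   # reduced Planck constant (J s)
k = 1.381e-23      # Boltzmann constant (J/K)
M_SUN = 1.989e30   # solar mass (kg)

def bekenstein_hawking_entropy(mass):
    """Entropy S = c^3 k A / (4 hbar G) of a non-rotating
    (Schwarzschild) black hole of the given mass, in J/K."""
    r_s = 2 * G * mass / c**2        # Schwarzschild radius
    area = 4 * math.pi * r_s**2      # event horizon area
    return c**3 * k * area / (4 * hbar * G)

S = bekenstein_hawking_entropy(M_SUN)
print(f"S   = {S:.2e} J/K")   # roughly 1.4e54 J/K
print(f"S/k = {S / k:.2e}")   # roughly 1e77 (dimensionless)
```

The dimensionless entropy S/k of a solar-mass black hole, about 10⁷⁷, vastly exceeds the entropy of the star whose collapse would form it.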
Like any physical system, a black hole has an entropy defined in terms of the number of different microstates that lead to the same macroscopic features. The Bekenstein–Hawking entropy formula gives the expected value of the entropy of a black hole, but by the 1990s, physicists still lacked a derivation of this formula by counting microstates in a theory of quantum gravity. Finding such a derivation of this formula was considered an important test of the viability of any theory of quantum gravity such as string theory.
=== Derivation within string theory ===
In a paper from 1996, Andrew Strominger and Cumrun Vafa showed how to derive the Bekenstein–Hawking formula for certain black holes in string theory. Their calculation was based on the observation that D-branes—which look like fluctuating membranes when they are weakly interacting—become dense, massive objects with event horizons when the interactions are strong. In other words, a system of strongly interacting D-branes in string theory is indistinguishable from a black hole. Strominger and Vafa analyzed such D-brane systems and calculated the number of different ways of placing D-branes in spacetime so that their combined mass and charge is equal to a given mass and charge for the resulting black hole. Their calculation reproduced the Bekenstein–Hawking formula exactly, including the factor of 1/4. Subsequent work by Strominger, Vafa, and others refined the original calculations and gave the precise values of the "quantum corrections" needed to describe very small black holes.
The black holes that Strominger and Vafa considered in their original work were quite different from real astrophysical black holes. One difference was that Strominger and Vafa considered only extremal black holes in order to make the calculation tractable. These are defined as black holes with the lowest possible mass compatible with a given charge. Strominger and Vafa also restricted attention to black holes in five-dimensional spacetime with unphysical supersymmetry.
Although it was originally developed in this very particular and physically unrealistic context in string theory, the entropy calculation of Strominger and Vafa has led to a qualitative understanding of how black hole entropy can be accounted for in any theory of quantum gravity. Indeed, in 1998, Strominger argued that the original result could be generalized to an arbitrary consistent theory of quantum gravity without relying on strings or supersymmetry. In collaboration with several other authors in 2010, he showed that some results on black hole entropy could be extended to non-extremal astrophysical black holes.
== AdS/CFT correspondence ==
One approach to formulating string theory and studying its properties is provided by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. This is a theoretical result that implies that string theory is in some cases equivalent to a quantum field theory. In addition to providing insights into the mathematical structure of string theory, the AdS/CFT correspondence has shed light on many aspects of quantum field theory in regimes where traditional calculational techniques are ineffective. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2010, Maldacena's article had over 7000 citations, becoming the most highly cited article in the field of high energy physics.
=== Overview of the correspondence ===
In the AdS/CFT correspondence, the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk as illustrated on the left. This image shows a tessellation of a disk by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior.
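A standard way to realize such a notion of distance is the Poincaré disk model of hyperbolic space, in which the distance element on the open unit disk is

{\displaystyle ds^{2}={\frac {4(dx^{2}+dy^{2})}{(1-x^{2}-y^{2})^{2}}}}

With this metric, lengths blow up as one approaches the boundary circle, which is why every interior point is infinitely far from the boundary even though the disk looks finite.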
One can imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface.
This construction describes a hypothetical universe with only two space dimensions and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space.
An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, within a small region on the surface around any given point, it looks just like Minkowski space, the model of spacetime used in non-gravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a quantum field theory. The claim is that this quantum field theory is equivalent to a gravitational theory, such as string theory, in the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating entities and calculations in one theory into their counterparts in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding.
=== Applications to quantum gravity ===
The discovery of the AdS/CFT correspondence was a major advance in physicists' understanding of string theory and quantum gravity. One reason for this is that the correspondence provides a formulation of string theory in terms of quantum field theory, which is well understood by comparison. Another reason is that it provides a general framework in which physicists can study and attempt to resolve the paradoxes of black holes.
In 1975, Stephen Hawking published a calculation which suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox.
The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information.
=== Applications to nuclear physics ===
In addition to its applications to theoretical problems in quantum gravity, the AdS/CFT correspondence has been applied to a variety of problems in quantum field theory. One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvin, conditions similar to those present at around 10⁻¹¹ seconds after the Big Bang.
The physics of the quark–gluon plasma is governed by a theory called quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark–gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark–gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark–gluon plasma, the shear viscosity and volume density of entropy, should be approximately equal to a certain universal constant. In 2008, the predicted value of this ratio for the quark–gluon plasma was confirmed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.
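The universal constant in question is the Kovtun–Sơn–Starinets value η/s = ħ/(4πk) for the ratio of shear viscosity to entropy density. A one-line computation shows how small it is in conventional units:

```python
import math

hbar = 1.055e-34   # reduced Planck constant (J s)
k = 1.381e-23      # Boltzmann constant (J/K)

# Kovtun-Son-Starinets value for shear viscosity / entropy density
eta_over_s = hbar / (4 * math.pi * k)
print(f"eta/s = {eta_over_s:.2e} K*s")   # about 6.1e-13 K*s
```

A fluid with a ratio this small is, in this sense, nearly "perfect": it dissipates almost as little as quantum mechanics allows.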
=== Applications to condensed matter physics ===
The AdS/CFT correspondence has also been used to study aspects of condensed matter physics. Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists including Subir Sachdev hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior.
So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.
== Phenomenology ==
In addition to being an idea of considerable theoretical interest, string theory provides a framework for constructing models of real-world physics that combine general relativity and particle physics. Phenomenology is the branch of theoretical physics in which physicists construct realistic models of nature from more abstract theoretical ideas. String phenomenology is the part of string theory that attempts to construct realistic or semi-realistic models based on string theory.
Partly because of theoretical and mathematical difficulties and partly because of the extremely high energies needed to test these theories experimentally, there is so far no experimental evidence that would unambiguously point to any of these models being a correct fundamental description of nature. This has led some in the community to criticize these approaches to unification and question the value of continued research on these problems.
=== Particle physics ===
The currently accepted theory describing elementary particles and their interactions is known as the standard model of particle physics. This theory provides a unified description of three of the fundamental forces of nature: electromagnetism and the strong and weak nuclear forces. Despite its remarkable success in explaining a wide range of physical phenomena, the standard model cannot be a complete description of reality. This is because the standard model fails to incorporate the force of gravity and because of problems such as the hierarchy problem and the inability to explain the structure of fermion masses or dark matter.
String theory has been used to construct a variety of models of particle physics going beyond the standard model. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. Such compactifications offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct realistic or semi-realistic models of our four-dimensional world based on M-theory.
=== Cosmology ===
The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. Despite its success in explaining many observed features of the universe including galactic redshifts, the relative abundance of light elements such as hydrogen and helium, and the existence of a cosmic microwave background, there are several questions that remain unanswered. For example, the standard Big Bang model does not explain why the universe appears to be the same in all directions, why it appears flat on very large distance scales, or why certain hypothesized particles such as magnetic monopoles are not observed in experiments.
Currently, the leading candidate for a theory going beyond the Big Bang is the theory of cosmic inflation. Developed by Alan Guth and others in the 1980s, inflation postulates a period of extremely rapid accelerated expansion of the universe prior to the expansion described by the standard Big Bang theory. The theory of cosmic inflation preserves the successes of the Big Bang while providing a natural explanation for some of the mysterious features of the universe. The theory has also received striking support from observations of the cosmic microwave background, the radiation that has filled the sky since around 380,000 years after the Big Bang.
In the theory of inflation, the rapid initial expansion of the universe is caused by a hypothetical particle called the inflaton. The exact properties of this particle are not fixed by the theory but should ultimately be derived from a more fundamental theory such as string theory. Indeed, there have been a number of attempts to identify an inflaton within the spectrum of particles described by string theory and to study inflation using string theory. While these approaches might eventually find support in observational data such as measurements of the cosmic microwave background, the application of string theory to cosmology is still in its early stages.
== Connections to mathematics ==
In addition to influencing research in theoretical physics, string theory has stimulated a number of major developments in pure mathematics. Like many developing ideas in theoretical physics, string theory does not at present have a mathematically rigorous formulation in which all of its concepts can be defined precisely. As a result, physicists who study string theory are often guided by physical intuition to conjecture relationships between the seemingly different mathematical structures that are used to formalize different parts of the theory. These conjectures are later proved by mathematicians, and in this way, string theory serves as a source of new ideas in pure mathematics.
=== Mirror symmetry ===
After Calabi–Yau manifolds had entered physics as a way to compactify extra dimensions in string theory, many physicists began studying these manifolds. In the late 1980s, several physicists noticed that given such a compactification of string theory, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, two different versions of string theory, type IIA and type IIB, can be compactified on completely different Calabi–Yau manifolds giving rise to the same physics. In this situation, the manifolds are called mirror manifolds, and the relationship between the two physical theories is called mirror symmetry.
Regardless of whether Calabi–Yau compactifications of string theory provide a correct description of nature, the existence of the mirror duality between different string theories has significant mathematical consequences. The Calabi–Yau manifolds used in string theory are of interest in pure mathematics, and mirror symmetry allows mathematicians to solve problems in enumerative geometry, a branch of mathematics concerned with counting the numbers of solutions to geometric questions.
Enumerative geometry studies a class of geometric objects called algebraic varieties which are defined by the vanishing of polynomials. For example, the Clebsch cubic illustrated on the right is an algebraic variety defined using a certain polynomial of degree three in four variables. A celebrated result of nineteenth-century mathematicians Arthur Cayley and George Salmon states that there are exactly 27 straight lines that lie entirely on such a surface.
Generalizing this problem, one can ask how many lines can be drawn on a quintic Calabi–Yau manifold, such as the one illustrated above, which is defined by a polynomial of degree five. This problem was solved by the nineteenth-century German mathematician Hermann Schubert, who found that there are exactly 2,875 such lines. In 1986, geometer Sheldon Katz proved that the number of curves, such as circles, that are defined by polynomials of degree two and lie entirely in the quintic is 609,250.
By the year 1991, most of the classical problems of enumerative geometry had been solved and interest in enumerative geometry had begun to diminish. The field was reinvigorated in May 1991 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to translate difficult mathematical questions about one Calabi–Yau manifold into easier questions about its mirror. In particular, they used mirror symmetry to show that a six-dimensional Calabi–Yau manifold can contain exactly 317,206,375 curves of degree three. In addition to counting degree-three curves, Candelas and his collaborators obtained a number of more general results for counting rational curves which went far beyond the results obtained by mathematicians.
Originally, these results of Candelas were justified on physical grounds. However, mathematicians generally prefer rigorous proofs that do not require an appeal to physical intuition. Inspired by physicists' work on mirror symmetry, mathematicians have therefore constructed their own arguments proving the enumerative predictions of mirror symmetry. Today mirror symmetry is an active area of research in mathematics, and mathematicians are working to develop a more complete mathematical understanding of mirror symmetry based on physicists' intuition. Major approaches to mirror symmetry include the homological mirror symmetry program of Maxim Kontsevich and the SYZ conjecture of Andrew Strominger, Shing-Tung Yau, and Eric Zaslow.
=== Monstrous moonshine ===
Group theory is the branch of mathematics that studies the concept of symmetry. For example, one can consider a geometric shape such as an equilateral triangle. There are various operations that one can perform on this triangle without changing its shape. One can rotate it through 120°, 240°, or 360°, or one can reflect in any of the lines labeled S0, S1, or S2 in the picture. Each of these operations is called a symmetry, and the collection of these symmetries satisfies certain technical properties making it into what mathematicians call a group. In this particular example, the group is known as the dihedral group of order 6 because it has six elements. A general group may describe finitely many or infinitely many symmetries; if there are only finitely many symmetries, it is called a finite group.
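The triangle example can be checked directly by representing each symmetry as a permutation of the vertices and closing the generating set under composition (a small sketch; the vertex labeling is illustrative):

```python
from itertools import product

IDENTITY = (0, 1, 2)
ROTATION = (1, 2, 0)     # rotate the triangle by 120 degrees
REFLECTION = (0, 2, 1)   # reflect across the axis through vertex 0

def compose(p, q):
    """The symmetry obtained by applying q first, then p."""
    return tuple(p[q[i]] for i in range(3))

# Close {identity, rotation, reflection} under composition.
group = {IDENTITY, ROTATION, REFLECTION}
while True:
    products = {compose(p, q) for p, q in product(group, group)}
    if products <= group:
        break
    group |= products

print(len(group))  # 6 -- the dihedral group of order 6
```

The closure contains exactly the six permutations of the three vertices, confirming that the triangle has six symmetries.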
Mathematicians often strive for a classification (or list) of all mathematical objects of a given type. It is generally believed that finite groups are too diverse to admit a useful classification. A more modest but still challenging problem is to classify all finite simple groups. These are finite groups that may be used as building blocks for constructing arbitrary finite groups in the same way that prime numbers can be used to construct arbitrary whole numbers by taking products. One of the major achievements of contemporary group theory is the classification of finite simple groups, a mathematical theorem that provides a list of all possible finite simple groups.
This classification theorem identifies several infinite families of groups as well as 26 additional groups which do not fit into any family. The latter groups are called the "sporadic" groups, and each one owes its existence to a remarkable combination of circumstances. The largest sporadic group, the so-called monster group, has over 10⁵³ elements, more than a thousand times the number of atoms in the Earth.
A seemingly unrelated construction is the j-function of number theory. This object belongs to a special class of functions called modular functions, whose graphs form a certain kind of repeating pattern. Although this function appears in a branch of mathematics that seems very different from the theory of finite groups, the two subjects turn out to be intimately related. In the late 1970s, mathematicians John McKay and John Thompson noticed that certain numbers arising in the analysis of the monster group (namely, the dimensions of its irreducible representations) are related to numbers that appear in a formula for the j-function (namely, the coefficients of its Fourier series). This relationship was further developed by John Horton Conway and Simon Norton who called it monstrous moonshine because it seemed so far fetched.
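The coincidence can be stated in a few lines. The Fourier expansion begins j(τ) − 744 = q⁻¹ + 196884q + 21493760q² + ..., and McKay noticed that 196884 = 1 + 196883, a sum of the dimensions of the monster's two smallest irreducible representations; Thompson observed that the pattern continues:

```python
# Dimensions of the three smallest irreducible representations of the
# monster group, and the first two Fourier coefficients of j(tau) - 744.
monster_dims = [1, 196883, 21296876]
j_coefficients = [196884, 21493760]

assert j_coefficients[0] == monster_dims[0] + monster_dims[1]
assert j_coefficients[1] == monster_dims[0] + monster_dims[1] + monster_dims[2]
print("McKay's and Thompson's observations verified")
```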
In 1992, Richard Borcherds constructed a bridge between the theory of modular functions and finite groups and, in the process, explained the observations of McKay and Thompson. Borcherds' work used ideas from string theory in an essential way, extending earlier results of Igor Frenkel, James Lepowsky, and Arne Meurman, who had realized the monster group as the symmetries of a particular version of string theory. In 1998, Borcherds was awarded the Fields medal for his work.
Since the 1990s, the connection between string theory and moonshine has led to further results in mathematics and physics. In 2010, physicists Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa discovered connections between a different sporadic group, the Mathieu group M24, and a certain version of string theory. Miranda Cheng, John Duncan, and Jeffrey A. Harvey proposed a generalization of this moonshine phenomenon called umbral moonshine, and their conjecture was proved mathematically by Duncan, Michael Griffin, and Ken Ono. Witten has also speculated that the version of string theory appearing in monstrous moonshine might be related to a certain simplified model of gravity in three spacetime dimensions.
== History ==
=== Early results ===
Some of the structures reintroduced by string theory arose for the first time much earlier as part of the program of classical unification started by Albert Einstein. The first person to add a fifth dimension to a theory of gravity was Gunnar Nordström in 1914, who noted that gravity in five dimensions describes both gravity and electromagnetism in four. Nordström attempted to unify electromagnetism with his theory of gravitation, which was, however, superseded by Einstein's general relativity in 1919. Thereafter, the German mathematician Theodor Kaluza combined the fifth dimension with general relativity, and only Kaluza is usually credited with the idea. In 1926, the Swedish physicist Oskar Klein gave a physical interpretation of the unobservable extra dimension: it is wrapped into a small circle. Einstein introduced a non-symmetric metric tensor, while much later Brans and Dicke added a scalar component to gravity. These ideas would be revived within string theory, where they are demanded by consistency conditions.
String theory was originally developed during the late 1960s and early 1970s as a never completely successful theory of hadrons, the subatomic particles like the proton and neutron that feel the strong interaction. In the 1960s, Geoffrey Chew and Steven Frautschi discovered that the mesons make families called Regge trajectories with masses related to spins in a way that was later understood by Yoichiro Nambu, Holger Bech Nielsen and Leonard Susskind to be the relationship expected from rotating strings. Chew advocated making a theory for the interactions of these trajectories that did not presume that they were composed of any fundamental particles, but would construct their interactions from self-consistency conditions on the S-matrix. The S-matrix approach was started by Werner Heisenberg in the 1940s as a way of constructing a theory that did not rely on the local notions of space and time, which Heisenberg believed break down at the nuclear scale. While the scale was off by many orders of magnitude, the approach he advocated was ideally suited for a theory of quantum gravity.
Working with experimental data, R. Dolen, D. Horn and C. Schmid developed some sum rules for hadron exchange. When a particle and antiparticle scatter, virtual particles can be exchanged in two qualitatively different ways. In the s-channel, the two particles annihilate to make temporary intermediate states that fall apart into the final state particles. In the t-channel, the particles exchange intermediate states by emission and absorption. In field theory, the two contributions add together, one giving a continuous background contribution, the other giving peaks at certain energies. In the data, it was clear that the peaks were stealing from the background—the authors interpreted this as saying that the t-channel contribution was dual to the s-channel one, meaning both described the whole amplitude and included the other.
The result was widely advertised by Murray Gell-Mann, leading Gabriele Veneziano to construct a scattering amplitude that had the property of Dolen–Horn–Schmid duality, later renamed world-sheet duality. The amplitude needed poles where the particles appear, on straight-line trajectories, and there is a special mathematical function whose poles are evenly spaced on half the real line—the gamma function—which was widely used in Regge theory. By manipulating combinations of gamma functions, Veneziano was able to find a consistent scattering amplitude with poles on straight lines, with mostly positive residues, which obeyed duality and had the appropriate Regge scaling at high energy. The amplitude could fit near-beam scattering data as well as other Regge type fits and had a suggestive integral representation that could be used for generalization.
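The pole structure Veneziano exploited can be sketched numerically. The following Python sketch (an illustration, not from the original papers) evaluates the amplitude A(s, t) = Γ(−α(s))Γ(−α(t))/Γ(−α(s) − α(t)) for a linear Regge trajectory α(x) = α(0) + α′x; the intercept 0.5 and slope 0.9 are arbitrary illustrative values. The amplitude stays finite away from the trajectory and blows up whenever α(s) approaches a non-negative integer, reproducing evenly spaced poles on straight lines:

```python
import math

def alpha(x, intercept=0.5, slope=0.9):
    """Linear Regge trajectory alpha(x) = alpha(0) + alpha' * x (illustrative values)."""
    return intercept + slope * x

def veneziano(s, t):
    """Veneziano amplitude A(s,t) = Gamma(-a(s)) Gamma(-a(t)) / Gamma(-a(s) - a(t))."""
    return (math.gamma(-alpha(s)) * math.gamma(-alpha(t))
            / math.gamma(-alpha(s) - alpha(t)))

# Away from the poles the amplitude is finite...
a_regular = veneziano(-1.0, -1.0)

# ...but it diverges as alpha(s) approaches a non-negative integer,
# i.e. as s -> (n - 0.5)/0.9 for n = 0, 1, 2, ...
s_pole = (1 - 0.5) / 0.9            # alpha(s_pole) = 1
near = veneziano(s_pole - 1e-6, -1.0)
far = veneziano(s_pole - 1e-1, -1.0)
print(abs(near) > 1e3 * abs(far))   # poles sit on a straight-line trajectory
```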
Over the next years, hundreds of physicists worked to complete the bootstrap program for this model, with many surprises. Veneziano himself discovered that for the scattering amplitude to describe the scattering of a particle that appears in the theory, an obvious self-consistency condition, the lightest particle must be a tachyon. Miguel Virasoro and Joel Shapiro found a different amplitude now understood to be that of closed strings, while Ziro Koba and Holger Nielsen generalized Veneziano's integral representation to multiparticle scattering. Veneziano and Sergio Fubini introduced an operator formalism for computing the scattering amplitudes that was a forerunner of world-sheet conformal theory, while Virasoro understood how to remove the poles with wrong-sign residues using a constraint on the states. Claud Lovelace calculated a loop amplitude, and noted that there is an inconsistency unless the dimension of the theory is 26. Charles Thorn, Peter Goddard and Richard Brower went on to prove that there are no wrong-sign propagating states in dimensions less than or equal to 26.
In 1969–1970, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind recognized that the theory could be given a description in space and time in terms of strings. The scattering amplitudes were derived systematically from the action principle by Peter Goddard, Jeffrey Goldstone, Claudio Rebbi, and Charles Thorn, giving a space-time picture to the vertex operators introduced by Veneziano and Fubini and a geometrical interpretation to the Virasoro conditions.
In 1971, Pierre Ramond added fermions to the model, which led him to formulate a two-dimensional supersymmetry to cancel the wrong-sign states. John Schwarz and André Neveu added another sector to the fermi theory a short time later. In the fermion theories, the critical dimension was 10. Stanley Mandelstam formulated a world sheet conformal theory for both the bose and fermi case, giving a two-dimensional field theoretic path-integral to generate the operator formalism. Michio Kaku and Keiji Kikkawa gave a different formulation of the bosonic string, as a string field theory, with infinitely many particle types and with fields taking values not on points, but on loops and curves.
In 1974, Tamiaki Yoneya discovered that all the known string theories included a massless spin-two particle that obeyed the correct Ward identities to be a graviton. John Schwarz and Joël Scherk came to the same conclusion and made the bold leap to suggest that string theory was a theory of gravity, not a theory of hadrons. They reintroduced Kaluza–Klein theory as a way of making sense of the extra dimensions. At the same time, quantum chromodynamics was recognized as the correct theory of hadrons, shifting the attention of physicists and apparently leaving the bootstrap program in the dustbin of history.
String theory eventually made it out of the dustbin, but for the following decade all work on the theory was completely ignored. Still, the theory continued to develop at a steady pace thanks to the work of a handful of devotees. Ferdinando Gliozzi, Joël Scherk, and David Olive realized in 1977 that the original Ramond and Neveu–Schwarz strings were separately inconsistent and needed to be combined. The resulting theory did not have a tachyon and was proven to have space-time supersymmetry by John Schwarz and Michael Green in 1984. The same year, Alexander Polyakov gave the theory a modern path integral formulation, and went on to develop conformal field theory extensively. In 1979, Daniel Friedan showed that the equations of motion of string theory, which are generalizations of the Einstein equations of general relativity, emerge from the renormalization group equations for the two-dimensional field theory. Schwarz and Green discovered T-duality and constructed two superstring theories related by it, IIA and IIB, as well as type I theories with open strings. The consistency conditions had been so strong that the entire theory was nearly uniquely determined, with only a few discrete choices.
=== First superstring revolution ===
In the early 1980s, Edward Witten discovered that most theories of quantum gravity could not accommodate chiral fermions like the neutrino. This led him, in collaboration with Luis Álvarez-Gaumé, to study violations of the conservation laws in gravity theories with anomalies, concluding that type I string theories were inconsistent. Green and Schwarz discovered a contribution to the anomaly that Witten and Alvarez-Gaumé had missed, which restricted the gauge group of the type I string theory to be SO(32). In coming to understand this calculation, Edward Witten became convinced that string theory was truly a consistent theory of gravity, and he became a high-profile advocate. Following Witten's lead, between 1984 and 1986, hundreds of physicists started to work in this field, and this is sometimes called the first superstring revolution.
During this period, David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm discovered heterotic strings. The gauge group of these closed strings was two copies of E8, and either copy could easily and naturally include the standard model. Philip Candelas, Gary Horowitz, Andrew Strominger and Edward Witten found that the Calabi–Yau manifolds are the compactifications that preserve a realistic amount of supersymmetry, while Lance Dixon and others worked out the physical properties of orbifolds, distinctive geometrical singularities allowed in string theory. Cumrun Vafa generalized T-duality from circles to arbitrary manifolds, creating the mathematical field of mirror symmetry. Daniel Friedan, Emil Martinec and Stephen Shenker further developed the covariant quantization of the superstring using conformal field theory techniques. David Gross and Vipul Periwal discovered that string perturbation theory was divergent. Stephen Shenker showed it diverged much faster than in field theory suggesting that new non-perturbative objects were missing.
In the 1990s, Joseph Polchinski discovered that the theory requires higher-dimensional objects, called D-branes and identified these with the black-hole solutions of supergravity. These were understood to be the new objects suggested by the perturbative divergences, and they opened up a new field with rich mathematical structure. It quickly became clear that D-branes and other p-branes, not just strings, formed the matter content of the string theories, and the physical interpretation of the strings and branes was revealed—they are a type of black hole. Leonard Susskind had incorporated the holographic principle of Gerardus 't Hooft into string theory, identifying the long highly excited string states with ordinary thermal black hole states. As suggested by 't Hooft, the fluctuations of the black hole horizon, the world-sheet or world-volume theory, describes not only the degrees of freedom of the black hole, but all nearby objects too.
=== Second superstring revolution ===
In 1995, at the annual conference of string theorists at the University of Southern California (USC), Edward Witten gave a speech on string theory that in essence united the five string theories that existed at the time and gave birth to a new 11-dimensional theory called M-theory. M-theory was also foreshadowed in the work of Paul Townsend at approximately the same time. The flurry of activity that began at this time is sometimes called the second superstring revolution.
During this period, Tom Banks, Willy Fischler, Stephen Shenker and Leonard Susskind formulated matrix theory, a full holographic description of M-theory using IIA D0 branes. This was the first definition of string theory that was fully non-perturbative and a concrete mathematical realization of the holographic principle. It is an example of a gauge-gravity duality and is now understood to be a special case of the AdS/CFT correspondence. Andrew Strominger and Cumrun Vafa calculated the entropy of certain configurations of D-branes and found agreement with the semi-classical answer for extreme charged black holes. Petr Hořava and Witten found the eleven-dimensional formulation of the heterotic string theories, showing that orbifolds solve the chirality problem. Witten noted that the effective description of the physics of D-branes at low energies is by a supersymmetric gauge theory, and found geometrical interpretations of mathematical structures in gauge theory that he and Nathan Seiberg had earlier discovered in terms of the location of the branes.
In 1997, Juan Maldacena noted that the low energy excitations of a theory near a black hole consist of objects close to the horizon, which for extreme charged black holes looks like an anti-de Sitter space. He noted that in this limit the gauge theory describes the string excitations near the branes. So he hypothesized that string theory on a near-horizon extreme-charged black-hole geometry, an anti-de Sitter space times a sphere with flux, is equally well described by the low-energy limiting gauge theory, the N = 4 supersymmetric Yang–Mills theory. This hypothesis, which is called the AdS/CFT correspondence, was further developed by Steven Gubser, Igor Klebanov and Alexander Polyakov, and by Edward Witten, and it is now well-accepted. It is a concrete realization of the holographic principle, which has far-reaching implications for black holes, locality and information in physics, as well as the nature of the gravitational interaction. Through this relationship, string theory has been shown to be related to gauge theories like quantum chromodynamics and this has led to a more quantitative understanding of the behavior of hadrons, bringing string theory back to its roots.
== Criticism ==
=== Number of solutions ===
To construct models of particle physics based on string theory, physicists typically begin by specifying a shape for the extra dimensions of spacetime. Each of these different shapes corresponds to a different possible universe, or "vacuum state", with a different collection of particles and forces. String theory as it is currently understood has an enormous number of vacuum states, typically estimated to be around 10^500, and these might be sufficiently diverse to accommodate almost any phenomenon that might be observed at low energies.
Many critics of string theory have expressed concerns about the large number of possible universes described by string theory. In his book Not Even Wrong, Peter Woit, a lecturer in the mathematics department at Columbia University, has argued that the large number of different physical scenarios renders string theory vacuous as a framework for constructing models of particle physics. According to Woit,
The possible existence of, say, 10^500 consistent different vacuum states for superstring theory probably destroys the hope of using the theory to predict anything. If one picks among this large set just those states whose properties agree with present experimental observations, it is likely there still will be such a large number of these that one can get just about whatever value one wants for the results of any new observation.
Some physicists believe this large number of solutions is actually a virtue because it may allow a natural anthropic explanation of the observed values of physical constants, in particular the small value of the cosmological constant. The anthropic principle is the idea that some of the numbers appearing in the laws of physics are not fixed by any fundamental principle but must be compatible with the evolution of intelligent life. In 1987, Steven Weinberg published an article in which he argued that the cosmological constant could not have been too large, or else galaxies and intelligent life would not have been able to develop. Weinberg suggested that there might be a huge number of possible consistent universes, each with a different value of the cosmological constant, and observations indicate a small value of the cosmological constant only because humans happen to live in a universe that has allowed intelligent life, and hence observers, to exist.
String theorist Leonard Susskind has argued that string theory provides a natural anthropic explanation of the small value of the cosmological constant. According to Susskind, the different vacuum states of string theory might be realized as different universes within a larger multiverse. The fact that the observed universe has a small cosmological constant is just a tautological consequence of the fact that a small value is required for life to exist. Many prominent theorists and critics have disagreed with Susskind's conclusions. According to Woit, "in this case [anthropic reasoning] is nothing more than an excuse for failure. Speculative scientific ideas fail not just when they make incorrect predictions, but also when they turn out to be vacuous and incapable of predicting anything."
=== Compatibility with dark energy ===
It remains unknown whether string theory is compatible with a metastable, positive cosmological constant.
Some putative examples of such solutions do exist, such as the model described by Kachru et al. in 2003. In 2018, a group of four physicists advanced a controversial conjecture which would imply that no such universe exists. This is contrary to some popular models of dark energy such as Λ-CDM, which requires a positive vacuum energy. However, string theory is likely compatible with certain types of quintessence, where dark energy is caused by a new field with exotic properties.
=== Background independence ===
One of the fundamental properties of Einstein's general theory of relativity is that it is background independent, meaning that the formulation of the theory does not in any way privilege a particular spacetime geometry.
One of the main criticisms of string theory from early on is that it is not manifestly background-independent. In string theory, one must typically specify a fixed reference geometry for spacetime, and all other possible geometries are described as perturbations of this fixed one. In his book The Trouble With Physics, physicist Lee Smolin of the Perimeter Institute for Theoretical Physics claims that this is the principal weakness of string theory as a theory of quantum gravity, saying that string theory has failed to incorporate this important insight from general relativity.
Others have disagreed with Smolin's characterization of string theory. In a review of Smolin's book, string theorist Joseph Polchinski writes
[Smolin] is mistaking an aspect of the mathematical language being used for one of the physics being described. New physical theories are often discovered using a mathematical language that is not the most suitable for them... In string theory, it has always been clear that the physics is background-independent even if the language being used is not, and the search for a more suitable language continues. Indeed, as Smolin belatedly notes, [AdS/CFT] provides a solution to this problem, one that is unexpected and powerful.
Polchinski notes that an important open problem in quantum gravity is to develop holographic descriptions of gravity which do not require the gravitational field to be asymptotically anti-de Sitter. Smolin has responded by saying that the AdS/CFT correspondence, as it is currently understood, may not be strong enough to resolve all concerns about background independence.
=== Sociology of science ===
Since the superstring revolutions of the 1980s and 1990s, string theory has been one of the dominant paradigms of high energy theoretical physics. Some string theorists have expressed the view that there does not exist an equally successful alternative theory addressing the deep questions of fundamental physics. In an interview from 1987, Nobel laureate David Gross made the following controversial comments about the reasons for the popularity of string theory:
The most important [reason] is that there are no other good ideas around. That's what gets most people into it. When people started to get interested in string theory they didn't know anything about it. In fact, the first reaction of most people is that the theory is extremely ugly and unpleasant, at least that was the case a few years ago when the understanding of string theory was much less developed. It was difficult for people to learn about it and to be turned on. So I think the real reason why people have got attracted by it is because there is no other game in town. All other approaches of constructing grand unified theories, which were more conservative to begin with, and only gradually became more and more radical, have failed, and this game hasn't failed yet.
Several other high-profile theorists and commentators have expressed similar views, suggesting that there are no viable alternatives to string theory.
Many critics of string theory have commented on this state of affairs. In his book criticizing string theory, Peter Woit views the status of string theory research as unhealthy and detrimental to the future of fundamental physics. He argues that the extreme popularity of string theory among theoretical physicists is partly a consequence of the financial structure of academia and the fierce competition for scarce resources. In his book The Road to Reality, mathematical physicist Roger Penrose expresses similar views, stating "The often frantic competitiveness that this ease of communication engenders leads to bandwagon effects, where researchers fear to be left behind if they do not join in." Penrose also claims that the technical difficulty of modern physics forces young scientists to rely on the preferences of established researchers, rather than forging new paths of their own. Lee Smolin expresses a slightly different position in his critique, claiming that string theory grew out of a tradition of particle physics which discourages speculation about the foundations of physics, while his preferred approach, loop quantum gravity, encourages more radical thinking. According to Smolin,
String theory is a powerful, well-motivated idea and deserves much of the work that has been devoted to it. If it has so far failed, the principal reason is that its intrinsic flaws are closely tied to its strengths—and, of course, the story is unfinished, since string theory may well turn out to be part of the truth. The real question is not why we have expended so much energy on string theory but why we haven't expended nearly enough on alternative approaches.
Smolin goes on to offer a number of prescriptions for how scientists might encourage a greater diversity of approaches to quantum gravity research.
== Notes ==
== References ==
=== Bibliography ===
== Further reading ==
=== Popular science ===
Greene, Brian (2003). The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. New York: W.W. Norton & Company. ISBN 978-0-393-05858-1.
Greene, Brian (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. New York: Alfred A. Knopf. Bibcode:2004fcst.book.....G. ISBN 978-0-375-41288-2.
Gubser, Steven Scott (2010). The Little Book of String Theory. Princeton, N. J.: Princeton University Press. ISBN 978-0-691-14289-0.
Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. Knopf. ISBN 978-0-679-45443-4.
Smolin, Lee (2006). The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. New York: Houghton Mifflin Co. ISBN 978-0-618-55105-7.
Woit, Peter (2006). Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law. New York: Basic Books. ISBN 978-0-465-09275-8. OCLC 67840232. UK edition published by Jonathan Cape, London, 2006.
=== Textbooks ===
Becker, K.; Becker, M.; Schwarz, J. H. (2006). String Theory and M-Theory: A Modern Introduction. Cambridge University Press. ISBN 978-0521860697.
Blumenhagen, R.; Lüst, D.; Theisen, S. (2012). Basic Concepts of String Theory. Springer. ISBN 978-3642294969.
Green, Michael; Schwarz, John; Witten, Edward (2012). Superstring Theory. Vol. 1: Introduction. Cambridge University Press. ISBN 978-1107029118.
Green, Michael; Schwarz, John; Witten, Edward (2012). Superstring Theory. Vol. 2: Loop amplitudes, anomalies and phenomenology. Cambridge University Press. ISBN 978-1107029132.
Ibáñez, L.E.; Uranga, A.M. (2012). String Theory and Particle Physics: An Introduction to String Phenomenology. Cambridge University Press. ISBN 978-0521517522.
Kiritsis, E. (2019). String Theory in a Nutshell. Princeton University Press. ISBN 978-0691155791.
Ortín, T. (2015). Gravity and Strings. Cambridge University Press. ISBN 978-0521768139.
Polchinski, Joseph (1998). String Theory. Vol. 1: An Introduction to the Bosonic String. Cambridge University Press. ISBN 978-0-521-63303-1.
Polchinski, Joseph (1998). String Theory. Vol. 2: Superstring Theory and Beyond. Cambridge University Press. ISBN 978-0-521-63304-8.
West, P. (2012). Introduction to Strings and Branes. Cambridge University Press. ISBN 978-0521817479.
Zwiebach, Barton (2009). A First Course in String Theory. Cambridge University Press. ISBN 978-0-521-88032-9.
== External links ==
BBC's Parallel Universes, 2002 feature documentary by BBC Horizon, episode "Parallel Universes", focuses on history and emergence of M-theory, and scientists involved.
Nova's The Elegant Universe, 2003 Emmy Award–winning, three-hour miniseries by Nova with Brian Greene, adapted from his The Elegant Universe (original PBS broadcast dates: October 28, 8–10 p.m. and November 4, 8–9 p.m., 2003).
In mathematics, operator theory is the study of linear operators on function spaces, beginning with differential operators and integral operators. The operators may be presented abstractly by their characteristics, such as bounded linear operators or closed operators, and consideration may be given to nonlinear operators. The study, which depends heavily on the topology of function spaces, is a branch of functional analysis.
If a collection of operators forms an algebra over a field, then it is an operator algebra. The description of operator algebras is part of operator theory.
== Single operator theory ==
Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification of normal operators in terms of their spectra falls into this category.
=== Spectrum of operators ===
The spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, eigenvalue decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.
==== Normal operators ====
A normal operator on a complex Hilbert space H is a continuous linear operator N : H → H that commutes with its Hermitian adjoint N*, that is: NN* = N*N.
Normal operators are important because the spectral theorem holds for them. Today, the class of normal operators is well understood. Examples of normal operators are
unitary operators: U* = U⁻¹
Hermitian operators (i.e., self-adjoint operators): N* = N (also, anti-self-adjoint operators: N* = −N)
positive operators: N = M*M, where M is any operator
normal matrices, which can be seen as normal operators if one takes the Hilbert space to be C^n.
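For matrices, the defining relation NN* = N*N can be verified directly. A minimal NumPy sketch (the matrices below are illustrative choices, not taken from the text):

```python
import numpy as np

def is_normal(n, tol=1e-12):
    """A matrix is normal when it commutes with its conjugate transpose."""
    return np.allclose(n @ n.conj().T, n.conj().T @ n, atol=tol)

# Unitary: U* = U^{-1} (a plane rotation)
theta = 0.7
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Hermitian: N* = N
h = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# Positive: N = M*M for an arbitrary M
m = np.array([[1.0, 2.0], [0.0, 1j]])
p = m.conj().T @ m

# A generic matrix (a Jordan block) is typically not normal
g = np.array([[1.0, 1.0], [0.0, 1.0]])

print([is_normal(x) for x in (u, h, p)])   # all three are normal
print(is_normal(g))                        # the Jordan block is not
```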
The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A*A = AA*. One can show that A is normal if and only if it is unitarily diagonalizable: by the Schur decomposition, we have A = UTU*, where U is unitary and T upper triangular. Since A is normal, T*T = TT*. Therefore, T must be diagonal, since normal upper triangular matrices are diagonal. The converse is obvious.
In other words, A is normal if and only if there exists a unitary matrix U such that A = UDU*, where D is a diagonal matrix. The entries of the diagonal of D are then the eigenvalues of A. The column vectors of U are the eigenvectors of A, and they are orthonormal. Unlike the Hermitian case, the entries of D need not be real.
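This equivalence can be checked numerically: any matrix built as A = UDU* from a unitary U and a diagonal D is normal, and its eigenvalues recover the diagonal of D, which may be complex. A NumPy sketch with an arbitrary illustrative D:

```python
import numpy as np

rng = np.random.default_rng(1)

# QR of a random complex matrix yields a unitary Q
q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
d = np.diag([1 + 2j, -0.5j, 3.0])       # complex entries: D need not be real
a = q @ d @ q.conj().T                  # A = U D U* is normal by construction

# A commutes with its adjoint ...
print(np.allclose(a @ a.conj().T, a.conj().T @ a))   # True

# ... and its eigenvalues are exactly the diagonal of D
ev = np.linalg.eigvals(a)
key = lambda z: (z.real, z.imag)
print(np.allclose(sorted(ev, key=key), sorted(np.diag(d), key=key)))   # True
```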
=== Polar decomposition ===
The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.
The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.
The operator U must be weakened to a partial isometry, rather than unitary, because of the following issues. If A is the one-sided shift on l^2(N), then |A| = (A*A)^(1/2) = I. So if A = U |A|, U must be A, which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma: if A and B are bounded operators on a Hilbert space with A*A ≤ B*B, then there exists a bounded operator C with ‖C‖ ≤ 1 such that A = CB.
The operator C can be defined by C(Bh) = Ah, extended by continuity to the closure of Ran(B), and by zero on the orthogonal complement of Ran(B). The operator C is well-defined since A*A ≤ B*B implies Ker(B) ⊂ Ker(A). The lemma then follows.
In particular, if A*A = B*B, then C is a partial isometry, which is unique if Ker(B*) ⊂ Ker(C).
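In finite dimensions the construction of C can be sketched concretely. When B happens to be invertible (a special case chosen for simplicity), C = AB⁻¹; the pseudoinverse used below reduces to the inverse here, and in general it implements "zero on the orthogonal complement of Ran(B)". The matrices are illustrative:

```python
import numpy as np

# Douglas' lemma, finite-dimensional sketch: if A*A <= B*B there is a
# contraction C with A = CB.
a = np.array([[1.0, 1.0], [0.0, 1.0]])
b = 2.0 * np.eye(2)

# Hypothesis A*A <= B*B, i.e. B*B - A*A is positive semidefinite
gap = b.conj().T @ b - a.conj().T @ a
print(np.all(np.linalg.eigvalsh(gap) >= -1e-12))    # True

c = a @ np.linalg.pinv(b)        # C = A B^+  (here simply A B^{-1})
print(np.allclose(c @ b, a))                        # A = CB
print(np.linalg.norm(c, 2) <= 1.0)                  # C is a contraction
```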
In general, for any bounded operator A,
A*A = (A*A)^(1/2) (A*A)^(1/2),
where (A*A)^(1/2) is the unique positive square root of A*A given by the usual functional calculus. So by the lemma, we have
A = U(A*A)^(1/2)
for some partial isometry U, which is unique if Ker(A) ⊂ Ker(U). (Note Ker(A) = Ker(A*A) = Ker(B) = Ker(B*), where B = B* = (A*A)^(1/2).) Take P to be (A*A)^(1/2) and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P′U′, where P′ is positive and U′ a partial isometry.
When H is finite dimensional, U can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.
By property of the continuous functional calculus, |A| is in the C*-algebra generated by A. A similar but weaker statement holds for the partial isometry: the polar part U is in the von Neumann algebra generated by A. If A is invertible, U will be in the C*-algebra generated by A as well.
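For matrices, the polar decomposition can be computed from the singular value decomposition mentioned above: writing A = WΣV*, one may take U = WV* and P = VΣV* = (A*A)^(1/2). A NumPy sketch, using an invertible square A so that U comes out unitary rather than merely a partial isometry:

```python
import numpy as np

def polar(a):
    """Polar decomposition A = U P via the SVD A = W S V*."""
    w, s, vh = np.linalg.svd(a)
    u = w @ vh                            # unitary factor
    p = vh.conj().T @ np.diag(s) @ vh     # positive part, P = (A*A)^(1/2)
    return u, p

a = np.array([[1.0, 2.0], [3.0, 4.0]])
u, p = polar(a)

print(np.allclose(u @ p, a))                     # A = U P
print(np.allclose(u.conj().T @ u, np.eye(2)))    # U is unitary here
print(np.all(np.linalg.eigvalsh(p) >= -1e-12))   # P is positive semidefinite
```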
=== Connection with complex analysis ===
Many operators that are studied are operators on Hilbert spaces of holomorphic functions, and the study
of the operator is intimately linked to questions in function theory.
For example, Beurling's theorem describes the invariant subspaces of the unilateral shift in terms of inner functions, which are bounded holomorphic functions on the unit disk with unimodular boundary values almost everywhere on the circle. Beurling interpreted the unilateral shift as multiplication by the independent variable on the Hardy space. The success in studying multiplication operators, and more generally Toeplitz operators (which are multiplication, followed by projection onto the Hardy space) has inspired the study of similar questions on other spaces, such as the Bergman space.
== Operator algebras ==
The theory of operator algebras brings algebras of operators such as C*-algebras to the fore.
=== C*-algebras ===
A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map * : A → A. One writes x* for the image of an element x of A. The map * has the following properties:
It is an involution: for every x in A,
{\displaystyle x^{**}=(x^{*})^{*}=x}
For all x, y in A:
{\displaystyle (x+y)^{*}=x^{*}+y^{*}}
{\displaystyle (xy)^{*}=y^{*}x^{*}}
For every λ in C and every x in A:
{\displaystyle (\lambda x)^{*}={\overline {\lambda }}x^{*}.}
For all x in A:
{\displaystyle \|x^{*}x\|=\left\|x\right\|\left\|x^{*}\right\|.}
Remark. The first three identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to:
{\displaystyle \|xx^{*}\|=\|x\|^{2},}
The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:
{\displaystyle \|x\|^{2}=\|x^{*}x\|=\sup\{|\lambda |:x^{*}x-\lambda \,1{\text{ is not invertible}}\}.}
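The C*-identity can be checked numerically (an illustrative sketch, not from the article) for the algebra of complex matrices equipped with the operator norm; the Frobenius norm, by contrast, fails it:

```python
import numpy as np

# Checking ||x* x|| = ||x||^2 for complex matrices with the spectral norm.
rng = np.random.default_rng(1)
x = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

op_norm = lambda a: np.linalg.norm(a, 2)   # largest singular value
assert np.isclose(op_norm(x.conj().T @ x), op_norm(x) ** 2)

# The spectral-radius formulation: since x*x is self-adjoint, its norm equals
# the largest |eigenvalue|, so the norm is fixed by the algebraic structure.
assert np.isclose(op_norm(x) ** 2,
                  np.abs(np.linalg.eigvalsh(x.conj().T @ x)).max())

# The Frobenius norm does NOT satisfy the C*-identity (try x = identity):
fro = np.linalg.norm
assert not np.isclose(fro(np.eye(2).T @ np.eye(2)), fro(np.eye(2)) ** 2)
```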
== See also ==
Invariant subspace
Functional calculus
Spectral theory
Resolvent formalism
Compact operator
Fredholm theory of integral equations
Integral operator
Fredholm operator
Self-adjoint operator
Unbounded operator
Differential operator
Umbral calculus
Contraction mapping
Positive operator on a Hilbert space
Nonnegative operator on a partially ordered vector space
== References ==
== Further reading ==
Conway, J. B.: A Course in Functional Analysis, 2nd edition, Springer-Verlag, 1994, ISBN 0-387-97245-5
Yoshino, Takashi (1993). Introduction to Operator Theory. Chapman and Hall/CRC. ISBN 978-0582237438.
== External links ==
History of Operator Theory | Wikipedia/Operator_theory |
In physics, an effective field theory is a type of approximation, or effective theory, for an underlying physical theory, such as a quantum field theory or a statistical mechanics model. An effective field theory includes the appropriate degrees of freedom to describe physical phenomena occurring at a chosen length scale or energy scale, while ignoring substructure and degrees of freedom at shorter distances (or, equivalently, at higher energies). Intuitively, one averages over the behavior of the underlying theory at shorter length scales to derive what is hoped to be a simplified model at longer length scales. Effective field theories typically work best when there is a large separation between the length scale of interest and the length scale of the underlying dynamics. Effective field theories have found use in particle physics, statistical mechanics, condensed matter physics, general relativity, and hydrodynamics. They simplify calculations, and allow treatment of dissipation and radiation effects.
== Renormalization group ==
Presently, effective field theories are discussed in the context of the renormalization group (RG), where the process of integrating out short-distance degrees of freedom is made systematic. Although this method is not sufficiently concrete to allow the actual construction of effective field theories, the gross understanding of their usefulness becomes clear through an RG analysis. This method also lends credence to the main technique of constructing effective field theories, through the analysis of symmetries. If there is a single energy scale M in the microscopic theory, then the effective field theory can be seen as an expansion in 1/M. The construction of an effective field theory accurate to some power of 1/M requires a new set of free parameters at each order of the expansion in 1/M. This technique is useful for scattering or other processes where the maximum momentum scale k satisfies the condition |k|/M ≪ 1. Since effective field theories are not valid at small length scales, they need not be renormalizable. Indeed, the ever-expanding number of parameters at each order in 1/M means that effective field theories are generally not renormalizable in the same sense as quantum electrodynamics, which requires only the renormalization of two parameters (the fine-structure constant and the electron mass).
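The 1/M expansion can be illustrated with a toy matching calculation (an illustration, not from the article; the scales M and k are hypothetical): a heavy Euclidean propagator 1/(k² + M²) is reproduced order by order by local EFT terms, with errors suppressed by powers of k²/M²:

```python
# Toy "integrating out" of a heavy propagator: for |k| << M,
# 1/(k^2 + M^2) = (1/M^2) * sum_n (-k^2/M^2)^n, truncated at a finite order.
M, k = 100.0, 5.0                 # hypothetical scales, k/M = 0.05
full = 1.0 / (k**2 + M**2)

def eft(order):
    """EFT approximation keeping terms through (k^2/M^2)^order."""
    return sum((-k**2 / M**2) ** n for n in range(order + 1)) / M**2

for order in range(3):
    rel_err = abs(eft(order) - full) / full
    print(order, rel_err)         # each order gains a factor ~ (k/M)^2 = 2.5e-3
```

Each new order requires a new local coefficient, mirroring the growing set of free parameters described above.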
== Examples ==
=== Fermi theory of beta decay ===
The best-known example of an effective field theory is the Fermi theory of beta decay. This theory was developed during the early study of weak decays of nuclei when only the hadrons and leptons undergoing weak decay were known. The typical reactions studied were:
{\displaystyle {\begin{aligned}n&\to p+e^{-}+{\overline {\nu }}_{e}\\\mu ^{-}&\to e^{-}+{\overline {\nu }}_{e}+\nu _{\mu }.\end{aligned}}}
This theory posited a pointlike interaction between the four fermions involved in these reactions. The theory had great phenomenological success and was eventually understood to arise from the gauge theory of electroweak interactions, which forms a part of the standard model of particle physics. In this more fundamental theory, the interactions are mediated by a flavour-changing gauge boson, the W±. The immense success of the Fermi theory was because the W particle has a mass of about 80 GeV, whereas the early experiments were all done at an energy scale of less than 10 MeV. Such a separation of scales, by over 3 orders of magnitude, has not been met in any other situation as yet.
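A rough numerical check of this matching (the coupling value g ≈ 0.65 is an approximate figure supplied here, not taken from the article) uses the tree-level relation G_F/√2 = g²/(8 M_W²):

```python
import math

# Matching the Fermi constant onto the electroweak theory (approximate values).
g = 0.65            # SU(2) gauge coupling, approximate
M_W = 80.4          # W boson mass in GeV, approximate
G_F = math.sqrt(2) * g**2 / (8 * M_W**2)
print(G_F)          # close to the measured value ~1.166e-5 GeV^-2

# The scale separation makes corrections to the pointlike interaction tiny:
E = 0.01            # ~10 MeV beta-decay energies, in GeV
print((E / M_W)**2) # relative size of the leading correction, ~1.5e-8
```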
=== BCS theory of superconductivity ===
Another famous example is the BCS theory of superconductivity. Here the underlying theory is the theory of electrons in a metal interacting with lattice vibrations called phonons. The phonons cause attractive interactions between some electrons, causing them to form Cooper pairs. The length scale of these pairs is much larger than the wavelength of phonons, making it possible to neglect the dynamics of phonons and construct a theory in which two electrons effectively interact at a point. This theory has had remarkable success in describing and predicting the results of experiments on superconductivity.
=== Gravitational field theories ===
General relativity (GR) itself is expected to be the low energy effective field theory of a full theory of quantum gravity, such as string theory or loop quantum gravity. The expansion scale is the Planck mass.
Effective field theories have also been used to simplify problems in general relativity, in particular in calculating the gravitational wave signature of inspiralling finite-sized objects. The most common EFT in GR is non-relativistic general relativity (NRGR), which is similar to the post-Newtonian expansion. Another common GR EFT is the extreme mass ratio (EMR), which in the context of the inspiralling problem is called extreme mass ratio inspiral.
=== Other examples ===
Presently, effective field theories are written for many situations.
One major branch of nuclear physics is quantum hadrodynamics, where the interactions of hadrons are treated as a field theory, which should be derivable from the underlying theory of quantum chromodynamics (QCD). Quantum hadrodynamics is the theory of the nuclear force, just as quantum chromodynamics is the theory of the strong interaction and quantum electrodynamics is the theory of the electromagnetic force. Due to the smaller separation of length scales here, this effective theory has some classificatory power, but not the spectacular success of the Fermi theory.
In particle physics the effective field theory of QCD called chiral perturbation theory has had better success. This theory deals with the interactions of hadrons with pions or kaons, which are the Goldstone bosons of spontaneous chiral symmetry breaking. The expansion parameter is the pion energy/momentum.
For hadrons containing one heavy quark (such as the bottom or charm), an effective field theory which expands in powers of the quark mass, called the heavy quark effective theory (HQET), has been found useful.
For hadrons containing two heavy quarks, an effective field theory which expands in powers of the relative velocity of the heavy quarks, called non-relativistic QCD (NRQCD), has been found useful, especially when used in conjunction with lattice QCD.
For hadron reactions with light energetic (collinear) particles, the interactions with low-energetic (soft) degrees of freedom are described by the soft-collinear effective theory (SCET).
Much of condensed matter physics consists of writing effective field theories for the particular property of matter being studied.
Dissipationless hydrodynamics can also be treated using effective field theories.
== See also ==
Form factor (quantum field theory)
Renormalization group
Quantum field theory
Quantum triviality
Ginzburg–Landau theory
== References ==
== Books ==
A.A. Petrov and A. Blechman, Effective Field Theories, Singapore: World Scientific (2016). ISBN 978-981-4434-92-8
C.P. Burgess, Introduction to Effective Field Theory, Cambridge University Press (2020). ISBN 978-052-1195-47-8
== External links ==
Birnholtz, Ofek; Hadar, Shahar; Kol, Barak (1998). "Effective Field Theory". arXiv:hep-ph/9806303.
Hartmann, Stephan (2001). "Effective Field Theories, Reductionism and Scientific Explanation" (PDF). Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 32 (2): 267–304. Bibcode:2001SHPMP..32..267H. doi:10.1016/S1355-2198(01)00005-3.
Birnholtz, Ofek; Hadar, Shahar; Kol, Barak (1997). "Aspects of Heavy Quark Theory". Annual Review of Nuclear and Particle Science. 47: 591–661. arXiv:hep-ph/9703290. Bibcode:1997ARNPS..47..591B. doi:10.1146/annurev.nucl.47.1.591. S2CID 13843227.
Effective field theory (Interactions, Symmetry Breaking and Effective Fields - from Quarks to Nuclei. an Internet Lecture by Jacek Dobaczewski) | Wikipedia/Effective_field_theory |
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation.
Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is represented by a magnitude alone, whereas a vector is represented by both a magnitude and a direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems.
Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up; thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics.
Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries.
Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory.
Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory.
The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of particle mechanics.
== Motivation ==
The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system.
Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle" understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation.
When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others, and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such a simple system as the rotation of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description.
The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system.
Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted.
Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion.
It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions together with t determine the coordinates at t. This is especially true at present with the modern methods of computer modelling, which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations.
Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves.
Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed.
Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed.
== Intrinsic motion ==
=== Generalized coordinates and constraints ===
In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3, ...).
=== Difference between curvilinear and generalized coordinates ===
Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule: for n particles in 3-dimensional space subject to C holonomic constraints, the number of degrees of freedom is N = 3n − C.
For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple:
{\displaystyle \mathbf {q} =(q_{1},q_{2},\dots ,q_{N})}
and the time derivative (here denoted by an overdot) of this tuple give the generalized velocities:
{\displaystyle {\frac {d\mathbf {q} }{dt}}=\left({\frac {dq_{1}}{dt}},{\frac {dq_{2}}{dt}},\dots ,{\frac {dq_{N}}{dt}}\right)\equiv \mathbf {\dot {q}} =({\dot {q}}_{1},{\dot {q}}_{2},\dots ,{\dot {q}}_{N}).}
=== D'Alembert's principle of virtual work ===
D'Alembert's principle states that the infinitesimal virtual work done by a force across reversible displacements is zero; this is the work done by forces consistent with the ideal constraints of the system. The idea of a constraint is useful – since this limits what the system can do, and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is:
{\displaystyle \delta W={\boldsymbol {\mathcal {Q}}}\cdot \delta \mathbf {q} =0\,,}
where
{\displaystyle {\boldsymbol {\mathcal {Q}}}=({\mathcal {Q}}_{1},{\mathcal {Q}}_{2},\dots ,{\mathcal {Q}}_{N})}
are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and q are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics:
{\displaystyle {\boldsymbol {\mathcal {Q}}}={\frac {d}{dt}}\left({\frac {\partial T}{\partial \mathbf {\dot {q}} }}\right)-{\frac {\partial T}{\partial \mathbf {q} }}\,,}
where T is the total kinetic energy of the system, and the notation
{\displaystyle {\frac {\partial }{\partial \mathbf {q} }}=\left({\frac {\partial }{\partial q_{1}}},{\frac {\partial }{\partial q_{2}}},\dots ,{\frac {\partial }{\partial q_{N}}}\right)}
is a useful shorthand (see matrix calculus for this notation).
=== Constraints ===
If the curvilinear coordinate system is defined by the standard position vector r, and if the position vector can be written in terms of the generalized coordinates q and time t in the form:
{\displaystyle \mathbf {r} =\mathbf {r} (\mathbf {q} (t),t)}
and this relation holds for all times t, then the constraints are called holonomic. Vector r is explicitly dependent on t in cases when the constraints vary with time, not just because of q(t). For time-independent situations, the constraints are also called scleronomic, for time-dependent cases they are called rheonomic.
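A concrete holonomic, scleronomic example (an illustrative sketch, with a hypothetical pendulum length l = 2) is a planar pendulum: the position vector r is a function of the single generalized coordinate θ alone, and the constraint |r| = l holds identically:

```python
import numpy as np

# Planar pendulum: two Cartesian coordinates, one degree of freedom.
l = 2.0  # pendulum length (hypothetical value)

def r(theta):
    """Cartesian position r = r(q(t)) from the generalized coordinate theta."""
    return np.array([l * np.sin(theta), -l * np.cos(theta)])

# The holonomic constraint |r| = l is satisfied for every configuration,
# so the single coordinate theta suffices to describe the motion.
for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    assert np.isclose(np.linalg.norm(r(theta)), l)
```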
== Lagrangian mechanics ==
The introduction of generalized coordinates and the fundamental Lagrangian function:
{\displaystyle L(\mathbf {q} ,\mathbf {\dot {q}} ,t)=T(\mathbf {q} ,\mathbf {\dot {q}} ,t)-V(\mathbf {q} ,\mathbf {\dot {q}} ,t)}
where T is the total kinetic energy and V is the total potential energy of the entire system. Either following the calculus of variations or using the above formula leads to the Euler–Lagrange equations:
{\displaystyle {\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right)={\frac {\partial L}{\partial \mathbf {q} }}\,,}
which are a set of N second-order ordinary differential equations, one for each qi(t).
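As a concrete instance (a symbolic sketch using SymPy; the pendulum Lagrangian is an illustrative choice, not taken from the article), the Euler–Lagrange equation for a pendulum with one generalized coordinate θ reproduces the familiar equation of motion:

```python
import sympy as sp

# Pendulum: L = (1/2) m l^2 thetadot^2 + m g l cos(theta).
t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)
thetadot = sp.diff(theta, t)

L = sp.Rational(1, 2) * m * l**2 * thetadot**2 + m * g * l * sp.cos(theta)

# Euler–Lagrange: d/dt(dL/d(thetadot)) - dL/dtheta = 0
eom = sp.diff(sp.diff(L, thetadot), t) - sp.diff(L, theta)
print(sp.simplify(eom))   # equals m l^2 theta'' + m g l sin(theta)
```

Setting the printed expression to zero gives θ″ = −(g/l) sin θ, one second-order ODE for the single coordinate θ(t).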
This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit.
The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates:
{\displaystyle {\mathcal {C}}=\{\mathbf {q} \in \mathbb {R} ^{N}\}\,,}
where
{\displaystyle \mathbb {R} ^{N}}
is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time:
{\displaystyle \{\mathbf {q} (t)\in \mathbb {R} ^{N}\,:\,t\geq 0,t\in \mathbb {R} \}\subseteq {\mathcal {C}}\,,}
The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle.
== Hamiltonian mechanics ==
The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates:
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial \mathbf {\dot {q}} }}=\left({\frac {\partial L}{\partial {\dot {q}}_{1}}},{\frac {\partial L}{\partial {\dot {q}}_{2}}},\cdots {\frac {\partial L}{\partial {\dot {q}}_{N}}}\right)=(p_{1},p_{2}\cdots p_{N})\,,}
and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta):
{\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)=\mathbf {p} \cdot \mathbf {\dot {q}} -L(\mathbf {q} ,\mathbf {\dot {q}} ,t)}
where
{\displaystyle \cdot }
denotes the dot product, also leading to Hamilton's equations:
{\displaystyle \mathbf {\dot {p}} =-{\frac {\partial H}{\partial \mathbf {q} }}\,,\quad \mathbf {\dot {q}} =+{\frac {\partial H}{\partial \mathbf {p} }}\,,}
which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian:
{\displaystyle {\frac {dH}{dt}}=-{\frac {\partial L}{\partial t}}\,,}
which is often considered one of Hamilton's equations of motion additionally to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law:
{\displaystyle \mathbf {\dot {p}} ={\boldsymbol {\mathcal {Q}}}\,.}
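Hamilton's 2N first-order equations lend themselves to direct numerical integration; a minimal sketch (a harmonic oscillator with m = k = 1, chosen here for illustration) uses a semi-implicit (symplectic) Euler step, which respects the phase-space structure well enough to keep H nearly constant:

```python
# 1-D harmonic oscillator, H = p^2/2 + q^2/2:
# dq/dt = +dH/dp = p,   dp/dt = -dH/dq = -q.
def hamilton_step(q, p, dt):
    """One semi-implicit Euler step of Hamilton's equations."""
    p = p - dt * q          # dp/dt = -dH/dq
    q = q + dt * p          # dq/dt = +dH/dp, using the updated p
    return q, p

q, p, dt = 1.0, 0.0, 1e-3
H0 = 0.5 * p**2 + 0.5 * q**2
for _ in range(10_000):
    q, p = hamilton_step(q, p, dt)
H1 = 0.5 * p**2 + 0.5 * q**2
print(abs(H1 - H0))         # energy drift stays tiny over many periods
```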
Analogous to the configuration space, the set of all momenta is the generalized momentum space:
{\displaystyle {\mathcal {M}}=\{\mathbf {p} \in \mathbb {R} ^{N}\}\,.}
("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves)
The set of all positions and momenta form the phase space:
{\displaystyle {\mathcal {P}}={\mathcal {C}}\times {\mathcal {M}}=\{(\mathbf {q} ,\mathbf {p} )\in \mathbb {R} ^{2N}\}\,,}
that is, the Cartesian product of the configuration space and generalized momentum space.
A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait:
{\displaystyle \{(\mathbf {q} (t),\mathbf {p} (t))\in \mathbb {R} ^{2N}\,:\,t\geq 0,t\in \mathbb {R} \}\subseteq {\mathcal {P}}\,,}
=== The Poisson bracket ===
All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta:
{\displaystyle {\begin{aligned}\{A,B\}\equiv \{A,B\}_{\mathbf {q} ,\mathbf {p} }&={\frac {\partial A}{\partial \mathbf {q} }}\cdot {\frac {\partial B}{\partial \mathbf {p} }}-{\frac {\partial A}{\partial \mathbf {p} }}\cdot {\frac {\partial B}{\partial \mathbf {q} }}\\&\equiv \sum _{k}{\frac {\partial A}{\partial q_{k}}}{\frac {\partial B}{\partial p_{k}}}-{\frac {\partial A}{\partial p_{k}}}{\frac {\partial B}{\partial q_{k}}}\,,\end{aligned}}}
Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A:
{\displaystyle {\frac {dA}{dt}}=\{A,H\}+{\frac {\partial A}{\partial t}}\,.}
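The bracket is easy to check symbolically (a sketch using SymPy, with the harmonic-oscillator Hamiltonian chosen as an illustration): taking A = q or A = p in the evolution equation recovers Hamilton's equations, and {H, H} = 0 expresses energy conservation:

```python
import sympy as sp

# Poisson bracket for one degree of freedom.
q, p = sp.symbols('q p')

def poisson(A, B):
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H = p**2 / 2 + q**2 / 2

assert poisson(q, H) == p                 # dq/dt = {q, H} = +dH/dp
assert poisson(p, H) == -q                # dp/dt = {p, H} = -dH/dq
assert sp.simplify(poisson(H, H)) == 0    # H has no explicit t-dependence,
                                          # so dH/dt = {H, H} = 0: conserved
```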
This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization:
{\displaystyle \{A,B\}\rightarrow {\frac {1}{i\hbar }}[{\hat {A}},{\hat {B}}]\,.}
== Properties of the Lagrangian and the Hamiltonian ==
Following are overlapping properties between the Lagrangian and Hamiltonian functions.
All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence.
The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is:
{\displaystyle L'=L+{\frac {d}{dt}}F(\mathbf {q} ,t)\,,}
so the Lagrangians L and L′ describe exactly the same motion. In other words, the Lagrangian of a system is not unique.
Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is:
{\displaystyle K=H+{\frac {\partial }{\partial t}}G(\mathbf {q} ,\mathbf {p} ,t)\,,}
(K is a frequently used letter in this case). This property is used in canonical transformations (see below).
If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved; this immediately follows from Lagrange's equations:
{\displaystyle {\frac {\partial L}{\partial q_{j}}}=0\,\rightarrow \,{\frac {dp_{j}}{dt}}={\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0}
Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates.
If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time).
If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, and the Lagrangian is explicitly time-independent, then:
{\displaystyle T((\lambda {\dot {q}}_{i})^{2},(\lambda {\dot {q}}_{j}\lambda {\dot {q}}_{k}),\mathbf {q} )=\lambda ^{2}T(({\dot {q}}_{i})^{2},{\dot {q}}_{j}{\dot {q}}_{k},\mathbf {q} )\,,\quad L(\mathbf {q} ,\mathbf {\dot {q}} )\,,}
where λ is a constant, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system:
{\displaystyle H=T+V=E\,.}
This is the basis of the Schrödinger equation, which is obtained by inserting the corresponding quantum operators directly.
== Principle of least action ==
Action is another quantity in analytical mechanics defined as a functional of the Lagrangian:
\mathcal{S} = \int_{t_{1}}^{t_{2}} L(\mathbf{q},\mathbf{\dot{q}},t)\, dt\,.
A general way to find the equations of motion from the action is the principle of least action:
\delta \mathcal{S} = \delta \int_{t_{1}}^{t_{2}} L(\mathbf{q},\mathbf{\dot{q}},t)\, dt = 0\,,
where the departure t1 and arrival t2 times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space \mathcal{C}, in other words q(t) tracing out a path in \mathcal{C}. The path for which the action is least is the path taken by the system.
From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics, and is used for calculating geodesic motion in general relativity.
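The stationarity of the action can be seen directly in a discretized toy problem. The sketch below (an illustrative setup, not from the text) takes 1-D free fall with both endpoints pinned at zero: the classical arc x(t) = (g/2)t(T − t) should give a smaller discretized action than any nearby path obtained by adding a perturbation that vanishes at the endpoints.

```python
import math

# Discretized action S = sum_i [ (m/2) v_i^2 - m g x_i ] dt for 1-D free
# fall, L = (m/2) x'^2 - m g x, with fixed endpoints x(0) = x(T) = 0.
# For this Lagrangian the classical path is a genuine minimum of S.

m, g, T, N = 1.0, 9.81, 1.0, 200
dt = T / N
ts = [i * dt for i in range(N + 1)]

def action(path):
    S = 0.0
    for i in range(N):
        v = (path[i + 1] - path[i]) / dt        # forward-difference velocity
        S += (0.5 * m * v * v - m * g * path[i]) * dt
    return S

true_path = [0.5 * g * t * (T - t) for t in ts]  # solves x'' = -g
S_true = action(true_path)                       # ~ -g^2 T^3 / 24

worse = 0
for k in (1, 2, 3):                              # perturbations eps sin(k pi t / T)
    for eps in (0.05, -0.05, 0.2):
        pert = [x + eps * math.sin(k * math.pi * t / T)
                for x, t in zip(true_path, ts)]
        if action(pert) > S_true:
            worse += 1
print(S_true, worse)                             # every perturbed path costs more
```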
== Hamilton–Jacobi mechanics ==
=== Canonical transformations ===
The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways:
K(\mathbf{Q},\mathbf{P},t) = H(\mathbf{q},\mathbf{p},t) + \frac{\partial}{\partial t}G_{1}(\mathbf{q},\mathbf{Q},t)
K(\mathbf{Q},\mathbf{P},t) = H(\mathbf{q},\mathbf{p},t) + \frac{\partial}{\partial t}G_{2}(\mathbf{q},\mathbf{P},t)
K(\mathbf{Q},\mathbf{P},t) = H(\mathbf{q},\mathbf{p},t) + \frac{\partial}{\partial t}G_{3}(\mathbf{p},\mathbf{Q},t)
K(\mathbf{Q},\mathbf{P},t) = H(\mathbf{q},\mathbf{p},t) + \frac{\partial}{\partial t}G_{4}(\mathbf{p},\mathbf{P},t)
With the restriction on P and Q such that the transformed Hamiltonian system is:
\mathbf{\dot{P}} = -\frac{\partial K}{\partial \mathbf{Q}}\,, \quad \mathbf{\dot{Q}} = +\frac{\partial K}{\partial \mathbf{P}}\,,
the above transformations are called canonical transformations. Each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem.
The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket be unity,
\{Q_{i},P_{i}\} = 1
for all i = 1, 2,...N. If this does not hold then the transformation is not canonical.
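This criterion is easy to test numerically with finite differences. The transformation below is a made-up example, not one from the text: Q = q², P = p/(2q) has {Q, P} = (∂Q/∂q)(∂P/∂p) − (∂Q/∂p)(∂P/∂q) = 2q · 1/(2q) = 1, so it is canonical, while Q = q², P = p² is not.

```python
# Finite-difference check of the canonicity criterion {Q, P} = 1
# for hypothetical one-degree-of-freedom transformations.

def poisson_bracket(Q, P, q, p, h=1e-6):
    # Central differences for the four partial derivatives.
    dQdq = (Q(q + h, p) - Q(q - h, p)) / (2 * h)
    dQdp = (Q(q, p + h) - Q(q, p - h)) / (2 * h)
    dPdq = (P(q + h, p) - P(q - h, p)) / (2 * h)
    dPdp = (P(q, p + h) - P(q, p - h)) / (2 * h)
    return dQdq * dPdp - dQdp * dPdq

pb_good = poisson_bracket(lambda q, p: q * q,
                          lambda q, p: p / (2 * q), q=1.3, p=0.7)
pb_bad = poisson_bracket(lambda q, p: q * q,
                         lambda q, p: p * p, q=1.3, p=0.7)
print(pb_good)  # ~1.0: canonical
print(pb_bad)   # 4*q*p = 3.64: not canonical
```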
=== The Hamilton–Jacobi equation ===
By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also the action \mathcal{S}) plus an arbitrary constant C:
G_{2}(\mathbf{q},t) = \mathcal{S}(\mathbf{q},t) + C\,,
the generalized momenta become:
\mathbf{p} = \frac{\partial \mathcal{S}}{\partial \mathbf{q}}
and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation:
H = -\frac{\partial \mathcal{S}}{\partial t}
where H is the Hamiltonian as before:
H = H(\mathbf{q},\mathbf{p},t) = H\left(\mathbf{q},\frac{\partial \mathcal{S}}{\partial \mathbf{q}},t\right)
Another related function is Hamilton's characteristic function
W(\mathbf{q}) = \mathcal{S}(\mathbf{q},t) + Et
used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H.
The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.
== Routhian mechanics ==
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ... ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN − s, they can be removed by introducing the Routhian:
R = \mathbf{p}\cdot\mathbf{\dot{q}} - L(\mathbf{q},\mathbf{p},{\boldsymbol{\zeta}},{\dot{\boldsymbol{\zeta}}})\,,
which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q,
\dot{\mathbf{q}} = +\frac{\partial R}{\partial \mathbf{p}}\,, \quad \dot{\mathbf{p}} = -\frac{\partial R}{\partial \mathbf{q}}\,,
and N − s Lagrangian equations in the non-cyclic coordinates ζ:
\frac{d}{dt}\frac{\partial R}{\partial {\dot{\boldsymbol{\zeta}}}} = \frac{\partial R}{\partial {\boldsymbol{\zeta}}}\,.
Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom.
The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non-cyclic coordinates to the Lagrangian equations of motion.
== Appellian mechanics ==
Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates:
\alpha_{r} = {\ddot{q}}_{r} = \frac{d^{2}q_{r}}{dt^{2}}\,,
as well as generalized forces mentioned above in D'Alembert's principle. The equations are
\mathcal{Q}_{r} = \frac{\partial S}{\partial \alpha_{r}}\,, \quad S = \frac{1}{2}\sum_{k=1}^{N} m_{k}\mathbf{a}_{k}^{2}\,,
where
\mathbf{a}_{k} = {\ddot{\mathbf{r}}}_{k} = \frac{d^{2}\mathbf{r}_{k}}{dt^{2}}
is the acceleration of the kth particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr; likewise each rk is expressed in terms of the generalized coordinates qr.
== Classical field theory ==
=== Lagrangian field theory ===
Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves:
\mathcal{L} = \mathcal{L}(\phi_{1},\phi_{2},\ldots,\nabla\phi_{1},\nabla\phi_{2},\ldots,\partial_{t}\phi_{1},\partial_{t}\phi_{2},\ldots,\mathbf{r},t)\,,
and the Euler–Lagrange equations have an analogue for fields:
\partial_{\mu}\left(\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi_{i})}\right) = \frac{\partial \mathcal{L}}{\partial \phi_{i}}\,,
where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear.
This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields.
The Lagrangian is the volume integral of the Lagrangian density:
L = \int_{\mathcal{V}} \mathcal{L}\, dV\,.
Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation.
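A small numerical sketch can make the field formalism concrete (an illustrative setup, not from the text): the density \mathcal{L} = ½(∂_t φ)² − ½c²(∂_x φ)² has the 1-D wave equation ∂²_t φ = c² ∂²_x φ as its field Euler–Lagrange equation, and integrating it with a symplectic leapfrog scheme keeps the field energy nearly constant.

```python
import math

# 1-D scalar field on a periodic grid (hypothetical parameters).
# Field equation: d_tt phi = c^2 d_xx phi, from L = (1/2)phi_t^2 - (c^2/2)phi_x^2.
# Discrete energy: E = sum_i [ pi_i^2/2 + c^2 grad_i^2/2 ] dx, with pi = d_t phi.

c, Nx = 1.0, 128
dx = 2 * math.pi / Nx
dt = 0.5 * dx / c                                  # CFL-stable time step

phi = [math.sin(i * dx) for i in range(Nx)]        # one standing-wave mode
pi_ = [0.0] * Nx                                   # conjugate momentum density

def laplacian(f):
    return [(f[(i + 1) % Nx] - 2 * f[i] + f[(i - 1) % Nx]) / dx ** 2
            for i in range(Nx)]

def energy():
    E = 0.0
    for i in range(Nx):
        grad = (phi[(i + 1) % Nx] - phi[i]) / dx
        E += (0.5 * pi_[i] ** 2 + 0.5 * c * c * grad * grad) * dx
    return E

E0 = energy()                                      # ~ pi/2 for this mode
for _ in range(2000):                              # kick-drift-kick leapfrog
    pi_ = [p + 0.5 * dt * c * c * l for p, l in zip(pi_, laplacian(phi))]
    phi = [f + dt * p for f, p in zip(phi, pi_)]
    pi_ = [p + 0.5 * dt * c * c * l for p, l in zip(pi_, laplacian(phi))]
rel_drift = abs(energy() - E0) / E0
print(E0, rel_drift)                               # energy drift stays small
```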
=== Hamiltonian field theory ===
The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are:
\pi_{i}(\mathbf{r},t) = \frac{\partial \mathcal{L}}{\partial {\dot{\phi}}_{i}}\,, \quad {\dot{\phi}}_{i} \equiv \frac{\partial \phi_{i}}{\partial t}
where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density \mathcal{H} is defined by analogy with mechanics:
\mathcal{H}(\phi_{1},\phi_{2},\ldots,\pi_{1},\pi_{2},\ldots,\mathbf{r},t) = \sum_{i=1}^{N} {\dot{\phi}}_{i}(\mathbf{r},t)\,\pi_{i}(\mathbf{r},t) - \mathcal{L}\,.
The equations of motion are:
{\dot{\phi}}_{i} = +\frac{\delta \mathcal{H}}{\delta \pi_{i}}\,, \quad {\dot{\pi}}_{i} = -\frac{\delta \mathcal{H}}{\delta \phi_{i}}\,,
where the variational derivative
\frac{\delta}{\delta \phi_{i}} = \frac{\partial}{\partial \phi_{i}} - \partial_{\mu}\frac{\partial}{\partial(\partial_{\mu}\phi_{i})}
must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear.
Again, the volume integral of the Hamiltonian density is the Hamiltonian
H = \int_{\mathcal{V}} \mathcal{H}\, dV\,.
== Symmetry, conservation, and Noether's theorem ==
=== Symmetry transformations in classical space and time ===
Each transformation can be described by an operator (i.e. a function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries: translations in space and time, and rotations, the latter described by the rotation matrix R(n̂, θ) about an axis defined by the unit vector n̂ and angle θ.
=== Noether's theorem ===
Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s:
L[q(s,t),{\dot{q}}(s,t)] = L[q(t),{\dot{q}}(t)]
the Lagrangian describes the same motion independent of s, which can be a length, an angle of rotation, or a time. The momenta conjugate to q will be conserved.
== See also ==
Lagrangian mechanics
Hamiltonian mechanics
Theoretical mechanics
Classical mechanics
Hamilton–Jacobi equation
Hamilton's principle
Kinematics
Kinetics (physics)
Non-autonomous mechanics
Udwadia–Kalaba equation
== References and notes ==
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
SDEs have a random differential that is in the most basic case random white noise calculated as the distributional derivative of a Brownian motion or more generally a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps.
Stochastic differential equations are in general neither differential equations nor random differential equations. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds.
== Background ==
Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of a stochastic differential equation now known as the Bachelier model. Some of these early examples were linear stochastic differential equations, also called Langevin equations after French physicist Paul Langevin, describing the motion of a harmonic oscillator subject to a random force.
The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus.
=== Terminology ===
The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as the continuous time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by a proper mathematical definition of the corresponding integral. Such a mathematical definition was first proposed by Kiyosi Itô in the 1940s, leading to what is known today as the Itô calculus.
Another construction was later proposed by Russian physicist Stratonovich, leading to what is known as the Stratonovich integral.
The Itô integral and Stratonovich integral are related, but different, objects and the choice between them depends on the application considered. The Itô calculus is based on the concept of non-anticipativeness or causality, which is natural in applications where the variable is time.
The Stratonovich calculus, on the other hand, has rules which resemble ordinary calculus and has intrinsic geometric properties which render it more natural when dealing with geometric problems such as random motion on manifolds, although it is possible and in some cases preferable to model random motion on manifolds through Itô SDEs, for example when trying to optimally approximate SDEs on submanifolds.
An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator.
In physical science, there is an ambiguity in the usage of the term "Langevin SDEs". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure, leading to an N=2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic.
=== Stochastic calculus ===
Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused about whether one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003) and, conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.
=== Numerical solutions ===
Numerical methods for solving stochastic differential equations include the Euler–Maruyama method, Milstein method, Runge–Kutta method (SDE), Rosenbrock method, and methods based on different representations of iterated stochastic integrals.
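The simplest of these, the Euler–Maruyama method, advances the state by the drift times dt plus the diffusion times a Gaussian Brownian increment. A minimal sketch (parameters are illustrative) for the Ornstein–Uhlenbeck SDE dX_t = −θX_t dt + σ dB_t, whose exact mean is E[X_t] = X₀ e^{−θt}:

```python
import random, math

# Euler-Maruyama for dX = -theta X dt + sigma dB (hypothetical parameters).
# Each step adds the drift term and a Brownian increment dB ~ N(0, dt).

def euler_maruyama(x0, theta, sigma, T, n_steps, rng):
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += -theta * x * dt + sigma * dB    # X_{k+1} = X_k + mu dt + sigma dB
    return x

rng = random.Random(0)
theta, sigma, x0, T = 1.0, 0.5, 2.0, 1.0
n_paths = 20000
mean_XT = sum(euler_maruyama(x0, theta, sigma, T, 100, rng)
              for _ in range(n_paths)) / n_paths
print(mean_XT, x0 * math.exp(-theta * T))    # sample mean vs exact mean
```

The sample mean over many simulated paths approaches the exact mean, up to Monte Carlo noise and the O(dt) weak discretization bias.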
== Use in physics ==
In physics, SDEs have wide applicability ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems, in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of the dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence.
There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs:
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = F(x(t)) + \sum_{\alpha=1}^{n} g_{\alpha}(x(t))\,\xi^{\alpha}(t)\,,
where x ∈ X is the position in the system in its phase (or state) space, X, assumed to be a differentiable manifold, F ∈ TX is a flow vector field representing the deterministic law of evolution, and g_α ∈ TX is a set of vector fields that define the coupling of the system to Gaussian white noise, ξ^α. If X is a linear space and the g are constants, the system is said to be subject to additive noise, otherwise it is said to be subject to multiplicative noise. For additive noise, the Itô and Stratonovich forms of the SDE generate the same solution, and it is not important which definition is used to solve the SDE. For multiplicative noise SDEs the Itô and Stratonovich forms of the SDE are different, and care should be used in mapping between them.
For a fixed configuration of noise, an SDE has a unique solution differentiable with respect to the initial condition. The nontriviality of the stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when noise is multiplicative and when the SDE is understood as a continuous time limit of a stochastic difference equation. In this case, the SDE must be complemented by what is known as an "interpretation of the SDE", such as the Itô or the Stratonovich interpretation. Nevertheless, when the SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to the Stratonovich approach to a continuous time limit of a stochastic difference equation.
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time, similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include path integration, which draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker–Planck equation can be transformed into the Schrödinger equation by rescaling a few variables), or writing down ordinary differential equations for the statistical moments of the probability distribution function.
== Use in probability and mathematical finance ==
The notation used in probability theory (and in many applications of probability theory, for instance in signal processing with the filtering problem and in mathematical finance) is slightly different. It is also the notation used in publications on numerical methods for solving stochastic differential equations. This notation makes the exotic nature of the random function of time ξ^α in the physics formulation more explicit. In strict mathematical terms, ξ^α cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.
A typical equation is of the form
\mathrm{d}X_{t} = \mu(X_{t},t)\,\mathrm{d}t + \sigma(X_{t},t)\,\mathrm{d}B_{t}\,,
where B denotes a Wiener process (standard Brownian motion).
This equation should be interpreted as an informal way of expressing the corresponding integral equation
X_{t+s} - X_{t} = \int_{t}^{t+s} \mu(X_{u},u)\,\mathrm{d}u + \int_{t}^{t+s} \sigma(X_{u},u)\,\mathrm{d}B_{u}\,.
The equation above characterizes the behavior of the continuous time stochastic process Xt as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process Xt changes its value by an amount that is normally distributed with expectation μ(Xt, t) δ and variance σ(Xt, t)2 δ and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process Xt is called a diffusion process, and satisfies the Markov property.
The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process Xt that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space (Ω, \mathcal{F}, P). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. The Yamada–Watanabe theorem makes a connection between the two.
An important example is the equation for geometric Brownian motion
\mathrm{d}X_{t} = \mu X_{t}\,\mathrm{d}t + \sigma X_{t}\,\mathrm{d}B_{t}\,,
which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.
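Geometric Brownian motion is convenient for testing numerical schemes because its exact pathwise solution is known: X_t = X₀ exp((μ − σ²/2)t + σB_t). The sketch below (parameters are illustrative) runs Euler–Maruyama on one Brownian path and compares the result against the exact solution evaluated on that same path.

```python
import random, math

# Strong (pathwise) accuracy check for dX = mu X dt + sigma X dB.
# Euler-Maruyama and the exact solution are driven by the SAME increments.

rng = random.Random(7)
mu, sigma, x0 = 0.05, 0.2, 100.0
T, n = 1.0, 100000
dt = T / n
x, B = x0, 0.0
for _ in range(n):
    dB = rng.gauss(0.0, math.sqrt(dt))
    x += mu * x * dt + sigma * x * dB      # Euler-Maruyama step
    B += dB                                # accumulate the Brownian path
exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * B)
rel_err = abs(x - exact) / exact
print(x, exact, rel_err)                   # small pathwise (strong) error
```

The pathwise error shrinks as the step size decreases, in line with the strong convergence order 1/2 of the Euler–Maruyama scheme.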
Generalizing the geometric Brownian motion, it is also possible to define SDEs admitting strong solutions and whose distribution is a convex combination of densities coming from different geometric Brownian motions or Black–Scholes models, obtaining a single SDE whose solution is distributed as a mixture dynamics of lognormal distributions of different Black–Scholes models. This leads to models that can deal with the volatility smile in financial mathematics.
The simpler SDE called arithmetic Brownian motion
\mathrm{d}X_{t} = \mu\,\mathrm{d}t + \sigma\,\mathrm{d}B_{t}
was used by Louis Bachelier as the first model for stock prices in 1900, known today as the Bachelier model.
There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process Xt, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.
A generalization of stochastic differential equations with the Fisk–Stratonovich integral to semimartingales with jumps is given by the SDEs of Marcus type. The Marcus integral is an extension of McShane's stochastic calculus.
An innovative application in stochastic finance derives from the usage of the equation for the Ornstein–Uhlenbeck process
\mathrm{d}R_{t} = \mu R_{t}\,\mathrm{d}t + \sigma_{t}\,\mathrm{d}B_{t}\,,
which is the equation for the dynamics of the return of the price of a stock under the hypothesis that returns display a log-normal distribution. Under this hypothesis, the methodologies developed by Marcello Minenna determine prediction intervals able to identify abnormal returns that could hide market abuse phenomena.
=== SDEs on manifolds ===
More generally one can extend the theory of stochastic calculus onto differential manifolds and for this purpose one uses the Fisk–Stratonovich integral. Consider a manifold M, some finite-dimensional vector space E, a filtered probability space (Ω, F, (F_t)_{t ∈ ℝ₊}, P) with the filtration (F_t)_{t ∈ ℝ₊} satisfying the usual conditions, and let M̂ = M ∪ {∞} be the one-point compactification and x₀ be F₀-measurable. A stochastic differential equation on M, written

\mathrm{d}X = A(X)\circ \mathrm{d}Z\,,

is a pair (A, Z) such that Z is a continuous E-valued semimartingale and A : M × E → TM, (x, e) ↦ A(x)e is a homomorphism of vector bundles over M.
For each x ∈ M the map A(x) : E → TₓM is linear and A(·)e ∈ Γ(TM) for each e ∈ E.
A solution to the SDE on M with initial condition X₀ = x₀ is a continuous (F_t)-adapted M-valued process (X_t)_{t < ζ} up to life time ζ, such that for each test function f ∈ C_c^∞(M) the process f(X) is a real-valued semimartingale and for each stopping time τ with 0 ≤ τ < ζ the equation

f(X_{\tau}) = f(x_{0}) + \int_{0}^{\tau} (\mathrm{d}f)_{X}\, A(X)\circ \mathrm{d}Z

holds P-almost surely, where (df)_X is the differential of f at X. It is a maximal solution if the life time is maximal, i.e.,
\{\zeta < \infty\} \subset \left\{\lim_{t\nearrow\zeta} X_{t} = \infty \text{ in } \widehat{M}\right\}

P-almost surely. It follows from the fact that f(X) is a semimartingale for each test function f ∈ C_c^∞(M) that X is a semimartingale on M. Given a maximal solution we can extend the time of X onto the full ℝ₊ and, after a continuation of f on M̂, we get

f(X_{t}) = f(X_{0}) + \int_{0}^{t} (\mathrm{d}f)_{X}\, A(X)\circ \mathrm{d}Z\,, \quad t \geq 0

up to indistinguishable processes.
Although Stratonovich SDEs are the natural choice for SDEs on manifolds, given that they satisfy the chain rule and that their drift and diffusion coefficients behave as vector fields under changes of coordinates, there are cases where Itô calculus on manifolds is preferable. A theory of Itô calculus on manifolds was first developed by Laurent Schwartz through the concept of Schwartz morphism; see also the related 2-jet interpretation of Itô SDEs on manifolds based on the jet bundle. This interpretation is helpful when trying to optimally approximate the solution of an SDE given on a large space with the solutions of an SDE given on a submanifold of that space, in that a Stratonovich-based projection does not turn out to be optimal. This has been applied to the filtering problem, leading to optimal projection filters.
== As rough paths ==
Usually the solution of an SDE requires a probabilistic setting, as the integral implicit in the solution is a stochastic integral. If it were possible to deal with the differential equation path by path, one would not need to define a stochastic integral and one could develop a theory independently of probability theory.
This points to considering the SDE
\mathrm{d}X_{t}(\omega) = \mu(X_{t}(\omega),t)\,\mathrm{d}t + \sigma(X_{t}(\omega),t)\,\mathrm{d}B_{t}(\omega)
as a single deterministic differential equation for every ω ∈ Ω, where Ω is the sample space in the given probability space (Ω, F, P). However, a direct path-wise interpretation of the SDE is not possible, as the Brownian motion paths have unbounded variation and are nowhere differentiable with probability one, so that there is no naive way to give meaning to terms like dB_t(ω), precluding also a naive path-wise definition of the stochastic integral as an integral against every single dB_t(ω). However, motivated by the Wong–Zakai result for limits of solutions of SDEs with regular noise and using rough paths theory, while adding a chosen definition of iterated integrals of Brownian motion, it is possible to define a deterministic rough integral for every single ω ∈ Ω that coincides, for example, with the Itô integral with probability one for a particular choice of the iterated Brownian integral. Other definitions of the iterated integral lead to deterministic pathwise equivalents of different stochastic integrals, like the Stratonovich integral. This has been used for example in financial mathematics to price options without probability.
== Existence and uniqueness of solutions ==
As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space Rn and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2).
Let T > 0, and let
{\displaystyle \mu :\mathbb {R} ^{n}\times [0,T]\to \mathbb {R} ^{n};}
{\displaystyle \sigma :\mathbb {R} ^{n}\times [0,T]\to \mathbb {R} ^{n\times m};}
be measurable functions for which there exist constants C and D such that
{\displaystyle {\big |}\mu (x,t){\big |}+{\big |}\sigma (x,t){\big |}\leq C{\big (}1+|x|{\big )};}
{\displaystyle {\big |}\mu (x,t)-\mu (y,t){\big |}+{\big |}\sigma (x,t)-\sigma (y,t){\big |}\leq D|x-y|;}
for all t ∈ [0, T] and all x and y ∈ Rn, where
{\displaystyle |\sigma |^{2}=\sum _{i=1}^{n}\sum _{j=1}^{m}|\sigma _{ij}|^{2}.}
Let Z be a random variable that is independent of the σ-algebra generated by Bs, s ≥ 0, and with finite second moment:
{\displaystyle \mathbb {E} {\big [}|Z|^{2}{\big ]}<+\infty .}
Then the stochastic differential equation/initial value problem
{\displaystyle \mathrm {d} X_{t}=\mu (X_{t},t)\,\mathrm {d} t+\sigma (X_{t},t)\,\mathrm {d} B_{t}{\mbox{ for }}t\in [0,T];}
{\displaystyle X_{0}=Z;}
has a P-almost surely unique t-continuous solution (t, ω) ↦ Xt(ω) such that X is adapted to the filtration FtZ generated by Z and Bs, s ≤ t, and
{\displaystyle \mathbb {E} \left[\int _{0}^{T}|X_{t}|^{2}\,\mathrm {d} t\right]<+\infty .}
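Coefficients satisfying both bounds admit a unique strong solution, which can be approximated numerically. As a sketch (my own illustration, with freely chosen parameters, not from the source), the Ornstein–Uhlenbeck drift μ(x, t) = −θx together with a constant σ satisfies the linear-growth and Lipschitz conditions of the theorem, and the Euler–Maruyama scheme recovers the known mean E[X_T] = X_0 e^{−θT}:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ornstein-Uhlenbeck SDE dX_t = -theta * X_t dt + sigma dB_t:
# mu(x,t) = -theta*x and constant sigma satisfy both the linear-growth
# and the Lipschitz conditions of the theorem.
theta, sigma, x0, T, n_steps, n_paths = 1.0, 0.3, 2.0, 1.0, 500, 20000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    X = X + (-theta * X) * dt + sigma * dB   # Euler-Maruyama update

# For the OU process, E[X_T] = x0 * exp(-theta * T).
print(X.mean(), x0 * np.exp(-theta * T))
```

The sample mean over many paths agrees with the exact mean up to Monte Carlo noise and the O(dt) weak discretization bias.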
=== General case: local Lipschitz condition and maximal solutions ===
The stochastic differential equation above is only a special case of a more general form
{\displaystyle \mathrm {d} Y_{t}=\alpha (t,Y_{t})\mathrm {d} X_{t}}
where X is a continuous semimartingale in R^n and Y is a continuous semimartingale taking values in some open nonempty set U ⊂ R^d, and
{\displaystyle \alpha :\mathbb {R} _{+}\times U\to \operatorname {Lin} (\mathbb {R} ^{n};\mathbb {R} ^{d})}
is a map into Lin(R^n; R^d), the space of all linear maps from R^n to R^d.
More generally one can also look at stochastic differential equations on manifolds.
Whether the solution of this equation explodes depends on the choice of α. Suppose α satisfies a local Lipschitz condition, i.e., for every t ≥ 0 and every compact set K ⊂ U there is a constant L(t, K) such that
{\displaystyle |\alpha (s,y)-\alpha (s,x)|\leq L(t,K)|y-x|,\quad x,y\in K,\;0\leq s\leq t,}
where |·| is the Euclidean norm. This condition guarantees the existence and uniqueness of a so-called maximal solution.
Suppose α is continuous and satisfies the above local Lipschitz condition, and let F : Ω → U be some initial condition, meaning a measurable function with respect to the initial σ-algebra. Let ζ : Ω → [0, ∞] be a predictable stopping time with ζ > 0 almost surely. A U-valued semimartingale (Y_t)_{t<ζ} is called a maximal solution of
{\displaystyle dY_{t}=\alpha (t,Y_{t})dX_{t},\quad Y_{0}=F}
with lifetime ζ if
for one (and hence every) announcing sequence ζ_n ↗ ζ, the stopped process Y^{ζ_n} is a solution to the stopped stochastic differential equation
{\displaystyle \mathrm {d} Y=\alpha (t,Y)\mathrm {d} X^{\zeta _{n}}}
and, on the set {ζ < ∞}, we have almost surely that Y_t → ∂U as t → ζ.
Such a ζ is also called an explosion time.
== Some explicitly solvable examples ==
Explicitly solvable SDEs include:
=== Linear SDE: General case ===
{\displaystyle \mathrm {d} X_{t}=(a(t)X_{t}+c(t))\mathrm {d} t+(b(t)X_{t}+d(t))\mathrm {d} W_{t}}
has the solution
{\displaystyle X_{t}=\Phi _{t,t_{0}}\left(X_{t_{0}}+\int _{t_{0}}^{t}\Phi _{s,t_{0}}^{-1}(c(s)-b(s)d(s))\mathrm {d} s+\int _{t_{0}}^{t}\Phi _{s,t_{0}}^{-1}d(s)\mathrm {d} W_{s}\right)}
where
{\displaystyle \Phi _{t,t_{0}}=\exp \left(\int _{t_{0}}^{t}\left(a(s)-{\frac {b^{2}(s)}{2}}\right)\mathrm {d} s+\int _{t_{0}}^{t}b(s)\mathrm {d} W_{s}\right)}
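With constant coefficients a(t) = a, b(t) = b and c = d = 0, the formula collapses to geometric Brownian motion with X_t = X_0 Φ_{t,0} = X_0 exp((a − b²/2)t + bW_t). A quick numerical sketch (my own illustration, with arbitrarily chosen parameters) compares this closed form against an Euler–Maruyama approximation driven by the same Brownian increments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Constant coefficients a(t)=a, b(t)=b and c=d=0: dX = a X dt + b X dW.
a, b, x0, T, n = 0.5, 0.4, 1.0, 1.0, 100000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

# Euler-Maruyama along one Brownian path.
x = x0
for dw in dW:
    x += a * x * dt + b * x * dw

# Closed form X_T = X_0 * Phi_{T,0} with c = d = 0.
closed_form = x0 * np.exp((a - b**2 / 2.0) * T + b * dW.sum())
print(x, closed_form)
```

On a fine grid the pathwise discrepancy between the two is small, consistent with the strong convergence of the Euler–Maruyama scheme.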
=== Reducible SDEs: Case 1 ===
{\displaystyle \mathrm {d} X_{t}={\frac {1}{2}}f(X_{t})f'(X_{t})\mathrm {d} t+f(X_{t})\mathrm {d} W_{t}}
for a given differentiable function f is equivalent to the Stratonovich SDE
{\displaystyle \mathrm {d} X_{t}=f(X_{t})\circ \mathrm {d} W_{t}}
which has a general solution
{\displaystyle X_{t}=h^{-1}(W_{t}+h(X_{0}))}
where
{\displaystyle h(x)=\int ^{x}{\frac {\mathrm {d} s}{f(s)}}}
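The claimed solution can be checked with Itô's formula: writing g = h⁻¹, one has g′ = f(g) and g″ = f′(g)f(g), so X_t = g(W_t + h(X_0)) has drift ½f(X)f′(X) and diffusion f(X). A symbolic sketch for the concrete choice f(x) = x² (an example of my own, not from the text), where h(x) = −1/x and h⁻¹(y) = −1/y:

```python
import sympy as sp

y = sp.Symbol('y')   # y stands for W_t + h(X_0)
x = sp.Symbol('x')

# Concrete case f(x) = x**2, so h(x) = -1/x and h^{-1}(y) = -1/y.
f = lambda s: s**2
g = -1 / y                                  # candidate solution X = g(y)

# Ito's formula for X = g(W_t + const): drift = g''/2, diffusion = g'.
drift = sp.diff(g, y, 2) / 2
diffusion = sp.diff(g, y)

f_prime_at_g = sp.diff(f(x), x).subs(x, g)  # f'(X) evaluated at X = g(y)
assert sp.simplify(drift - f(g) * f_prime_at_g / 2) == 0
assert sp.simplify(diffusion - f(g)) == 0
print("drift = f(X) f'(X)/2 and diffusion = f(X), as required")
```

The same computation goes through for any smooth f with f ≠ 0, since g′ = f(g) holds by the inverse function rule applied to h′ = 1/f.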
=== Reducible SDEs: Case 2 ===
{\displaystyle \mathrm {d} X_{t}=\left(\alpha f(X_{t})+{\frac {1}{2}}f(X_{t})f'(X_{t})\right)\mathrm {d} t+f(X_{t})\mathrm {d} W_{t}}
for a given differentiable function f is equivalent to the Stratonovich SDE
{\displaystyle \mathrm {d} X_{t}=\alpha f(X_{t})\mathrm {d} t+f(X_{t})\circ \mathrm {d} W_{t}}
which is reducible to
{\displaystyle \mathrm {d} Y_{t}=\alpha \mathrm {d} t+\mathrm {d} W_{t}}
with Y_t = h(X_t), where h is defined as before. Its general solution is
{\displaystyle X_{t}=h^{-1}(\alpha t+W_{t}+h(X_{0}))}
== SDEs and supersymmetry ==
In the supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess a topological supersymmetry, which represents the preservation of the continuity of the phase space by the continuous-time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomena known across disciplines as chaos, turbulence, self-organized criticality, etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and the scale-free statistics of earthquakes, neuroavalanches, solar flares, etc.
== See also ==
Backward stochastic differential equation
Langevin dynamics
Local volatility
Stochastic process
Stochastic volatility
Stochastic partial differential equations
Diffusion process
Stochastic difference equation
== References ==
== Further reading ==
Evans, Lawrence C. (2013). An Introduction to Stochastic Differential Equations. American Mathematical Society.
Adomian, George (1983). Stochastic systems. Mathematics in Science and Engineering (169). Orlando, FL: Academic Press Inc.
Adomian, George (1986). Nonlinear stochastic operator equations. Orlando, FL: Academic Press Inc. ISBN 978-0-12-044375-8.
Adomian, George (1989). Nonlinear stochastic systems theory and applications to physics. Mathematics and its Applications (46). Dordrecht: Kluwer Academic Publishers Group.
Calin, Ovidiu (2015). An Informal Introduction to Stochastic Calculus with Applications. Singapore: World Scientific Publishing. p. 315. ISBN 978-981-4678-93-3.
Teugels, J.; Sund, B., eds. (2004). Encyclopedia of Actuarial Science. Chichester: Wiley. pp. 523–527.
Gardiner, C. W. (2004). Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer. p. 415.
Mikosch, Thomas (1998). Elementary Stochastic Calculus: with Finance in View. Singapore: World Scientific Publishing. p. 212. ISBN 981-02-3543-7.
Seifedine Kadry (2007). "A Solution of Linear Stochastic Differential Equation". Wseas Transactions on Mathematics (April 2007): 618. ISSN 1109-2769.
Higham, Desmond J. (January 2001). "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations". SIAM Review. 43 (3): 525–546. Bibcode:2001SIAMR..43..525H. CiteSeerX 10.1.1.137.6375. doi:10.1137/S0036144500378302.
Desmond Higham and Peter Kloeden: "An Introduction to the Numerical Simulation of Stochastic Differential Equations", SIAM, ISBN 978-1-611976-42-7 (2021). | Wikipedia/Stochastic_differential_equation |
In theoretical physics, supersymmetric quantum mechanics is an area of research where supersymmetry is applied to the simpler setting of plain quantum mechanics, rather than quantum field theory. Supersymmetric quantum mechanics has found applications outside of high-energy physics, such as providing new methods to solve quantum mechanical problems, providing useful extensions to the WKB approximation, and applications in statistical mechanics.
== Introduction ==
Understanding the consequences of supersymmetry (SUSY) has proven mathematically daunting, and it has likewise been difficult to develop theories that could account for symmetry breaking, i.e., the lack of observed partner particles of equal mass. To make progress on these problems, physicists developed supersymmetric quantum mechanics, an application of the supersymmetry superalgebra to quantum mechanics as opposed to quantum field theory. It was hoped that studying SUSY's consequences in this simpler setting would lead to new understanding; remarkably, the effort created new areas of research in quantum mechanics itself.
For example, students are typically taught to "solve" the hydrogen atom by a process that begins by inserting the Coulomb potential into the Schrödinger equation. After working through a series of differential equations, the analysis produces a recursion relation for the Laguerre polynomials. The outcome is the spectrum of hydrogen-atom energy states (labeled by quantum numbers n and l). Using ideas drawn from SUSY, the final result can be derived with greater ease, in much the same way that operator methods are used to solve the harmonic oscillator. A similar supersymmetric approach can also be used to more accurately find the hydrogen spectrum using the Dirac equation. Oddly enough, this approach is analogous to the way Erwin Schrödinger first solved the hydrogen atom. He did not call his solution supersymmetric, as SUSY was thirty years in the future.
The SUSY solution of the hydrogen atom is only one example of the very general class of solutions which SUSY provides to shape-invariant potentials, a category which includes most potentials taught in introductory quantum mechanics courses.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then called partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy—but, in the relativistic world, energy and mass are interchangeable, so we can just as easily say that the partner particles have equal mass.
SUSY concepts have provided useful extensions to the WKB approximation in the form of a modified version of the Bohr-Sommerfeld quantization condition. In addition, SUSY has been applied to non-quantum statistical mechanics through the Fokker–Planck equation, showing that even if the original inspiration in high-energy particle physics turns out to be a blind alley, its investigation has brought about many useful benefits.
== Example: the harmonic oscillator ==
The Schrödinger equation for the harmonic oscillator takes the form
{\displaystyle H^{\rm {HO}}\psi _{n}(x)={\bigg (}{\frac {-\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}{\bigg )}\psi _{n}(x)=E_{n}^{\rm {HO}}\psi _{n}(x),}
where ψ_n(x) is the nth energy eigenstate of H^HO with energy E_n^HO. We want to find an expression for E_n^HO in terms of n. We define the operators
{\displaystyle A={\frac {\hbar }{\sqrt {2m}}}{\frac {d}{dx}}+W(x)}
and
{\displaystyle A^{\dagger }=-{\frac {\hbar }{\sqrt {2m}}}{\frac {d}{dx}}+W(x),}
where W(x), which we need to choose, is called the superpotential of H^HO. We also define the aforementioned partner Hamiltonians H^(1) and H^(2) as
{\displaystyle H^{(1)}=A^{\dagger }A={\frac {-\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}-{\frac {\hbar }{\sqrt {2m}}}W^{\prime }(x)+W^{2}(x)}
{\displaystyle H^{(2)}=AA^{\dagger }={\frac {-\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+{\frac {\hbar }{\sqrt {2m}}}W^{\prime }(x)+W^{2}(x).}
A zero-energy ground state ψ_0^(1)(x) of H^(1) would satisfy the equation
{\displaystyle H^{(1)}\psi _{0}^{(1)}(x)=A^{\dagger }A\psi _{0}^{(1)}(x)=A^{\dagger }{\bigg (}{\frac {\hbar }{\sqrt {2m}}}{\frac {d}{dx}}+W(x){\bigg )}\psi _{0}^{(1)}(x)=0.}
Assuming that we know the ground state of the harmonic oscillator ψ_0(x), we can solve for W(x) as
{\displaystyle W(x)={\frac {-\hbar }{\sqrt {2m}}}{\bigg (}{\frac {\psi _{0}^{\prime }(x)}{\psi _{0}(x)}}{\bigg )}=x{\sqrt {m\omega ^{2}/2}}}
We then find that
{\displaystyle H^{(1)}={\frac {-\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}-{\frac {\hbar \omega }{2}}}
{\displaystyle H^{(2)}={\frac {-\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}+{\frac {\hbar \omega }{2}}.}
We can now see that
{\displaystyle H^{(1)}=H^{(2)}-\hbar \omega =H^{\rm {HO}}-{\frac {\hbar \omega }{2}}.}
This is a special case of shape invariance, discussed below. Taking without proof the introductory theorem mentioned above, it is apparent that the spectrum of H^(1) will start with E_0 = 0 and continue upwards in steps of ℏω. The spectra of H^(2) and H^HO will have the same even spacing, but will be shifted up by the amounts ℏω and ℏω/2, respectively. It follows that the spectrum of H^HO is the familiar E_n^HO = ℏω(n + 1/2).
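These spectral relations are easy to confirm numerically. The sketch below (my own check, not from the text, in units ℏ = m = ω = 1) discretizes H^(1) = −½ d²/dx² + x²/2 − ½ and its partner H^(2) = H^(1) + ℏω on a grid and compares their low-lying eigenvalues:

```python
import numpy as np

# Units hbar = m = omega = 1 (an assumption of this sketch).
N, L = 1000, 14.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Three-point finite-difference Laplacian.
lap = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / dx**2

H1 = -lap / 2.0 + np.diag(x**2 / 2.0 - 0.5)   # H^(1) = H^HO - hbar*omega/2
H2 = H1 + np.eye(N)                            # H^(2) = H^(1) + hbar*omega

e1 = np.linalg.eigvalsh(H1)[:6]
e2 = np.linalg.eigvalsh(H2)[:6]
print(np.round(e1, 3))   # close to 0, 1, 2, 3, 4, 5
print(np.round(e2, 3))   # close to 1, 2, 3, 4, 5, 6
```

The spectrum of H^(1) starts at 0 in steps of 1, and the partner spectrum is shifted up by exactly ℏω, as the text states.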
== SUSY QM superalgebra ==
In fundamental quantum mechanics, we learn that an algebra of operators is defined by commutation relations among those operators. For example, the canonical operators of position and momentum have the commutator [x, p] = i. (Here, we use "natural units" where the reduced Planck constant is set equal to 1.) A more intricate case is the algebra of angular momentum operators; these quantities are closely connected to the rotational symmetries of three-dimensional space. To generalize this concept, we define an anticommutator, which relates operators in the same way as an ordinary commutator, but with a plus sign in place of the minus sign:
{\displaystyle \{A,B\}=AB+BA.}
If operators are related by anticommutators as well as commutators, we say they are part of a Lie superalgebra. Let's say we have a quantum system described by a Hamiltonian H and a set of N operators Q_i. We shall call this system supersymmetric if the following anticommutation relation is valid for all i, j = 1, …, N:
{\displaystyle \{Q_{i},Q_{j}^{\dagger }\}={\mathcal {H}}\delta _{ij}.}
If this is the case, then we call the Q_i the system's supercharges.
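A minimal finite-dimensional sketch of this algebra (a toy example of my own, not from the text): any block off-diagonal matrix Q with Q² = 0 defines a single supercharge, and the Hamiltonian H = {Q, Q†} is automatically Hermitian, non-negative, and commutes with Q.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))
zero = np.zeros((3, 3))

# Block off-diagonal Q maps "bosonic" to "fermionic" components; Q^2 = 0.
Q = np.block([[zero, M], [zero, zero]])
Qd = Q.conj().T
H = Q @ Qd + Qd @ Q          # H = {Q, Q^dagger}

assert np.allclose(Q @ Q, 0)                     # Q is nilpotent
assert np.allclose(H, H.conj().T)                # H is Hermitian
assert np.allclose(H @ Q, Q @ H)                 # Q commutes with H
assert np.linalg.eigvalsh(H).min() >= -1e-9      # spectrum is non-negative
print("superalgebra relations verified")
```

The commutation [H, Q] = 0 follows directly from Q² = 0, and non-negativity of H is the algebraic origin of the paired spectra discussed above.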
== Example ==
Let's look at the example of a one-dimensional nonrelativistic particle with a 2D (i.e., two states) internal degree of freedom called "spin" (it's not really spin because "real" spin is a property of 3D particles). Let
b be an operator which transforms a "spin up" particle into a "spin down" particle. Its adjoint b† then transforms a spin down particle into a spin up particle; the operators are normalized such that the anticommutator {b, b†} = 1, and b² = 0.
Let p be the momentum of the particle and x be its position with [x, p] = i. Let W (the "superpotential") be an arbitrary complex analytic function of x and define the supersymmetric operators
{\displaystyle Q_{1}={\frac {1}{2}}\left[(p-iW)b+(p+iW^{\dagger })b^{\dagger }\right]}
{\displaystyle Q_{2}={\frac {i}{2}}\left[(p-iW)b-(p+iW^{\dagger })b^{\dagger }\right]}
Note that Q_1 and Q_2 are self-adjoint. Let the Hamiltonian be
{\displaystyle H=\{Q_{1},Q_{1}\}=\{Q_{2},Q_{2}\}={\frac {(p+\Im \{W\})^{2}}{2}}+{\frac {{\Re \{W\}}^{2}}{2}}+{\frac {\Re \{W\}'}{2}}(bb^{\dagger }-b^{\dagger }b)}
where W′ is the derivative of W. Also note that {Q_1, Q_2} = 0. This is nothing other than N = 2 supersymmetry. Note that ℑ{W} acts like an electromagnetic vector potential.
Let's also call the spin down state "bosonic" and the spin up state "fermionic". This is only in analogy to quantum field theory and should not be taken literally. Then, Q_1 and Q_2 map "bosonic" states into "fermionic" states and vice versa.
Reformulating this a bit:
Define
{\displaystyle Q=(p-iW)b}
and
{\displaystyle Q^{\dagger }=(p+iW^{\dagger })b^{\dagger }.}
Then
{\displaystyle \{Q,Q\}=\{Q^{\dagger },Q^{\dagger }\}=0}
and
{\displaystyle \{Q^{\dagger },Q\}=2H.}
An operator is "bosonic" if it maps "bosonic" states to "bosonic" states and "fermionic" states to "fermionic" states. An operator is "fermionic" if it maps "bosonic" states to "fermionic" states and vice versa. Any operator can be expressed uniquely as the sum of a bosonic operator and a fermionic operator. Define the supercommutator [,} as follows: Between two bosonic operators or a bosonic and a fermionic operator, it is none other than the commutator but between two fermionic operators, it is an anticommutator.
Then, x and p are bosonic operators and b, b†, Q and Q† are fermionic operators.
Let's work in the Heisenberg picture where x, b and b† are functions of time.
Then,
{\displaystyle [Q,x\}=-ib}
{\displaystyle [Q,b\}=0}
{\displaystyle [Q,b^{\dagger }\}={\frac {dx}{dt}}-i\Re \{W\}}
{\displaystyle [Q^{\dagger },x\}=ib^{\dagger }}
{\displaystyle [Q^{\dagger },b\}={\frac {dx}{dt}}+i\Re \{W\}}
{\displaystyle [Q^{\dagger },b^{\dagger }\}=0}
This is nonlinear in general: i.e., x(t), b(t) and b†(t) do not form a linear SUSY representation because ℜ{W} isn't necessarily linear in x. To avoid this problem, define the self-adjoint operator F = ℜ{W}. Then,
{\displaystyle [Q,x\}=-ib}
{\displaystyle [Q,b\}=0}
{\displaystyle [Q,b^{\dagger }\}={\frac {dx}{dt}}-iF}
{\displaystyle [Q,F\}=-{\frac {db}{dt}}}
{\displaystyle [Q^{\dagger },x\}=ib^{\dagger }}
{\displaystyle [Q^{\dagger },b\}={\frac {dx}{dt}}+iF}
{\displaystyle [Q^{\dagger },b^{\dagger }\}=0}
{\displaystyle [Q^{\dagger },F\}={\frac {db^{\dagger }}{dt}}}
and we see that we have a linear SUSY representation.
Now let's introduce two "formal" quantities θ and θ̄, with the latter being the adjoint of the former, such that
{\displaystyle \{\theta ,\theta \}=\{{\bar {\theta }},{\bar {\theta }}\}=\{{\bar {\theta }},\theta \}=0}
and both of them commute with bosonic operators but anticommute with fermionic ones.
Next, we define a construct called a superfield:
{\displaystyle f(t,{\bar {\theta }},\theta )=x(t)-i\theta b(t)-i{\bar {\theta }}b^{\dagger }(t)+{\bar {\theta }}\theta F(t)}
f is self-adjoint. Then,
{\displaystyle [Q,f\}={\frac {\partial }{\partial \theta }}f-i{\bar {\theta }}{\frac {\partial }{\partial t}}f,}
{\displaystyle [Q^{\dagger },f\}={\frac {\partial }{\partial {\bar {\theta }}}}f-i\theta {\frac {\partial }{\partial t}}f.}
Incidentally, there's also a U(1)_R symmetry, with p, x and W having zero R-charges, b† having an R-charge of 1, and b having an R-charge of −1.
== Shape invariance ==
Suppose W is real for all real x. Then we can simplify the expression for the Hamiltonian to
{\displaystyle H={\frac {p^{2}}{2}}+{\frac {W^{2}}{2}}+{\frac {W'}{2}}(bb^{\dagger }-b^{\dagger }b)}
There are certain classes of superpotentials such that both the bosonic and fermionic Hamiltonians have similar forms. Specifically
{\displaystyle V_{+}(x,a_{1})=V_{-}(x,a_{2})+R(a_{1})}
where the a's are parameters. For example, the hydrogen atom potential with angular momentum l can be written this way:
{\displaystyle {\frac {-e^{2}}{4\pi \epsilon _{0}}}{\frac {1}{r}}+{\frac {h^{2}l(l+1)}{2m}}{\frac {1}{r^{2}}}-E_{0}}
This corresponds to V_− for the superpotential
{\displaystyle W={\frac {\sqrt {2m}}{h}}{\frac {e^{2}}{24\pi \epsilon _{0}(l+1)}}-{\frac {h(l+1)}{r{\sqrt {2m}}}}}
{\displaystyle V_{+}={\frac {-e^{2}}{4\pi \epsilon _{0}}}{\frac {1}{r}}+{\frac {h^{2}(l+1)(l+2)}{2m}}{\frac {1}{r^{2}}}+{\frac {e^{4}m}{32\pi ^{2}h^{2}\epsilon _{0}^{2}(l+1)^{2}}}}
This is the potential for angular momentum l + 1 shifted by a constant. After solving the l = 0 ground state, the supersymmetric operators can be used to construct the rest of the bound state spectrum.
In general, since V_− and V_+ are partner potentials, they share the same energy spectrum except for the one extra ground-state energy. We can continue this process of finding partner potentials with the shape invariance condition, giving the following formula for the energy levels in terms of the parameters of the potential:
{\displaystyle E_{n}=\sum \limits _{i=1}^{n}R(a_{i})}
where the a_i are the parameters for the multiple partnered potentials.
== See also ==
Supersymmetry algebra
Superalgebra
Supersymmetric gauge theory
== References ==
== Further reading ==
F. Cooper, A. Khare and U. Sukhatme, "Supersymmetry and Quantum Mechanics", Phys.Rept.251:267–385, 1995.
D.S. Kulshreshtha, J.Q. Liang and H.J.W. Muller-Kirsten, "Fluctuation equations about classical field configurations and supersymmetric quantum mechanics", Annals Phys. 225:191-211, 1993.
G. Junker, "Supersymmetric Methods in Quantum and Statistical Physics", Springer-Verlag, Berlin, 1996
B. Mielnik and O. Rosas-Ortiz, "Factorization: Little or great algorithm?", J. Phys. A: Math. Gen. 37: 10007–10035, 2004
== External links ==
References from INSPIRE-HEP | Wikipedia/Supersymmetric_quantum_mechanics |
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.
TCP is connection-oriented, meaning that sender and receiver must first establish a connection based on agreed parameters; they do this through a three-way handshake procedure. The server must be listening (passive open) for connection requests from clients before a connection is established. The three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP) instead, which provides a connectionless datagram service that prioritizes time over reliability. TCP employs network congestion avoidance. However, there are vulnerabilities in TCP, including denial of service, connection hijacking, TCP veto, and reset attacks.
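The passive and active opens can be seen directly through the Berkeley sockets API, where the kernel carries out the three-way handshake inside listen()/accept() on the server side and connect() on the client side. A minimal localhost sketch (payloads and structure chosen freely for illustration):

```python
import socket
import threading

def server(sock, result):
    conn, _ = sock.accept()          # passive open completes here
    with conn:
        result.append(conn.recv(1024))
        conn.sendall(b"ack")

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)                   # passive open: start listening
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=server, args=(listener, received))
t.start()

# Active open: connect() triggers the kernel's SYN / SYN-ACK / ACK exchange.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
listener.close()
print(received[0], reply)
```

Neither side issues the SYN, SYN-ACK, or ACK segments itself; the handshake is entirely the operating system's job, which is part of the abstraction TCP presents to applications.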
== Historical origin ==
In May 1974, Vint Cerf and Bob Kahn described an internetworking protocol for sharing resources using packet switching among network nodes. The authors had been working with Gérard Le Lann to incorporate concepts from the French CYCLADES project into the new network. The specification of the resulting protocol, RFC 675 (Specification of Internet Transmission Control Program), was written by Vint Cerf, Yogen Dalal, and Carl Sunshine, and published in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork.
The Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. In version 4, the monolithic Transmission Control Program was divided into a modular architecture consisting of the Transmission Control Protocol and the Internet Protocol. This resulted in a networking model that became known informally as TCP/IP, although formally it was variously referred to as the DoD internet architecture model (DoD model for short) or DARPA model. Later, it became part of, and synonymous with, the Internet Protocol Suite.
The following Internet Experiment Note (IEN) documents describe the evolution of TCP into the modern version:
IEN 5 Specification of Internet Transmission Control Program TCP Version 2 (March 1977).
IEN 21 Specification of Internetwork Transmission Control Program TCP Version 3 (January 1978).
IEN 27
IEN 40
IEN 44
IEN 55
IEN 81
IEN 112
IEN 124
TCP was standardized in January 1980 as RFC 761.
In 2004, Vint Cerf and Bob Kahn received the Turing Award for their foundational work on TCP/IP.
== Network function ==
The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the transport layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the required IP fragmentation to accommodate the maximum transmission unit of the transmission medium. At the transport layer, TCP handles all handshaking and transmission details and presents an abstraction of the network connection to the application typically through a network socket interface.
At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or unpredictable network behavior, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data and even helps minimize network congestion to reduce the occurrence of the other problems. If the data still remains undelivered, the source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application's communication from the underlying networking details.
TCP is used extensively by many internet applications, including the World Wide Web (WWW), email, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and streaming media.
TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or re-transmissions of lost messages. Therefore, it is not particularly suitable for real-time applications such as voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) operating over the User Datagram Protocol (UDP) are usually recommended instead.
TCP is a reliable byte stream delivery service that guarantees that all bytes received will be identical and in the same order as those sent. Since packet transfer by many networks is not reliable, TCP achieves this using a technique known as positive acknowledgment with re-transmission. This requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends and maintains a timer from when the packet was sent. The sender re-transmits a packet if the timer expires before receiving the acknowledgment. The timer is needed in case a packet gets lost or corrupted.
While IP handles actual delivery of the data, TCP keeps track of segments – the individual units of data transmission that a message is divided into for efficient routing through the network. For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the file into segments and forwards them individually to the internet layer in the network stack. The internet layer software encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP software in the transport layer re-assembles the segments and ensures they are correctly ordered and error-free as it streams the file contents to the receiving application.
== TCP segment structure ==
Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds a TCP header creating a TCP segment. The TCP segment is then encapsulated into an Internet Protocol (IP) datagram, and exchanged with peers.
The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU:
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module [e.g. IP] to transmit each segment to the destination TCP.
A TCP segment consists of a segment header and a data section. The segment header contains 10 mandatory fields, and an optional extension field (Options, pink background in table). The data section follows the header and is the payload data carried for the application. The length of the data section is not specified in the segment header; it can be calculated by subtracting the combined length of the segment header and IP header from the total IP datagram length specified in the IP header.
Source Port: 16 bits
Identifies the sending port.
Destination Port: 16 bits
Identifies the receiving port.
Sequence Number: 32 bits
Has a dual role:
If the SYN flag is set (1), then this is the initial sequence number. The sequence number of the actual first data byte and the acknowledgment number in the corresponding ACK are then this sequence number plus 1.
If the SYN flag is unset (0), then this is the accumulated sequence number of the first data byte of this segment for the current session.
Acknowledgment Number: 32 bits
If the ACK flag is set then the value of this field is the next sequence number that the sender of the ACK is expecting. This acknowledges receipt of all prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial sequence number itself, but no data.
Data Offset (DOffset): 4 bits
Specifies the size of the TCP header in 32-bit words. The minimum size header is 5 words and the maximum is 15 words thus giving the minimum size of 20 bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header. This field gets its name from the fact that it is also the offset from the start of the TCP segment to the actual data.
Reserved (Rsrvd): 4 bits
For future use and should be set to zero; senders should not set these and receivers should ignore them if set, in the absence of further specification and implementation.
From 2003 to 2017, the last bit (bit 103 of the header) was defined as the NS (Nonce Sum) flag by the experimental RFC 3540, ECN-nonce. ECN-nonce never gained widespread use and the RFC was moved to Historic status.
Flags: 8 bits
Contains 8 1-bit flags (control bits) as follows:
CWR: 1 bit
Congestion window reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set and has responded with its congestion control mechanism.
ECE: 1 bit
ECN-Echo has a dual role, depending on the value of the SYN flag. It indicates:
If the SYN flag is set (1), the TCP peer is ECN capable.
If the SYN flag is unset (0), a packet with the Congestion Experienced flag set (ECN=11) in its IP header was received during normal transmission. This serves as an indication of network congestion (or impending congestion) to the TCP sender.
URG: 1 bit
Indicates that the Urgent pointer field is significant.
ACK: 1 bit
Indicates that the Acknowledgment field is significant. All packets after the initial SYN packet sent by the client should have this flag set.
PSH: 1 bit
Push function. Asks to push the buffered data to the receiving application.
RST: 1 bit
Reset the connection
SYN: 1 bit
Synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags and fields change meaning based on this flag, and some are only valid when it is set, and others when it is clear.
FIN: 1 bit
Last packet from sender
Window: 16 bits
The size of the receive window, which specifies the number of window size units that the sender of this segment is currently willing to receive. (See § Flow control and § Window scaling.)
Checksum: 16 bits
The 16-bit checksum field is used for error-checking of the TCP header, the payload and an IP pseudo-header. The pseudo-header consists of the source IP address, the destination IP address, the protocol number for the TCP protocol (6) and the length of the TCP headers and payload (in bytes).
Urgent Pointer: 16 bits
If the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
Options (TCP Option): Variable 0–320 bits, in units of 32 bits; size(Options) == (DOffset - 5) * 32
The length of this field is determined by the Data Offset field. The TCP header padding is used to ensure that the TCP header ends, and data begins, on a 32-bit boundary. The padding is composed of zeros.
Options have up to three fields: Option-Kind (1 byte), Option-Length (1 byte), and Option-Data (variable). The Option-Kind field indicates the type of option and is the only field that is not optional. Depending on the Option-Kind value, the next two fields may be present. Option-Length indicates the total length of the option, and Option-Data contains the data associated with the option, if applicable. For example, an Option-Kind byte of 1 indicates a no-operation option used only for padding; it has no Option-Length or Option-Data fields following it. An Option-Kind byte of 0 marks the end of options and is also only one byte. An Option-Kind byte of 2 indicates the Maximum Segment Size (MSS) option and is followed by an Option-Length byte specifying the length of the MSS field. Option-Length is the total length of the option, including the Option-Kind and Option-Length fields themselves. So while the MSS value is typically expressed in two bytes, the Option-Length will be 4. As an example, an MSS option field with a value of 0x05B4 is coded as (0x02 0x04 0x05B4) in the TCP options section.
Some options may only be sent when SYN is set; they are indicated below as [SYN]. Option-Kind and standard lengths given as (Option-Kind, Option-Length).
The remaining Option-Kind values are historical, obsolete, experimental, not yet standardized, or unassigned. Option number assignments are maintained by the Internet Assigned Numbers Authority (IANA).
Data: Variable
The payload of the TCP packet
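The header layout and option encoding described in this section can be illustrated with a small parser (a sketch; the keys of the returned dictionary are arbitrary names, not part of any API):

```python
import struct

def parse_tcp_segment(segment: bytes) -> dict:
    """Unpack the 10 mandatory header fields (network byte order),
    then walk the Option-Kind/Option-Length/Option-Data encoding of
    the options area indicated by Data Offset."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack(
        "!HHIIHHHH", segment[:20])
    data_offset = offset_flags >> 12      # header length in 32-bit words
    header_len = data_offset * 4

    options, i = [], 20
    while i < header_len:
        kind = segment[i]
        if kind == 0:                     # End of Option List
            break
        if kind == 1:                     # No-Operation (padding)
            i += 1
            continue
        length = segment[i + 1]           # counts kind + length bytes too
        options.append((kind, segment[i + 2:i + length]))
        i += length

    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len,
        "flags": offset_flags & 0xFF,     # CWR ECE URG ACK PSH RST SYN FIN
        "window": window, "checksum": checksum, "urgent": urgent,
        "options": options,
        "payload": segment[header_len:],
    }
```

Feeding it a header with Data Offset 6 and the MSS option from the example above (0x02 0x04 0x05B4) yields an option value of 1460 bytes.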
== Protocol operation ==
TCP protocol operations may be divided into three phases. Connection establishment is a multi-step handshake process that establishes a connection before entering the data transfer phase. After data transfer is completed, the connection termination closes the connection and releases all allocated resources.
A TCP connection is managed by an operating system through a resource that represents the local end-point for communications, the Internet socket. During the lifetime of a TCP connection, the local end-point undergoes a series of state changes:
=== Connection establishment ===
Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may establish a connection by initiating an active open using the three-way (or 3-step) handshake:
SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B.
ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgment value i.e. A+1, and the acknowledgment number is set to one more than the received sequence number i.e. B+1.
Steps 1 and 2 establish and acknowledge the sequence number for one direction (client to server). Steps 2 and 3 establish and acknowledge the sequence number for the other direction (server to client). Following the completion of these steps, both the client and server have received acknowledgments and a full-duplex communication is established.
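The sequence-number arithmetic of the three steps above can be modelled as follows (only the arithmetic is shown; the segment dictionaries and their field names are illustrative):

```python
import secrets

def three_way_handshake():
    """Model the sequence/acknowledgment arithmetic of the handshake:
    the SYN carries ISN A, the SYN-ACK carries ISN B and acknowledges
    A+1, and the final ACK acknowledges B+1. ISNs are chosen
    unpredictably, as real stacks must do to resist sequence
    prediction attacks."""
    a = secrets.randbelow(2**32)    # client's initial sequence number
    b = secrets.randbelow(2**32)    # server's initial sequence number
    syn     = {"flags": "SYN",     "seq": a}
    syn_ack = {"flags": "SYN-ACK", "seq": b, "ack": (a + 1) % 2**32}
    ack     = {"flags": "ACK",     "seq": (a + 1) % 2**32,
               "ack": (b + 1) % 2**32}
    return syn, syn_ack, ack
```

Note that the SYN consumes one sequence number even though it carries no data, which is why both acknowledgment numbers are the received ISN plus one.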
=== Connection termination ===
The connection termination phase uses a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint. After the side that sent the first FIN has responded with the final ACK, it waits for a timeout before finally closing the connection, during which time the local port is unavailable for new connections; this state lets the TCP client resend the final acknowledgment to the server in case the ACK is lost in transit. The time duration is implementation-dependent, but some common values are 30 seconds, 1 minute, and 2 minutes. After the timeout, the client enters the CLOSED state and the local port becomes available for new connections.
It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (combining two steps into one) and host A replies with an ACK.
Some operating systems, such as Linux, implement a half-duplex close sequence. If the host actively closes a connection while still having unread incoming data available, it sends a RST (losing any received data) instead of a FIN. This assures a TCP application that there was data loss.
A connection can be in a half-open state, in which case one side has terminated the connection, but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the other side terminates as well.
=== Resource usage ===
Most implementations allocate an entry in a table that maps a session to a running operating system process. Because TCP packets do not include a session identifier, both endpoints identify the session using the client's address and port. Whenever a packet is received, the TCP implementation must perform a lookup on this table to find the destination process. Each entry in the table is known as a Transmission Control Block or TCB. It contains information about the endpoints (IP and port), status of the connection, running data about the packets that are being exchanged and buffers for sending and receiving data.
The number of sessions in the server side is limited only by memory and can grow as new connections arrive, but the client must allocate an ephemeral port before sending the first SYN to the server. This port remains allocated during the whole conversation and effectively limits the number of outgoing connections from each of the client's IP addresses. If an application fails to properly close unrequired connections, a client can run out of resources and become unable to establish new TCP connections, even from other applications.
Both endpoints must also allocate space for unacknowledged packets and received (but unread) data.
=== Data transfer ===
The Transmission Control Protocol differs in several key features compared to the User Datagram Protocol:
Ordered data transfer: the destination host rearranges segments according to a sequence number
Retransmission of lost packets: any cumulative stream not acknowledged is retransmitted
Error-free data transfer: corrupted packets are treated as lost and are retransmitted
Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints the sender on how much data can be received. When the receiving host's buffer fills, the next acknowledgment suspends the transfer and allows the data in the buffer to be processed.
Congestion control: lost packets (presumed due to congestion) trigger a reduction in data delivery rate
==== Reliable transmission ====
TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any out-of-order delivery that may occur. The sequence number of the first byte is chosen by the transmitter for the first packet, which is flagged SYN. This number can be arbitrary, and should, in fact, be unpredictable to defend against TCP sequence prediction attacks.
Acknowledgments (ACKs) are sent with a sequence number by the receiver of data to tell the sender that data has been received to the specified byte. ACKs do not imply that the data has been delivered to the application, they merely signify that it is now the receiver's responsibility to deliver the data.
Reliability is achieved by the sender detecting lost data and retransmitting it. TCP uses two primary techniques to identify loss: retransmission timeout (RTO) and duplicate cumulative acknowledgments (DupAcks).
When a TCP segment is retransmitted, it retains the same sequence number as the original delivery attempt. This conflation of delivery and logical data ordering means that, when an acknowledgment arrives after a retransmission, the sender cannot tell whether the original transmission or the retransmission is being acknowledged; this is known as retransmission ambiguity, and handling it adds complexity to TCP.
===== Duplicate-ACK-based retransmission =====
If a single segment (say segment number 100) in a stream is lost, then the receiver cannot acknowledge packets above that segment number (100) because it uses cumulative ACKs. Hence the receiver acknowledges packet 99 again on the receipt of another data packet. This duplicate acknowledgement is used as a signal for packet loss. That is, if the sender receives three duplicate acknowledgments, it retransmits the last unacknowledged packet. A threshold of three is used because the network may reorder segments causing duplicate acknowledgements. This threshold has been demonstrated to avoid spurious retransmissions due to reordering. Some TCP implementations use selective acknowledgements (SACKs) to provide explicit feedback about the segments that have been received. This greatly improves TCP's ability to retransmit the right segments.
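The three-duplicate-ACK rule can be sketched as a simple counter (a toy model; real stacks track this state per connection, alongside any SACK information):

```python
def fast_retransmit_trigger(acks, threshold=3):
    """Return the ACK values that would trigger a fast retransmit:
    the sender retransmits once it has seen `threshold` *duplicate*
    acknowledgments, i.e. the fourth identical ACK overall when the
    threshold is the standard value of three."""
    triggers, last, dup = [], None, 0
    for ack in acks:
        if ack == last:
            dup += 1
            if dup == threshold:      # third duplicate: retransmit
                triggers.append(ack)
        else:
            last, dup = ack, 0        # new cumulative ACK resets the count
    return triggers
```

Fewer than three duplicates (as mild reordering produces) triggers nothing; the third duplicate of the same ACK value triggers exactly one retransmission.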
Retransmission ambiguity can cause spurious fast retransmissions and congestion avoidance if there is reordering beyond the duplicate acknowledgment threshold. In the last two decades, more packet reordering has been observed over the Internet, which has led TCP implementations, such as the one in the Linux kernel, to adopt heuristic methods to scale the duplicate acknowledgment threshold. Recently, there have been efforts to phase out duplicate-ACK-based fast retransmissions completely and replace them with timer-based ones (not to be confused with the classic RTO discussed below). The time-based loss detection algorithm called Recent Acknowledgment (RACK) has been adopted as the default algorithm in Linux and Windows.
===== Timeout-based retransmission =====
When a sender transmits a segment, it initializes a timer with a conservative estimate of the arrival time of the acknowledgment. The segment is retransmitted if the timer expires, with a new timeout threshold of twice the previous value, resulting in exponential backoff behavior. Typically, the initial timer value is smoothed RTT + max(G, 4 × RTT variation), where G is the clock granularity. This guards against excessive transmission traffic due to faulty or malicious actors, such as man-in-the-middle denial of service attackers.
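The initial-timeout formula and the exponential backoff described above can be written out directly (the 60-second cap follows the maximum recommended by RFC 6298; times are in seconds):

```python
def initial_rto(srtt, rttvar, g=0.001, k=4):
    """RTO = SRTT + max(G, K * RTTVAR), with K = 4 as in RFC 6298,
    where G is the clock granularity in seconds."""
    return srtt + max(g, k * rttvar)

def backed_off_rto(rto, expiries, cap=60.0):
    """Exponential backoff: the timeout doubles on every expiry,
    capped at 60 s (the RFC 6298 recommended maximum)."""
    return min(rto * (2 ** expiries), cap)
```

With a smoothed RTT of 100 ms and an RTT variation of 50 ms, the initial RTO is 300 ms; after three consecutive expiries it has grown to 2.4 s.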
Accurate RTT estimates are important for loss recovery, as it allows a sender to assume an unacknowledged packet to be lost after sufficient time elapses (i.e., determining the RTO time). Retransmission ambiguity can lead a sender's estimate of RTT to be imprecise. In an environment with variable RTTs, spurious timeouts can occur: if the RTT is under-estimated, then the RTO fires and triggers a needless retransmit and slow-start. After a spurious retransmission, when the acknowledgments for the original transmissions arrive, the sender may believe them to be acknowledging the retransmission and conclude, incorrectly, that segments sent between the original transmission and retransmission have been lost, causing further needless retransmissions to the extent that the link truly becomes congested; selective acknowledgement can reduce this effect. RFC 6298 specifies that implementations must not use retransmitted segments when estimating RTT. Karn's algorithm ensures that a good RTT estimate will be produced—eventually—by waiting until there is an unambiguous acknowledgment before adjusting the RTO. After spurious retransmissions, however, it may take significant time before such an unambiguous acknowledgment arrives, degrading performance in the interim. TCP timestamps also resolve the retransmission ambiguity problem in setting the RTO, though they do not necessarily improve the RTT estimate.
==== Error detection ====
Sequence numbers allow receivers to discard duplicate packets and properly sequence out-of-order packets. Acknowledgments allow senders to determine when to retransmit lost packets.
To assure correctness a checksum field is included; see § Checksum computation for details. The TCP checksum is a weak check by modern standards and is normally paired with a CRC integrity check at layer 2, below both TCP and IP, such as the one used in PPP or the Ethernet frame. However, the introduction of errors in packets between CRC-protected hops is common, and the 16-bit TCP checksum catches most of them.
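The TCP checksum is the standard Internet checksum (the 16-bit one's-complement sum of RFC 1071) computed over a pseudo-header plus the segment. A minimal IPv4 sketch, assuming the caller has zeroed the segment's own checksum field:

```python
import struct
import socket

def internet_checksum(data: bytes) -> int:
    """One's complement of the one's-complement sum of all 16-bit
    words in the data (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data with a zero byte
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header (source address,
    destination address, zero byte, protocol 6, TCP length) followed
    by the TCP header and payload."""
    pseudo = struct.pack(
        "!4s4sBBH",
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
        0, 6, len(segment),
    )
    return internet_checksum(pseudo + segment)
```

A receiver verifies a segment by running the same computation with the transmitted checksum left in place; a correct segment sums to zero.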
==== Flow control ====
TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, if a PC sends data to a smartphone that is slowly processing received data, the smartphone must be able to regulate the data flow so as not to be overwhelmed.
TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additionally received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgment and receive window update from the receiving host.
When a receiver advertises a window size of 0, the sender stops sending data and starts its persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if a subsequent window size update from the receiver is lost, and the sender cannot send more data until receiving a new window size update from the receiver. When the persist timer expires, the TCP sender attempts recovery by sending a small packet so that the receiver responds by sending another acknowledgment containing the new window size.
If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP header.
==== Congestion control ====
The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid congestive collapse, a gridlock situation where network performance is severely degraded. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. They also yield an approximately max-min fair allocation between flows.
Acknowledgments for data sent, or the lack of acknowledgments, are used by senders to infer network conditions between the TCP sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control or congestion avoidance.
Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
In addition, senders employ a retransmission timeout (RTO) that is based on the estimated round-trip time (RTT) between the sender and receiver, as well as the variance in this round-trip time. There are subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets; typically they use Karn's Algorithm or TCP timestamps. These individual RTT samples are then averaged over time to create a smoothed round trip time (SRTT) using Jacobson's algorithm. This SRTT value is what is used as the round-trip time estimate.
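One update step of the RFC 6298 smoothing (Jacobson's algorithm) can be written as follows, where alpha = 1/8 and beta = 1/4 are the standard gains and times are in seconds:

```python
def update_rtt(srtt, rttvar, sample, alpha=1/8, beta=1/4):
    """One RFC 6298 update step: SRTT and RTTVAR are exponentially
    weighted moving averages of the RTT samples and of their
    deviation from SRTT, respectively."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    return srtt, rttvar
```

A sample equal to the current SRTT leaves SRTT unchanged and shrinks the variation estimate; a larger sample pulls SRTT upward by one eighth of the difference.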
Enhancing TCP to reliably handle loss, minimize errors, manage congestion and go fast in very high-speed environments are ongoing areas of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.
=== Maximum segment size ===
The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which can lead to packet loss and excessive retransmissions. To accomplish this, typically the MSS is announced by each side using the MSS option when the TCP connection is established. The option value is derived from the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver are directly attached. TCP senders can use path MTU discovery to infer the minimum MTU along the network path between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the network.
MSS announcement may also be called MSS negotiation but, strictly speaking, the MSS is not negotiated. Two completely independent values of MSS are permitted for the two directions of data flow in a TCP connection, so there is no need to agree on a common MSS configuration for a bidirectional connection.
=== Selective acknowledgments ===
Relying purely on the cumulative acknowledgment scheme employed by the original TCP can lead to inefficiencies when packets are lost. For example, suppose bytes with sequence numbers 1,000 to 10,999 are sent in 10 different TCP segments of equal size, and the second segment (sequence numbers 2,000 to 2,999) is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver can only send a cumulative ACK value of 2,000 (the sequence number immediately following the last sequence number of the received data) and cannot say that it received bytes 3,000 to 10,999 successfully. Thus the sender may then have to resend all data starting with sequence number 2,000.
To alleviate this issue TCP employs the selective acknowledgment (SACK) option, defined in 1996 in RFC 2018, which allows the receiver to acknowledge discontinuous blocks of packets that were received correctly, in addition to the sequence number immediately following the last sequence number of the last contiguous byte received successively, as in the basic TCP acknowledgment. The acknowledgment can include a number of SACK blocks, where each SACK block is conveyed by the Left Edge of Block (the first sequence number of the block) and the Right Edge of Block (the sequence number immediately following the last sequence number of the block), with a Block being a contiguous range that the receiver correctly received. In the example above, the receiver would send an ACK segment with a cumulative ACK value of 2,000 and a SACK option header with sequence numbers 3,000 and 11,000. The sender would accordingly retransmit only the second segment with sequence numbers 2,000 to 2,999.
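The sender-side bookkeeping implied by the example above amounts to finding the holes between the cumulative ACK and the SACKed blocks (a sketch; real scoreboards also track retransmission state per hole):

```python
def sack_retransmit_ranges(cum_ack, highest_sent, sack_blocks):
    """Given the cumulative ACK, the highest sequence number sent
    (exclusive), and SACK blocks as (left_edge, right_edge) pairs,
    return the byte ranges the sender still needs to retransmit."""
    holes, pos = [], cum_ack
    for left, right in sorted(sack_blocks):
        if left > pos:
            holes.append((pos, left))   # gap before this SACKed block
        pos = max(pos, right)
    if pos < highest_sent:
        holes.append((pos, highest_sent))
    return holes
```

For the example above (cumulative ACK 2,000, SACK block 3,000 to 11,000), the only hole, and hence the only retransmission needed, is bytes 2,000 to 2,999.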
A TCP sender may interpret an out-of-order segment delivery as a lost segment. If it does so, the TCP sender will retransmit the segment previous to the out-of-order packet and slow its data delivery rate for that connection. The duplicate-SACK option, an extension to the SACK option that was defined in May 2000 in RFC 2883, solves this problem. Once the TCP receiver detects a second duplicate packet, it sends a D-ACK to indicate that no segments were lost, allowing the TCP sender to reinstate the higher transmission rate.
The SACK option is not mandatory and comes into operation only if both parties support it. This is negotiated when a connection is established. SACK uses a TCP header option (see § TCP segment structure for details). The use of SACK has become widespread—all popular TCP stacks support it. Selective acknowledgment is also used in Stream Control Transmission Protocol (SCTP).
Selective acknowledgements can be 'reneged', where the receiver unilaterally discards the selectively acknowledged data. RFC 2018 discouraged such behavior, but did not prohibit it to allow receivers the option of reneging if they, for example, ran out of buffer space. The possibility of reneging leads to implementation complexity for both senders and receivers, and also imposes memory costs on the sender.
=== Window scaling ===
For more efficient use of high-bandwidth networks, a larger TCP window size may be used. A 16-bit TCP window size field controls the flow of data and its value is limited to 65,535 bytes. Since the size field cannot be expanded beyond this limit, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an option used to increase the maximum window size to 1 gigabyte. Scaling up to these larger window sizes is necessary for TCP tuning.
The window scale option is used only during the TCP 3-way handshake. The window scale value represents the number of bits to left-shift the 16-bit window size field when interpreting it. The window scale value can be set from 0 (no shift) to 14 for each direction independently. Both sides must send the option in their SYN segments to enable window scaling in either direction.
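The interpretation of a scaled window is a simple left shift of the 16-bit field:

```python
def effective_window(window_field, scale):
    """The advertised receive window is the 16-bit Window field
    left-shifted by the negotiated scale (0-14), giving a maximum of
    65535 << 14 bytes, just under 1 GiB."""
    if not 0 <= scale <= 14:
        raise ValueError("window scale must be 0-14")
    return window_field << scale
```

With scale 0 the window is at most 65,535 bytes; with the maximum scale of 14 it reaches 1,073,725,440 bytes.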
Some routers and packet firewalls rewrite the window scaling factor during a transmission. This causes sending and receiving sides to assume different TCP window sizes. The result is non-stable traffic that may be very slow. The problem is visible on some sites behind a defective router.
=== TCP timestamps ===
TCP timestamps, defined in RFC 1323 in 1992, can help TCP determine in which order packets were sent. TCP timestamps are not normally aligned to the system clock and start at some random value. Many operating systems will increment the timestamp for every elapsed millisecond; however, the RFC only states that the ticks should be proportional.
There are two timestamp fields:
a 4-byte sender timestamp value (my timestamp)
a 4-byte echo reply timestamp value (the most recent timestamp received from you).
TCP timestamps are used in an algorithm known as Protection Against Wrapped Sequence numbers, or PAWS. PAWS is used when the receive window crosses the sequence number wraparound boundary. In the case where a packet was potentially retransmitted, it answers the question: "Is this sequence number in the first 4 GB or the second?" And the timestamp is used to break the tie.
Also, the Eifel detection algorithm uses TCP timestamps to determine if retransmissions are occurring because packets are lost or simply out of order.
TCP timestamps are enabled by default in Linux, and disabled by default in Windows Server 2008, 2012 and 2016.
Recent statistics show that the level of TCP timestamp adoption has stagnated at around 40%, owing to Windows Server disabling timestamps by default beginning with Windows Server 2008.
=== Out-of-band data ===
It is possible to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data as urgent. This marks the transmission as out-of-band data (OOB) and tells the receiving program to process it immediately. When finished, TCP informs the application and resumes the stream queue. An example is when TCP is used for a remote login session where the user can send a keyboard sequence that interrupts or aborts the remotely running program without waiting for the program to finish its current transfer.
The urgent pointer only alters the processing on the remote host and doesn't expedite any processing on the network itself. The capability is implemented differently or poorly on different systems or may not be supported. Where it is available, it is prudent to assume only single bytes of OOB data will be reliably handled. Since the feature is not frequently used, it is not well tested on some platforms and has been associated with vulnerabilities, WinNuke for instance.
=== Forcing data delivery ===
Normally, TCP waits up to 200 ms for a full packet of data to send (Nagle's algorithm tries to group small messages into a single packet). This wait creates small but potentially serious delays if repeated constantly during a file transfer. For example, a typical send block would be 4 KB and a typical MSS is 1460 bytes, so 2 packets go out on a 10 Mbit/s Ethernet taking ~1.2 ms each, followed by a third carrying the remaining 1176 bytes after a 197 ms pause because TCP is waiting for a full buffer. In the case of telnet, each user keystroke is echoed back by the server before the user can see it on the screen, so this delay would be very noticeable.
Setting the socket option TCP_NODELAY overrides the default 200 ms send delay. Application programs use this socket option to force output to be sent after writing a character or line of characters.
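In the Berkeley sockets API this is a per-socket option; with Python's socket module it looks as follows:

```python
import socket

# Disable Nagle's algorithm on a TCP socket so each write is sent
# immediately instead of being coalesced with later small writes.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

The option is typically set immediately after creating the socket, before any data is written, by latency-sensitive applications such as interactive terminals and game clients.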
RFC 793 defines the PSH push bit as "a message to the receiving TCP stack to send this data immediately up to the receiving application". There is no way to indicate or control it in user space using Berkeley sockets; it is controlled by the protocol stack only.
== Vulnerabilities ==
TCP may be attacked in a variety of ways. The results of a thorough security assessment of TCP, along with possible mitigations for the identified issues, were published in 2009, and the work was pursued within the IETF through 2012. Notable vulnerabilities include denial of service, connection hijacking, TCP veto, and TCP reset attack.
=== Denial of service ===
By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, followed by many ACK packets, attackers can cause the server to consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed solutions to this problem include SYN cookies and cryptographic puzzles, though SYN cookies come with their own set of vulnerabilities. Sockstress is a similar attack that might be mitigated with system resource management. An advanced DoS attack involving the exploitation of the TCP persist timer was analyzed in Phrack No. 66. PUSH and ACK floods are other variants.
=== Connection hijacking ===
An attacker who is able to eavesdrop on a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker learns the sequence number from the ongoing communication and forges a false segment that looks like the next segment in the stream. A simple hijack can result in one packet being erroneously accepted at one end. When the receiving host acknowledges the false segment, synchronization is lost. Hijacking may be combined with ARP spoofing or other routing attacks that allow an attacker to take permanent control of the TCP connection.
Impersonating a different IP address was not difficult prior to RFC 1948 when the initial sequence number was easily guessable. The earlier implementations allowed an attacker to blindly send a sequence of packets that the receiver would believe came from a different IP address, without the need to intercept communication through ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is down, or bring it to that condition using denial-of-service attacks. This is why the initial sequence number is now chosen at random.
=== TCP veto ===
An attacker who can eavesdrop and predict the size of the next packet to be sent can cause the receiver to accept a malicious payload without disrupting the existing connection. The attacker injects a malicious packet with the sequence number and a payload size of the next expected packet. When the legitimate packet is ultimately received, it is found to have the same sequence number and length as a packet already received and is silently dropped as a normal duplicate packet—the legitimate packet is vetoed by the malicious packet. Unlike in connection hijacking, the connection is never desynchronized and communication continues as normal after the malicious payload is accepted. TCP veto gives the attacker less control over the communication but makes the attack particularly resistant to detection. The only evidence to the receiver that something is amiss is a single duplicate packet, a normal occurrence in an IP network. The sender of the vetoed packet never sees any evidence of an attack.
== TCP ports ==
A TCP connection is identified by a four-tuple of the source address, source port, destination address, and destination port. Port numbers are used to identify different services, and to allow multiple connections between hosts. TCP uses 16-bit port numbers, providing 65,536 possible values for each of the source and destination ports. The dependency of connection identity on addresses means that TCP connections are bound to a single network path; TCP cannot use other routes that multihomed hosts have available, and connections break if an endpoint's address changes.
Port numbers fall into three basic categories: well-known, registered, and dynamic or private. The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include: FTP (20 and 21), SSH (22), TELNET (23), SMTP (25), HTTP over SSL/TLS (443), and HTTP (80). Registered ports are typically used by end-user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic or private ports can also be used by end-user applications; however, these ports typically do not carry any meaning outside a particular TCP connection.
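The IANA ranges behind these categories (0–1023 well-known, 1024–49151 registered, 49152–65535 dynamic/private, per RFC 6335) can be expressed as a small classifier; the function name here is illustrative, not a standard API:

```python
def port_category(port: int) -> str:
    """Classify a TCP port number into its IANA range (RFC 6335)."""
    if not 0 <= port <= 65535:
        raise ValueError("TCP port numbers are 16-bit values")
    if port <= 1023:
        return "well-known"       # assigned by IANA, system services
    if port <= 49151:
        return "registered"       # registered services / ephemeral sources
    return "dynamic/private"      # meaningful only within one connection

print(port_category(22), port_category(8080), port_category(50000))
```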
Network Address Translation (NAT) typically uses dynamic port numbers on the public-facing side to disambiguate the flow of traffic that is passing between a public network and a private subnetwork, thereby allowing many IP addresses (and their ports) on the subnet to be serviced by a single public-facing address.
== Development ==
TCP is a complex protocol. However, while significant enhancements have been made and proposed over the years, its most basic operation has not changed significantly since its first specification RFC 675 in 1974, and the v4 specification RFC 793, published in September 1981. RFC 1122, published in October 1989, clarified a number of TCP protocol implementation requirements. A list of the 8 required specifications and over 20 strongly encouraged enhancements is available in RFC 7414. Among this list is RFC 2581, TCP Congestion Control, one of the most important TCP-related RFCs in recent years, which describes updated algorithms that avoid undue congestion. In 2001, RFC 3168 was written to describe Explicit Congestion Notification (ECN), a congestion avoidance signaling mechanism.
The original TCP congestion avoidance algorithm was known as TCP Tahoe, but many alternative algorithms have since been proposed (including TCP Reno, TCP Vegas, FAST TCP, TCP New Reno, and TCP Hybla).
Multipath TCP (MPTCP) is an ongoing effort within the IETF that aims at allowing a TCP connection to use multiple paths to maximize resource usage and increase redundancy. The redundancy offered by Multipath TCP in the context of wireless networks enables the simultaneous use of different networks, which brings higher throughput and better handover capabilities. Multipath TCP also brings performance benefits in datacenter environments. The reference implementation of Multipath TCP was developed in the Linux kernel. Multipath TCP is used to support the Siri voice recognition application on iPhones, iPads and Macs.
tcpcrypt is an extension proposed in July 2010 to provide transport-level encryption directly in TCP itself. It is designed to work transparently and not require any configuration. Unlike TLS (SSL), tcpcrypt itself does not provide authentication, but provides simple primitives to the application to do that. The tcpcrypt RFC was published by the IETF in May 2019.
TCP Fast Open is an extension to speed up the opening of successive TCP connections between two endpoints. It works by skipping the three-way handshake using a cryptographic cookie. It is similar to an earlier proposal called T/TCP, which was not widely adopted due to security issues. TCP Fast Open was published as RFC 7413 in 2014.
Proposed in May 2013, Proportional Rate Reduction (PRR) is a TCP extension developed by Google engineers. PRR ensures that the TCP window size after recovery is as close to the slow start threshold as possible. The algorithm is designed to improve the speed of recovery and is the default congestion control algorithm in Linux 3.2+ kernels.
=== Deprecated proposals ===
TCP Cookie Transactions (TCPCT) is an extension proposed in December 2009 to secure servers against denial-of-service attacks. Unlike SYN cookies, TCPCT does not conflict with other TCP extensions such as window scaling. TCPCT was designed to meet the needs of DNSSEC, where servers have to handle large numbers of short-lived TCP connections. In 2016, TCPCT was deprecated in favor of TCP Fast Open. The status of the original RFC was changed to historic.
== Hardware implementations ==
One way to overcome the processing power requirements of TCP is to build hardware implementations of it, widely known as TCP offload engines (TOE). The main problem of TOEs is that they are hard to integrate into computing systems, requiring extensive changes in the operating system of the computer or device.
== Wire image and ossification ==
The wire data of TCP provides significant information-gathering and modification opportunities to on-path observers, as the protocol metadata is transmitted in cleartext. While this transparency is useful to network operators and researchers, information gathered from protocol metadata may reduce the end-user's privacy. This visibility and malleability of metadata has led to TCP being difficult to extend—a case of protocol ossification—as any intermediate node (a 'middlebox') can make decisions based on that metadata or even modify it, breaking the end-to-end principle. One measurement found that a third of paths across the Internet encounter at least one intermediary that modifies TCP metadata, and 6.5% of paths encounter harmful ossifying effects from intermediaries. Avoiding extensibility hazards from intermediaries placed significant constraints on the design of MPTCP, and difficulties caused by intermediaries have hindered the deployment of TCP Fast Open in web browsers. Another source of ossification is the difficulty of modification of TCP functions at the endpoints, typically in the operating system kernel or in hardware with a TCP offload engine.
== Performance ==
As TCP provides applications with the abstraction of a reliable byte stream, it can suffer from head-of-line blocking: if packets are reordered or lost and need to be retransmitted (and thus are reordered), data from sequentially later parts of the stream may be received before sequentially earlier parts of the stream; however, the later data cannot typically be used until the earlier data has been received, incurring network latency. If multiple independent higher-level messages are encapsulated and multiplexed onto a single TCP connection, then head-of-line blocking can cause processing of a fully-received message that was sent later to wait for delivery of a message that was sent earlier. Web browsers attempt to mitigate head-of-line blocking by opening multiple parallel connections. This incurs the cost of connection establishment repeatedly, as well as multiplying the resources needed to track those connections at the endpoints. Parallel connections also have congestion control operating independently of each other, rather than being able to pool information together and respond more promptly to observed network conditions; TCP's aggressive initial sending patterns can cause congestion if multiple parallel connections are opened; and the per-connection fairness model leads to a monopolization of resources by applications that take this approach.
Connection establishment is a major contributor to latency as experienced by web users. TCP's three-way handshake introduces one RTT of latency during connection establishment before data can be sent. For short flows, these delays are very significant. Transport Layer Security (TLS) requires a handshake of its own for key exchange at connection establishment. Because of the layered design, the TCP handshake and the TLS handshake proceed serially; the TLS handshake cannot begin until the TCP handshake has concluded. Two RTTs are required for connection establishment with TLS 1.2 over TCP. TLS 1.3 allows for zero RTT connection resumption in some circumstances, but, when layered over TCP, one RTT is still required for the TCP handshake, and this cannot assist the initial connection; zero RTT handshakes also present cryptographic challenges, as efficient, replay-safe and forward secure non-interactive key exchange is an open research topic. TCP Fast Open allows the transmission of data in the initial (i.e., SYN and SYN-ACK) packets, removing one RTT of latency during connection establishment. However, TCP Fast Open has been difficult to deploy due to protocol ossification; as of 2020, no Web browsers used it by default.
TCP throughput is affected by packet reordering. Reordered packets can cause duplicate acknowledgments to be sent, which, if they cross a threshold, will then trigger a spurious retransmission and congestion control. Transmission behavior can also become bursty, as large ranges are acknowledged all at once when a reordered packet at the range's start is received (in a manner similar to how head-of-line blocking affects applications). Blanton & Allman (2002) found that throughput was inversely related to the amount of reordering, up to a threshold where all reordering triggers spurious retransmission. Mitigating reordering depends on a sender's ability to determine that it has sent a spurious retransmission, and hence on resolving retransmission ambiguity. Reducing reordering-induced spurious retransmissions may slow recovery from genuine loss.
Selective acknowledgment can provide a significant benefit to throughput; Bruyeron, Hemon & Zhang (1998) measured gains of up to 45%. An important factor in the improvement is that selective acknowledgment can more often avoid going into slow start after a loss and can hence better use available bandwidth. However, TCP can only selectively acknowledge a maximum of three blocks of sequence numbers. This can limit the retransmission rate and hence loss recovery or cause needless retransmissions, especially in high-loss environments.
TCP was originally designed for wired networks, where packet loss is considered to be the result of network congestion and the congestion window size is reduced dramatically as a precaution. However, wireless links are known to experience sporadic and usually temporary losses due to fading, shadowing, handoff, interference, and other radio effects that are not strictly congestion. After the (erroneous) back-off of the congestion window size, due to wireless packet loss, there may be a congestion avoidance phase with a conservative decrease in window size. This causes the radio link to be underused. Extensive research on combating these harmful effects has been conducted. Suggested solutions can be categorized as end-to-end solutions, which require modifications at the client or server, link layer solutions, such as Radio Link Protocol in cellular networks, or proxy-based solutions which require some changes in the network without modifying end nodes. A number of alternative congestion control algorithms, such as Vegas, Westwood, Veno, and Santa Cruz, have been proposed to help solve the wireless problem.
== Acceleration ==
The idea of a TCP accelerator is to terminate TCP connections inside the network processor and then relay the data to a second connection toward the end system. The data packets that originate from the sender are buffered at the accelerator node, which is responsible for performing local retransmissions in the event of packet loss. Thus, in case of losses, the feedback loop between the sender and the receiver is shortened to the one between the acceleration node and the receiver which guarantees a faster delivery of data to the receiver.
Since TCP is a rate-adaptive protocol, the rate at which the TCP sender injects packets into the network is directly proportional to the prevailing load condition within the network as well as the processing capacity of the receiver. The prevalent conditions within the network are judged by the sender on the basis of the acknowledgments received by it. The acceleration node splits the feedback loop between the sender and the receiver and thus guarantees a shorter round trip time (RTT) per packet. A shorter RTT is beneficial as it ensures a quicker response time to any changes in the network and a faster adaptation by the sender to combat these changes.
Disadvantages of the method include the fact that the TCP session has to be directed through the accelerator; this means that if routing changes so that the accelerator is no longer in the path, the connection will be broken. It also destroys the end-to-end property of the TCP ACK mechanism; when the ACK is received by the sender, the packet has been stored by the accelerator, not delivered to the receiver.
== Debugging ==
A packet sniffer, which taps TCP traffic on a network link, can be useful in debugging networks, network stacks, and applications that use TCP by showing an engineer what packets are passing through a link. Some networking stacks support the SO_DEBUG socket option, which can be enabled on the socket using setsockopt. That option dumps all the packets, TCP states, and events on that socket, which is helpful in debugging. Netstat is another utility that can be used for debugging.
== Alternatives ==
For many applications TCP is not appropriate. The application cannot normally access the packets coming after a lost packet until the retransmitted copy of the lost packet is received. This causes problems for real-time applications such as streaming media, real-time multiplayer games and voice over IP (VoIP) where it is generally more useful to get most of the data in a timely fashion than it is to get all of the data in order.
For historical and performance reasons, most storage area networks (SANs) use Fibre Channel Protocol (FCP) over Fibre Channel connections. For embedded systems, network booting, and servers that serve simple requests from huge numbers of clients (e.g. DNS servers) the complexity of TCP can be a problem. Tricks such as transmitting data between two hosts that are both behind NAT (using STUN or similar systems) are far simpler without a relatively complex protocol like TCP in the way.
Generally, where TCP is unsuitable, the User Datagram Protocol (UDP) is used. This provides the same application multiplexing and checksums that TCP does, but does not handle streams or retransmission, giving the application developer the ability to code them in a way suitable for the situation, or to replace them with other methods such as forward error correction or error concealment.
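The contrast can be seen in a short sketch on the loopback interface (addresses and payload here are illustrative): each UDP send is one self-contained datagram with port multiplexing and a checksum, but no handshake, ordering, acknowledgments, or retransmission.

```python
import socket

# Minimal UDP exchange: no connection setup is needed, and each sendto()
# is one independent datagram. Any reliability or ordering the
# application needs must be built on top (or replaced by techniques
# such as forward error correction).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick an ephemeral port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)            # one datagram, fire-and-forget

data, peer = receiver.recvfrom(2048)
sender.close()
receiver.close()
print(data)
```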
Stream Control Transmission Protocol (SCTP) is another protocol that provides reliable stream-oriented services similar to TCP. It is newer and considerably more complex than TCP, and has not yet seen widespread deployment. However, it is especially designed to be used in situations where reliability and near-real-time considerations are important.
Venturi Transport Protocol (VTP) is a patented proprietary protocol that is designed to replace TCP transparently to overcome perceived inefficiencies related to wireless data transport.
The TCP congestion avoidance algorithm works very well for ad-hoc environments where the data sender is not known in advance. If the environment is predictable, a timing-based protocol such as Asynchronous Transfer Mode (ATM) can avoid TCP's retransmission overhead.
UDP-based Data Transfer Protocol (UDT) has better efficiency and fairness than TCP in networks that have high bandwidth-delay product.
Multipurpose Transaction Protocol (MTP/IP) is patented proprietary software that is designed to adaptively achieve high throughput and transaction performance in a wide variety of network conditions, particularly those where TCP is perceived to be inefficient.
== Checksum computation ==
=== TCP checksum for IPv4 ===
When TCP runs over IPv4, the method used to compute the checksum is defined as follows:
The checksum field is the 16-bit ones' complement of the ones' complement sum of all 16-bit words in the header and text. The checksum computation needs to ensure the 16-bit alignment of the data being summed. If a segment contains an odd number of header and text octets, alignment can be achieved by padding the last octet with zeros on its right to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros.
In other words, after appropriate padding, all 16-bit words are added using ones' complement arithmetic. The sum is then bitwise complemented and inserted as the checksum field. A pseudo-header that mimics the IPv4 packet header used in the checksum computation is as follows:
The checksum is computed over the following fields:
Source address: 32 bits
The source address in the IPv4 header
Destination address: 32 bits
The destination address in the IPv4 header
Zeroes: 8 bits
All zeroes
Protocol: 8 bits
The protocol value for TCP: 6
TCP length: 16 bits
The length of the TCP header and data (measured in octets). For example, suppose an IPv4 packet has a Total Length of 200 bytes and an IHL value of 5, which indicates a header length of 5 × 32 bits = 160 bits = 20 bytes. The TCP length is then (Total Length) − (IPv4 Header Length), i.e. 200 − 20 = 180 bytes.
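The procedure above can be sketched in a few lines of Python. The sample addresses and TCP header below are made up for illustration; the functions implement the pseudo-header and ones' complement arithmetic as described:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit big-endian words with end-around carry."""
    if len(data) % 2:
        data += b"\x00"        # pad the final odd octet on the right
                               # (the pad is not transmitted)
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return total

def tcp_checksum_ipv4(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum the IPv4 pseudo-header plus the TCP segment (header and
    data), with the segment's checksum field already zeroed."""
    # Pseudo-header: source addr, dest addr, zeroes (8), protocol 6, TCP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF

src, dst = bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2])
# Illustrative 20-byte TCP header: ports 4096 -> 80, seq 1, data offset 5,
# SYN flag, window 8192, checksum field (offset 16) initially zero.
header = bytearray(struct.pack("!HHIIBBHHH", 4096, 80, 1, 0, 5 << 4, 0x02, 8192, 0, 0))
csum = tcp_checksum_ipv4(src, dst, bytes(header))
header[16:18] = struct.pack("!H", csum)
```

A receiver verifies a segment by summing the pseudo-header and the segment including the transmitted checksum; a valid segment yields the all-ones value 0xFFFF.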
=== TCP checksum for IPv6 ===
When TCP runs over IPv6, the method used to compute the checksum is changed:
Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses.
A pseudo-header that mimics the IPv6 header for computation of the checksum is shown below.
The checksum is computed over the following fields:
Source address: 128 bits
The address in the IPv6 header.
Destination address: 128 bits
The final destination; if the IPv6 packet doesn't contain a Routing header, TCP uses the destination address in the IPv6 header, otherwise, at the originating node, it uses the address in the last element of the Routing header, and, at the receiving node, it uses the destination address in the IPv6 header.
TCP length: 32 bits
The length of the TCP header and data (measured in octets).
Zeroes: 24 bits
All zeroes.
Next header: 8 bits
The protocol value for TCP: 6.
=== Checksum offload ===
Many TCP/IP software stack implementations provide options to use hardware assistance to automatically compute the checksum in the network adapter prior to transmission onto the network or upon reception from the network for validation. This may reduce CPU load associated with calculating the checksum, potentially increasing overall network performance.
This feature may cause packet analyzers that are unaware or uncertain about the use of checksum offload to report invalid checksums in outbound packets that have not yet reached the network adapter. This will only occur for packets that are intercepted before being transmitted by the network adapter; all packets transmitted by the network adaptor on the wire will have valid checksums. This issue can also occur when monitoring packets being transmitted between virtual machines on the same host, where a virtual device driver may omit the checksum calculation (as an optimization), knowing that the checksum will be calculated later by the VM host kernel or its physical hardware.
== See also ==
== Notes ==
== References ==
== Bibliography ==
=== Requests for Comments ===
Cerf, Vint; Dalal, Yogen; Sunshine, Carl (December 1974). Specification of Internet Transmission Control Program, December 1974 Version. doi:10.17487/RFC0675. RFC 675.
Postel, Jon (September 1981). Internet Protocol. doi:10.17487/RFC0791. RFC 791.
Postel, Jon (September 1981). Transmission Control Protocol. doi:10.17487/RFC0793. RFC 793.
Braden, Robert, ed. (October 1989). Requirements for Internet Hosts – Communication Layers. doi:10.17487/RFC1122. RFC 1122.
Jacobson, Van; Braden, Bob; Borman, Dave (May 1992). TCP Extensions for High Performance. doi:10.17487/RFC1323. RFC 1323.
Bellovin, Steven M. (May 1996). Defending Against Sequence Number Attacks. doi:10.17487/RFC1948. RFC 1948.
Mathis, Matt; Mahdavi, Jamshid; Floyd, Sally; Romanow, Allyn (October 1996). TCP Selective Acknowledgment Options. doi:10.17487/RFC2018. RFC 2018.
Allman, Mark; Paxson, Vern; Stevens, W. Richard (April 1999). TCP Congestion Control. doi:10.17487/RFC2581. RFC 2581.
Floyd, Sally; Mahdavi, Jamshid; Mathis, Matt; Podolsky, Matthew (July 2000). An Extension to the Selective Acknowledgement (SACK) Option for TCP. doi:10.17487/RFC2883. RFC 2883.
Ramakrishnan, K. K.; Floyd, Sally; Black, David (September 2001). The Addition of Explicit Congestion Notification (ECN) to IP. doi:10.17487/RFC3168. RFC 3168.
Ludwig, Reiner; Meyer, Michael (April 2003). The Eifel Detection Algorithm for TCP. doi:10.17487/RFC3522. RFC 3522.
Spring, Neil; Weatherall, David; Ely, David (June 2003). Robust Explicit Congestion Notification (ECN) Signaling with Nonces. doi:10.17487/RFC3540. RFC 3540.
Allman, Mark; Paxson, Vern; Blanton, Ethan (September 2009). TCP Congestion Control. doi:10.17487/RFC5681. RFC 5681.
Simpson, William Allen (January 2011). TCP Cookie Transactions (TCPCT). doi:10.17487/RFC6013. RFC 6013.
Ford, Alan; Raiciu, Costin; Handley, Mark; Barre, Sebastien; Iyengar, Janardhan (March 2011). Architectural Guidelines for Multipath TCP Development. doi:10.17487/RFC6182. RFC 6182.
Paxson, Vern; Allman, Mark; Chu, H.K. Jerry; Sargent, Matt (June 2011). Computing TCP's Retransmission Timer. doi:10.17487/RFC6298. RFC 6298.
Ford, Alan; Raiciu, Costin; Handley, Mark; Bonaventure, Olivier (January 2013). TCP Extensions for Multipath Operation with Multiple Addresses. doi:10.17487/RFC6824. RFC 6824.
Mathis, Matt; Dukkipati, Nandita; Cheng, Yuchung (May 2013). Proportional Rate Reduction for TCP. doi:10.17487/RFC6937. RFC 6937.
Borman, David; Braden, Bob; Jacobson, Van (September 2014). Scheffenegger, Richard (ed.). TCP Extensions for High Performance. doi:10.17487/RFC7323. RFC 7323.
Duke, Martin; Braden, Robert; Eddy, Wesley M.; Blanton, Ethan; Zimmermann, Alexander (February 2015). A Roadmap for Transmission Control Protocol (TCP) Specification Documents. doi:10.17487/RFC7414. RFC 7414.
Cheng, Yuchung; Chu, Jerry; Radhakrishnan, Sivasankar; Jain, Arvind (December 2014). TCP Fast Open. doi:10.17487/RFC7413. RFC 7413.
Zimmermann, Alexander; Eddy, Wesley M.; Eggert, Lars (April 2016). Moving Outdated TCP Extensions and TCP-Related Documents to Historic or Informational Status. doi:10.17487/RFC7805. RFC 7805.
Fairhurst, Gorry; Trammell, Brian; Kuehlewind, Mirja, eds. (March 2017). Services Provided by IETF Transport Protocols and Congestion Control Mechanisms. doi:10.17487/RFC8095. RFC 8095.
Cheng, Yuchung; Cardwell, Neal; Dukkipati, Nandita; Jha, Priyaranjan, eds. (February 2021). The RACK-TLP Loss Detection Algorithm for TCP. doi:10.17487/RFC8985. RFC 8985.
Deering, Stephen E.; Hinden, Robert M. (July 2017). Internet Protocol, Version 6 (IPv6) Specification. doi:10.17487/RFC8200. RFC 8200.
Trammell, Brian; Kuehlewind, Mirja (April 2019). The Wire Image of a Network Protocol. doi:10.17487/RFC8546. RFC 8546.
Hardie, Ted, ed. (April 2019). Transport Protocol Path Signals. doi:10.17487/RFC8558. RFC 8558.
Iyengar, Jana; Swett, Ian, eds. (May 2021). QUIC Loss Detection and Congestion Control. doi:10.17487/RFC9002. RFC 9002.
Fairhurst, Gorry; Perkins, Colin (July 2021). Considerations around Transport Header Confidentiality, Network Operations, and the Evolution of Internet Transport Protocols. doi:10.17487/RFC9065. RFC 9065.
Thomson, Martin; Pauly, Tommy (December 2021). Long-Term Viability of Protocol Extension Mechanisms. doi:10.17487/RFC9170. RFC 9170.
Eddy, Wesley M., ed. (August 2022). Transmission Control Protocol (TCP). doi:10.17487/RFC9293. RFC 9293.
=== Other documents ===
Allman, Mark; Paxson, Vern (October 1999). "On estimating end-to-end network path properties". ACM SIGCOMM Computer Communication Review. 29 (4): 263–274. doi:10.1145/316194.316230. hdl:2060/20000004338.
Bhat, Divyashri; Rizk, Amr; Zink, Michael (June 2017). "Not so QUIC: A Performance Study of DASH over QUIC". NOSSDAV'17: Proceedings of the 27th Workshop on Network and Operating Systems Support for Digital Audio and Video. pp. 13–18. doi:10.1145/3083165.3083175. S2CID 32671949.
Blanton, Ethan; Allman, Mark (January 2002). "On making TCP more robust to packet reordering" (PDF). ACM SIGCOMM Computer Communication Review. 32: 20–30. doi:10.1145/510726.510728. S2CID 15305731.
Briscoe, Bob; Brunstrom, Anna; Petlund, Andreas; Hayes, David; Ros, David; Tsang, Ing-Jyh; Gjessing, Stein; Fairhurst, Gorry; Griwodz, Carsten; Welzl, Michael (2016). "Reducing Internet Latency: A Survey of Techniques and Their Merits". IEEE Communications Surveys & Tutorials. 18 (3): 2149–2196. doi:10.1109/COMST.2014.2375213. hdl:2164/8018. S2CID 206576469.
Bruyeron, Renaud; Hemon, Bruno; Zhang, Lixa (April 1998). "Experimentations with TCP selective acknowledgment". ACM SIGCOMM Computer Communication Review. 28 (2): 54–77. doi:10.1145/279345.279350. S2CID 15954837.
Chen, Shan; Jero, Samuel; Jagielski, Matthew; Boldyreva, Alexandra; Nita-Rotaru, Cristina (2021). "Secure Communication Channel Establishment: TLS 1.3 (Over TCP Fast Open) versus QUIC". Journal of Cryptology. 34 (3). doi:10.1007/s00145-021-09389-w. S2CID 235174220.
Corbet, Jonathan (8 December 2015). "Checksum offloads and protocol ossification". LWN.net.
Corbet, Jonathan (29 January 2018). "QUIC as a solution to protocol ossification". LWN.net.
Edeline, Korian; Donnet, Benoit (2019). A Bottom-Up Investigation of the Transport-Layer Ossification. 2019 Network Traffic Measurement and Analysis Conference (TMA). doi:10.23919/TMA.2019.8784690.
Ghedini, Alessandro (26 July 2018). "The Road to QUIC". The Cloudflare Blog. Cloudflare.
Gurtov, Andrei; Floyd, Sally (February 2004). Resolving Acknowledgment Ambiguity in non-SACK TCP (PDF). Next Generation Teletraffic and Wired/Wireless Advanced Networking (NEW2AN'04).
Gurtov, Andrei; Ludwig, Reiner (2003). Responding to Spurious Timeouts in TCP (PDF). IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies. doi:10.1109/INFCOM.2003.1209251.
Hesmans, Benjamin; Duchene, Fabien; Paasch, Christoph; Detal, Gregory; Bonaventure, Olivier (2013). Are TCP extensions middlebox-proof?. HotMiddlebox '13. CiteSeerX 10.1.1.679.6364. doi:10.1145/2535828.2535830.
IETF HTTP Working Group. "HTTP/2 Frequently Asked Questions".
Karn, Phil; Partridge, Craig (November 1991). "Improving round-trip time estimates in reliable transport protocols". ACM Transactions on Computer Systems. 9 (4): 364–373. doi:10.1145/118544.118549.
Ludwig, Reiner; Katz, Randy Howard (January 2000). "The Eifel algorithm: making TCP robust against spurious retransmissions". ACM SIGCOMM Computer Communication Review. doi:10.1145/505688.505692.
Marx, Robin (3 December 2020). "Head-of-Line Blocking in QUIC and HTTP/3: The Details".
Paasch, Christoph; Bonaventure, Olivier (1 April 2014). "Multipath TCP". Communications of the ACM. 57 (4): 51–57. doi:10.1145/2578901. hdl:2078.1/141195. S2CID 17581886.
Papastergiou, Giorgos; Fairhurst, Gorry; Ros, David; Brunstrom, Anna; Grinnemo, Karl-Johan; Hurtig, Per; Khademi, Naeem; Tüxen, Michael; Welzl, Michael; Damjanovic, Dragana; Mangiante, Simone (2017). "De-Ossifying the Internet Transport Layer: A Survey and Future Perspectives". IEEE Communications Surveys & Tutorials. 19: 619–639. doi:10.1109/COMST.2016.2626780. hdl:2164/8317. S2CID 1846371.
Rybczyńska, Marta (13 March 2020). "A QUIC look at HTTP/3". LWN.net.
Sy, Erik; Mueller, Tobias; Burkert, Christian; Federrath, Hannes; Fischer, Mathias (2020). "Enhanced Performance and Privacy for TLS over TCP Fast Open". Proceedings on Privacy Enhancing Technologies. 2020 (2): 271–287. arXiv:1905.03518. doi:10.2478/popets-2020-0027.
Zhang, Lixia (5 August 1986). "Why TCP timers don't work well". ACM SIGCOMM Computer Communication Review. 16 (3): 397–405. doi:10.1145/1013812.18216.
== Further reading ==
Stevens, W. Richard (1994-01-10). TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley Pub. Co. ISBN 978-0-201-63346-7.
Stevens, W. Richard; Wright, Gary R (1994). TCP/IP Illustrated, Volume 2: The Implementation. Addison-Wesley. ISBN 978-0-201-63354-2.
Stevens, W. Richard (1996). TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. Addison-Wesley. ISBN 978-0-201-63495-2.
== External links ==
Oral history interview with Robert E. Kahn
IANA Port Assignments
IANA TCP Parameters
John Kristoff's Overview of TCP (Fundamental concepts behind TCP and how it is used to transport data between two endpoints)
Checksum example | Wikipedia/Transmission_Control_Protocol |
Functional integration is a collection of results in mathematics and physics where the domain of an integral is no longer a region of space, but a space of functions. Functional integrals arise in probability, in the study of partial differential equations, and in the path integral approach to the quantum mechanics of particles and fields.
In an ordinary integral (in the sense of Lebesgue integration) there is a function to be integrated (the integrand) and a region of space over which to integrate the function (the domain of integration). The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region, the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function, the integrand returns a value to add up. Making this procedure rigorous poses challenges that continue to be topics of current research.
Functional integration was developed by Percy John Daniell in an article of 1919 and Norbert Wiener in a series of studies culminating in his articles of 1921 on Brownian motion. They developed a rigorous method (now known as the Wiener measure) for assigning a probability to a particle's random path. Richard Feynman developed another functional integral, the path integral, useful for computing the quantum properties of systems. In Feynman's path integral, the classical notion of a unique trajectory for a particle is replaced by an infinite sum of classical paths, each weighted differently according to its classical properties.
Functional integration is central to quantization techniques in theoretical physics. The algebraic properties of functional integrals are used to develop series used to calculate properties in quantum electrodynamics and the standard model of particle physics.
== Functional integration ==
Whereas standard Riemann integration sums a function f(x) over a continuous range of values of x, functional integration sums a functional G[f], which can be thought of as a "function of a function" over a continuous range (or space) of functions f. Most functional integrals cannot be evaluated exactly but must be evaluated using perturbation methods. The formal definition of a functional integral is
{\displaystyle \int G[f]\;{\mathcal {D}}[f]\equiv \int _{\mathbb {R} }\cdots \int _{\mathbb {R} }G[f]\prod _{x}df(x)\;.}
However, in most cases the functions f(x) can be written in terms of an infinite series of orthogonal functions such as
{\displaystyle f(x)=\sum _{n}f_{n}H_{n}(x)}
, and then the definition becomes
{\displaystyle \int G[f]\;{\mathcal {D}}[f]\equiv \int _{\mathbb {R} }\cdots \int _{\mathbb {R} }G(f_{1};f_{2};\ldots )\prod _{n}df_{n}\;,}
which is slightly more understandable. The integral is shown to be a functional integral with a capital {\displaystyle {\mathcal {D}}}. Sometimes the argument is written in square brackets {\displaystyle {\mathcal {D}}[f]}, to indicate the functional dependence of the function in the functional integration measure.
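Concretely, the reduction of a functional to an ordinary function of the expansion coefficients can be checked numerically. The sketch below (the basis and the functional are our own choices, not from the text) truncates the mode expansion to N orthonormal sine modes and evaluates the functional G[f] = ∫ f(x)² dx both by quadrature and, via Parseval's identity, as a plain sum over coefficients:

```python
import numpy as np

# Illustrative sketch: truncate the mode expansion f(x) = sum_n f_n e_n(x)
# to N orthonormal modes, so a functional G[f] becomes an ordinary function
# of the coordinates (f_1, ..., f_N).
N = 8
rng = np.random.default_rng(0)
coeffs = rng.normal(size=N)                      # coordinates f_1, ..., f_N

x = np.linspace(0.0, np.pi, 20001)
dx = x[1] - x[0]
# Orthonormal basis on [0, pi]: e_n(x) = sqrt(2/pi) * sin(n x)
basis = np.array([np.sqrt(2.0 / np.pi) * np.sin((n + 1) * x) for n in range(N)])
f = coeffs @ basis                               # reconstruct f(x)

# The functional G[f] = ∫ f(x)^2 dx reduces, by orthonormality (Parseval),
# to the ordinary function G(f_1, ..., f_N) = sum_n f_n^2.
G_quadrature = np.sum(f[:-1] ** 2 + f[1:] ** 2) * dx / 2   # trapezoid rule
G_coefficients = np.sum(coeffs ** 2)
print(abs(G_quadrature - G_coefficients) < 1e-4)  # True
```

With the expansion in hand, "integrating over all functions" becomes integrating over all coefficient tuples, which is the content of the second definition above.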
== Examples ==
Most functional integrals are actually infinite, but often the limit of the quotient of two related functional integrals can still be finite. The functional integrals that can be evaluated exactly usually start with the following Gaussian integral:
{\displaystyle {\frac {\displaystyle \int \exp \left\lbrace -{\frac {1}{2}}\int _{\mathbb {R} }\left[\int _{\mathbb {R} }f(x)K(x;y)f(y)\,dy+J(x)f(x)\right]dx\right\rbrace {\mathcal {D}}[f]}{\displaystyle \int \exp \left\lbrace -{\frac {1}{2}}\int _{\mathbb {R} ^{2}}f(x)K(x;y)f(y)\,dx\,dy\right\rbrace {\mathcal {D}}[f]}}=\exp \left\lbrace {\frac {1}{2}}\int _{\mathbb {R} ^{2}}J(x)\cdot K^{-1}(x;y)\cdot J(y)\,dx\,dy\right\rbrace \,,}
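A finite-dimensional analogue of this identity can be verified by brute force, with the functional integral replaced by an ordinary multivariate Gaussian integral. The sketch below (our own construction, not from the text) uses the common normalization −x·K·x/2 + J·x for the source term, under which the ratio of Gaussian integrals equals exp(J·K⁻¹·J/2); the kernel and source are arbitrary choices:

```python
import numpy as np

# Finite-dimensional sanity check: with the source term normalized as
# -x·K·x/2 + J·x, the Gaussian-integral identity reads
#   ∫ exp(-x·K·x/2 + J·x) dx / ∫ exp(-x·K·x/2) dx = exp(J·K⁻¹·J / 2).
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive-definite "kernel"
J = np.array([0.3, -0.2])          # "source"

# Brute-force quadrature over a grid covering essentially all of the mass.
h = 0.02
grid = np.arange(-8.0, 8.0 + h, h)
X, Y = np.meshgrid(grid, grid, indexing="ij")
pts = np.stack([X, Y])             # shape (2, n, n)

quad = 0.5 * np.einsum("i...,ij,j...->...", pts, K, pts)   # x·K·x/2 per point
src = np.einsum("i,i...->...", J, pts)                     # J·x per point
ratio = np.exp(-quad + src).sum() / np.exp(-quad).sum()    # cell areas cancel

exact = np.exp(0.5 * J @ np.linalg.solve(K, J))
print(abs(ratio - exact) < 1e-6)
```

The infinite-dimensional statement above is the formal limit of this computation as the vector x becomes a function f(x) and the matrix K becomes the kernel K(x; y).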
in which
{\displaystyle K(x;y)=K(y;x)}
. By functionally differentiating this with respect to J(x) and then setting J to 0, this becomes an exponential multiplied by a monomial in f. To see this, use the following notation:
{\displaystyle G[f,J]=-{\frac {1}{2}}\int _{\mathbb {R} }\left[\int _{\mathbb {R} }f(x)K(x;y)f(y)\,dy+J(x)f(x)\right]dx\,\quad ,\quad W[J]=\int \exp \lbrace G[f,J]\rbrace {\mathcal {D}}[f]\;.}
With this notation the first equation can be written as:
{\displaystyle {\dfrac {W[J]}{W[0]}}=\exp \left\lbrace {\frac {1}{2}}\int _{\mathbb {R} ^{2}}J(x)K^{-1}(x;y)J(y)\,dx\,dy\right\rbrace .}
Now, taking functional derivatives of the definition of {\displaystyle W[J]} and then evaluating at {\displaystyle J=0}, one obtains:
{\displaystyle {\dfrac {\delta }{\delta J(a)}}W[J]{\Bigg |}_{J=0}=\int f(a)\exp \lbrace G[f,0]\rbrace {\mathcal {D}}[f]\;,}
{\displaystyle {\dfrac {\delta ^{2}W[J]}{\delta J(a)\delta J(b)}}{\Bigg |}_{J=0}=\int f(a)f(b)\exp \lbrace G[f,0]\rbrace {\mathcal {D}}[f]\;,}
{\displaystyle \qquad \qquad \qquad \qquad \vdots }
which is the anticipated result. Moreover, by using the first equation one arrives at the useful result:
{\displaystyle {\dfrac {\delta ^{2}}{\delta J(a)\delta J(b)}}\left({\dfrac {W[J]}{W[0]}}\right){\Bigg |}_{J=0}=K^{-1}(a;b)\;;}
Putting these results together and returning to the original notation, we have:
{\displaystyle {\frac {\displaystyle \int f(a)f(b)\exp \left\lbrace -{\frac {1}{2}}\int _{\mathbb {R} ^{2}}f(x)K(x;y)f(y)\,dx\,dy\right\rbrace {\mathcal {D}}[f]}{\displaystyle \int \exp \left\lbrace -{\frac {1}{2}}\int _{\mathbb {R} ^{2}}f(x)K(x;y)f(y)\,dx\,dy\right\rbrace {\mathcal {D}}[f]}}=K^{-1}(a;b)\,.}
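This two-point-function identity also has a direct finite-dimensional analogue: for a Gaussian weight exp(−x·K·x/2) on R², the normalized second moments ⟨x_a x_b⟩ reproduce K⁻¹(a; b). A sketch with an arbitrarily chosen positive-definite K:

```python
import numpy as np

# Discrete analogue of the two-point-function identity: under the weight
# exp(-x·K·x/2), the normalized second moment <x_a x_b> equals K⁻¹(a;b).
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
h = 0.02
grid = np.arange(-8.0, 8.0 + h, h)
X, Y = np.meshgrid(grid, grid, indexing="ij")

weight = np.exp(-0.5 * (K[0, 0] * X**2 + 2 * K[0, 1] * X * Y + K[1, 1] * Y**2))
Z = weight.sum()                     # normalization ("partition function")
moment = np.array([[(X * X * weight).sum(), (X * Y * weight).sum()],
                   [(X * Y * weight).sum(), (Y * Y * weight).sum()]]) / Z

print(np.allclose(moment, np.linalg.inv(K), atol=1e-6))
```

In field-theory language, this is the statement that the propagator is the inverse of the kernel appearing in the quadratic action.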
Another useful integral is the functional delta function:
{\displaystyle \int \exp \left\lbrace \int _{\mathbb {R} }f(x)g(x)dx\right\rbrace {\mathcal {D}}[f]=\delta [g]=\prod _{x}\delta {\big (}g(x){\big )},}
which is useful to specify constraints. Functional integrals can also be done over Grassmann-valued functions
{\displaystyle \psi (x)}
, where
{\displaystyle \psi (x)\psi (y)=-\psi (y)\psi (x)}
, which is useful in quantum electrodynamics for calculations involving fermions.
== Approaches to path integrals ==
Functional integrals where the space of integration consists of paths (ν = 1) can be defined in many different ways. The definitions fall in two different classes: the constructions derived from Wiener's theory yield an integral based on a measure, whereas the constructions following Feynman's path integral do not. Even within these two broad divisions, the integrals are not identical, that is, they are defined differently for different classes of functions.
=== The Wiener integral ===
In the Wiener integral, a probability is assigned to a class of Brownian motion paths. The class consists of the paths w that are known to go through a small region of space at a given time. The passage through different regions of space is assumed independent of each other, and the distance between any two points of the Brownian path is assumed to be Gaussian-distributed with a variance that depends on the time t and on a diffusion constant D:
{\displaystyle \Pr {\big (}w(s+t),t\mid w(s),s{\big )}={\frac {1}{\sqrt {2\pi Dt}}}\exp \left(-{\frac {\|w(s+t)-w(s)\|^{2}}{2Dt}}\right).}
The probability for the class of paths can be found by multiplying the probabilities of starting in one region and then being at the next. The Wiener measure can be developed by considering the limit of many small regions.
Itō and Stratonovich calculus
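The Gaussian increment law above can be simulated directly. The sketch below (all parameters are arbitrary choices) builds Brownian paths from independent Gaussian increments with variance D·t and checks the two structural assumptions: disjoint increments are (nearly) uncorrelated, and the displacement variance grows linearly as D·t:

```python
import numpy as np

# Illustrative simulation of the Wiener picture: Brownian paths built from
# independent Gaussian increments whose variance D·t matches the density above.
rng = np.random.default_rng(42)
D, dt, n_steps, n_paths = 0.7, 0.01, 1000, 20000

increments = rng.normal(0.0, np.sqrt(D * dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)        # w sampled at t = dt, 2dt, ...

t = n_steps * dt                             # total elapsed time
var_emp = paths[:, -1].var()                 # empirical Var[w(t)]
print(abs(var_emp - D * t) / (D * t) < 0.05)

inc1 = paths[:, 499] - paths[:, 0]           # increment over an early window
inc2 = paths[:, 999] - paths[:, 500]         # increment over a disjoint later window
print(abs(np.corrcoef(inc1, inc2)[0, 1]) < 0.03)
```

The Wiener measure is the idealized limit of this construction as the time step shrinks and the path is tracked through finer and finer regions of space.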
=== The Feynman integral ===
Trotter formula, or Lie product formula.
The Kac idea of Wick rotations.
Using x-dot-dot-squared or i S[x] + x-dot-squared.
The Cartier–DeWitt–Morette approach relies on integrators rather than measures.
=== The Lévy integral ===
Fractional quantum mechanics
Fractional Schrödinger equation
Lévy process
Fractional statistical mechanics
== See also ==
Feynman path integral
Partition function (quantum field theory)
Saddle point approximation
== References ==
== Further reading ==
Jean Zinn-Justin (2009), Scholarpedia 4(2):8674.
Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2004); Paperback ISBN 981-238-107-4 (also available online: PDF-files)
Laskin, Nick (2000). "Fractional quantum mechanics". Physical Review E. 62 (3): 3135–3145. arXiv:0811.1769. Bibcode:2000PhRvE..62.3135L. doi:10.1103/PhysRevE.62.3135. PMID 11088808. S2CID 15480739.
Laskin, Nick (2002). "Fractional Schrödinger equation". Physical Review E. 66 (5): 056108. arXiv:quant-ph/0206098. Bibcode:2002PhRvE..66e6108L. doi:10.1103/PhysRevE.66.056108. PMID 12513557. S2CID 7520956.
Minlos, R. A. (2001) [1994], "Integral over trajectories", Encyclopedia of Mathematics, EMS Press
O. G. Smolyanov, E. T. Shavgulidze. Continual integrals. Moscow, Moscow State University Press, 1990. (in Russian). http://lib.mexmat.ru/books/5132
Victor Popov, Functional Integrals in Quantum Field Theory and Statistical Physics, Springer 1983
Sergio Albeverio, Sonia Mazzucchi, A unified approach to infinite-dimensional integration, Reviews in Mathematical Physics, 28, 1650005 (2016)
In gauge theory and mathematical physics, a topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants.
While TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory and the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for mathematical work related to topological field theory.
In condensed matter physics, topological quantum field theories are the low-energy effective theories of topologically ordered states, such as fractional quantum Hall states, string-net condensed states, and other strongly correlated quantum liquid states.
== Overview ==
In a topological field theory, correlation functions do not depend on the metric of spacetime. This means that the theory is not sensitive to changes in the shape of spacetime; if spacetime warps or contracts, the correlation functions do not change. Consequently, they are topological invariants.
Topological field theories are not very interesting on the flat Minkowski spacetime used in particle physics. Minkowski space can be contracted to a point, so a TQFT applied to Minkowski space results in trivial topological invariants. Consequently, TQFTs are usually applied to curved spacetimes, such as, for example, Riemann surfaces. Most of the known topological field theories are defined on spacetimes of dimension less than five. It seems that a few higher-dimensional theories exist, but they are not very well understood.
Quantum gravity is believed to be background-independent (in some suitable sense), and TQFTs provide examples of background independent quantum field theories. This has prompted ongoing theoretical investigations into this class of models.
(Caveat: It is often said that TQFTs have only finitely many degrees of freedom. This is not a fundamental property. It happens to be true in most of the examples that physicists and mathematicians study, but it is not necessary. A topological sigma model targets infinite-dimensional projective space, and if such a thing could be defined it would have countably infinitely many degrees of freedom.)
== Specific models ==
The known topological field theories fall into two general classes: Schwarz-type TQFTs and Witten-type TQFTs. Witten TQFTs are also sometimes referred to as cohomological field theories. See (Schwarz 2000).
=== Schwarz-type TQFTs ===
In Schwarz-type TQFTs, the correlation functions or partition functions of the system are computed by the path integral of metric-independent action functionals. For instance, in the BF model, the spacetime is a two-dimensional manifold M, the observables are constructed from a two-form F, an auxiliary scalar B, and their derivatives. The action (which determines the path integral) is
{\displaystyle S=\int \limits _{M}BF}
The spacetime metric does not appear anywhere in the theory, so the theory is explicitly topologically invariant. The first example appeared in 1977 and is due to A. Schwarz; its action functional is:
{\displaystyle S=\int \limits _{M}A\wedge dA.}
Another more famous example is Chern–Simons theory, which can be applied to knot invariants. In general, partition functions depend on a metric but the above examples are metric-independent.
=== Witten-type TQFTs ===
The first example of Witten-type TQFTs appeared in Witten's paper in 1988 (Witten 1988a), i.e. topological Yang–Mills theory in four dimensions. Though its action functional contains the spacetime metric gαβ, after a topological twist it turns out to be metric independent. The independence of the stress-energy tensor Tαβ of the system from the metric depends on whether the BRST-operator is closed. Following Witten's example many other examples can be found in string theory.
Witten-type TQFTs arise if the following conditions are satisfied:
The action {\displaystyle S} of the TQFT has a symmetry, i.e. if {\displaystyle \delta } denotes a symmetry transformation (e.g. a Lie derivative) then {\displaystyle \delta S=0} holds.
The symmetry transformation is exact, i.e. {\displaystyle \delta ^{2}=0}.
There are existing observables {\displaystyle O_{1},\dots ,O_{n}} which satisfy {\displaystyle \delta O_{i}=0} for all {\displaystyle i\in \{1,\dots ,n\}}.
The stress-energy tensor (or similar physical quantities) is of the form {\displaystyle T^{\alpha \beta }=\delta G^{\alpha \beta }} for an arbitrary tensor {\displaystyle G^{\alpha \beta }}.
As an example (Linker 2015): Given a 2-form field {\displaystyle B} with the differential operator {\displaystyle \delta } which satisfies {\displaystyle \delta ^{2}=0}, then the action
{\displaystyle S=\int \limits _{M}B\wedge \delta B}
has a symmetry if {\displaystyle \delta B\wedge \delta B=0} since
{\displaystyle \delta S=\int \limits _{M}\delta (B\wedge \delta B)=\int \limits _{M}\delta B\wedge \delta B+\int \limits _{M}B\wedge \delta ^{2}B=0.}
Further, the following holds (under the condition that {\displaystyle \delta } is independent of {\displaystyle B} and acts similarly to a functional derivative):
{\displaystyle {\frac {\delta }{\delta B^{\alpha \beta }}}S=\int \limits _{M}{\frac {\delta }{\delta B^{\alpha \beta }}}B\wedge \delta B+\int \limits _{M}B\wedge \delta {\frac {\delta }{\delta B^{\alpha \beta }}}B=\int \limits _{M}{\frac {\delta }{\delta B^{\alpha \beta }}}B\wedge \delta B-\int \limits _{M}\delta B\wedge {\frac {\delta }{\delta B^{\alpha \beta }}}B=-2\int \limits _{M}\delta B\wedge {\frac {\delta }{\delta B^{\alpha \beta }}}B.}
The expression {\displaystyle {\frac {\delta }{\delta B^{\alpha \beta }}}S} is proportional to {\displaystyle \delta G} with another 2-form {\displaystyle G}.
Now any averages of observables {\displaystyle \left\langle O_{i}\right\rangle :=\int d\mu O_{i}e^{iS}} for the corresponding Haar measure {\displaystyle \mu } are independent of the "geometric" field {\displaystyle B} and are therefore topological:
{\displaystyle {\frac {\delta }{\delta B}}\left\langle O_{i}\right\rangle =\int d\mu O_{i}i{\frac {\delta }{\delta B}}Se^{iS}\propto \int d\mu O_{i}\delta Ge^{iS}=\delta \left(\int d\mu O_{i}Ge^{iS}\right)=0}.
The third equality uses the fact that
{\displaystyle \delta O_{i}=\delta S=0}
and the invariance of the Haar measure under symmetry transformations. Since
{\displaystyle \int d\mu O_{i}Ge^{iS}}
is only a number, its Lie derivative vanishes.
== Mathematical formulations ==
=== Original Atiyah–Segal axioms ===
Atiyah suggested a set of axioms for topological quantum field theory, inspired by Segal's proposed axioms for conformal field theory (subsequently, Segal's idea was summarized in Segal (2001)), and Witten's geometric meaning of supersymmetry in Witten (1982). Atiyah's axioms are constructed by gluing the boundary with a differentiable (topological or continuous) transformation, while Segal's axioms are for conformal transformations. These axioms have been relatively useful for mathematical treatments of Schwarz-type QFTs, although it isn't clear that they capture the whole structure of Witten-type QFTs. The basic idea is that a TQFT is a functor from a certain category of cobordisms to the category of vector spaces.
There are in fact two different sets of axioms which could reasonably be called the Atiyah axioms. These axioms differ basically in whether or not they apply to a TQFT defined on a single fixed n-dimensional Riemannian / Lorentzian spacetime M or a TQFT defined on all n-dimensional spacetimes at once.
Let Λ be a commutative ring with 1 (for almost all real-world purposes we will have Λ = Z, R or C). Atiyah originally proposed the axioms of a topological quantum field theory (TQFT) in dimension d defined over a ground ring Λ as follows:
A finitely generated Λ-module Z(Σ) associated to each oriented closed smooth d-dimensional manifold Σ (corresponding to the homotopy axiom),
An element Z(M) ∈ Z(∂M) associated to each oriented smooth (d + 1)-dimensional manifold (with boundary) M (corresponding to an additive axiom).
These data are subject to the following axioms (4 and 5 were added by Atiyah):
Z is functorial with respect to orientation preserving diffeomorphisms of Σ and M,
Z is involutory, i.e. Z(Σ*) = Z(Σ)* where Σ* is Σ with opposite orientation and Z(Σ)* denotes the dual module,
Z is multiplicative.
Z({\displaystyle \emptyset }) = Λ for the d-dimensional empty manifold and Z({\displaystyle \emptyset }) = 1 for the (d + 1)-dimensional empty manifold.
Z(M*) = Z(M) (the hermitian axiom). If
{\displaystyle \partial M=\Sigma _{0}^{*}\cup \Sigma _{1}}
so that Z(M) can be viewed as a linear transformation between hermitian vector spaces, then this is equivalent to Z(M*) being the adjoint of Z(M).
Remark. If for a closed manifold M we view Z(M) as a numerical invariant, then for a manifold with a boundary we should think of Z(M) ∈ Z(∂M) as a "relative" invariant. Let f : Σ → Σ be an orientation-preserving diffeomorphism, and identify opposite ends of Σ × I by f. This gives a manifold Σf and our axioms imply
{\displaystyle Z(\Sigma _{f})=\operatorname {Trace} \ \Sigma (f)}
where Σ(f) is the induced automorphism of Z(Σ).
Remark. For a manifold M with boundary Σ we can always form the double
{\displaystyle M\cup _{\Sigma }M^{*}}
which is a closed manifold. The fifth axiom shows that
{\displaystyle Z\left(M\cup _{\Sigma }M^{*}\right)=|Z(M)|^{2}}
where on the right we compute the norm in the hermitian (possibly indefinite) metric.
=== Relation to physics ===
Physically (2) + (4) are related to relativistic invariance while (3) + (5) are indicative of the quantum nature of the theory.
Σ is meant to indicate the physical space (usually, d = 3 for standard physics) and the extra dimension in Σ × I is "imaginary" time. The space Z(Σ) is the Hilbert space of the quantum theory and a physical theory, with a Hamiltonian H, will have a time evolution operator eitH or an "imaginary time" operator e−tH. The main feature of topological QFTs is that H = 0, which implies that there is no real dynamics or propagation along the cylinder Σ × I. However, there can be non-trivial "propagation" (or tunneling amplitudes) from Σ0 to Σ1 through an intervening manifold M with
{\displaystyle \partial M=\Sigma _{0}^{*}\cup \Sigma _{1}}
; this reflects the topology of M.
If ∂M = Σ, then the distinguished vector Z(M) in the Hilbert space Z(Σ) is thought of as the vacuum state defined by M. For a closed manifold M the number Z(M) is the vacuum expectation value. In analogy with statistical mechanics it is also called the partition function.
The reason why a theory with a zero Hamiltonian can be sensibly formulated resides in the Feynman path integral approach to QFT. This incorporates relativistic invariance (which applies to general (d + 1)-dimensional "spacetimes") and the theory is formally defined by a suitable Lagrangian—a functional of the classical fields of the theory. A Lagrangian which involves only first derivatives in time formally leads to a zero Hamiltonian, but the Lagrangian itself may have non-trivial features which relate to the topology of M.
=== Atiyah's examples ===
In 1988, M. Atiyah published a paper in which he described many new examples of topological quantum field theory that were considered at that time (Atiyah 1988a)(Atiyah 1988b). It contains some new topological invariants along with some new ideas: Casson invariant, Donaldson invariant, Gromov's theory, Floer homology and Jones–Witten theory.
==== d = 0 ====
In this case Σ consists of finitely many points. To a single point we associate a vector space V = Z(point) and to n-points the n-fold tensor product: V⊗n = V ⊗ … ⊗ V. The symmetric group Sn acts on V⊗n. A standard way to get the quantum Hilbert space is to start with a classical symplectic manifold (or phase space) and then quantize it. Let us extend Sn to a compact Lie group G and consider "integrable" orbits for which the symplectic structure comes from a line bundle, then quantization leads to the irreducible representations V of G. This is the physical interpretation of the Borel–Weil theorem or the Borel–Weil–Bott theorem. The Lagrangian of these theories is the classical action (holonomy of the line bundle). Thus topological QFT's with d = 0 relate naturally to the classical representation theory of Lie groups and the symmetric group.
==== d = 1 ====
We should consider periodic boundary conditions given by closed loops in a compact symplectic manifold X. Following Witten (1982), the holonomy of such loops, used in the d = 0 case as a Lagrangian, is then used to modify the Hamiltonian. For a closed surface M the invariant Z(M) of the theory is the number of pseudo holomorphic maps f : M → X in the sense of Gromov (they are ordinary holomorphic maps if X is a Kähler manifold). If this number becomes infinite, i.e. if there are "moduli", then we must fix further data on M. This can be done by picking some points Pi and then looking at holomorphic maps f : M → X with f(Pi) constrained to lie on a fixed hyperplane. Witten (1988b) has written down the relevant Lagrangian for this theory. Floer has given a rigorous treatment, i.e. Floer homology, based on Witten's Morse theory ideas; for the case when the boundary conditions are over the interval instead of being periodic, the initial and end-points of the path lie on two fixed Lagrangian submanifolds. This theory has been developed as Gromov–Witten invariant theory.
Another example is Holomorphic Conformal Field Theory. This might not have been considered strictly topological quantum field theory at the time because Hilbert spaces are infinite dimensional. The conformal field theories are also related to the compact Lie group G in which the classical phase consists of a central extension of the loop group (LG). Quantizing these produces the Hilbert spaces of the theory of irreducible (projective) representations of LG. The group Diff+(S1) now substitutes for the symmetric group and plays an important role. As a result, the partition function in such theories depends on complex structure, thus it is not purely topological.
==== d = 2 ====
Jones–Witten theory is the most important theory in this case. Here the classical phase space, associated with a closed surface Σ is the moduli space of a flat G-bundle over Σ. The Lagrangian is an integer multiple of the Chern–Simons function of a G-connection on a 3-manifold (which has to be "framed"). The integer multiple k, called the level, is a parameter of the theory and k → ∞ gives the classical limit. This theory can be naturally coupled with the d = 0 theory to produce a "relative" theory. The details have been described by Witten who shows that the partition function for a (framed) link in the 3-sphere is just the value of the Jones polynomial for a suitable root of unity. The theory can be defined over the relevant cyclotomic field, see Atiyah (1988b). By considering a Riemann surface with boundary, we can couple it to the d = 1 conformal theory instead of coupling d = 2 theory to d = 0. This has developed into Jones–Witten theory and has led to the discovery of deep connections between knot theory and quantum field theory.
==== d = 3 ====
Donaldson has defined the integer invariant of smooth 4-manifolds by using moduli spaces of SU(2)-instantons. These invariants are polynomials on the second homology. Thus 4-manifolds should have extra data consisting of the symmetric algebra of H2. Witten (1988a) has produced a super-symmetric Lagrangian which formally reproduces the Donaldson theory. Witten's formula might be understood as an infinite-dimensional analogue of the Gauss–Bonnet theorem. At a later date, this theory was further developed and became the Seiberg–Witten gauge theory which reduces SU(2) to U(1) in N = 2, d = 4 gauge theory. The Hamiltonian version of the theory has been developed by Andreas Floer in terms of the space of connections on a 3-manifold. Floer uses the Chern–Simons function, which is the Lagrangian of Jones–Witten theory to modify the Hamiltonian. For details, see Atiyah (1988b). Witten (1988a) has also shown how one can couple the d = 3 and d = 1 theories together: this is quite analogous to the coupling between d = 2 and d = 0 in Jones–Witten theory.
Now, topological field theory is viewed as a functor, not on a fixed dimension but on all dimensions at the same time.
=== Case of a fixed spacetime ===
Let BordM be the category whose morphisms are n-dimensional submanifolds of M and whose objects are connected components of the boundaries of such submanifolds. Regard two morphisms as equivalent if they are homotopic via submanifolds of M, and so form the quotient category hBordM: The objects in hBordM are the objects of BordM, and the morphisms of hBordM are homotopy equivalence classes of morphisms in BordM. A TQFT on M is a symmetric monoidal functor from hBordM to the category of vector spaces.
Note that cobordisms can, if their boundaries match, be sewn together to form a new bordism. This is the composition law for morphisms in the cobordism category. Since functors are required to preserve composition, this says that the linear map corresponding to a sewn together morphism is just the composition of the linear map for each piece.
There is an equivalence of categories between the category of 2-dimensional topological quantum field theories and the category of commutative Frobenius algebras.
=== All n-dimensional spacetimes at once ===
To consider all spacetimes at once, it is necessary to replace hBordM by a larger category. So let Bordn be the category of bordisms, i.e. the category whose morphisms are n-dimensional manifolds with boundary, and whose objects are the connected components of the boundaries of n-dimensional manifolds. (Note that any (n−1)-dimensional manifold may appear as an object in Bordn.) As above, regard two morphisms in Bordn as equivalent if they are homotopic, and form the quotient category hBordn. Bordn is a monoidal category under the operation which maps two bordisms to the bordism made from their disjoint union. A TQFT on n-dimensional manifolds is then a functor from hBordn to the category of vector spaces, which maps disjoint unions of bordisms to their tensor product.
For example, for (1 + 1)-dimensional bordisms (2-dimensional bordisms between 1-dimensional manifolds), the map associated with a pair of pants gives a product or coproduct, depending on how the boundary components are grouped – which is commutative or cocommutative, while the map associated with a disk gives a counit (trace) or unit (scalars), depending on the grouping of boundary components, and thus (1+1)-dimension TQFTs correspond to Frobenius algebras.
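As a concrete illustration (our own construction, not from the sources above), both sides of this (1+1)-dimensional dictionary can be computed for the untwisted finite-group theory (Dijkgraaf–Witten) with G = S3: counting homomorphisms π1(Σg) → G (i.e. flat G-bundles, weighted by 1/|G|) agrees with the Frobenius-algebra state sum given by Mednykh's formula over the irreducible representations. The choice of group and genus values is arbitrary:

```python
from itertools import product

# Two computations of Z(Σ_g) for the untwisted Dijkgraaf–Witten TQFT with G = S3:
#   Z(Σ_g) = |Hom(π1(Σ_g), G)| / |G|            (counting flat G-bundles)
#          = Σ_irreps (|G| / dim)^(2g - 2)      (Frobenius-algebra state sum)

def compose(p, q):                  # permutation composition: p after q
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = [p for p in product(range(3), repeat=3) if len(set(p)) == 3]  # S3, |G| = 6
e = (0, 1, 2)
irrep_dims = [1, 1, 2]              # S3 has two 1-dim irreps and one 2-dim irrep

def Z_counting(g):
    # Count tuples (a1, b1, ..., ag, bg) with [a1,b1]...[ag,bg] = e.
    count = 0
    for word in product(G, repeat=2 * g):
        prod = e
        for a, b in zip(word[0::2], word[1::2]):
            comm = compose(compose(a, b), compose(inverse(a), inverse(b)))
            prod = compose(prod, comm)
        count += prod == e
    return count / len(G)

def Z_mednykh(g):
    return sum((len(G) / d) ** (2 * g - 2) for d in irrep_dims)

print(Z_counting(1), Z_mednykh(1))  # 3.0 3.0
print(Z_counting(2), Z_mednykh(2))  # 81.0 81.0
```

The genus-1 value recovers the number of conjugacy classes of S3, as expected for the torus partition function.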
Furthermore, we can consider simultaneously 4-dimensional, 3-dimensional and 2-dimensional manifolds related by the above bordisms, and from them we can obtain ample and important examples.
=== Development at a later time ===
Looking at the development of topological quantum field theory, we should consider its many applications to Seiberg–Witten gauge theory, topological string theory, the relationship between knot theory and quantum field theory, and quantum knot invariants. Furthermore, it has generated topics of great interest in both mathematics and physics. Also of important recent interest are non-local operators in TQFT (Gukov & Kapustin (2013)). If string theory is viewed as fundamental, then non-local TQFTs can be viewed as non-physical models that provide a computationally efficient approximation to local string theory.
=== Witten-type TQFTs and dynamical systems ===
Stochastic (partial) differential equations (SDEs) are the foundation for models of everything in nature above the scale of quantum degeneracy and coherence and are essentially Witten-type TQFTs. All SDEs possess topological or BRST supersymmetry,
{\displaystyle \delta }
, which in the operator representation of stochastic dynamics is the exterior derivative and commutes with the stochastic evolution operator.
== See also ==
== References ==
Atiyah, Michael (1988a). "New invariants of three and four dimensional manifolds". The Mathematical Heritage of Hermann Weyl. Proceedings of Symposia in Pure Mathematics. Vol. 48. American Mathematical Society. pp. 285–299. doi:10.1090/pspum/048/974342. ISBN 9780821814826.
Atiyah, Michael (1988b). "Topological quantum field theories" (PDF). Publications Mathématiques de l'IHÉS. 68 (68): 175–186. doi:10.1007/BF02698547. MR 1001453. S2CID 121647908.
Gukov, Sergei; Kapustin, Anton (2013). "Topological Quantum Field Theory, Nonlocal Operators, and Gapped Phases of Gauge Theories". arXiv:1307.4793 [hep-th].
Linker, Patrick (2015). "Topological Dipole Field Theory" (PDF). The Winnower. 2: e144311.19292. doi:10.15200/winn.144311.19292.
Lurie, Jacob (2009). "On the Classification of Topological Field Theories". arXiv:0905.0465 [math.CT].
Schwarz, Albert (2000). "Topological quantum field theories". arXiv:hep-th/0011260.
Segal, Graeme (2001). "Topological structures in string theory". Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences. 359 (1784): 1389–1398. Bibcode:2001RSPTA.359.1389S. doi:10.1098/rsta.2001.0841. S2CID 120834154.
Witten, Edward (1982). "Super-symmetry and Morse Theory". Journal of Differential Geometry. 17 (4): 661–692. doi:10.4310/jdg/1214437492.
Witten, Edward (1988a). "Topological quantum field theory". Communications in Mathematical Physics. 117 (3): 353–386. Bibcode:1988CMaPh.117..353W. doi:10.1007/BF01223371. MR 0953828. S2CID 43230714.
Witten, Edward (1988b). "Topological sigma models". Communications in Mathematical Physics. 118 (3): 411–449. Bibcode:1988CMaPh.118..411W. doi:10.1007/bf01466725. S2CID 34042140. | Wikipedia/Topological_field_theory |
The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions
and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.
A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action.
Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.
== History ==
The calculus of variations began with the work of Isaac Newton, such as with Newton's minimal resistance problem, which he formulated and solved in 1685 and published in his Principia in 1687. It was the first problem in the field to be formulated and correctly solved, and was also one of the most difficult problems tackled by variational methods prior to the twentieth century. This problem was followed by the brachistochrone curve problem raised by Johann Bernoulli (1696), which was similar to one raised by Galileo Galilei in 1638, though Galilei neither solved the problem explicitly nor used methods based on calculus. Bernoulli solved the problem using the principle of least time, but not the calculus of variations, whereas Newton did use such methods to solve it in 1697; as a result, he pioneered the field with his work on the two problems. The problem would immediately occupy the attention of Jacob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Joseph-Louis Lagrange was influenced by Euler's work to contribute greatly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum.
Adrien-Marie Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject. To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Pierre Frédéric Sarrus (1842), which was condensed and improved by Augustin-Louis Cauchy (1844). Other valuable treatises and memoirs have been written by Strauch (1849), John Hewitt Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Karl Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and the 23rd Hilbert problems, published in 1900, encouraged further development.
In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an alternative to the calculus of variations.
== Extrema ==
The calculus of variations is concerned with the maxima or minima (collectively called extrema) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements {\displaystyle y} of a given function space defined over a given domain. A functional {\displaystyle J[y]} is said to have an extremum at the function {\displaystyle f} if {\displaystyle \Delta J=J[y]-J[f]} has the same sign for all {\displaystyle y} in an arbitrarily small neighborhood of {\displaystyle f.} The function {\displaystyle f} is called an extremal function or extremal. The extremum {\displaystyle J[f]} is called a local maximum if {\displaystyle \Delta J\leq 0} everywhere in an arbitrarily small neighborhood of {\displaystyle f,} and a local minimum if {\displaystyle \Delta J\geq 0} there. For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not.
Both strong and weak extrema of functionals are for a space of continuous functions, but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation.
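To make "functions of functions" concrete, here is a minimal numeric sketch (the grid size and the two trial curves are illustrative choices, not from the article): the arc-length functional assigns one number to each candidate function.

```python
import math

def arc_length(f, x1=0.0, x2=1.0, n=10000):
    """Approximate A[f] = integral of sqrt(1 + f'(x)^2) dx by the midpoint rule."""
    h = (x2 - x1) / n
    total = 0.0
    for i in range(n):
        x = x1 + (i + 0.5) * h
        fp = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # numerical derivative f'(x)
        total += math.sqrt(1.0 + fp * fp) * h
    return total

straight = arc_length(lambda x: x)       # straight line from (0,0) to (1,1)
curved = arc_length(lambda x: x ** 2)    # a parabola through the same endpoints
print(straight, curved)                  # the straight line is shorter
```

Both inputs share the same endpoints, yet the functional assigns them different scalar values, which is exactly the sense in which a functional has extrema over a function space.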
== Euler–Lagrange equation ==
Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.
Consider the functional
{\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L\left(x,y(x),y'(x)\right)\,dx,}
where {\displaystyle x_{1},x_{2}} are constants, {\displaystyle y(x)} is twice continuously differentiable, {\displaystyle y'(x)={\frac {dy}{dx}},} and {\displaystyle L\left(x,y(x),y'(x)\right)} is twice continuously differentiable with respect to its arguments {\displaystyle x,y,} and {\displaystyle y'.}
If the functional {\displaystyle J[y]} attains a local minimum at {\displaystyle f,} and {\displaystyle \eta (x)} is an arbitrary function that has at least one derivative and vanishes at the endpoints {\displaystyle x_{1}} and {\displaystyle x_{2},} then for any number {\displaystyle \varepsilon } close to 0,
{\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.}
The term {\displaystyle \varepsilon \eta } is called the variation of the function {\displaystyle f} and is denoted by {\displaystyle \delta f.}
Substituting {\displaystyle f+\varepsilon \eta } for {\displaystyle y} in the functional {\displaystyle J[y],} the result is a function of {\displaystyle \varepsilon ,}
{\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.}
Since the functional {\displaystyle J[y]} has a minimum for {\displaystyle y=f,} the function {\displaystyle \Phi (\varepsilon )} has a minimum at {\displaystyle \varepsilon =0} and thus,
{\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.}
Taking the total derivative of {\displaystyle L\left[x,y,y'\right],} where {\displaystyle y=f+\varepsilon \eta } and {\displaystyle y'=f'+\varepsilon \eta '} are considered as functions of {\displaystyle \varepsilon } rather than {\displaystyle x,} yields
{\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}}
and because {\displaystyle {\frac {dy}{d\varepsilon }}=\eta } and {\displaystyle {\frac {dy'}{d\varepsilon }}=\eta ',}
{\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '.}
Therefore,
{\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}{\frac {\partial L}{\partial f}}\eta \,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}-\int _{x_{1}}^{x_{2}}\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx\\\end{aligned}}}
where {\displaystyle L\left[x,y,y'\right]\to L\left[x,f,f'\right]} when {\displaystyle \varepsilon =0} and we have used integration by parts on the second term. The second term on the second line vanishes because {\displaystyle \eta =0} at {\displaystyle x_{1}} and {\displaystyle x_{2}} by definition. Also, as previously mentioned, the left side of the equation is zero so that
{\displaystyle \int _{x_{1}}^{x_{2}}\eta (x)\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.}
According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.
{\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0}
which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of {\displaystyle J[f]} and is denoted {\displaystyle \delta J/\delta f(x).}
In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function {\displaystyle f(x).} The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum {\displaystyle J[f].} A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum.
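The vanishing of the first variation at a minimizer can be checked numerically. A minimal sketch, with the functional J[y] = ∫₀¹ y′(x)² dx and the variation η(x) = sin(πx) chosen purely for illustration (both are assumptions of this example, not taken from the derivation above):

```python
import math

def J(y, n=2000):
    """J[y] = integral over (0,1) of y'(x)^2 dx; its minimizer with y(0)=0, y(1)=1 is f(x)=x."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        yp = (y(x + 1e-6) - y(x - 1e-6)) / 2e-6   # numerical derivative y'(x)
        s += yp * yp * h
    return s

f = lambda x: x                         # candidate minimizer (straight line)
eta = lambda x: math.sin(math.pi * x)   # variation vanishing at both endpoints

def Phi(eps):
    return J(lambda x: f(x) + eps * eta(x))

# Phi has a minimum at eps = 0: Phi'(0) is (numerically) zero and Phi(eps) >= Phi(0)
dPhi0 = (Phi(1e-4) - Phi(-1e-4)) / 2e-4
print(dPhi0, Phi(0.1) - Phi(0.0))
```

The cross term of the perturbation integrates to zero, so Φ(ε) grows quadratically away from ε = 0, exactly as the derivation predicts for a minimizer.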
=== Example ===
In order to illustrate this process, consider the problem of finding the extremal function {\displaystyle y=f(x),} which is the shortest curve that connects two points {\displaystyle \left(x_{1},y_{1}\right)} and {\displaystyle \left(x_{2},y_{2}\right).} The arc length of the curve is given by
{\displaystyle A[y]=\int _{x_{1}}^{x_{2}}{\sqrt {1+[y'(x)]^{2}}}\,dx\,,}
with
{\displaystyle y'(x)={\frac {dy}{dx}}\,,\ \ y_{1}=f(x_{1})\,,\ \ y_{2}=f(x_{2})\,.}
Note that assuming {\displaystyle y} is a function of {\displaystyle x} loses generality; ideally both should be functions of some other parameter. This approach is useful solely for instructive purposes.
The Euler–Lagrange equation will now be used to find the extremal function {\displaystyle f(x)} that minimizes the functional {\displaystyle A[y]:}
{\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0}
with
{\displaystyle L={\sqrt {1+[f'(x)]^{2}}}\,.}
Since {\displaystyle f} does not appear explicitly in {\displaystyle L,} the first term in the Euler–Lagrange equation vanishes for all {\displaystyle f(x)} and thus,
{\displaystyle {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\,.}
Substituting for {\displaystyle L} and taking the derivative,
{\displaystyle {\frac {d}{dx}}\ {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}\ =0\,.}
Thus
{\displaystyle {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}=c\,,}
for some constant {\displaystyle c.} Then
{\displaystyle {\frac {[f'(x)]^{2}}{1+[f'(x)]^{2}}}=c^{2}\,,}
where {\displaystyle 0\leq c^{2}<1.} Solving, we get
{\displaystyle [f'(x)]^{2}={\frac {c^{2}}{1-c^{2}}}}
which implies that
{\displaystyle f'(x)=m}
is a constant and therefore that the shortest curve that connects two points {\displaystyle \left(x_{1},y_{1}\right)} and {\displaystyle \left(x_{2},y_{2}\right)} is
{\displaystyle f(x)=mx+b\qquad {\text{with}}\ \ m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}\quad {\text{and}}\quad b={\frac {x_{2}y_{1}-x_{1}y_{2}}{x_{2}-x_{1}}}}
and we have thus found the extremal function {\displaystyle f(x)} that minimizes the functional {\displaystyle A[y]} so that {\displaystyle A[f]} is a minimum. The equation for a straight line is {\displaystyle y=mx+b.} In other words, the shortest distance between two points is a straight line.
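The same answer can be recovered by direct numerical minimization of the discretized arc length; a sketch, assuming a uniform grid and the illustrative endpoints (0, 0) and (1, 2). Each interior node is repeatedly replaced by the value minimizing the lengths of its two adjacent segments, which works out to the average of its neighbours (setting the derivative of the two-segment length to zero gives y − a = b − y):

```python
# Discretize the curve as nodes y[0..n] with fixed endpoints; relaxation
# sweeps drive the polyline toward the minimizer of the total length.
n = 20
y = [0.0] * (n + 1)
y[n] = 2.0                       # endpoints (0, 0) and (1, 2)
for sweep in range(2000):
    for i in range(1, n):
        y[i] = 0.5 * (y[i - 1] + y[i + 1])   # local length minimizer

# The relaxed curve is the straight line y = 2x
print(y[n // 2])                 # midpoint of the curve
```

The relaxation converges to the linear interpolant between the endpoints, the discrete counterpart of the extremal found above.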
== Beltrami's identity ==
In physics problems it may be the case that {\displaystyle {\frac {\partial L}{\partial x}}=0,} meaning the integrand is a function of {\displaystyle f(x)} and {\displaystyle f'(x)} but {\displaystyle x} does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity
{\displaystyle L-f'{\frac {\partial L}{\partial f'}}=C\,,}
where {\displaystyle C} is a constant. The left hand side is the Legendre transformation of {\displaystyle L} with respect to {\displaystyle f'(x).}
The intuition behind this result is that, if the variable {\displaystyle x} is actually time, then the statement {\displaystyle {\frac {\partial L}{\partial x}}=0} implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity.
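As a check of the identity (a minimal sketch; the Lagrangian L = f·sqrt(1 + f′²) of the minimal surface of revolution and its catenary extremal f(x) = cosh x are standard examples assumed here, not taken from this section):

```python
import math

# Beltrami check for L(f, f') = f * sqrt(1 + f'^2), whose extremal is the
# catenary f(x) = cosh(x).  Along the extremal, L - f' * dL/df' must be constant.
def beltrami(x):
    f, fp = math.cosh(x), math.sinh(x)
    L = f * math.sqrt(1.0 + fp * fp)
    dL_dfp = f * fp / math.sqrt(1.0 + fp * fp)   # partial L / partial f'
    return L - fp * dL_dfp                        # should equal the constant C

values = [beltrami(x) for x in (-1.0, -0.3, 0.0, 0.7, 1.5)]
print(values)   # the same constant at every sample point
```

Algebraically, L − f′·∂L/∂f′ = f/sqrt(1 + f′²) = cosh x / cosh x = 1, so the conserved quantity is exactly 1 here.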
== Euler–Poisson equation ==
If {\displaystyle S} depends on higher derivatives of {\displaystyle y(x),} that is, if
{\displaystyle S=\int _{a}^{b}f(x,y(x),y'(x),\dots ,y^{(n)}(x))dx,}
then {\displaystyle y} must satisfy the Euler–Poisson equation,
{\displaystyle {\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)+\dots +(-1)^{n}{\frac {d^{n}}{dx^{n}}}\left[{\frac {\partial f}{\partial y^{(n)}}}\right]=0.}
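For instance, with the illustrative choice f = (y″)² (so n = 2, an assumed example), the only nonzero term is (−1)²·d²/dx²(2y″), and the Euler–Poisson equation reduces to y⁗ = 0, whose solutions are cubic polynomials. This can be verified numerically:

```python
def fourth_derivative(g, x, h=1e-2):
    """Central finite-difference approximation of g''''(x)."""
    return (g(x - 2*h) - 4*g(x - h) + 6*g(x) - 4*g(x + h) + g(x + 2*h)) / h**4

cubic = lambda x: 1 + 2*x - 3*x**2 + 0.5*x**3
print(fourth_derivative(cubic, 0.7))            # ~ 0: a cubic solves y'''' = 0
print(fourth_derivative(lambda x: x**5, 0.7))   # ~ 120*0.7 = 84: a quintic does not
```

The five-point stencil is exact for polynomials up to degree five, so both values are reliable up to floating-point rounding.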
== Du Bois-Reymond's theorem ==
The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral {\displaystyle J} requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If {\displaystyle L} has continuous first and second derivatives with respect to all of its arguments, and if
{\displaystyle {\frac {\partial ^{2}L}{\partial f'^{2}}}\neq 0,}
then {\displaystyle f} has two continuous derivatives, and it satisfies the Euler–Lagrange equation.
== Lavrentiev phenomenon ==
Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior.
However Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance the following problem, presented by Manià in 1934:
{\displaystyle L[x]=\int _{0}^{1}(x^{3}-t)^{2}x'^{6}\,dt,}
{\displaystyle {A}=\{x\in W^{1,1}(0,1):x(0)=0,\ x(1)=1\}.}
Clearly, {\displaystyle x(t)=t^{\frac {1}{3}}} minimizes the functional, but we find any function {\displaystyle x\in W^{1,\infty }} gives a value bounded away from the infimum.
Examples (in one dimension) are traditionally manifested across {\displaystyle W^{1,1}} and {\displaystyle W^{1,\infty },} but Ball and Mizel procured the first functional that displayed Lavrentiev's Phenomenon across {\displaystyle W^{1,p}} and {\displaystyle W^{1,q}} for {\displaystyle 1\leq p<q<\infty .}
There are several results that give criteria under which the phenomenon does not occur - for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D) - but results are often particular, and applicable to a small class of functionals.
Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property.
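Manià's example can be probed numerically. This is a partial illustration only: it evaluates the functional at the singular minimizer and at one Lipschitz competitor, which does not by itself prove the infimum gap over W^{1,∞}; the quadrature and trial functions are assumptions of the sketch.

```python
def mania(x, dx, n=100000):
    """Approximate L[x] = integral over (0,1) of (x(t)^3 - t)^2 * x'(t)^6 dt
    for a trial function x with known derivative dx (midpoint rule)."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += (x(t) ** 3 - t) ** 2 * dx(t) ** 6 * h
    return s

# The minimizer x(t) = t^(1/3) gives L = 0, since x^3 - t vanishes identically.
L_min = mania(lambda t: t ** (1 / 3), lambda t: t ** (-2 / 3) / 3)
# A Lipschitz competitor such as x(t) = t gives a strictly positive value, 8/105.
L_lip = mania(lambda t: t, lambda t: 1.0)
print(L_min, L_lip)
```

For x(t) = t the integrand is (t³ − t)², whose integral is 1/7 − 2/5 + 1/3 = 8/105 ≈ 0.076, a positive value despite the infimum being 0.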
== Functions of several variables ==
For example, if {\displaystyle \varphi (x,y)} denotes the displacement of a membrane above the domain {\displaystyle D} in the {\displaystyle x,y} plane, then its potential energy is proportional to its surface area:
{\displaystyle U[\varphi ]=\iint _{D}{\sqrt {1+\nabla \varphi \cdot \nabla \varphi }}\,dx\,dy.}
Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of {\displaystyle D}; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear:
{\displaystyle \varphi _{xx}(1+\varphi _{y}^{2})+\varphi _{yy}(1+\varphi _{x}^{2})-2\varphi _{x}\varphi _{y}\varphi _{xy}=0.}
See Courant (1950) for details.
=== Dirichlet's principle ===
It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by
{\displaystyle V[\varphi ]={\frac {1}{2}}\iint _{D}\nabla \varphi \cdot \nabla \varphi \,dx\,dy.}
The functional {\displaystyle V} is to be minimized among all trial functions {\displaystyle \varphi } that assume prescribed values on the boundary of {\displaystyle D.} If {\displaystyle u} is the minimizing function and {\displaystyle v} is an arbitrary smooth function that vanishes on the boundary of {\displaystyle D,} then the first variation of {\displaystyle V[u+\varepsilon v]} must vanish:
{\displaystyle \left.{\frac {d}{d\varepsilon }}V[u+\varepsilon v]\right|_{\varepsilon =0}=\iint _{D}\nabla u\cdot \nabla v\,dx\,dy=0.}
Provided that {\displaystyle u} has two derivatives, we may apply the divergence theorem to obtain
{\displaystyle \iint _{D}\nabla \cdot (v\nabla u)\,dx\,dy=\iint _{D}\nabla u\cdot \nabla v+v\nabla \cdot \nabla u\,dx\,dy=\int _{C}v{\frac {\partial u}{\partial n}}\,ds,}
where {\displaystyle C} is the boundary of {\displaystyle D,} {\displaystyle s} is arclength along {\displaystyle C} and {\displaystyle \partial u/\partial n} is the normal derivative of {\displaystyle u} on {\displaystyle C.}
Since {\displaystyle v} vanishes on {\displaystyle C} and the first variation vanishes, the result is
{\displaystyle \iint _{D}v\nabla \cdot \nabla u\,dx\,dy=0}
for all smooth functions {\displaystyle v} that vanish on the boundary of {\displaystyle D.} The proof for the case of one-dimensional integrals may be adapted to this case to show that
{\displaystyle \nabla \cdot \nabla u=0}
in {\displaystyle D.}
The difficulty with this reasoning is the assumption that the minimizing function {\displaystyle u} must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However, Weierstrass gave an example of a variational problem with no solution: minimize
{\displaystyle W[\varphi ]=\int _{-1}^{1}(x\varphi ')^{2}\,dx}
among all functions {\displaystyle \varphi } that satisfy {\displaystyle \varphi (-1)=-1} and {\displaystyle \varphi (1)=1.} {\displaystyle W} can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. However, there is no function that makes {\displaystyle W=0.}
Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998).
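Weierstrass's example can be checked numerically. For the piecewise linear trial function that ramps from −1 to 1 over (−δ, δ), a direct computation gives W[φ] = ∫ from −δ to δ of (x/δ)² dx = 2δ/3, so W can indeed be made arbitrarily small (a minimal sketch; the transition width δ and the grid are illustrative choices):

```python
def W(phi, n=100000):
    """Numerically approximate W[phi] = integral over (-1,1) of (x * phi'(x))^2 dx."""
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        dphi = (phi(x + h / 2) - phi(x - h / 2)) / h   # numerical derivative
        s += (x * dphi) ** 2 * h
    return s

def make_phi(delta):
    # piecewise linear: -1 below -delta, x/delta in between, +1 above delta
    return lambda x: max(-1.0, min(1.0, x / delta))

for d in (0.5, 0.1, 0.02):
    print(d, W(make_phi(d)))   # ~ 2d/3: W shrinks with the transition width
```

The computed values track 2δ/3 and tend to zero with δ, while no admissible function attains W = 0, which is exactly the failure mode Weierstrass exhibited.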
=== Generalization to other boundary value problems ===
A more general expression for the potential energy of a membrane is
{\displaystyle V[\varphi ]=\iint _{D}\left[{\frac {1}{2}}\nabla \varphi \cdot \nabla \varphi +f(x,y)\varphi \right]\,dx\,dy\,+\int _{C}\left[{\frac {1}{2}}\sigma (s)\varphi ^{2}+g(s)\varphi \right]\,ds.}
This corresponds to an external force density {\displaystyle f(x,y)} in {\displaystyle D,} an external force {\displaystyle g(s)} on the boundary {\displaystyle C,} and elastic forces with modulus {\displaystyle \sigma (s)} acting on {\displaystyle C.} The function that minimizes the potential energy with no restriction on its boundary values will be denoted by {\displaystyle u.} Provided that {\displaystyle f} and {\displaystyle g} are continuous, regularity theory implies that the minimizing function {\displaystyle u} will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment {\displaystyle v.} The first variation of {\displaystyle V[u+\varepsilon v]} is given by
{\displaystyle \iint _{D}\left[\nabla u\cdot \nabla v+fv\right]\,dx\,dy+\int _{C}\left[\sigma uv+gv\right]\,ds=0.}
If we apply the divergence theorem, the result is
{\displaystyle \iint _{D}\left[-v\nabla \cdot \nabla u+vf\right]\,dx\,dy+\int _{C}v\left[{\frac {\partial u}{\partial n}}+\sigma u+g\right]\,ds=0.}
If we first set {\displaystyle v=0} on {\displaystyle C,} the boundary integral vanishes, and we conclude as before that
{\displaystyle -\nabla \cdot \nabla u+f=0}
in {\displaystyle D.} Then if we allow {\displaystyle v} to assume arbitrary boundary values, this implies that {\displaystyle u} must satisfy the boundary condition
{\displaystyle {\frac {\partial u}{\partial n}}+\sigma u+g=0,}
on {\displaystyle C.} This boundary condition is a consequence of the minimizing property of {\displaystyle u}: it is not imposed beforehand. Such conditions are called natural boundary conditions.
The preceding reasoning is not valid if {\displaystyle \sigma } vanishes identically on {\displaystyle C.} In such a case, we could allow a trial function {\displaystyle \varphi \equiv c,} where {\displaystyle c} is a constant. For such a trial function,
{\displaystyle V[c]=c\left[\iint _{D}f\,dx\,dy+\int _{C}g\,ds\right].}
By appropriate choice of {\displaystyle c,} {\displaystyle V} can assume any value unless the quantity inside the brackets vanishes. Therefore, the variational problem is meaningless unless
{\displaystyle \iint _{D}f\,dx\,dy+\int _{C}g\,ds=0.}
This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953).
== Eigenvalue problems ==
Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems.
=== Sturm–Liouville problems ===
The Sturm–Liouville eigenvalue problem involves a general quadratic form
{\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx,}
where {\displaystyle y} is restricted to functions that satisfy the boundary conditions
{\displaystyle y(x_{1})=0,\quad y(x_{2})=0.}
Let {\displaystyle R} be a normalization integral
{\displaystyle R[y]=\int _{x_{1}}^{x_{2}}r(x)y(x)^{2}\,dx.}
The functions {\displaystyle p(x)} and {\displaystyle r(x)} are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio {\displaystyle Q/R} among all {\displaystyle y} satisfying the endpoint conditions, which is equivalent to minimizing {\displaystyle Q[y]} under the constraint that {\displaystyle R[y]} is constant. It is shown below that the Euler–Lagrange equation for the minimizing {\displaystyle u} is
{\displaystyle -(pu')'+qu-\lambda ru=0,}
where {\displaystyle \lambda } is the quotient
{\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.}
It can be shown (see Gelfand and Fomin 1963) that the minimizing {\displaystyle u} has two derivatives and satisfies the Euler–Lagrange equation. The associated {\displaystyle \lambda } will be denoted by {\displaystyle \lambda _{1}}; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted by {\displaystyle u_{1}(x).} This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating {\displaystyle u} as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. This method is often surprisingly accurate.
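A minimal sketch of the Rayleigh–Ritz idea, for the illustrative problem p = r = 1, q = 0 on (0, 1) with zero boundary values (an assumption of this example), whose lowest eigenvalue is π². The single polynomial trial function x(1 − x) already gives the quotient 10:

```python
import math

def quadrature(g, a=0.0, b=1.0, n=20000):
    """Midpoint-rule approximation of the integral of g over (a, b)."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Rayleigh quotient Q[u]/R[u] for -u'' = lambda * u on (0,1), u(0)=u(1)=0,
# i.e. p = r = 1, q = 0; the lowest eigenvalue is pi^2 ~ 9.8696.
def rayleigh(u, du):
    Q = quadrature(lambda x: du(x) ** 2)   # Q[u] = integral of p*u'^2 + q*u^2
    R = quadrature(lambda x: u(x) ** 2)    # R[u] = integral of r*u^2
    return Q / R

ratio = rayleigh(lambda x: x * (1 - x), lambda x: 1 - 2 * x)
print(ratio, math.pi ** 2)   # 10.0 vs 9.8696...: an upper bound within 1.5%
```

As the variational characterization promises, the quotient of any admissible trial function bounds the lowest eigenvalue from above, and even this crude one-function "combination" is accurate to about 1.3%.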
The next smallest eigenvalue and eigenfunction can be obtained by minimizing {\displaystyle Q} under the additional constraint
{\displaystyle \int _{x_{1}}^{x_{2}}r(x)u_{1}(x)y(x)\,dx=0.}
This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem.
The variational problem also applies to more general boundary conditions. Instead of requiring that {\displaystyle y} vanish at the endpoints, we may not impose any condition at the endpoints, and set
{\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx+a_{1}y(x_{1})^{2}+a_{2}y(x_{2})^{2},}
where {\displaystyle a_{1}} and {\displaystyle a_{2}} are arbitrary. If we set {\displaystyle y=u+\varepsilon v,} the first variation for the ratio {\displaystyle Q/R} is
{\displaystyle V_{1}={\frac {2}{R[u]}}\left(\int _{x_{1}}^{x_{2}}\left[p(x)u'(x)v'(x)+q(x)u(x)v(x)-\lambda r(x)u(x)v(x)\right]\,dx+a_{1}u(x_{1})v(x_{1})+a_{2}u(x_{2})v(x_{2})\right),}
where {\displaystyle \lambda } is given by the ratio {\displaystyle Q[u]/R[u]} as previously.
After integration by parts,
{\displaystyle {\frac {R[u]}{2}}V_{1}=\int _{x_{1}}^{x_{2}}v(x)\left[-(pu')'+qu-\lambda ru\right]\,dx+v(x_{1})[-p(x_{1})u'(x_{1})+a_{1}u(x_{1})]+v(x_{2})[p(x_{2})u'(x_{2})+a_{2}u(x_{2})].}
If we first require that {\displaystyle v} vanish at the endpoints, the first variation will vanish for all such {\displaystyle v} only if
{\displaystyle -(pu')'+qu-\lambda ru=0\quad {\hbox{for}}\quad x_{1}<x<x_{2}.}
If {\displaystyle u} satisfies this condition, then the first variation will vanish for arbitrary {\displaystyle v} only if
{\displaystyle -p(x_{1})u'(x_{1})+a_{1}u(x_{1})=0,\quad {\hbox{and}}\quad p(x_{2})u'(x_{2})+a_{2}u(x_{2})=0.}
These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization.
=== Eigenvalue problems in several dimensions ===
Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain {\displaystyle D} with boundary {\displaystyle B} in three dimensions we may define
{\displaystyle Q[\varphi ]=\iiint _{D}p(X)\nabla \varphi \cdot \nabla \varphi +q(X)\varphi ^{2}\,dx\,dy\,dz+\iint _{B}\sigma (S)\varphi ^{2}\,dS,}
and
{\displaystyle R[\varphi ]=\iiint _{D}r(X)\varphi (X)^{2}\,dx\,dy\,dz.}
Let
{\displaystyle u}
be the function that minimizes the quotient
{\displaystyle Q[\varphi ]/R[\varphi ]}
, with no condition prescribed on the boundary
{\displaystyle B.}
The Euler–Lagrange equation satisfied by
{\displaystyle u}
is
{\displaystyle -\nabla \cdot (p(X)\nabla u)+q(X)u-\lambda r(X)u=0,}
where
{\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.}
The minimizing
{\displaystyle u}
must also satisfy the natural boundary condition
{\displaystyle p(S){\frac {\partial u}{\partial n}}+\sigma (S)u=0,}
on the boundary
{\displaystyle B.}
This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions are in Courant and Hilbert (1953).
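The same discretization idea works in higher dimensions. A sketch (with the simplifying assumptions p = r = 1, q = 0 and imposed Dirichlet data on the unit square, rather than the Robin natural condition above): the two-dimensional Laplacian can be assembled as a Kronecker sum of one-dimensional second-difference matrices, and its smallest eigenvalue approximates the exact value 2π²:

```python
import numpy as np

# Sketch: smallest Dirichlet eigenvalue of -Laplacian on the unit square.
# Exact value: pi^2 * (1^2 + 1^2) = 2 * pi^2 ~ 19.739.
n = 30
h = 1.0 / (n + 1)
main = np.full(n, 2.0 / h**2)
off = np.full(n - 1, -1.0 / h**2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # 1-D -d^2/dx^2
I = np.eye(n)
A = np.kron(T, I) + np.kron(I, T)   # 2-D Laplacian as a Kronecker sum

lam_min = np.linalg.eigvalsh(A)[0]
print(lam_min)                      # close to 2 * pi^2
```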
== Applications ==
=== Optics ===
Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the
{\displaystyle x}
-coordinate is chosen as the parameter along the path, and
{\displaystyle y=f(x)}
along the path, then the optical length is given by
{\displaystyle A[f]=\int _{x_{0}}^{x_{1}}n(x,f(x)){\sqrt {1+f'(x)^{2}}}dx,}
where the refractive index
{\displaystyle n(x,y)}
depends upon the material.
If we try
{\displaystyle f(x)=f_{0}(x)+\varepsilon f_{1}(x)}
then the first variation of
{\displaystyle A}
(the derivative of
{\displaystyle A}
with respect to
{\displaystyle \varepsilon }
) is
{\displaystyle \delta A[f_{0},f_{1}]=\int _{x_{0}}^{x_{1}}\left[{\frac {n(x,f_{0})f_{0}'(x)f_{1}'(x)}{\sqrt {1+f_{0}'(x)^{2}}}}+n_{y}(x,f_{0})f_{1}{\sqrt {1+f_{0}'(x)^{2}}}\right]dx.}
After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation
{\displaystyle -{\frac {d}{dx}}\left[{\frac {n(x,f_{0})f_{0}'}{\sqrt {1+f_{0}'^{2}}}}\right]+n_{y}(x,f_{0}){\sqrt {1+f_{0}'(x)^{2}}}=0.}
The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.
==== Snell's law ====
There is a discontinuity of the refractive index when light enters or leaves a lens. Let
{\displaystyle n(x,y)={\begin{cases}n_{(-)}&{\text{if}}\quad x<0,\\n_{(+)}&{\text{if}}\quad x>0,\end{cases}}}
where
{\displaystyle n_{(-)}}
and
{\displaystyle n_{(+)}}
are constants. Then the Euler–Lagrange equation holds as before in the region where
{\displaystyle x<0}
or
{\displaystyle x>0}
, and in fact the path is a straight line there, since the refractive index is constant. At
{\displaystyle x=0}
,
{\displaystyle f}
must be continuous, but
{\displaystyle f'}
may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form
{\displaystyle \delta A[f_{0},f_{1}]=f_{1}(0)\left[n_{(-)}{\frac {f_{0}'(0^{-})}{\sqrt {1+f_{0}'(0^{-})^{2}}}}-n_{(+)}{\frac {f_{0}'(0^{+})}{\sqrt {1+f_{0}'(0^{+})^{2}}}}\right].}
The factor multiplying
{\displaystyle n_{(-)}}
is the sine of the angle of the incident ray with the
{\displaystyle x}
axis, and the factor multiplying
{\displaystyle n_{(+)}}
is the sine of the angle of the refracted ray with the
{\displaystyle x}
axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to the vanishing of the first variation of the optical path length.
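This variational reading of Snell's law can be checked numerically. In the sketch below (the endpoints, the indices n₁ and n₂, and the golden-section helper are all illustrative choices, not from the text), minimizing the total optical length over the crossing point on the interface x = 0 reproduces the equality of the two sine terms:

```python
import math

# Sketch: stationary path from (x0, y0) through (0, y) to (x1, y1),
# with index n1 for x < 0 and n2 for x > 0.
def optical_length(y, x0, y0, x1, y1, n1, n2):
    """Total optical path length as a function of the crossing height y."""
    left = n1 * math.hypot(0 - x0, y - y0)
    right = n2 * math.hypot(x1 - 0, y1 - y)
    return left + right

def minimize(f, lo, hi, tol=1e-10):
    """Crude golden-section search for a unimodal function."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x0, y0, x1, y1 = -1.0, 0.0, 1.0, 1.0
n1, n2 = 1.0, 1.5
y = minimize(lambda t: optical_length(t, x0, y0, x1, y1, n1, n2), 0.0, 1.0)

# Sines of the angles each ray makes with the x axis, as in the text.
sin_inc = (y - y0) / math.hypot(0 - x0, y - y0)
sin_ref = (y1 - y) / math.hypot(x1, y1 - y)
print(abs(n1 * sin_inc - n2 * sin_ref) < 1e-6)  # True: Snell's law holds
```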
==== Fermat's principle in three dimensions ====
It is expedient to use vector notation: let
{\displaystyle X=(x_{1},x_{2},x_{3}),}
let
{\displaystyle t}
be a parameter, let
{\displaystyle X(t)}
be the parametric representation of a curve
{\displaystyle C,}
and let
{\displaystyle {\dot {X}}(t)}
be its tangent vector. The optical length of the curve is given by
{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}n(X){\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,dt.}
Note that this integral is invariant with respect to changes in the parametric representation of
{\displaystyle C.}
The Euler–Lagrange equations for a minimizing curve have the symmetric form
{\displaystyle {\frac {d}{dt}}P={\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,\nabla n,}
where
{\displaystyle P={\frac {n(X){\dot {X}}}{\sqrt {{\dot {X}}\cdot {\dot {X}}}}}.}
It follows from the definition that
{\displaystyle P}
satisfies
{\displaystyle P\cdot P=n(X)^{2}.}
Therefore, the integral may also be written as
{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}P\cdot {\dot {X}}\,dt.}
This form suggests that if we can find a function
{\displaystyle \psi }
whose gradient is given by
{\displaystyle P,}
then the integral
{\displaystyle A}
is given by the difference of
{\displaystyle \psi }
at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of
{\displaystyle \psi }
. In order to find such a function, we turn to the wave equation, which governs the propagation of light.
===== Connection with the wave equation =====
The wave equation for an inhomogeneous medium is
{\displaystyle u_{tt}=c^{2}\nabla \cdot \nabla u,}
where
{\displaystyle c}
is the velocity, which generally depends upon
{\displaystyle X}
. Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy
{\displaystyle \varphi _{t}^{2}=c(X)^{2}\,\nabla \varphi \cdot \nabla \varphi .}
We may look for solutions in the form
{\displaystyle \varphi (t,X)=t-\psi (X).}
In that case,
{\displaystyle \psi }
satisfies
{\displaystyle \nabla \psi \cdot \nabla \psi =n^{2},}
where
{\displaystyle n=1/c}
. According to the theory of first-order partial differential equations, if
{\displaystyle P=\nabla \psi ,}
then
{\displaystyle P}
satisfies
{\displaystyle {\frac {dP}{ds}}=n\,\nabla n,}
along a system of curves (the light rays) that are given by
{\displaystyle {\frac {dX}{ds}}=P.}
These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification
{\displaystyle {\frac {ds}{dt}}={\frac {\sqrt {{\dot {X}}\cdot {\dot {X}}}}{n}}.}
We conclude that the function
{\displaystyle \psi }
is the value of the minimizing integral
{\displaystyle A}
as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated first-order partial differential equation is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems.
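The ray system dX/ds = P, dP/ds = n ∇n can be integrated directly. The sketch below uses an arbitrary, made-up graded-index profile n(X) = 1 + 0.1·y (purely for illustration) and checks the conserved quantity P·P − n² that follows from these equations:

```python
import numpy as np

# Sketch: trace a ray in a toy graded-index medium n(X) = 1 + 0.1 * y.
# Along an exact solution of dX/ds = P, dP/ds = n * grad(n), the quantity
# P.P - n(X)^2 is conserved; a small-step Euler march keeps it near zero.
def n_of(X):
    return 1.0 + 0.1 * X[1]

def grad_n(X):
    # gradient of the assumed profile (independent of X here)
    return np.array([0.0, 0.1, 0.0])

X = np.array([0.0, 0.0, 0.0])
P = n_of(X) * np.array([1.0, 0.0, 0.0])   # start horizontal with |P| = n
ds = 1e-3
for _ in range(10_000):                   # march 10 units of the parameter s
    X = X + ds * P
    P = P + ds * n_of(X) * grad_n(X)

print(P @ P - n_of(X)**2)                 # stays near 0
```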
=== Mechanics ===
In classical mechanics, the action,
{\displaystyle S,}
is defined as the time integral of the Lagrangian,
{\displaystyle L}
. The Lagrangian is the difference of energies,
{\displaystyle L=T-U,}
where
{\displaystyle T}
is the kinetic energy of a mechanical system and
{\displaystyle U}
its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral
{\displaystyle S=\int _{t_{0}}^{t_{1}}L(x,{\dot {x}},t)\,dt}
is stationary with respect to variations in the path
{\displaystyle x(t)}
.
The Euler–Lagrange equations for this system are known as Lagrange's equations:
{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}},}
and they are equivalent to Newton's equations of motion (for such systems).
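Stationarity of the action at a solution of Lagrange's equations can be seen numerically. In the sketch below (a toy system, not from the text), the harmonic-oscillator Lagrangian L = ẋ²/2 − x²/2 gives Lagrange's equation ẍ = −x, solved by x(t) = cos t; perturbing that path changes the action only at second order in the perturbation size:

```python
import numpy as np

# Sketch: the action is stationary at x(t) = cos(t) for L = xdot^2/2 - x^2/2.
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def action(x):
    xdot = np.gradient(x, t)
    lagrangian = 0.5 * xdot**2 - 0.5 * x**2
    # trapezoidal rule for the time integral of L
    return np.sum(lagrangian[1:] + lagrangian[:-1]) * dt / 2

x_true = np.cos(t)
bump = np.sin(np.pi * t)               # perturbation vanishing at endpoints
S0 = action(x_true)
dS_big = action(x_true + 1e-2 * bump) - S0
dS_small = action(x_true + 1e-3 * bump) - S0
print(dS_big, dS_small)                # ratio ~ 100: quadratic, not linear
```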
The conjugate momenta
{\displaystyle p}
are defined by
{\displaystyle p={\frac {\partial L}{\partial {\dot {x}}}}.}
For example, if
{\displaystyle T={\frac {1}{2}}m{\dot {x}}^{2},}
then
{\displaystyle p=m{\dot {x}}.}
Hamiltonian mechanics results if the conjugate momenta are introduced in place of
{\displaystyle {\dot {x}}}
by a Legendre transformation of the Lagrangian
{\displaystyle L}
into the Hamiltonian
{\displaystyle H}
defined by
{\displaystyle H(x,p,t)=p\,{\dot {x}}-L(x,{\dot {x}},t).}
The Hamiltonian is the total energy of the system:
{\displaystyle H=T+U}
.
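The identity H = T + U can be checked directly from the Legendre transform with made-up numbers (the values of m, v, and U below are arbitrary):

```python
# Sketch: for L = m*v**2/2 - U with conjugate momentum p = m*v,
# the Legendre transform H = p*v - L reproduces T + U.
m, v, U = 2.0, 3.0, 5.0
L = 0.5 * m * v**2 - U       # Lagrangian T - U
p = m * v                    # conjugate momentum dL/dv
H = p * v - L                # Legendre transform
print(H, 0.5 * m * v**2 + U)   # both 14.0
```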
Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of
{\displaystyle X}
. This function is a solution of the Hamilton–Jacobi equation:
{\displaystyle {\frac {\partial \psi }{\partial t}}+H\left(x,{\frac {\partial \psi }{\partial x}},t\right)=0.}
=== Further applications ===
Further applications of the calculus of variations include the following:
The derivation of the catenary shape
Solution to Newton's minimal resistance problem
Solution to the brachistochrone problem
Solution to the tautochrone problem
Solution to isoperimetric problems
Calculating geodesics
Finding minimal surfaces and solving Plateau's problem
Optimal control
Analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics;
Geometric optics, especially Lagrangian and Hamiltonian optics;
Variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states;
Variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning;
Variational methods in general relativity, a family of techniques using calculus of variations to solve problems in Einstein's general theory of relativity;
Finite element method is a variational method for finding numerical solutions to boundary-value problems in differential equations;
Total variation denoising, an image processing method for filtering high variance or noisy signals.
== Variations and sufficient condition for a minimum ==
Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. The first variation is defined as the linear part of the change in the functional, and the second variation is defined as the quadratic part.
For example, if
{\displaystyle J[y]}
is a functional with the function
{\displaystyle y=y(x)}
as its argument, and there is a small change in its argument from
{\displaystyle y}
to
{\displaystyle y+h,}
where
{\displaystyle h=h(x)}
is a function in the same function space as
{\displaystyle y}
, then the corresponding change in the functional is
{\displaystyle \Delta J[h]=J[y+h]-J[y].}
The functional
{\displaystyle J[y]}
is said to be differentiable if
{\displaystyle \Delta J[h]=\varphi [h]+\varepsilon \|h\|,}
where
{\displaystyle \varphi [h]}
is a linear functional,
{\displaystyle \|h\|}
is the norm of
{\displaystyle h,}
and
{\displaystyle \varepsilon \to 0}
as
{\displaystyle \|h\|\to 0.}
The linear functional
{\displaystyle \varphi [h]}
is the first variation of
{\displaystyle J[y]}
and is denoted by
{\displaystyle \delta J[h]=\varphi [h].}
The functional
{\displaystyle J[y]}
is said to be twice differentiable if
{\displaystyle \Delta J[h]=\varphi _{1}[h]+\varphi _{2}[h]+\varepsilon \|h\|^{2},}
where
{\displaystyle \varphi _{1}[h]}
is a linear functional (the first variation),
{\displaystyle \varphi _{2}[h]}
is a quadratic functional, and
{\displaystyle \varepsilon \to 0}
as
{\displaystyle \|h\|\to 0.}
The quadratic functional
{\displaystyle \varphi _{2}[h]}
is the second variation of
{\displaystyle J[y]}
and is denoted by
{\displaystyle \delta ^{2}J[h]=\varphi _{2}[h].}
The second variation
{\displaystyle \delta ^{2}J[h]}
is said to be strongly positive if
{\displaystyle \delta ^{2}J[h]\geq k\|h\|^{2},}
for all
{\displaystyle h}
and for some constant
{\displaystyle k>0.}
Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated.
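The decomposition of ΔJ into a linear and a quadratic part can be made concrete for a simple example functional (a sketch, with J[y] = ∫ y'(x)² dx chosen for illustration, not taken from the text): expanding J[y + h] gives ΔJ = 2∫ y'h' dx + ∫ h'² dx, whose first term is the first variation and whose second term is the second variation.

```python
import numpy as np

# Sketch: for J[y] = int y'(x)^2 dx, Delta J splits exactly into a part
# linear in h (the first variation) plus a part quadratic in h (the
# second variation), since the discrete gradient is a linear operator.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def integral(f):
    return np.sum(f[1:] + f[:-1]) * dx / 2   # trapezoidal rule

def J(y):
    yp = np.gradient(y, x)
    return integral(yp**2)

y = x**2
h = 0.01 * np.sin(np.pi * x)
yp, hp = np.gradient(y, x), np.gradient(h, x)

delta_J = J(y + h) - J(y)
first = 2 * integral(yp * hp)       # linear in h
second = integral(hp**2)            # quadratic in h
print(abs(delta_J - (first + second)) < 1e-8)  # True
```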
== See also ==
== Notes ==
== References ==
== Further reading ==
Benesova, B. and Kruzik, M.: "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review 59(4) (2017), 703–766.
Bolza, O.: Lectures on the Calculus of Variations. Chelsea Publishing Company, 1904, available on Digital Mathematics library. 2nd edition republished in 1961, paperback in 2005, ISBN 978-1-4181-8201-4.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
Clegg, J.C.: Calculus of Variations, Interscience Publishers Inc., 1968.
Courant, R.: Dirichlet's principle, conformal mapping and minimal surfaces. Interscience, 1950.
Dacorogna, Bernard: "Introduction" Introduction to the Calculus of Variations, 3rd edition. 2014, World Scientific Publishing, ISBN 978-1-78326-551-0.
Elsgolc, L.E.: Calculus of Variations, Pergamon Press Ltd., 1962.
Forsyth, A.R.: Calculus of Variations, Dover, 1960.
Fox, Charles: An Introduction to the Calculus of Variations, Dover Publ., 1987.
Giaquinta, Mariano; Hildebrandt, Stefan: Calculus of Variations I and II, Springer-Verlag, ISBN 978-3-662-03278-7 and ISBN 978-3-662-06201-2
Jost, J. and X. Li-Jost: Calculus of Variations. Cambridge University Press, 1998.
Lebedev, L.P. and Cloud, M.J.: The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics, World Scientific, 2003, pages 1–98.
Logan, J. David: Applied Mathematics, 3rd edition. Wiley-Interscience, 2006
Pike, Ralph W. "Chapter 8: Calculus of Variations". Optimization for Engineering Systems. Louisiana State University. Archived from the original on 2007-07-05.
Roubicek, T.: "Calculus of variations". Chap.17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588.
Sagan, Hans: Introduction to the Calculus of Variations, Dover, 1992.
Weinstock, Robert: Calculus of Variations with Applications to Physics and Engineering, Dover, 1974 (reprint of 1952 ed.).
== External links ==
Variational calculus. Encyclopedia of Mathematics.
calculus of variations. PlanetMath.
Calculus of Variations. MathWorld.
Calculus of variations. Example problems.
Mathematics - Calculus of Variations and Integral Equations. Lectures on YouTube.
Selected papers on Geodesic Fields. Part I, Part II.
"A Mathematical Theory of Communication" is an article by the mathematician Claude E. Shannon, published in the Bell System Technical Journal in 1948. It was renamed The Mathematical Theory of Communication in the 1949 book of the same name, a small but significant change of title made after the generality of the work was recognized. With tens of thousands of citations, it is among the most influential and most-cited scientific papers of all time, having given rise to the field of information theory. Scientific American referred to the paper as the "Magna Carta of the Information Age", while the electrical engineer Robert G. Gallager called it a "blueprint for the digital era". The historian James Gleick rated the paper the most important development of 1948, placing the transistor second in the same period and emphasizing that Shannon's paper was "even more profound and more fundamental" than the transistor.
It is also noted that "as did relativity and quantum theory, information theory radically changed the way scientists look at the universe". The paper also formally introduced the term "bit" and serves as its theoretical foundation.
== Publication ==
The article was the founding work of the field of information theory. It was later published in 1949 as a book titled The Mathematical Theory of Communication (ISBN 0-252-72546-8), which was published as a paperback in 1963 (ISBN 0-252-72548-4). The book contains an additional article by Warren Weaver, providing an overview of the theory for a more general audience.
== Contents ==
This work is known for introducing the concepts of channel capacity as well as the noisy channel coding theorem.
Shannon's article laid out the basic elements of communication:
An information source that produces a message
A transmitter that operates on the message to create a signal which can be sent through a channel
A channel, which is the medium over which the signal, carrying the information that composes the message, is sent
A receiver, which transforms the signal back into the message intended for delivery
A destination, which can be a person or a machine, for whom or which the message is intended
It also developed the concepts of information entropy, redundancy and the source coding theorem, and introduced the term bit (which Shannon credited to John Tukey) as a unit of information. It was also in this paper that the Shannon–Fano coding technique was proposed – a technique developed in conjunction with Robert Fano.
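The entropy of a source, in bits per symbol, follows directly from the symbol probabilities. A short sketch (the probability values are illustrative):

```python
import math

# Sketch: Shannon entropy H = -sum(p * log2(p)) in bits per symbol.
# A fair coin carries exactly one bit; a biased source carries less,
# the shortfall being redundancy.
def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))    # 1.0 (one bit per symbol)
print(entropy_bits([0.9, 0.1]))    # ~0.469: a redundant source
```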
== References ==
== External links ==
(PDF) "A Mathematical Theory of Communication" by C. E. Shannon (reprint with corrections) hosted by the Harvard Mathematics Department, at Harvard University
Original publications: The Bell System Technical Journal 1948-07: Vol 27 Iss 3. AT & T Bell Laboratories. 1948-07-01. pp. 379–423., The Bell System Technical Journal 1948-10: Vol 27 Iss 4. AT & T Bell Laboratories. 1948-10-01. pp. 623–656.
Khan Academy video about "A Mathematical Theory of Communication"
Stochastic calculus is a branch of mathematics that operates on stochastic processes. It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes. The field was founded by the Japanese mathematician Kiyosi Itô during World War II.
The best-known stochastic process to which stochastic calculus is applied is the Wiener process (named in honor of Norbert Wiener), which is used for modeling Brownian motion, as described by Louis Bachelier in 1900 and by Albert Einstein in 1905, and other physical diffusion processes of particles subject to random forces. Since the 1970s, the Wiener process has been widely applied in financial mathematics and economics to model the evolution in time of stock prices and bond interest rates.
The main flavours of stochastic calculus are the Itô calculus and its variational relative the Malliavin calculus. For technical reasons the Itô integral is the most useful for general classes of processes, but the related Stratonovich integral is frequently useful in problem formulation (particularly in engineering disciplines). The Stratonovich integral can readily be expressed in terms of the Itô integral, and vice versa. The main benefit of the Stratonovich integral is that it obeys the usual chain rule and therefore does not require Itô's lemma. This enables problems to be expressed in a coordinate system invariant form, which is invaluable when developing stochastic calculus on manifolds other than Rn.
The dominated convergence theorem does not hold for the Stratonovich integral; consequently it is very difficult to prove results without re-expressing the integrals in Itô form.
== Itô integral ==
The Itô integral is central to the study of stochastic calculus. The integral
{\displaystyle \int H\,dX}
is defined for a semimartingale X and locally bounded predictable process H.
== Stratonovich integral ==
The Stratonovich integral or Fisk–Stratonovich integral of a semimartingale
{\displaystyle X}
against another semimartingale Y can be defined in terms of the Itô integral as
{\displaystyle \int _{0}^{t}X_{s-}\circ dY_{s}:=\int _{0}^{t}X_{s-}dY_{s}+{\frac {1}{2}}\left[X,Y\right]_{t}^{c},}
where
{\displaystyle [X,Y]_{t}^{c}}
denotes the optional quadratic covariation of the continuous parts of
{\displaystyle X}
and
{\displaystyle Y}
, which is the optional quadratic covariation minus the jumps of the processes
{\displaystyle X}
and
{\displaystyle Y}
, i.e.
{\displaystyle \left[X,Y\right]_{t}^{c}:=[X,Y]_{t}-\sum \limits _{s\leq t}\Delta X_{s}\Delta Y_{s}}
.
The alternative notation
{\displaystyle \int _{0}^{t}X_{s}\,\partial Y_{s}}
is also used to denote the Stratonovich integral.
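The difference between the two integrals can be seen in their discrete approximations. A sketch for ∫₀¹ W dW with a simulated Wiener path (the sample size and seed are arbitrary): the left-endpoint (Itô) sum converges to (W₁² − 1)/2, while the midpoint (Stratonovich) sum telescopes to exactly W₁²/2 for this integrand; the gap is half the quadratic variation [W, W]₁ = 1.

```python
import numpy as np

# Sketch: discrete Ito (left endpoint) vs Stratonovich (midpoint) sums
# for int_0^1 W dW along one simulated Wiener path.
rng = np.random.default_rng(0)
n = 200_000
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)     # increments over [0, 1]
W = np.concatenate(([0.0], np.cumsum(dW)))         # the path, W[0] = 0

ito = np.sum(W[:-1] * dW)                          # left-endpoint sum
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)        # midpoint sum

print(ito, (W[-1]**2 - 1) / 2)     # close: Ito limit is (W_1^2 - 1)/2
print(strat, W[-1]**2 / 2)         # equal up to roundoff (telescoping)
print(strat - ito)                 # ~ 0.5 = half the quadratic variation
```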
== Applications ==
An important application of stochastic calculus is in mathematical finance, in which asset prices are often assumed to follow stochastic differential equations. For example, the Black–Scholes model prices options as if the underlying asset follows a geometric Brownian motion, illustrating the opportunities and risks from applying stochastic calculus.
== Stochastic integrals ==
Besides the classical Itô and Fisk–Stratonovich integrals, many other notions of stochastic integrals exist, such as the Hitsuda–Skorokhod integral, the Marcus integral, and the Ogawa integral.
== See also ==
== References ==
Thomas Mikosch, 1998, Elementary Stochastic Calculus, World Scientific, ISBN 981-023543-7
Fima C Klebaner, 2012, Introduction to Stochastic Calculus with Application (3rd Edition). World Scientific Publishing, ISBN 9781848168312
Szabados, T.S.; Székely, B.Z. (2008). "Stochastic Integration Based on Simple, Symmetric Random Walks". Journal of Theoretical Probability. 22: 203–219. arXiv:0712.3908. doi:10.1007/s10959-007-0140-8. Preprint
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or signal pathways. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks.
In neuroscience, a biological neural network is a physical structure found in brains and complex nervous systems – a population of nerve cells connected by synapses.
In machine learning, an artificial neural network is a mathematical model used to approximate nonlinear functions. Artificial neural networks are used to solve artificial intelligence problems.
== In biology ==
In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses.
Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating signals it receives, or an inhibitory role, suppressing signals instead.
Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large scale brain networks, and many of these together form brains and nervous systems.
Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion.
== In machine learning ==
In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines, today they are almost always implemented in software.
Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).
The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset.
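A single forward pass through such a layered network is a short computation. In the sketch below (the weights, biases, and ReLU activation are arbitrary illustrative choices), each neuron forms a linear combination of the previous layer's outputs and applies its activation function:

```python
import numpy as np

# Sketch: one forward pass through a tiny network with one hidden layer.
def relu(z):
    return np.maximum(z, 0.0)        # activation function

x = np.array([1.0, 2.0])             # input layer
W1 = np.array([[0.5, -1.0],          # input -> hidden weights
               [0.25, 0.75]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -0.5]])         # hidden -> output weights
b2 = np.array([0.2])

hidden = relu(W1 @ x + b1)           # linear combination, then activation
output = W2 @ hidden + b2
print(output)                        # [-0.725]
```

Training would then adjust W1, b1, W2, b2 to reduce a loss over a dataset, e.g. by backpropagation.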
The term deep neural network refers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers.
Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI.
== History ==
The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873 and William James in 1890. Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.
Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach of connectionism. However, starting with the model of the artificial neuron proposed by Warren McCulloch and Walter Pitts in 1943, followed by Frank Rosenblatt's perceptron, a simple artificial neural network implemented in hardware in 1957, artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
== See also ==
Emergence
Biological cybernetics
Biologically-inspired computing
== References ==
Synchronous Data Link Control (SDLC) is a computer serial communications protocol first introduced by IBM as part of its Systems Network Architecture (SNA). SDLC is used as layer 2, the data link layer, in the SNA protocol stack. It supports multipoint links as well as error correction. It also runs under the assumption that an SNA header is present after the SDLC header. SDLC was mainly used by IBM mainframe and midrange systems; however, implementations exist on many platforms from many vendors. In the United States and Canada, SDLC can be found in traffic control cabinets. SDLC was released in 1975, based on work done for IBM in the early 1970s.
SDLC operates independently on each communications link in the network and can operate on point-to-point, multipoint, or loop facilities, on switched or dedicated, two-wire or four-wire circuits, and with full-duplex and half-duplex operation. A unique characteristic of SDLC is its ability to mix half-duplex secondary stations with full-duplex primary stations on four-wire circuits, thus reducing the cost of dedicated facilities.
This de facto standard was adopted by ISO as High-Level Data Link Control (HDLC) in 1979 and by ANSI as Advanced Data Communication Control Procedures (ADCCP). The latter standards added features such as the Asynchronous Balanced Mode and frame sizes that did not need to be multiples of eight bits, but also removed some of the procedures and messages (such as the TEST message).
Intel used SDLC as a base protocol for BITBUS, still popular in Europe as fieldbus and included support in several controllers (i8044/i8344, i80152). The 8044 controller is still in production by third-party vendors. Other vendors putting hardware support for SDLC (and the slightly different HDLC) into communication controller chips of the 1980s included Zilog, Motorola, and National Semiconductor. As a result, a wide variety of equipment in the 1980s used it and it was very common in the mainframe-centric corporate networks which were the norm in the 1980s. The most common alternatives for SNA with SDLC were probably DECnet with Digital Data Communications Message Protocol (DDCMP), Burroughs Network Architecture (BNA) with Burroughs Data Link Control (BDLC), and ARPANET with IMPs.
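One framing detail shared by SDLC and HDLC, though not described above, is zero-bit insertion: frames are delimited by the flag pattern 01111110 (0x7E), so within a frame the transmitter stuffs a 0 after every run of five consecutive 1 bits, and the receiver removes it. A sketch of the idea on bit lists:

```python
# Sketch of SDLC/HDLC zero-bit insertion, operating on lists of bits.
def stuff(bits):
    """Insert a 0 after every run of five consecutive 1 bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed zero
            run = 0
    return out

def unstuff(bits):
    """Remove the zeros inserted by stuff()."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed zero
            run = 0
        i += 1
    return out

data = [1, 1, 1, 1, 1, 1, 0, 1]
print(stuff(data))          # [1, 1, 1, 1, 1, 0, 1, 0, 1]
print(unstuff(stuff(data)) == data)  # True
```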
== Differences between SDLC and HDLC ==
HDLC is mostly an extension of SDLC,: 69–72 but some features were deleted or renamed.
=== HDLC features not in SDLC ===
Features present in HDLC, but not SDLC, are:
frames not a multiple of 8 bits long are illegal in SDLC, but optionally legal in HDLC.
HDLC optionally allows addresses more than 1 byte long.
HDLC has an option for a 32-bit frame check sequence.
asynchronous response mode, and the associated SARM and SARME U frames,
asynchronous balanced mode, and the associated SABM and SABME U frames,
and several other frame types created for HDLC:
the selective reject (SREJ) S frame,
the reset (RSET) command, and
the nonreserved (NR0 through NR3) U frames.
Also not in SDLC are later HDLC extensions in ISO/IEC 13239 such as:
15- and 31-bit sequence numbers,
the set mode (SM) U frame,
8-bit frame check sequence,
a frame format field preceding the address,
an information field in mode set U frames, and
the "unnumbered information with header check" (UIH) U frame.
=== Naming differences ===
HDLC renamed some SDLC frames. The HDLC names were incorporated into later versions of SDLC.: 73
=== HDLC extensions added to SDLC ===
Some features were added in HDLC, and subsequently added back to later versions of SDLC.
Extended (modulo-128) sequence numbers and the corresponding SNRME U frame, were added to SDLC after the publication of the HDLC standard.
=== SDLC features not in HDLC ===
Two U frames in SDLC which do not exist in HDLC are:
BCN (Beacon): When a secondary loses carrier (stops receiving any signal) from the primary, it begins transmitting a stream of "beacon" responses, identifying the location of the communication failure. This is particularly useful in SDLC loop mode.
CFGR (Configure for test) command and response: The CFGR command contains a 1-byte payload which identifies some special diagnostic operation to be performed by the secondary.: 47–49 The least significant bit indicates that the diagnostic mode should start (1) or stop (0). A payload byte of 0 stops all diagnostic modes. The secondary echoes the byte in its response.
0: Stop all diagnostic modes.
2 (off)/3 (on): Beacon test. Disable all output, causing the next recipient to lose carrier (and begin beaconing).
4 (off)/5 (on): Monitor mode. Disable all frame generation, becoming silent, but do not stop carrier or loop mode operation.
8 (off)/9 (on): Wrap mode. Enter local loopback, connecting the secondary's input to its own output for the duration of the test.
10 (off)/11 (on): Self-test. Perform local diagnostics. CFGR response is delayed until the diagnostics complete, at which time the response is 10 (self-test failed) or 11 (self-test successful).
12 (off)/13 (on): Modified link test. Rather than echoing TEST commands verbatim, generate a TEST response consisting of a number of copies of the first byte of the TEST command.
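The payload convention above (mode number in the high bits, start/stop in the least significant bit) can be restated as a small decoder. This is purely illustrative; the function and mode-name strings below are not part of the SDLC specification.

```python
# Illustrative decoder for the CFGR payload byte described above.
# The mode names and this function are not from the SDLC spec; they
# just restate the table: bit 0 selects start (1) / stop (0), and the
# remaining bits select the diagnostic mode.

CFGR_MODES = {
    0: "stop all diagnostic modes",
    2: "beacon test",
    4: "monitor mode",
    8: "wrap mode",
    10: "self-test",
    12: "modified link test",
}

def decode_cfgr(payload: int) -> str:
    if payload == 0:
        return CFGR_MODES[0]
    mode = CFGR_MODES.get(payload & ~1, "unknown mode")
    action = "start" if payload & 1 else "stop"
    return f"{action} {mode}"
```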
Several U frames are almost entirely unused in HDLC, existing primarily for SDLC compatibility:
Initialization mode, and the associated RIM and SIM U frames, are so vaguely defined in HDLC as to be useless, but are used by some peripherals in SDLC.
Unnumbered poll (UP) is almost never used in HDLC, its function having been superseded by asynchronous response mode. UP is an exception to the usual rule in normal response mode that a secondary must receive the poll flag before transmitting; while a secondary must respond to any frame with the poll bit set, it may respond to a UP frame with the poll bit clear if it has data to transmit. If the lower-level communication channel is capable of avoiding collisions (as it is in loop mode), UP to the broadcast address allows multiple secondaries to respond without having to poll them individually.
The TEST U frame was not included in early HDLC standards, but was added later.
==== Loop mode ====
A special mode of SDLC operation which is supported by e.g. the Zilog SCC but was not incorporated into HDLC is SDLC loop mode. In this mode, a primary and a number of secondaries are connected in a unidirectional ring network, with each one's output connected to the next's input. Each secondary is responsible for copying all frames which arrive at its input so that they reach the rest of the ring and eventually return to the primary. Except for this copying, a secondary operates in half-duplex mode; it only transmits when the protocol guarantees it will receive no input.
When a secondary is powered off, a relay connects its input directly to its output. When powering on, a secondary waits for an opportune moment and then goes "on-loop", inserting itself into the data stream with a one-bit delay. A similar opportunity is used to go "off-loop" as part of a clean shutdown.
In SDLC loop mode, frames arrive in a group, ending (after the final flag) with an all-ones idle signal. The first seven 1-bits of this (the pattern 01111111) constitute a "go-ahead" sequence (also called EOP, end of poll) giving a secondary permission to transmit. A secondary which wishes to transmit uses its 1-bit delay to convert the final 1 bit in this sequence to a 0 bit, making it a flag character, and then transmits its own frames. After its own final flag, it transmits an all-ones idle signal, which will serve as a go-ahead for the next station on the loop.
The group starts with commands from the primary, and each secondary appends its responses. When the primary receives the go-ahead idle sequence, it knows that the secondaries are finished and it may transmit more commands.
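The go-ahead mechanics can be sketched as a toy bit-string model (not real line coding, and ignoring zero-bit stuffing): a secondary with pending frames uses its one-bit delay to turn the trailing 1 of the 01111111 go-ahead into a 0, producing a flag, appends its frames, and closes with a fresh all-ones go-ahead for the next station.

```python
# Toy model: bits as a string of '0'/'1', transmitted left to right.
# A secondary with traffic to send converts the trailing go-ahead
# sequence 01111111 into a flag 01111110 and appends its own frames.

GO_AHEAD = "01111111"
FLAG = "01111110"

def seize_go_ahead(stream: str, own_frames: str) -> str:
    idx = stream.find(GO_AHEAD)
    if idx < 0:
        return stream  # no go-ahead seen; repeat the stream unchanged
    head = stream[:idx]
    # Flip the last 1 of the go-ahead to 0, making it a flag, then
    # append our frames followed by a fresh all-ones go-ahead.
    return head + FLAG + own_frames + GO_AHEAD
```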
The beacon (BCN) response is designed to help locate breaks in the loop. A secondary which does not see any incoming traffic for a long time begins sending "beacon" response frames, telling the primary that the link between that secondary and its predecessor is broken.
Because the primary also receives a copy of the commands it sent, which are indistinguishable from responses, it appends a special "turnaround" frame at the end of its commands to separate them from the responses. Any unique sequence which will not be interpreted by the secondaries will do, but the conventional one is a single all-zero byte. This is a "runt frame" with an address of 0 (reserved, unused) and no control field or frame check sequence. (Secondaries capable of full-duplex operation also interpret this as a "shut-off sequence", forcing them to abort transmission.)
== Notes ==
== References ==
McFadyen, J. H. (1976). "System Network Architecture: An Overview" (PDF). IBM Systems Journal. 15 (1): 4–23. doi:10.1147/sj.151.0004.
Odom, Wendell (2004). CCNA INTRO Exam Certification Guide: CCNA Self-study. Indianapolis, IN: Cisco Press. ISBN 1-58720-094-5.
Friend, George E.; Fike, John L; Baker, H. Charles; Bellamy, John C (1988). Understanding Data Communications (2nd ed.). Indianapolis: Howard W. Sams & Company. ISBN 0-672-27270-9.
Pooch, Udo W.; Greene, William H; Moss, Gary G (1983). Telecommunications and Networking. Boston: Little, Brown and Company. ISBN 0-316-71498-4.
Hura, Gurdeep S.; Mukesh Singhal (2001). Data and computer communications: networking and internetworking. Indianapolis: CRC Press. ISBN 0-8493-0928-X.
ITS Cabinet Standard. v01.02.17b. Washington, DC: Institute of Transportation Engineers. November 16, 2006. p. 96. All communication within the ATC controller unit shall be SDLC-compatible command-response protocol, support 0-bit stuffing, and operate at a data rate of 614.4 Kilobits per second.
== External links ==
IBM Communication Products Division (March 1979). IBM Synchronous Data Link Control: General Information (PDF) (Technical report) (3rd ed.). Document No. GA27-3093-2.
Cisco page on Synchronous Data Link Control and Derivatives
Bitbus/fieldbus community site.
In mathematics, the Dedekind zeta function of an algebraic number field K, generally denoted ζK(s), is a generalization of the Riemann zeta function (which is obtained in the case where K is the field of rational numbers Q). It can be defined as a Dirichlet series, it has an Euler product expansion, it satisfies a functional equation, it has an analytic continuation to a meromorphic function on the complex plane C with only a simple pole at s = 1, and its values encode arithmetic data of K. The extended Riemann hypothesis states that if ζK(s) = 0 and 0 < Re(s) < 1, then Re(s) = 1/2.
The Dedekind zeta function is named for Richard Dedekind who introduced it in his supplement to Peter Gustav Lejeune Dirichlet's Vorlesungen über Zahlentheorie.
== Definition and basic properties ==
Let K be an algebraic number field. Its Dedekind zeta function is first defined for complex numbers s with real part Re(s) > 1 by the Dirichlet series
{\displaystyle \zeta _{K}(s)=\sum _{I\subseteq {\mathcal {O}}_{K}}{\frac {1}{(N_{K/\mathbf {Q} }(I))^{s}}}}
where I ranges through the non-zero ideals of the ring of integers OK of K and NK/Q(I) denotes the absolute norm of I (which is equal to the index [OK : I] of I in OK, or equivalently to the cardinality of the quotient ring OK / I). This sum converges absolutely for all complex numbers s with real part Re(s) > 1. In the case K = Q, this definition reduces to that of the Riemann zeta function.
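As a numerical illustration (not from the article's text): for K = Q(i), the number of ideals of OK of norm n equals the divisor sum Σd|n χ(d), where χ is the non-trivial Dirichlet character mod 4, so partial sums of the Dirichlet series are easy to compute.

```python
# Numerical sketch for K = Q(i). For the Gaussian integers, the number
# of ideals of norm n is sum_{d | n} chi(d), where chi is the
# non-trivial Dirichlet character mod 4 (a known fact, assumed here).

def chi4(d: int) -> int:
    return {1: 1, 3: -1}.get(d % 4, 0)

def ideals_of_norm(n: int) -> int:
    return sum(chi4(d) for d in range(1, n + 1) if n % d == 0)

def zeta_Qi(s: float, terms: int = 2000) -> float:
    # Partial sum of the Dirichlet series defining zeta_K(s).
    return sum(ideals_of_norm(n) / n**s for n in range(1, terms + 1))
```

With s = 2 the partial sums approach ζ(2) times Catalan's constant, approximately 1.5067.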
=== Euler product ===
The Dedekind zeta function of K has an Euler product, which is a product over all the non-zero prime ideals 𝔭 of OK:
{\displaystyle \zeta _{K}(s)=\prod _{{\mathfrak {p}}\subseteq {\mathcal {O}}_{K}}{\frac {1}{1-N_{K/\mathbf {Q} }({\mathfrak {p}})^{-s}}},{\text{ for Re}}(s)>1.}
This is the expression in analytic terms of the uniqueness of the prime factorization of ideals in OK. For Re(s) > 1, ζK(s) is non-zero.
=== Analytic continuation and functional equation ===
Erich Hecke first proved that ζK(s) has an analytic continuation to a meromorphic function that is analytic at all points of the complex plane except for one simple pole at s = 1. The residue at that pole is given by the analytic class number formula and is made up of important arithmetic data involving invariants of the unit group and class group of K.
The Dedekind zeta function satisfies a functional equation relating its values at s and 1 − s. Specifically, let ΔK denote the discriminant of K, let r1 (resp. r2) denote the number of real places (resp. complex places) of K, and let
{\displaystyle \Gamma _{\mathbf {R} }(s)=\pi ^{-s/2}\Gamma (s/2)}
and
{\displaystyle \Gamma _{\mathbf {C} }(s)=(2\pi )^{-s}\Gamma (s)}
where Γ(s) is the gamma function. Then, the functions
{\displaystyle \Lambda _{K}(s)=\left|\Delta _{K}\right|^{s/2}\Gamma _{\mathbf {R} }(s)^{r_{1}}\Gamma _{\mathbf {C} }(s)^{r_{2}}\zeta _{K}(s)\qquad \Xi _{K}(s)={\tfrac {1}{2}}(s^{2}+{\tfrac {1}{4}})\Lambda _{K}({\tfrac {1}{2}}+is)}
satisfy the functional equation
{\displaystyle \Lambda _{K}(s)=\Lambda _{K}(1-s).\qquad \Xi _{K}(-s)=\Xi _{K}(s)}
== Special values ==
Analogously to the Riemann zeta function, the values of the Dedekind zeta function at integers encode (at least conjecturally) important arithmetic data of the field K. For example, the analytic class number formula relates the residue at s = 1 to the class number h(K) of K, the regulator R(K) of K, the number w(K) of roots of unity in K, the absolute discriminant of K, and the number of real and complex places of K. Another example is at s = 0 where it has a zero whose order r is equal to the rank of the unit group of OK and the leading term is given by
{\displaystyle \lim _{s\rightarrow 0}s^{-r}\zeta _{K}(s)=-{\frac {h(K)R(K)}{w(K)}}.}
It follows from the functional equation that
{\displaystyle r=r_{1}+r_{2}-1}.
Combining the functional equation and the fact that Γ(s) is infinite at all integers less than or equal to zero yields that ζK(s) vanishes at all negative even integers. It even vanishes at all negative odd integers unless K is totally real (i.e. r2 = 0; e.g. Q or a real quadratic field). In the totally real case, Carl Ludwig Siegel showed that ζK(s) is a non-zero rational number at negative odd integers. Stephen Lichtenbaum conjectured specific values for these rational numbers in terms of the algebraic K-theory of K.
== Relations to other L-functions ==
For the case in which K is an abelian extension of Q, its Dedekind zeta function can be written as a product of Dirichlet L-functions. For example, when K is a quadratic field this shows that the ratio
{\displaystyle {\frac {\zeta _{K}(s)}{\zeta _{\mathbf {Q} }(s)}}}
is the L-function L(s, χ), where χ is a Jacobi symbol used as Dirichlet character. That the zeta function of a quadratic field is a product of the Riemann zeta function and a certain Dirichlet L-function is an analytic formulation of the quadratic reciprocity law of Gauss.
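For K = Q(i) this factorization can be spot-checked numerically. Since Z[i] is a principal ideal domain with 4 units, ideals of norm n correspond to lattice points on x² + y² = n up to units, and the ratio ζK(2)/ζ(2) should approximate L(2, χ) = Catalan's constant ≈ 0.91597 for the non-trivial character χ mod 4. A rough sketch under those assumptions:

```python
import math

# Sketch for K = Q(i): every nonzero ideal of Z[i] is principal, and
# generators are unique up to the 4 units, so the number of ideals of
# norm n is (1/4) * #{(x, y) in Z^2 : x^2 + y^2 = n}.

def ideal_counts(limit: int) -> list:
    counts = [0] * (limit + 1)
    r = int(math.isqrt(limit))
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            n = x * x + y * y
            if 0 < n <= limit:
                counts[n] += 1
    return [c // 4 for c in counts]

def zeta_Qi(s: float, limit: int = 4000) -> float:
    a = ideal_counts(limit)
    return sum(a[n] / n**s for n in range(1, limit + 1))

def L_chi4(s: float, terms: int = 200000) -> float:
    # L(s, chi) for the non-trivial character mod 4, as an
    # alternating sum over the odd integers.
    return sum((-1) ** k / (2 * k + 1) ** s for k in range(terms))

zeta2 = math.pi**2 / 6
ratio = zeta_Qi(2) / zeta2  # should approximate L(2, chi) = Catalan's constant
```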
In general, if K is a Galois extension of Q with Galois group G, its Dedekind zeta function is the Artin L-function of the regular representation of G and hence has a factorization in terms of Artin L-functions of irreducible Artin representations of G.
The relation with Artin L-functions shows that if L/K is a Galois extension then
{\displaystyle {\frac {\zeta _{L}(s)}{\zeta _{K}(s)}}}
is holomorphic (ζK(s) "divides" ζL(s)); for general extensions the result would follow from the Artin conjecture for L-functions.
Additionally, ζK(s) is the Hasse–Weil zeta function of Spec OK and the motivic L-function of the motive coming from the cohomology of Spec K.
== Arithmetically equivalent fields ==
Two fields are called arithmetically equivalent if they have the same Dedekind zeta function. Wieb Bosma and Bart de Smit (2002) used Gassmann triples to give some examples of pairs of non-isomorphic fields that are arithmetically equivalent. In particular some of these pairs have different class numbers, so the Dedekind zeta function of a number field does not determine its class number.
Perlis (1977) showed that two number fields K and L are arithmetically equivalent if and only if all but finitely many prime numbers p have the same inertia degrees in the two fields, i.e., if
𝔭i are the prime ideals in K lying over p, then the tuples
{\displaystyle (\dim _{\mathbf {Z} /p}{\mathcal {O}}_{K}/{\mathfrak {p}}_{i})}
need to be the same for K and for L for almost all p.
== Notes ==
== References ==
Bosma, Wieb; de Smit, Bart (2002), "On arithmetically equivalent number fields of small degree", in Kohel, David R.; Fieker, Claus (eds.), Algorithmic number theory (Sydney, 2002), Lecture Notes in Comput. Sci., vol. 2369, Berlin, New York: Springer-Verlag, pp. 67–79, doi:10.1007/3-540-45455-1_6, ISBN 978-3-540-43863-2, MR 2041074
Section 10.5.1 of Cohen, Henri (2007), Number theory, Volume II: Analytic and modern tools, Graduate Texts in Mathematics, vol. 240, New York: Springer, doi:10.1007/978-0-387-49894-2, ISBN 978-0-387-49893-5, MR 2312338
Deninger, Christopher (1994), "L-functions of mixed motives", in Jannsen, Uwe; Kleiman, Steven; Serre, Jean-Pierre (eds.), Motives, Part 1, Proceedings of Symposia in Pure Mathematics, vol. 55, American Mathematical Society, pp. 517–525, ISBN 978-0-8218-1635-6
Flach, Mathias (2004), "The equivariant Tamagawa number conjecture: a survey", in Burns, David; Popescu, Christian; Sands, Jonathan; et al. (eds.), Stark's conjectures: recent work and new directions (PDF), Contemporary Mathematics, vol. 358, American Mathematical Society, pp. 79–125, ISBN 978-0-8218-3480-0
Martinet, J. (1977), "Character theory and Artin L-functions", in Fröhlich, A. (ed.), Algebraic Number Fields, Proc. Symp. London Math. Soc., Univ. Durham 1975, Academic Press, pp. 1–87, ISBN 0-12-268960-7, Zbl 0359.12015
Narkiewicz, Władysław (2004), Elementary and analytic theory of algebraic numbers, Springer Monographs in Mathematics (3 ed.), Berlin: Springer-Verlag, Chapter 7, ISBN 978-3-540-21902-6, MR 2078267
Perlis, Robert (1977), "On the equation {\displaystyle \zeta _{K}(s)=\zeta _{K'}(s)}", Journal of Number Theory, 9 (3): 342–360, doi:10.1016/0022-314X(77)90070-1
In mathematics, a Dirichlet L-series is a function of the form
{\displaystyle L(s,\chi )=\sum _{n=1}^{\infty }{\frac {\chi (n)}{n^{s}}},}
where χ is a Dirichlet character and s a complex variable with real part greater than 1. It is a special case of a Dirichlet series. By analytic continuation, it can be extended to a meromorphic function on the whole complex plane, and is then called a Dirichlet L-function, also denoted L(s, χ).
These functions are named after Peter Gustav Lejeune Dirichlet, who introduced them in (Dirichlet 1837) to prove the theorem on primes in arithmetic progressions that also bears his name. In the course of the proof, Dirichlet shows that L(s, χ) is non-zero at s = 1. Moreover, if χ is principal, then the corresponding Dirichlet L-function has a simple pole at s = 1. Otherwise, the L-function is entire.
== Euler product ==
Since a Dirichlet character χ is completely multiplicative, its L-function can also be written as an Euler product in the half-plane of absolute convergence:
{\displaystyle L(s,\chi )=\prod _{p}\left(1-\chi (p)p^{-s}\right)^{-1}{\text{ for }}{\text{Re}}(s)>1,}
where the product is over all prime numbers.
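The identity is easy to probe numerically; the sketch below compares a truncated Euler product with a truncated Dirichlet series for the non-trivial character mod 4 (an illustrative choice of χ).

```python
# Compare a truncated Euler product with a truncated Dirichlet series
# for L(s, chi), with chi the non-trivial character mod 4.

def chi(n: int) -> int:
    return {1: 1, 3: -1}.get(n % 4, 0)

def primes_up_to(limit: int):
    # Simple sieve of Eratosthenes, yielding primes in order.
    sieve = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if sieve[p]:
            yield p
            for m in range(p * p, limit + 1, p):
                sieve[m] = False

def L_series(s: float, terms: int = 100000) -> float:
    return sum(chi(n) / n**s for n in range(1, terms + 1))

def L_euler(s: float, limit: int = 100000) -> float:
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - chi(p) * p**-s)
    return prod
```

At s = 2 both truncations agree to several decimal places (the common value is Catalan's constant).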
== Primitive characters ==
Results about L-functions are often stated more simply if the character is assumed to be primitive, although the results typically can be extended to imprimitive characters with minor complications. This is because of the relationship between an imprimitive character χ and the primitive character χ⋆ which induces it:
{\displaystyle \chi (n)={\begin{cases}\chi ^{\star }(n),&{\text{if }}\gcd(n,q)=1\\0,&{\text{if }}\gcd(n,q)\neq 1\end{cases}}}
(Here, q is the modulus of χ.) An application of the Euler product gives a simple relationship between the corresponding L-functions:
{\displaystyle L(s,\chi )=L(s,\chi ^{\star })\prod _{p\,|\,q}\left(1-{\frac {\chi ^{\star }(p)}{p^{s}}}\right)}
(This formula holds for all s, by analytic continuation, even though the Euler product is only valid when Re(s) > 1.) The formula shows that the L-function of χ is equal to the L-function of the primitive character which induces χ, multiplied by only a finite number of factors.
As a special case, the L-function of the principal character χ0 modulo q can be expressed in terms of the Riemann zeta function:
{\displaystyle L(s,\chi _{0})=\zeta (s)\prod _{p\,|\,q}(1-p^{-s})}
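A quick numerical check of this formula with the illustrative values q = 3 and s = 2, where the right side is ζ(2)(1 − 3^{-2}) = (π²/6)(8/9):

```python
import math

# Check L(s, chi_0 mod q) = zeta(s) * prod_{p | q} (1 - p^{-s})
# for q = 3, s = 2 (illustrative values). The left side is a direct
# sum over n coprime to q.

def L_principal(s: float, q: int, terms: int = 200000) -> float:
    return sum(1 / n**s for n in range(1, terms + 1) if math.gcd(n, q) == 1)

lhs = L_principal(2, 3)
rhs = (math.pi**2 / 6) * (1 - 3**-2)
```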
== Functional equation ==
Dirichlet L-functions satisfy a functional equation, which provides a way to analytically continue them throughout the complex plane. The functional equation relates the value of L(s, χ) to the value of L(1 − s, χ̄), where χ̄ is the complex conjugate character. Let χ be a primitive character modulo q, where q > 1. One way to express the functional equation is:
{\displaystyle L(s,\chi )=W(\chi )2^{s}\pi ^{s-1}q^{1/2-s}\sin \left({\frac {\pi }{2}}(s+\delta )\right)\Gamma (1-s)L(1-s,{\overline {\chi }}).}
In this equation, Γ denotes the gamma function; δ is defined by χ(−1) = (−1)^δ; and
{\displaystyle W(\chi )={\frac {\tau (\chi )}{i^{\delta }{\sqrt {q}}}}}
where τ(χ) is a Gauss sum:
{\displaystyle \tau (\chi )=\sum _{a=1}^{q}\chi (a)\exp(2\pi ia/q).}
It is a property of Gauss sums that |τ(χ)| = q^{1/2}, so |W(χ)| = 1.
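This can be verified directly for a small modulus. For the non-trivial character mod 4, τ(χ) = e^{iπ/2} − e^{3iπ/2} = 2i, so |τ(χ)| = 2 = √q. A sketch of that computation:

```python
import cmath
import math

# Gauss sum tau(chi) = sum_{a=1}^{q} chi(a) * exp(2*pi*i*a/q)
# for chi the non-trivial character mod 4 (illustrative choice).

def chi(a: int) -> int:
    return {1: 1, 3: -1}.get(a % 4, 0)

q = 4
tau = sum(chi(a) * cmath.exp(2j * math.pi * a / q) for a in range(1, q + 1))
# Only a = 1 and a = 3 contribute: i - (-i) = 2i, so |tau| = sqrt(q) = 2.
```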
Another way to state the functional equation is in terms of
{\displaystyle \Lambda (s,\chi )=q^{s/2}\pi ^{-(s+\delta )/2}\operatorname {\Gamma } \left({\frac {s+\delta }{2}}\right)L(s,\chi ).}
The functional equation can be expressed as:
{\displaystyle \Lambda (s,\chi )=W(\chi )\Lambda (1-s,{\overline {\chi }}).}
The functional equation implies that L(s, χ) (and Λ(s, χ)) are entire functions of s. (Again, this assumes that χ is a primitive character modulo q with q > 1. If q = 1, then L(s, χ) = ζ(s) has a pole at s = 1.)
For generalizations, see: Functional equation (L-function).
== Zeros ==
Let χ be a primitive character modulo q, with q > 1.
There are no zeros of L(s, χ) with Re(s) > 1. For Re(s) < 0, there are zeros at certain negative integers s:
If χ(−1) = 1, the only zeros of L(s, χ) with Re(s) < 0 are simple zeros at −2, −4, −6, .... (There is also a zero at s = 0.) These correspond to the poles of Γ(s/2).
If χ(−1) = −1, then the only zeros of L(s, χ) with Re(s) < 0 are simple zeros at −1, −3, −5, .... These correspond to the poles of Γ((s + 1)/2).
These are called the trivial zeros.
The remaining zeros lie in the critical strip 0 ≤ Re(s) ≤ 1, and are called the non-trivial zeros. The non-trivial zeros are symmetrical about the critical line Re(s) = 1/2. That is, if L(ρ, χ) = 0 then L(1 − ρ̄, χ) = 0 too, because of the functional equation. If χ is a real character, then the non-trivial zeros are also symmetrical about the real axis, but not if χ is a complex character. The generalized Riemann hypothesis is the conjecture that all the non-trivial zeros lie on the critical line Re(s) = 1/2.
Up to the possible existence of a Siegel zero, zero-free regions including and beyond the line Re(s) = 1, similar to that of the Riemann zeta function, are known to exist for all Dirichlet L-functions: for example, for χ a non-real character of modulus q, we have
{\displaystyle \beta <1-{\frac {c}{\log {\big (}q(2+|\gamma |){\big )}}}}
for β + iγ a non-real zero.
== Relation to the Hurwitz zeta function ==
The Dirichlet L-functions may be written as a linear combination of the Hurwitz zeta function at rational values. Fixing an integer k ≥ 1, the Dirichlet L-functions for characters modulo k are linear combinations, with constant coefficients, of the ζ(s,a) where a = r/k and r = 1, 2, ..., k. This means that the Hurwitz zeta function for rational a has analytic properties that are closely related to the Dirichlet L-functions. Specifically, let χ be a character modulo k. Then we can write its Dirichlet L-function as:
{\displaystyle L(s,\chi )=\sum _{n=1}^{\infty }{\frac {\chi (n)}{n^{s}}}={\frac {1}{k^{s}}}\sum _{r=1}^{k}\chi (r)\operatorname {\zeta } \left(s,{\frac {r}{k}}\right).}
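For the non-trivial character mod 4 (k = 4) this reads L(s, χ) = 4^{-s}[ζ(s, 1/4) − ζ(s, 3/4)], which can be checked numerically. The Hurwitz zeta below is a hand-rolled partial sum with an integral tail correction, adequate only for this illustration:

```python
# Check L(s, chi) = k^{-s} * sum_r chi(r) * hurwitz_zeta(s, r/k)
# for the non-trivial character mod 4, at s = 2 (illustrative values).

def hurwitz_zeta(s: float, a: float, terms: int = 100000) -> float:
    # Partial sum plus an integral tail correction; good enough here.
    partial = sum((n + a) ** -s for n in range(terms))
    tail = (terms + a) ** (1 - s) / (s - 1)  # integral_{terms}^inf (x+a)^-s dx
    return partial + tail

def L_chi4(s: float, terms: int = 100000) -> float:
    # Direct alternating sum over the odd integers.
    return sum((-1) ** k / (2 * k + 1) ** s for k in range(terms))

s = 2
lhs = L_chi4(s)
rhs = 4**-s * (hurwitz_zeta(s, 0.25) - hurwitz_zeta(s, 0.75))
```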
== See also ==
Generalized Riemann hypothesis
L-function
Modularity theorem
Artin conjecture
Special values of L-functions
== Notes ==
== References ==
Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
Apostol, T. M. (2010), "Dirichlet L-function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
Davenport, H. (2000). Multiplicative Number Theory (3rd ed.). Springer. ISBN 0-387-95097-4.
Dirichlet, P. G. L. (1837). "Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält". Abhand. Ak. Wiss. Berlin. 48.
Ireland, Kenneth; Rosen, Michael (1990). A Classical Introduction to Modern Number Theory (2nd ed.). Springer-Verlag.
Montgomery, Hugh L.; Vaughan, Robert C. (2006). Multiplicative number theory. I. Classical theory. Cambridge tracts in advanced mathematics. Vol. 97. Cambridge University Press. ISBN 978-0-521-84903-6.
Iwaniec, Henryk; Kowalski, Emmanuel (2004). Analytic Number Theory. American Mathematical Society Colloquium Publications. Vol. 53. Providence, RI: American Mathematical Society.
"Dirichlet-L-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In mathematics, an affine algebraic plane curve is the zero set of a polynomial in two variables. A projective algebraic plane curve is the zero set in a projective plane of a homogeneous polynomial in three variables. An affine algebraic plane curve can be completed in a projective algebraic plane curve by homogenizing its defining polynomial. Conversely, a projective algebraic plane curve of homogeneous equation h(x, y, t) = 0 can be restricted to the affine algebraic plane curve of equation h(x, y, 1) = 0. These two operations are each inverse to the other; therefore, the phrase algebraic plane curve is often used without specifying explicitly whether it is the affine or the projective case that is considered.
If the defining polynomial of a plane algebraic curve is irreducible, then one has an irreducible plane algebraic curve. Otherwise, the algebraic curve is the union of one or several irreducible curves, called its components, that are defined by the irreducible factors.
More generally, an algebraic curve is an algebraic variety of dimension one. In some contexts, an algebraic set of dimension one is also called an algebraic curve, but this will not be the case in this article. Equivalently, an algebraic curve is an algebraic variety that is birationally equivalent to an irreducible algebraic plane curve. If the curve is contained in an affine space or a projective space, one can take a projection for such a birational equivalence.
These birational equivalences reduce most of the study of algebraic curves to the study of algebraic plane curves. However, some properties are not kept under birational equivalence and must be studied on non-plane curves. This is, in particular, the case for the degree and smoothness. For example, there exist smooth curves of genus 0 and degree greater than two, but any plane projection of such curves has singular points (see Genus–degree formula).
A non-plane curve is often called a space curve or a skew curve.
== In Euclidean geometry ==
An algebraic curve in the Euclidean plane is the set of the points whose coordinates are the solutions of a bivariate polynomial equation p(x, y) = 0. This equation is often called the implicit equation of the curve, in contrast to the curves that are the graph of a function defining explicitly y as a function of x.
With a curve given by such an implicit equation, the first problems are to determine the shape of the curve and to draw it. These problems are not as easy to solve as in the case of the graph of a function, for which y may easily be computed for various values of x. The fact that the defining equation is a polynomial implies that the curve has some structural properties that may help in solving these problems.
Every algebraic curve may be uniquely decomposed into a finite number of smooth monotone arcs (also called branches), sometimes connected by points called "remarkable points", and possibly a finite number of isolated points called acnodes. A smooth monotone arc is the graph of a smooth function which is defined and monotone on an open interval of the x-axis. In each direction, an arc is either unbounded (usually called an infinite arc) or has an endpoint which is either a singular point (this will be defined below) or a point with a tangent parallel to one of the coordinate axes.
For example, for the Tschirnhausen cubic, there are two infinite arcs having the origin (0,0) as an endpoint. This point is the only singular point of the curve. There are also two arcs having this singular point as one endpoint and having a second endpoint with a horizontal tangent. Finally, there are two other arcs each having one of these points with horizontal tangent as the first endpoint and having the unique point with vertical tangent as the second endpoint. In contrast, the sinusoid is certainly not an algebraic curve, having an infinite number of monotone arcs.
To draw an algebraic curve, it is important to know the remarkable points and their tangents, the infinite branches and their asymptotes (if any) and the way in which the arcs connect them. It is also useful to consider the inflection points as remarkable points. When all this information is drawn on a sheet of paper, the shape of the curve usually appears rather clearly. If not, it suffices to add a few other points and their tangents to get a good description of the curve.
The methods for computing the remarkable points and their tangents are described below in the section Remarkable points of a plane curve.
== Plane projective curves ==
It is often desirable to consider curves in the projective space. An algebraic curve in the projective plane or plane projective curve is the set of the points in a projective plane whose projective coordinates are zeros of a homogeneous polynomial in three variables P(x, y, z).
Every affine algebraic curve of equation p(x, y) = 0 may be completed into the projective curve of equation
{\displaystyle ^{h}p(x,y,z)=0,}
where
{\displaystyle ^{h}p(x,y,z)=z^{\deg(p)}p\left({\frac {x}{z}},{\frac {y}{z}}\right)}
is the result of the homogenization of p. Conversely, if P(x, y, z) = 0 is the homogeneous equation of a projective curve, then P(x, y, 1) = 0 is the equation of an affine curve, which consists of the points of the projective curve whose third projective coordinate is not zero. These two operations are inverse to each other, as
{\displaystyle ^{h}p(x,y,1)=p(x,y)}
and, if p is defined by
{\displaystyle p(x,y)=P(x,y,1)}
, then
{\displaystyle ^{h}p(x,y,z)=P(x,y,z),}
as soon as the homogeneous polynomial P is not divisible by z.
For example, the projective curve of equation x2 + y2 − z2 = 0 is the projective completion of the unit circle of equation x2 + y2 − 1 = 0.
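The homogenization operation is mechanical on a monomial representation. The sketch below uses a dict mapping exponent tuples to coefficients (a representation chosen here purely for illustration):

```python
# Homogenize p(x, y) into P(x, y, z) = z^deg(p) * p(x/z, y/z).
# Polynomials are dicts mapping exponent tuples to coefficients:
# an affine key (i, j) means c * x^i * y^j; projective keys are (i, j, k).

def homogenize(p: dict) -> dict:
    d = max(i + j for (i, j) in p)  # total degree of p
    return {(i, j, d - i - j): c for (i, j), c in p.items()}

# Unit circle: x^2 + y^2 - 1  ->  x^2 + y^2 - z^2
circle = {(2, 0): 1, (0, 2): 1, (0, 0): -1}
```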
This implies that an affine curve and its projective completion are the same curves, or, more precisely that the affine curve is a part of the projective curve that is large enough to well define the "complete" curve. This point of view is commonly expressed by calling "points at infinity" of the affine curve the points (in finite number) of the projective completion that do not belong to the affine part.
Projective curves are frequently studied for themselves. They are also useful for the study of affine curves. For example, if p(x, y) is the polynomial defining an affine curve, besides the partial derivatives p′x and p′y, it is useful to consider the derivative at infinity
{\displaystyle p'_{\infty }(x,y)={^{h}p'_{z}(x,y,1)}.}
For example, the equation of the tangent of the affine curve of equation p(x, y) = 0 at a point (a, b) is
{\displaystyle xp'_{x}(a,b)+yp'_{y}(a,b)+p'_{\infty }(a,b)=0.}
== Remarkable points of a plane curve ==
In this section, we consider a plane algebraic curve defined by a bivariate polynomial p(x, y) and its projective completion, defined by the homogenization
{\displaystyle P(x,y,z)={}^{h}p(x,y,z)}
of p.
=== Intersection with a line ===
Knowing the points of intersection of a curve with a given line is frequently useful. The intersection with the axes of coordinates and the asymptotes are useful to draw the curve. Intersecting with a line parallel to the axes allows one to find at least a point in each branch of the curve. If an efficient root-finding algorithm is available, this allows one to draw the curve by plotting its intersection points with all the lines parallel to the y-axis that pass through each pixel on the x-axis.
If the polynomial defining the curve has a degree d, any line cuts the curve in at most d points. Bézout's theorem asserts that this number is exactly d, if the points are searched in the projective plane over an algebraically closed field (for example the complex numbers), and counted with their multiplicity. The method of computation that follows proves again this theorem, in this simple case.
To compute the intersection of the curve defined by the polynomial p with the line of equation ax + by + c = 0, one solves the equation of the line for x (or for y if a = 0). Substituting the result in p, one gets a univariate equation q(y) = 0 (or q(x) = 0, if the equation of the line has been solved for y), each of whose roots is one coordinate of an intersection point. The other coordinate is deduced from the equation of the line. The multiplicity of an intersection point is the multiplicity of the corresponding root. There is an intersection point at infinity if the degree of q is lower than the degree of p; the multiplicity of such an intersection point at infinity is the difference of the degrees of p and q.
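The substitution procedure is easiest to see on a degree-2 example; the sketch below intersects the unit circle with a non-vertical line y = mx + c, where the substituted equation is an explicit quadratic (the circle and the slope–intercept form are illustrative choices):

```python
import math

# Intersect the unit circle x^2 + y^2 - 1 = 0 with the line y = m*x + c
# by substituting the line into the curve, which yields the quadratic
# (1 + m^2) x^2 + 2*m*c x + (c^2 - 1) = 0 in x.

def circle_line_intersections(m: float, c: float):
    A, B, C = 1 + m * m, 2 * m * c, c * c - 1
    disc = B * B - 4 * A * C
    if disc < 0:
        return []  # the intersection points have complex coordinates
    roots = sorted({(-B - math.sqrt(disc)) / (2 * A),
                    (-B + math.sqrt(disc)) / (2 * A)})
    # A double root (disc == 0) corresponds to a tangent line.
    return [(x, m * x + c) for x in roots]
```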
=== Tangent at a point ===
The tangent at a point (a, b) of the curve is the line of equation {\displaystyle (x-a)p'_{x}(a,b)+(y-b)p'_{y}(a,b)=0,} as for every differentiable curve defined by an implicit equation. In the case of polynomials, another formula for the tangent has a simpler constant term and is more symmetric:
{\displaystyle xp'_{x}(a,b)+yp'_{y}(a,b)+p'_{\infty }(a,b)=0,}
where {\displaystyle p'_{\infty }(x,y)=P'_{z}(x,y,1)} is the derivative at infinity. The equivalence of the two equations results from Euler's homogeneous function theorem applied to P.
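The agreement of the two tangent formulas can be checked numerically. A small sketch (the unit circle example is chosen here, not taken from the article): for p(x, y) = x² + y² − 1, the homogenization is P = x² + y² − z², so p'∞(x, y) = P'_z(x, y, 1) = −2.

```python
# Partial derivatives for p(x, y) = x^2 + y^2 - 1:
def p_x(x, y):
    return 2 * x

def p_y(x, y):
    return 2 * y

def p_inf(x, y):
    return -2  # P'_z(x, y, 1) for P = x^2 + y^2 - z^2

a, b = 0.6, 0.8  # a point on the circle (3-4-5 triangle)
diffs, tangent_residuals = [], []
# Three sample points on the tangent line a*x + b*y = 1 at (a, b):
for (x, y) in [(1 / a, 0.0), (0.0, 1 / b), (a - b, a + b)]:
    form1 = (x - a) * p_x(a, b) + (y - b) * p_y(a, b)   # first formula
    form2 = x * p_x(a, b) + y * p_y(a, b) + p_inf(a, b) # symmetric formula
    diffs.append(abs(form1 - form2))
    tangent_residuals.append(abs(form1))
max_diff = max(diffs)
assert max_diff < 1e-9                 # the two formulas agree (Euler)
assert max(tangent_residuals) < 1e-9   # the sample points satisfy both
```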
If {\displaystyle p'_{x}(a,b)=p'_{y}(a,b)=0,} the tangent is not defined and the point is a singular point.
This extends immediately to the projective case: the equation of the tangent at the point of projective coordinates (a:b:c) of the projective curve of equation P(x, y, z) = 0 is {\displaystyle xP'_{x}(a,b,c)+yP'_{y}(a,b,c)+zP'_{z}(a,b,c)=0,}
and the points of the curve that are singular are the points such that {\displaystyle P'_{x}(a,b,c)=P'_{y}(a,b,c)=P'_{z}(a,b,c)=0.}
(The condition P(a, b, c) = 0 is implied by these conditions, by Euler's homogeneous function theorem.)
=== Asymptotes ===
Every infinite branch of an algebraic curve corresponds to a point at infinity on the curve, that is, a point of the projective completion of the curve that does not belong to its affine part. The corresponding asymptote is the tangent of the curve at that point. The general formula for a tangent to a projective curve may be applied, but it is worth making it explicit in this case.
Let {\displaystyle p=p_{d}+\cdots +p_{0}} be the decomposition of the polynomial defining the curve into its homogeneous parts, where pi is the sum of the monomials of p of degree i. It follows that
{\displaystyle P={^{h}p}=p_{d}+zp_{d-1}+\cdots +z^{d}p_{0}}
and {\displaystyle P'_{z}(a,b,0)=p_{d-1}(a,b).}
A point at infinity of the curve is a zero of P of the form (a, b, 0). Equivalently, (a, b) is a zero of pd. The fundamental theorem of algebra implies that, over an algebraically closed field (typically, the field of complex numbers), pd factors into a product of linear factors. Each factor defines a point at infinity on the curve: if bx − ay is such a factor, then it defines the point at infinity (a, b, 0). Over the reals, pd factors into linear and quadratic factors. The irreducible quadratic factors define non-real points at infinity, and the real points at infinity are given by the linear factors.
If (a, b, 0) is a point at infinity of the curve, one says that (a, b) is an asymptotic direction. Setting q = pd, the equation of the corresponding asymptote is {\displaystyle xq'_{x}(a,b)+yq'_{y}(a,b)+p_{d-1}(a,b)=0.}
If {\displaystyle q'_{x}(a,b)=q'_{y}(a,b)=0} and {\displaystyle p_{d-1}(a,b)\neq 0,} the asymptote is the line at infinity, and, in the real case, the curve has a branch that looks like a parabola. In this case one says that the curve has a parabolic branch. If
{\displaystyle q'_{x}(a,b)=q'_{y}(a,b)=p_{d-1}(a,b)=0,} the curve has a singular point at infinity and may have several asymptotes. They may be computed by the method of computing the tangent cone of a singular point.
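The asymptote formula can be illustrated on a concrete curve. A minimal sketch (the hyperbola here is our example, not the article's): for p(x, y) = x² − y² − 1 we have d = 2, q = p₂ = x² − y² = (x − y)(x + y), and p₁ = 0.

```python
def asymptote(a, b):
    """Asymptote A*x + B*y + C = 0 of x^2 - y^2 - 1 for direction (a, b)."""
    qx = 2 * a    # q'_x(a, b) for q = x^2 - y^2
    qy = -2 * b   # q'_y(a, b)
    p_dm1 = 0     # p_{d-1} = p_1 is the zero polynomial for this curve
    return (qx, qy, p_dm1)

# The linear factors x - y and x + y of p_2 give the asymptotic
# directions (1, 1) and (-1, 1):
assert asymptote(1, 1) == (2, -2, 0)    # 2x - 2y = 0, i.e. the line y = x
assert asymptote(-1, 1) == (-2, -2, 0)  # -2x - 2y = 0, i.e. the line y = -x
```

These are indeed the two classical asymptotes of this hyperbola.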
=== Singular points ===
The singular points of a curve defined by a polynomial p(x, y) of degree d are the solutions of the system of equations:
{\displaystyle p'_{x}(x,y)=p'_{y}(x,y)=p(x,y)=0.}
In characteristic zero, this system is equivalent to {\displaystyle p'_{x}(x,y)=p'_{y}(x,y)=p'_{\infty }(x,y)=0,}
where, with the notation of the preceding section, {\displaystyle p'_{\infty }(x,y)=P'_{z}(x,y,1).}
The systems are equivalent because of Euler's homogeneous function theorem. The latter system has the advantage that its third polynomial has degree d − 1 instead of d.
Similarly, for a projective curve defined by a homogeneous polynomial P(x, y, z) of degree d, the singular points are the solutions, in homogeneous coordinates, of the system {\displaystyle P'_{x}(x,y,z)=P'_{y}(x,y,z)=P'_{z}(x,y,z)=0.} (In positive characteristic, the equation {\displaystyle P(x,y,z)=0} has to be added to the system.)
This implies that the number of singular points is finite as long as p(x, y) or P(x, y, z) is square free. Bézout's theorem thus implies that the number of singular points is at most (d − 1)2, but this bound is not sharp because the system of equations is overdetermined. If reducible polynomials are allowed, the sharp bound is d(d − 1)/2; this value is reached when the polynomial factors into linear factors, that is, when the curve is the union of d lines. For irreducible curves and polynomials, the number of singular points is at most (d − 1)(d − 2)/2, because of the formula expressing the genus in terms of the singularities (see below). The maximum is reached by the curves of genus zero all of whose singularities have multiplicity two and distinct tangents (see below).
The equation of the tangents at a singular point is given by the nonzero homogeneous part of the lowest degree in the Taylor series of the polynomial at the singular point. When one changes the coordinates to put the singular point at the origin, the equation of the tangents at the singular point is thus the nonzero homogeneous part of the lowest degree of the polynomial, and the multiplicity of the singular point is the degree of this homogeneous part.
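The multiplicity and tangent cone computation just described is easy to sketch when a polynomial is stored as a dictionary of monomials. A minimal illustration (the nodal cubic is our example, not the article's): p(x, y) = y² − x² − x³ is singular at the origin.

```python
# Represent a polynomial as {(i, j): coeff} for monomials x^i * y^j.
p = {(0, 2): 1, (2, 0): -1, (3, 0): -1}  # y^2 - x^2 - x^3

def multiplicity_at_origin(poly):
    """Degree of the lowest nonzero homogeneous part."""
    return min(i + j for (i, j), c in poly.items() if c != 0)

def tangent_cone_at_origin(poly):
    """Lowest-degree homogeneous part: its zero set is the tangent cone."""
    m = multiplicity_at_origin(poly)
    return {e: c for e, c in poly.items() if sum(e) == m and c != 0}

m = multiplicity_at_origin(p)
cone = tangent_cone_at_origin(p)
assert m == 2                           # a double point, hence singular
assert cone == {(0, 2): 1, (2, 0): -1}  # y^2 - x^2 = (y - x)(y + x):
                                        # two distinct tangents, so a node
```

For a singular point away from the origin, one first translates it to the origin as described in the next section.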
== Analytic structure ==
The study of the analytic structure of an algebraic curve in the neighborhood of a singular point provides accurate information about the topology of singularities. In fact, near a singular point, a real algebraic curve is the union of a finite number of branches that intersect only at the singular point and look either like a cusp or like a smooth curve.
Near a regular point, one of the coordinates of the curve may be expressed as an analytic function of the other coordinate. This is a corollary of the analytic implicit function theorem, and implies that the curve is smooth near the point. Near a singular point, the situation is more complicated and involves Puiseux series, which provide analytic parametric equations of the branches.
For describing a singularity, it is convenient to translate the curve so that the singularity is at the origin. This consists of a change of variables of the form {\displaystyle X=x-a,Y=y-b,} where {\displaystyle a,b} are the coordinates of the singular point. In the following, the singular point under consideration is always supposed to be at the origin.
The equation of an algebraic curve is {\displaystyle f(x,y)=0,} where f is a polynomial in x and y. This polynomial may be considered as a polynomial in y, with coefficients in the algebraically closed field of the Puiseux series in x. Thus f may be factored into factors of the form {\displaystyle y-P(x),} where P is a Puiseux series. These factors are all different if f is an irreducible polynomial, because this implies that f is square-free, a property which is independent of the field of coefficients.
The Puiseux series that occur here have the form {\displaystyle P(x)=\sum _{n=n_{0}}^{\infty }a_{n}x^{n/d},} where d is a positive integer, and {\displaystyle n_{0}} is an integer that may also be supposed to be positive, because we consider only the branches of the curve that pass through the origin. Without loss of generality, we may suppose that d is coprime with the greatest common divisor of the n such that {\displaystyle a_{n}\neq 0} (otherwise, one could choose a smaller common denominator for the exponents).
Let {\displaystyle \omega _{d}} be a primitive dth root of unity. If the above Puiseux series occurs in the factorization of f(x, y), then the d series {\displaystyle P_{i}(x)=\sum _{n=n_{0}}^{\infty }a_{n}\omega _{d}^{in}x^{n/d}} also occur in the factorization (a consequence of Galois theory). These d series are said to be conjugate, and are considered as a single branch of the curve, of ramification index d.
In the case of a real curve, that is, a curve defined by a polynomial with real coefficients, three cases may occur. If no {\displaystyle P_{i}(x)} has real coefficients, then one has a non-real branch. If some {\displaystyle P_{i}(x)} has real coefficients, then one may choose it as {\displaystyle P_{0}(x)}. If d is odd, then every real value of x provides a real value of {\displaystyle P_{0}(x)}, and one has a real branch that looks regular, although it is singular if d > 1. If d is even, then {\displaystyle P_{0}(x)} and {\displaystyle P_{d/2}(x)} have real values, but only for x ≥ 0. In this case, the real branch looks like a cusp (or is a cusp, depending on the definition of a cusp that is used).
For example, the ordinary cusp has only one branch. If it is defined by the equation {\displaystyle y^{2}-x^{3}=0,} then the factorization is {\displaystyle (y-x^{3/2})(y+x^{3/2});}
the ramification index is 2, and the two factors are real and each defines a half branch. If the cusp is rotated, its equation becomes {\displaystyle y^{3}-x^{2}=0,}
and the factorization is {\displaystyle (y-x^{2/3})(y-j^{2}x^{2/3})(y-(j^{2})^{2}x^{2/3}),} with {\displaystyle j=(-1+{\sqrt {-3}})/2} a primitive cube root of unity (the coefficient {\displaystyle (j^{2})^{2}} has not been simplified to j, to show how the above definition of {\displaystyle P_{i}(x)} is specialized). Here the ramification index is 3, and only one factor is real; this shows that, in the first case, the two factors must be considered as defining the same branch.
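The conjugate series of the cusp can be checked numerically. A small sanity check (our sketch, not the article's): the two conjugate series y = ±x^{3/2} of y² − x³ = 0 combine into the single real parametrization x = t², y = t³ of ramification index 2.

```python
def f(x, y):
    return y * y - x ** 3  # the ordinary cusp

# Both half branches y = +x^{3/2} and y = -x^{3/2} lie on the curve:
residual = max(abs(f(t * t, s * t ** 3))
               for t in (0.1, 0.5, 1.3)
               for s in (1, -1))
assert residual < 1e-12

# The rotated cusp y^3 - x^2 = 0 is covered the same way by x = t^3, y = t^2:
assert all(abs((t * t) ** 3 - (t ** 3) ** 2) < 1e-12 for t in (0.1, 0.5, 1.3))
```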
== Non-plane algebraic curves ==
An algebraic curve is an algebraic variety of dimension one. This implies that an affine curve in an affine space of dimension n is defined by, at least, n − 1 polynomials in n variables. To define a curve, these polynomials must generate a prime ideal of Krull dimension 1. This condition is not easy to test in practice. Therefore, the following way to represent non-plane curves may be preferred.
Let {\displaystyle f,g_{0},g_{3},\ldots ,g_{n}} be n polynomials in two variables x1 and x2 such that f is irreducible. The points in the affine space of dimension n whose coordinates satisfy the equations and inequations
{\displaystyle {\begin{aligned}&f(x_{1},x_{2})=0\\&g_{0}(x_{1},x_{2})\neq 0\\x_{3}&={\frac {g_{3}(x_{1},x_{2})}{g_{0}(x_{1},x_{2})}}\\&{}\ \vdots \\x_{n}&={\frac {g_{n}(x_{1},x_{2})}{g_{0}(x_{1},x_{2})}}\end{aligned}}}
are all the points of an algebraic curve from which a finite number of points have been removed. This curve is defined by a system of generators of the ideal of the polynomials h such that there exists an integer k for which {\displaystyle g_{0}^{k}h} belongs to the ideal generated by {\displaystyle f,x_{3}g_{0}-g_{3},\ldots ,x_{n}g_{0}-g_{n}}.
This representation is a birational equivalence between the curve and the plane curve defined by f. Every algebraic curve may be represented in this way. However, a linear change of variables may be needed in order to make the projection onto the first two variables almost always injective. When a change of variables is needed, almost every change is suitable, as long as it is defined over an infinite field.
This representation allows us to deduce easily any property of a non-plane algebraic curve, including its graphical representation, from the corresponding property of its plane projection.
For a curve defined by its implicit equations, the above representation of the curve may easily be deduced from a Gröbner basis for a block ordering such that the block of the smaller variables is (x1, x2). The polynomial f is the unique polynomial in the basis that depends only on x1 and x2. The fractions gi/g0 are obtained by choosing, for i = 3, ..., n, a polynomial in the basis that is linear in xi and depends only on x1, x2 and xi. If these choices are not possible, this means either that the equations define an algebraic set that is not a variety, or that the variety is not of dimension one, or that one must change coordinates. The latter case occurs when f exists and is unique, and, for i = 3, ..., n, there exist polynomials whose leading monomial depends only on x1, x2 and xi.
== Algebraic function fields ==
The study of algebraic curves can be reduced to the study of irreducible algebraic curves: those curves that cannot be written as the union of two smaller curves. Up to birational equivalence, the irreducible curves over a field F are categorically equivalent to algebraic function fields in one variable over F. Such an algebraic function field is a field extension K of F that contains an element x which is transcendental over F, and such that K is a finite algebraic extension of F(x), which is the field of rational functions in the indeterminate x over F.
For example, consider the field C of complex numbers, over which we may define the field C(x) of rational functions in x. If y2 = x3 − x − 1, then the field C(x, y) is an elliptic function field. The element x is not uniquely determined; the field can also be regarded, for instance, as an extension of C(y). The algebraic curve corresponding to the function field is simply the set of points (x, y) in C2 satisfying y2 = x3 − x − 1.
If the field F is not algebraically closed, the point of view of function fields is a little more general than that of considering the locus of points, since we include, for instance, "curves" with no points on them. For example, if the base field F is the field R of real numbers, then x2 + y2 = −1 defines an algebraic extension field of R(x), but the corresponding curve considered as a subset of R2 has no points. The equation x2 + y2 = −1 does define an irreducible algebraic curve over R in the scheme sense (an integral, separated, one-dimensional scheme of finite type over R). In this sense, the one-to-one correspondence between irreducible algebraic curves over F (up to birational equivalence) and algebraic function fields in one variable over F holds in general.
Two curves can be birationally equivalent (i.e. have isomorphic function fields) without being isomorphic as curves. The situation becomes easier when dealing with nonsingular curves, i.e. those that lack any singularities. Two nonsingular projective curves over a field are isomorphic if and only if their function fields are isomorphic.
Tsen's theorem is about the function field of an algebraic curve over an algebraically closed field.
== Complex curves and real surfaces ==
A complex projective algebraic curve resides in n-dimensional complex projective space CPn. This has complex dimension n, but topological dimension, as a real manifold, 2n, and is compact, connected, and orientable. An algebraic curve over C likewise has topological dimension two; in other words, it is a surface.
The topological genus of this surface, that is, the number of handles or donut holes, is equal to the geometric genus of the algebraic curve, which may be computed by algebraic means. In short, if one considers a plane projection of a nonsingular curve that has degree d and only ordinary singularities (singularities of multiplicity two with distinct tangents), then the genus is (d − 1)(d − 2)/2 − k, where k is the number of these singularities.
=== Compact Riemann surfaces ===
A Riemann surface is a connected complex analytic manifold of one complex dimension, which makes it a connected real manifold of two dimensions. It is compact if it is compact as a topological space.
There is a triple equivalence of categories between the category of smooth irreducible projective algebraic curves over C (with non-constant regular maps as morphisms), the category of compact Riemann surfaces (with non-constant holomorphic maps as morphisms), and the opposite of the category of algebraic function fields in one variable over C (with field homomorphisms that fix C as morphisms). This means that in studying these three subjects we are in a sense studying one and the same thing. It allows complex analytic methods to be used in algebraic geometry, and algebraic-geometric methods in complex analysis and field-theoretic methods to be used in both. This is characteristic of a much wider class of problems in algebraic geometry.
See also algebraic geometry and analytic geometry for a more general theory.
== Singularities ==
Using the intrinsic concept of tangent space, points P on an algebraic curve C are classified as smooth (synonymous: non-singular), or else singular. Given n − 1 homogeneous polynomials in n + 1 variables, we may find the Jacobian matrix as the (n − 1)×(n + 1) matrix of the partial derivatives. If the rank of this matrix is n − 1, then the polynomials define an algebraic curve (otherwise they define an algebraic variety of higher dimension). If the rank remains n − 1 when the Jacobian matrix is evaluated at a point P on the curve, then the point is a smooth or regular point; otherwise it is a singular point. In particular, if the curve is a plane projective algebraic curve, defined by a single homogeneous polynomial equation f(x,y,z) = 0, then the singular points are precisely the points P where the rank of the 1×3 Jacobian matrix is zero, that is, where
{\displaystyle {\frac {\partial f}{\partial x}}(P)={\frac {\partial f}{\partial y}}(P)={\frac {\partial f}{\partial z}}(P)=0.}
Since f is a polynomial, this definition is purely algebraic and makes no assumption about the nature of the field F, which in particular need not be the real or complex numbers. It should, of course, be recalled that (0,0,0) is not a point of the curve and hence not a singular point.
Similarly, for an affine algebraic curve defined by a single polynomial equation f(x,y) = 0, the singular points are precisely the points P of the curve where the rank of the 1×2 Jacobian matrix is zero, that is, where {\displaystyle f(P)={\frac {\partial f}{\partial x}}(P)={\frac {\partial f}{\partial y}}(P)=0.}
The singularities of a curve are not birational invariants. However, locating and classifying the singularities of a curve is one way of computing the genus, which is a birational invariant. For this to work, we should consider the curve projectively and require F to be algebraically closed, so that all the singularities which belong to the curve are considered.
=== Classification of singularities ===
Singular points include multiple points where the curve crosses over itself, and also various types of cusp, for example that shown by the curve with equation x3 = y2 at (0,0).
A curve C has at most a finite number of singular points. If it has none, it can be called smooth or non-singular. Commonly, this definition is understood over an algebraically closed field and for a curve C in a projective space (i.e., complete in the sense of algebraic geometry). For example, the plane curve of equation {\displaystyle y-x^{3}=0} is considered singular, as it has a singular point (a cusp) at infinity.
In the remainder of this section, one considers a plane curve C defined as the zero set of a bivariate polynomial f(x, y). Some of the results, but not all, may be generalized to non-plane curves.
The singular points are classified by means of several invariants. The multiplicity m is defined as the maximum integer such that the derivatives of f to all orders up to m − 1 vanish at P (it is also the minimal intersection number between the curve and a straight line at P).
Intuitively, a singular point has delta invariant δ if it concentrates δ ordinary double points at P. To make this precise, the blow up process produces so-called infinitely near points, and summing m(m − 1)/2 over the infinitely near points, where m is their multiplicity, produces δ.
For an irreducible and reduced curve and a point P, we can define δ algebraically as the length of {\displaystyle {\widetilde {\mathcal {O}}}_{P}/{\mathcal {O}}_{P}} where {\displaystyle {\mathcal {O}}_{P}} is the local ring at P and {\displaystyle {\widetilde {\mathcal {O}}}_{P}} is its integral closure.
The Milnor number μ of a singularity is the degree of the mapping grad f(x,y)/|grad f(x,y)| on the small sphere of radius ε, in the sense of the topological degree of a continuous mapping, where grad f is the (complex) gradient vector field of f. It is related to δ and r by the Milnor–Jung formula, {\displaystyle \mu =2\delta -r+1.} Here, the branching number r of P is the number of locally irreducible branches at P. For example, r = 1 at an ordinary cusp, and r = 2 at an ordinary double point. The multiplicity m is at least r, and P is singular if and only if m is at least 2. Moreover, δ is at least m(m − 1)/2.
Computing the delta invariants of all of the singularities allows the genus g of the curve to be determined; if d is the degree, then {\displaystyle g={\frac {1}{2}}(d-1)(d-2)-\sum _{P}\delta _{P},} where the sum is taken over all singular points P of the complex projective plane curve. It is called the genus formula.
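The genus formula can be sketched directly. A minimal illustration (the worked examples are ours, chosen to match standard facts, not taken from the article):

```python
def genus(d, deltas=()):
    """Genus of a degree-d plane curve from the delta invariants of its
    singular points: g = (d-1)(d-2)/2 - sum of deltas."""
    return (d - 1) * (d - 2) // 2 - sum(deltas)

assert genus(2) == 0             # a smooth conic is rational
assert genus(3) == 1             # a smooth cubic is elliptic
assert genus(3, [1]) == 0        # a nodal cubic (one ordinary double point,
                                 # delta = 1) is rational
assert genus(4, [1, 1, 1]) == 0  # a quartic with three nodes is rational
```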
Assign the invariants [m, δ, r] to a singularity, where m is the multiplicity, δ is the delta-invariant, and r is the branching number. Then an ordinary cusp is a point with invariants [2,1,1] and an ordinary double point is a point with invariants [2,1,2], and an ordinary m-multiple point is a point with invariants [m, m(m − 1)/2, m].
== Examples of curves ==
=== Rational curves ===
A rational curve, also called a unicursal curve, is any curve which is birationally equivalent to a line, which we may take to be a projective line; accordingly, we may identify the function field of the curve with the field of rational functions in one indeterminate F(x). If F is algebraically closed, this is equivalent to a curve of genus zero; however, the field of all real algebraic functions defined on the real algebraic variety x2 + y2 = −1 is a field of genus zero which is not a rational function field.
Concretely, a rational curve embedded in an affine space of dimension n over F can be parameterized (except for isolated exceptional points) by means of n rational functions of a single parameter t; by reducing these rational functions to the same denominator, the n + 1 resulting polynomials define a polynomial parametrization of the projective completion of the curve in the projective space. An example is the rational normal curve, where all these polynomials are monomials.
Any conic section defined over F with a rational point in F is a rational curve. It can be parameterized by drawing a line with slope t through the rational point and intersecting it with the conic; this gives a polynomial with F-rational coefficients and one F-rational root, hence the other root is F-rational (i.e., belongs to F) as well.
For example, consider the ellipse x2 + xy + y2 = 1, where (−1, 0) is a rational point. Drawing a line with slope t from (−1, 0), y = t(x + 1), substituting it in the equation of the ellipse, factoring, and solving for x, we obtain {\displaystyle x={\frac {1-t^{2}}{1+t+t^{2}}}.}
Then the equation for y is {\displaystyle y=t(x+1)={\frac {t(t+2)}{1+t+t^{2}}},} which defines a rational parameterization of the ellipse and hence shows that the ellipse is a rational curve. All points of the ellipse are given, except for (−1, 1), which corresponds to t = ∞; the entire curve is therefore parameterized by the real projective line.
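This parametrization can be verified exactly over the rationals. A short sketch of the computation above:

```python
from fractions import Fraction

def point(t):
    """The point of the ellipse x^2 + xy + y^2 = 1 with parameter t."""
    t = Fraction(t)
    den = 1 + t + t * t        # never zero over the rationals
    x = (1 - t * t) / den
    y = t * (t + 2) / den      # equals t*(x + 1)
    return x, y

for t in (-3, -1, 0, Fraction(1, 2), 2, 7):
    x, y = point(t)
    assert x * x + x * y + y * y == 1  # exact: the point is on the ellipse
    assert y == t * (x + 1)            # and on the line through (-1, 0)
```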
Such a rational parameterization may be considered in the projective space by equating the first projective coordinates to the numerators of the parameterization and the last one to the common denominator. As the parameter is defined on a projective line, the polynomials in the parameter should be homogenized. For example, the projective parameterization of the above ellipse is {\displaystyle X=U^{2}-T^{2},\quad Y=T\,(T+2\,U),\quad Z=T^{2}+TU+U^{2}.}
Eliminating T and U between these equations, we again get the projective equation of the ellipse, {\displaystyle X^{2}+X\,Y+Y^{2}=Z^{2},} which may be obtained directly by homogenizing the equation above.
Many of the curves on Wikipedia's list of curves are rational and hence have similar rational parameterizations.
=== Rational plane curves ===
Rational plane curves are rational curves embedded into {\displaystyle \mathbb {P} ^{2}}. Given generic sections {\displaystyle s_{1},s_{2},s_{3}\in \Gamma (\mathbb {P} ^{1},{\mathcal {O}}(d))} of degree {\displaystyle d} homogeneous polynomials in two coordinates {\displaystyle x,y}, there is a map {\displaystyle s:\mathbb {P} ^{1}\to \mathbb {P} ^{2}} given by {\displaystyle s([x:y])=[s_{1}([x:y]):s_{2}([x:y]):s_{3}([x:y])]} defining a rational plane curve of degree {\displaystyle d}. There is an associated moduli space {\displaystyle {\mathcal {M}}={\overline {\mathcal {M}}}_{0,0}(\mathbb {P} ^{2},d\cdot [H])} (where {\displaystyle [H]} is the hyperplane class) parametrizing all such stable curves. A dimension count determines the dimension of this moduli space: there are {\displaystyle d+1} parameters in {\displaystyle \Gamma (\mathbb {P} ^{1},{\mathcal {O}}(d))}, giving {\displaystyle 3d+3} parameters in total for the three sections. Then, since they are considered up to a projective quotient in {\displaystyle \mathbb {P} ^{2}}, there is {\displaystyle 1} less parameter in {\displaystyle {\mathcal {M}}}. Furthermore, there is a three-dimensional group of automorphisms of {\displaystyle \mathbb {P} ^{1}}, hence {\displaystyle {\mathcal {M}}} has dimension {\displaystyle 3d+3-1-3=3d-1}. This moduli space can be used to count the number {\displaystyle N_{d}} of degree-{\displaystyle d} rational plane curves passing through {\displaystyle 3d-1} generic points, using Gromov–Witten theory. It is given by the recursive relation
{\displaystyle N_{d}=\sum _{d_{A}+d_{B}=d}N_{d_{A}}N_{d_{B}}d_{A}^{2}d_{B}\left(d_{B}{\binom {3d-4}{3d_{A}-2}}-d_{A}{\binom {3d-4}{3d_{A}-1}}\right)} where {\displaystyle N_{1}=N_{2}=1}.
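The recursion above is short enough to implement directly. A sketch (the first values N₃ = 12 and N₄ = 620 are the classical Kontsevich numbers):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    """Number of degree-d rational plane curves through 3d - 1 generic
    points, via the Kontsevich recursion with N(1) = N(2) = 1."""
    if d in (1, 2):
        return 1
    total = 0
    for dA in range(1, d):
        dB = d - dA
        total += (N(dA) * N(dB) * dA ** 2 * dB
                  * (dB * comb(3 * d - 4, 3 * dA - 2)
                     - dA * comb(3 * d - 4, 3 * dA - 1)))
    return total

assert N(3) == 12    # 12 nodal cubics through 8 generic points
assert N(4) == 620   # 620 rational quartics through 11 generic points
```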
=== Elliptic curves ===
An elliptic curve may be defined as any curve of genus one with a rational point: a common model is a nonsingular cubic curve, which suffices to model any genus-one curve. In this model the distinguished point is commonly taken to be an inflection point at infinity; this amounts to requiring that the curve can be written in Tate–Weierstrass form, which in its projective version is
{\displaystyle y^{2}z+a_{1}xyz+a_{3}yz^{2}=x^{3}+a_{2}x^{2}z+a_{4}xz^{2}+a_{6}z^{3}.}
If the characteristic of the field is different from 2 and 3, then a linear change of coordinates allows putting {\displaystyle a_{1}=a_{2}=a_{3}=0,} which gives the classical Weierstrass form {\displaystyle y^{2}=x^{3}+px+q.}
Elliptic curves carry the structure of an abelian group with the distinguished point as the identity of the group law. In a plane cubic model three points sum to zero in the group if and only if they are collinear. For an elliptic curve defined over the complex numbers the group is isomorphic to the additive group of the complex plane modulo the period lattice of the corresponding elliptic functions.
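The chord-and-tangent group law on a Weierstrass curve can be sketched in a few lines. A minimal illustration (the curve y² = x³ + 1 and its points are our example, not from the article); the identity O, the point at infinity, is represented as `None`:

```python
from fractions import Fraction

P_COEF, Q_COEF = 0, 1  # the curve y^2 = x^3 + P_COEF*x + Q_COEF

def add(P1, P2):
    """Group law on y^2 = x^3 + P_COEF*x + Q_COEF; None is the identity O."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and y1 == -y2:
        return None  # opposite points sum to O
    if P1 == P2:     # tangent (doubling) case
        lam = (3 * x1 * x1 + P_COEF) / Fraction(2 * y1)
    else:            # chord case
        lam = (y2 - y1) / Fraction(x2 - x1)
    x3 = lam * lam - x1 - x2
    y3 = lam * (x1 - x3) - y1  # third collinear point, reflected
    return (x3, y3)

A, B = (0, 1), (2, 3)                 # both satisfy y^2 = x^3 + 1
assert add(A, B) == (-1, 0)           # the chord through A, B, then reflect
assert add(B, B) == (0, 1)            # doubling B gives A
assert add((-1, 0), (-1, 0)) is None  # (-1, 0) is a 2-torsion point
```

The chord case encodes exactly the collinearity rule stated above: the three intersection points of a line with the cubic sum to O.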
The intersection of two quadric surfaces is, in general, a nonsingular curve of genus one and degree four, and thus an elliptic curve, if it has a rational point. In special cases, the intersection may be either a singular rational quartic, or may decompose into curves of smaller degrees which are not always distinct (either a cubic curve and a line, or two conics, or a conic and two lines, or four lines).
=== Curves of genus greater than one ===
Curves of genus greater than one differ markedly from both rational and elliptic curves. By Faltings's theorem, such curves defined over the rational numbers can have only a finite number of rational points, and they may be viewed as having a hyperbolic geometry structure. Examples are the hyperelliptic curves, the Klein quartic curve, and the Fermat curve xn + yn = zn when n is greater than three. Also, projective plane curves in {\displaystyle \mathbb {P} ^{2}} and curves in {\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}} provide many useful examples.
==== Projective plane curves ====
Plane curves {\displaystyle C\subset \mathbb {P} ^{2}} of degree {\displaystyle k}, which can be constructed as the vanishing locus of a generic section {\displaystyle s\in \Gamma (\mathbb {P} ^{2},{\mathcal {O}}(k))}, have genus {\displaystyle {\frac {(k-1)(k-2)}{2}},} which can be computed using coherent sheaf cohomology. For degrees k = 1, 2, 3, 4, 5 this gives genus 0, 0, 1, 3, 6, respectively.
For example, the curve {\displaystyle x^{4}+y^{4}+z^{4}=0} defines a curve of genus {\displaystyle 3}, which is smooth since the partial derivatives {\displaystyle 4x^{3},4y^{3},4z^{3}} have no common zeros on the curve. A non-example of a generic section is the curve {\displaystyle x(x^{2}+y^{2}+z^{2})=0}, which is the union of two rational curves {\displaystyle C_{1}\cup C_{2}}; by Bézout's theorem, these components intersect in at most {\displaystyle 1\cdot 2=2} points. Here {\displaystyle C_{1}} is given by the vanishing locus of {\displaystyle x} and {\displaystyle C_{2}} is given by the vanishing locus of {\displaystyle x^{2}+y^{2}+z^{2}}. The intersection points can be found explicitly: a point lies on both components if {\displaystyle x=0}, so the two solutions are the points {\displaystyle [0:y:z]} such that {\displaystyle y^{2}+z^{2}=0}, which are {\displaystyle [0:1:{\sqrt {-1}}]} and {\displaystyle [0:1:-{\sqrt {-1}}]}.
==== Curves in product of projective lines ====
Curves {\displaystyle C\subset \mathbb {P} ^{1}\times \mathbb {P} ^{1}} given by the vanishing locus of {\displaystyle s\in \Gamma (\mathbb {P} ^{1}\times \mathbb {P} ^{1},{\mathcal {O}}(a,b))}, for {\displaystyle a,b\geq 2}, give curves of genus {\displaystyle ab-a-b+1}, which can be checked using coherent sheaf cohomology. If {\displaystyle a=2}, then they define curves of genus {\displaystyle 2b-2-b+1=b-1}, hence a curve of any genus can be constructed as a curve in {\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}}
. Their genera can be summarized by the values of {\displaystyle ab-a-b+1=(a-1)(b-1)}: for {\displaystyle a=2} this is {\displaystyle b-1}, and for {\displaystyle a=3}, this is {\displaystyle 2b-2}.
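The genus values referred to above can be tabulated directly from the formula; a small sketch:

```python
# Genus of a smooth curve of bidegree (a, b) in P^1 x P^1:
# g = ab - a - b + 1 = (a - 1)(b - 1)
def bidegree_genus(a, b):
    return a * b - a - b + 1

# For a = 2 the genus is b - 1, so every genus g >= 0 occurs (take b = g + 1).
assert all(bidegree_genus(2, b) == b - 1 for b in range(2, 20))

# For a = 3 the genus is 2(b - 1).
assert all(bidegree_genus(3, b) == 2 * (b - 1) for b in range(2, 20))

# Table of genera for 2 <= a, b <= 5
table = {(a, b): bidegree_genus(a, b) for a in range(2, 6) for b in range(2, 6)}
assert table[(2, 2)] == 1 and table[(3, 4)] == 6 and table[(5, 5)] == 16
```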
== See also ==
=== Classical algebraic geometry ===
=== Modern algebraic geometry ===
=== Geometry of Riemann surfaces ===
== Notes ==
== References ==
Brieskorn, Egbert; Knörrer, Horst (2013). Plane Algebraic Curves. Translated by Stillwell, John. Birkhäuser. ISBN 978-3-0348-5097-1.
Chevalley, Claude (1951). Introduction to the Theory of Algebraic Functions of One Variable. Mathematical surveys. Vol. 6. American Mathematical Society. ISBN 978-0-8218-1506-9.
Coolidge, Julian L. (2004) [1931]. A Treatise on Algebraic Plane Curves. Dover. ISBN 978-0-486-49576-7.
Farkas, H. M.; Kra, I. (2012) [1980]. Riemann Surfaces. Graduate Texts in Mathematics. Vol. 71. Springer. ISBN 978-1-4684-9930-8.
Fulton, William (1989). Algebraic Curves: An Introduction to Algebraic Geometry. Mathematics lecture note series. Vol. 30 (3rd ed.). Addison-Wesley. ISBN 978-0-201-51010-2.
Gibson, C.G. (1998). Elementary Geometry of Algebraic Curves: An Undergraduate Introduction. Cambridge University Press. ISBN 978-0-521-64641-3.
Griffiths, Phillip A. (1985). Introduction to Algebraic Curves. Translations of Mathematical Monographs. Vol. 70 (3rd ed.). American Mathematical Society. ISBN 9780821845370.
Hartshorne, Robin (2013) [1977]. Algebraic Geometry. Graduate Texts in Mathematics. Vol. 52. Springer. ISBN 978-1-4757-3849-0.
Iitaka, Shigeru (2011) [1982]. Algebraic Geometry: An Introduction to Birational Geometry of Algebraic Varieties. Graduate Texts in Mathematics. Vol. 76. Springer New York. ISBN 978-1-4613-8121-1.
Milnor, John (1968). Singular Points of Complex Hypersurfaces. Princeton University Press. ISBN 0-691-08065-8.
Serre, Jean-Pierre (2012) [1988]. Algebraic Groups and Class Fields. Graduate Texts in Mathematics. Vol. 117. Springer. ISBN 978-1-4612-1035-1.
Kötter, Ernst (1887). "Grundzüge einer rein geometrischen Theorie der algebraischen ebenen Curven" [Fundamentals of a purely geometrical theory of algebraic plane curves]. Transactions of the Royal Academy of Berlin. — gained the 1886 Academy prize | Wikipedia/Algebraic_curve |
In algebra, an absolute value is a function that generalizes the usual absolute value. More precisely, if D is a field or (more generally) an integral domain, an absolute value on D is a function, commonly denoted
{\displaystyle |x|,} from D to the real numbers satisfying:

|x| ≥ 0, and |x| = 0 if and only if x = 0,
|xy| = |x| |y| (multiplicativity),
|x + y| ≤ |x| + |y| (the triangle inequality).
It follows from the axioms that {\displaystyle |1|=1,} {\displaystyle |-1|=1,} and {\displaystyle |-x|=|x|} for every {\displaystyle x}. Furthermore, for every positive integer n, {\displaystyle |n|\leq n,} where the leftmost n denotes the sum of n summands equal to the identity element of D.
The classical absolute value and its square root are examples of absolute values, but not the square of the classical absolute value, which does not fulfill the triangular inequality.
An absolute value such that {\displaystyle |x+y|\leq \max(|x|,|y|)} is an ultrametric absolute value.
An absolute value induces a metric (and thus a topology) by {\displaystyle d(f,g)=|f-g|.}
== Examples ==
The standard absolute value on the integers.
The standard absolute value on the complex numbers.
The p-adic absolute value on the rational numbers.
If {\displaystyle F(x)} is the field of rational fractions over a field F and {\displaystyle P} is an irreducible polynomial over F, the P-adic absolute value on {\displaystyle F(x)} is defined as {\displaystyle |f|_{P}=2^{-n},} where n is the unique integer such that {\textstyle f(x)=P^{n}{\frac {G}{H}},} where G and H are two polynomials, both coprime with P.
== Types of absolute value ==
The trivial absolute value is the absolute value with |x| = 0 when x = 0 and |x| = 1 otherwise. Every integral domain can carry at least the trivial absolute value. The trivial value is the only possible absolute value on a finite field because any non-zero element can be raised to some power to yield 1.
If an absolute value satisfies the stronger property |x + y| ≤ max(|x|, |y|) for all x and y, then |x| is called an ultrametric or non-Archimedean absolute value, and otherwise an Archimedean absolute value.
== Places ==
If |x|1 and |x|2 are two absolute values on the same integral domain D, then the two absolute values are equivalent if |x|1 < 1 if and only if |x|2 < 1 for all x. If two nontrivial absolute values are equivalent, then for some exponent e we have |x|1^e = |x|2 for all x. Raising an absolute value to a power less than 1 results in another absolute value, but raising to a power greater than 1 does not necessarily result in an absolute value. (For instance, squaring the usual absolute value on the real numbers yields a function which is not an absolute value, because it violates the rule |x + y| ≤ |x| + |y|.) An equivalence class of absolute values is called a place.
Ostrowski's theorem states that the nontrivial places of the rational numbers Q are the ordinary absolute value and the p-adic absolute value for each prime p. For a given prime p, any rational number q can be written as p^n(a/b), where a and b are integers not divisible by p and n is an integer. The p-adic absolute value of q is
{\displaystyle \left|p^{n}{\frac {a}{b}}\right|_{p}=p^{-n}.}
Since the ordinary absolute value and the p-adic absolute values are absolute values according to the definition above, these define places.
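The p-adic absolute value and its ultrametric property can be sketched directly from the definition; a minimal implementation for nonzero rationals:

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational q = p^n * (a/b) with p dividing neither a nor b."""
    q = Fraction(q)
    n = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

def abs_p(q, p):
    """p-adic absolute value |q|_p = p^(-n), with |0|_p = 0."""
    return Fraction(1, p) ** vp(q, p) if q != 0 else Fraction(0)

# |63/550|_5: 550 = 2 * 5^2 * 11, so n = -2 and |q|_5 = 25.
assert abs_p(Fraction(63, 550), 5) == 25

# Ultrametric inequality |x + y|_p <= max(|x|_p, |y|_p) on a grid of samples.
xs = [Fraction(a, b) for a in range(1, 8) for b in range(1, 8)]
for u in xs:
    for w in xs:
        if u + w != 0:
            assert abs_p(u + w, 5) <= max(abs_p(u, 5), abs_p(w, 5))
```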
== Valuations ==
Given an ultrametric absolute value and any base b > 1, define ν(x) = −logb|x| for x ≠ 0 and ν(0) = ∞, where ∞ is ordered to be greater than all real numbers. Then we obtain a function from D to R ∪ {∞} with the following properties:
ν(x) = ∞ ⇒ x = 0,
ν(xy) = ν(x) + ν(y),
ν(x + y) ≥ min(ν(x), ν(y)).
Such a function is known as a valuation in the terminology of Bourbaki, but other authors use the term valuation for absolute value and then say exponential valuation instead of valuation.
== Completions ==
Given an integral domain D with an absolute value, we can define the Cauchy sequences of elements of D with respect to the absolute value by requiring that for every ε > 0 there is a positive integer N such that for all integers m, n > N one has |xm − xn| < ε. Cauchy sequences form a ring under pointwise addition and multiplication. One can also define null sequences as sequences (an) of elements of D such that |an| converges to zero. Null sequences are a prime ideal in the ring of Cauchy sequences, and the quotient ring is therefore an integral domain. The domain D is embedded in this quotient ring, called the completion of D with respect to the absolute value |x|.
Since fields are integral domains, this is also a construction for the completion of a field with respect to an absolute value. To show that the result is a field, and not just an integral domain, we can either show that null sequences form a maximal ideal, or else construct the inverse directly. The latter can be easily done by taking, for all nonzero elements of the quotient ring, a sequence starting from a point beyond the last zero element of the sequence. Any nonzero element of the quotient ring will differ by a null sequence from such a sequence, and by taking pointwise inversion we can find a representative inverse element.
Another theorem of Alexander Ostrowski has it that any field complete with respect to an Archimedean absolute value is isomorphic to either the real or the complex numbers, and the valuation is equivalent to the usual one. The Gelfand-Tornheim theorem states that any field with an Archimedean valuation is isomorphic to a subfield of C, the valuation being equivalent to the usual absolute value on C.
== Fields and integral domains ==
If D is an integral domain with absolute value |x|, then we may extend the definition of the absolute value to the field of fractions of D by setting
{\displaystyle |x/y|=|x|/|y|.}
On the other hand, if F is a field with ultrametric absolute value |x|, then the set of elements of F such that |x| ≤ 1 defines a valuation ring, which is a subring D of F such that for every nonzero element x of F, at least one of x or x^{-1} belongs to D. Since F is a field, D has no zero divisors and is an integral domain. It has a unique maximal ideal consisting of all x such that |x| < 1, and is therefore a local ring.
== Notes ==
== References == | Wikipedia/Absolute_value_(algebra) |
In abstract algebra and number theory, Kummer theory provides a description of certain types of field extensions involving the adjunction of nth roots of elements of the base field. The theory was originally developed by Ernst Eduard Kummer around the 1840s in his pioneering work on Fermat's Last Theorem. The main statements do not depend on the nature of the field – apart from its characteristic, which should not divide the integer n – and therefore belong to abstract algebra. The theory of cyclic extensions of the field K when the characteristic of K does divide n is called Artin–Schreier theory.
Kummer theory is basic, for example, in class field theory and in general in understanding abelian extensions; it says that in the presence of enough roots of unity, cyclic extensions can be understood in terms of extracting roots. The main burden in class field theory is to dispense with extra roots of unity ('descending' back to smaller fields); which is something much more serious.
== Kummer extensions ==
A Kummer extension is a field extension L/K, where for some given integer n > 1 we have
K contains n distinct nth roots of unity (i.e., roots of X^n − 1)
L/K has abelian Galois group of exponent n.
For example, when n = 2, the first condition is always true if K has characteristic ≠ 2. The Kummer extensions in this case include quadratic extensions {\displaystyle L=K({\sqrt {a}})} where a in K is a non-square element. By the usual solution of quadratic equations, any extension of degree 2 of K has this form. The Kummer extensions in this case also include biquadratic extensions and more general multiquadratic extensions. When K has characteristic 2, there are no such Kummer extensions.
Taking n = 3, there are no degree 3 Kummer extensions of the rational number field Q, since the three cube roots of 1 require complex numbers. If one takes L to be the splitting field of X^3 − a over Q, where a is not a cube in the rational numbers, then L contains a subfield K with three cube roots of 1; that is because if α and β are roots of the cubic polynomial, we shall have (α/β)^3 = 1 and the cubic is a separable polynomial. Then L/K is a Kummer extension.
More generally, it is true that when K contains n distinct nth roots of unity, which implies that the characteristic of K doesn't divide n, then adjoining to K the nth root of any element a of K creates a Kummer extension (of degree m, for some m dividing n). As the splitting field of the polynomial X^n − a, the Kummer extension is necessarily Galois, with Galois group that is cyclic of order m. It is easy to track the Galois action via the root of unity in front of {\displaystyle {\sqrt[{n}]{a}}.}
Kummer theory provides converse statements. When K contains n distinct nth roots of unity, it states that any abelian extension of K of exponent dividing n is formed by extraction of roots of elements of K. Further, if K× denotes the multiplicative group of non-zero elements of K, abelian extensions of K of exponent n correspond bijectively with subgroups of {\displaystyle K^{\times }/(K^{\times })^{n},} that is, elements of K× modulo nth powers. The correspondence can be described explicitly as follows. Given a subgroup {\displaystyle \Delta \subseteq K^{\times }/(K^{\times })^{n},} the corresponding extension is given by {\displaystyle K\left(\Delta ^{\frac {1}{n}}\right),} where {\displaystyle \Delta ^{\frac {1}{n}}=\left\{{\sqrt[{n}]{a}}:a\in K^{\times },a\cdot \left(K^{\times }\right)^{n}\in \Delta \right\}.} In fact it suffices to adjoin an nth root of one representative of each element of any set of generators of the group Δ. Conversely, if L is a Kummer extension of K, then Δ is recovered by the rule {\displaystyle \Delta =\left(K^{\times }\cap (L^{\times })^{n}\right)/(K^{\times })^{n}.}
In this case there is an isomorphism {\displaystyle \Delta \cong \operatorname {Hom} _{\text{c}}(\operatorname {Gal} (L/K),\mu _{n})} given by {\displaystyle a\mapsto \left(\sigma \mapsto {\frac {\sigma (\alpha )}{\alpha }}\right),} where α is any nth root of a in L. Here {\displaystyle \mu _{n}} denotes the multiplicative group of nth roots of unity (which belong to K) and {\displaystyle \operatorname {Hom} _{\text{c}}(\operatorname {Gal} (L/K),\mu _{n})} is the group of continuous homomorphisms from {\displaystyle \operatorname {Gal} (L/K)} equipped with the Krull topology to {\displaystyle \mu _{n}} with the discrete topology (with group operation given by pointwise multiplication). This group (with the discrete topology) can also be viewed as the Pontryagin dual of {\displaystyle \operatorname {Gal} (L/K)}, assuming we regard {\displaystyle \mu _{n}} as a subgroup of the circle group. If the extension L/K is finite, then {\displaystyle \operatorname {Gal} (L/K)} is a finite discrete group and we have {\displaystyle \Delta \cong \operatorname {Hom} (\operatorname {Gal} (L/K),\mu _{n})\cong \operatorname {Gal} (L/K),} however the last isomorphism isn't natural.
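A concrete sketch of the correspondence (using sympy; the specific field Q(√2, √3) is an illustrative choice, not drawn from the text above): over K = Q with n = 2, so that ζ2 = −1 lies in K, the subgroup Δ of Q×/(Q×)² generated by the classes of 2 and 3 has order 4, and the corresponding Kummer extension Q(√2, √3) has degree 4 = |Δ|.

```python
from sympy import sqrt, minimal_polynomial, Symbol, expand

x = Symbol('x')

# Q(sqrt(2), sqrt(3)) has primitive element sqrt(2) + sqrt(3); its minimal
# polynomial over Q has degree 4, matching |Delta| for Delta = <2, 3> in
# Q^x / (Q^x)^2.
mp = minimal_polynomial(sqrt(2) + sqrt(3), x)
assert expand(mp - (x**4 - 10*x**2 + 1)) == 0
assert mp.as_poly(x).degree() == 4

# The classes in Delta are 1, 2, 3, 6 modulo squares; sqrt(6) = sqrt(2)*sqrt(3)
# already lies in the field, so only two independent square roots are adjoined.
assert expand(minimal_polynomial(sqrt(6), x) - (x**2 - 6)) == 0
```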
=== Recovering a1/n from a primitive element ===
For {\displaystyle p} prime, let {\displaystyle K} be a field containing {\displaystyle \zeta _{p}} and {\displaystyle K(\beta )/K} a degree {\displaystyle p} Galois extension. Note the Galois group is cyclic, generated by {\displaystyle \sigma }. Let {\displaystyle \alpha =\sum _{l=0}^{p-1}\zeta _{p}^{l}\sigma ^{l}(\beta )\in K(\beta )}
Then
{\displaystyle \zeta _{p}\sigma (\alpha )=\sum _{l=0}^{p-1}\zeta _{p}^{l+1}\sigma ^{l+1}(\beta )=\alpha .}
Since {\displaystyle \alpha \neq \sigma (\alpha ),K(\alpha )=K(\beta )}
and
{\displaystyle \alpha ^{p}=\pm \prod _{l=0}^{p-1}\zeta _{p}^{-l}\alpha =\pm \prod _{l=0}^{p-1}\sigma ^{l}(\alpha )=\pm N_{K(\beta )/K}(\alpha )\in K},
where the {\displaystyle \pm } sign is {\displaystyle +} if {\displaystyle p} is odd and {\displaystyle -} if {\displaystyle p=2}.
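A numerical sketch of this resolvent computation for p = 3, in plain complex arithmetic. The choices K = Q(ζ3), β = 2^{1/3}, and the generator σ acting by σ(β) = ζ3^{-1}β are illustrative assumptions; with them the resolvent works out to α = 3β, so α³ = 54 lies in K.

```python
import cmath

p = 3
zeta = cmath.exp(2j * cmath.pi / p)   # primitive cube root of unity in C
beta = 2 ** (1 / 3)                   # real cube root of 2

# The chosen generator sigma of Gal(K(beta)/K) acts by sigma(beta) = zeta^(-1) * beta,
# so sigma^l(beta) = zeta^(-l) * beta.
def sigma_power(l):
    return zeta ** (-l) * beta

# Lagrange resolvent alpha = sum_l zeta^l * sigma^l(beta)
alpha = sum(zeta ** l * sigma_power(l) for l in range(p))

# zeta * sigma(alpha) = alpha, as in the displayed identity above
sigma_alpha = sum(zeta ** l * sigma_power(l + 1) for l in range(p))
assert abs(zeta * sigma_alpha - alpha) < 1e-9

# alpha^p lies in K: here alpha = 3 * beta, so alpha^3 = 27 * 2 = 54
assert abs(alpha - 3 * beta) < 1e-9
assert abs(alpha ** p - 54) < 1e-9
```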
When {\displaystyle L/K} is an abelian extension of degree {\displaystyle n=\prod _{j=1}^{m}p_{j}} square-free such that {\displaystyle \zeta _{n}\in K}, apply the same argument to the subfields {\displaystyle K(\beta _{j})/K} Galois of degree {\displaystyle p_{j}} to obtain
{\displaystyle L=K\left(a_{1}^{1/p_{1}},\ldots ,a_{m}^{1/p_{m}}\right)=K\left(A^{1/p_{1}},\ldots ,A^{1/p_{m}}\right)=K\left(A^{1/n}\right)}
where
{\displaystyle A=\prod _{j=1}^{m}a_{j}^{n/p_{j}}\in K}.
== The Kummer Map ==
One of the main tools in Kummer theory is the Kummer map. Let {\displaystyle m} be a positive integer and let {\displaystyle K} be a field, not necessarily containing the {\displaystyle m}th roots of unity. Letting {\displaystyle {\overline {K}}} denote the algebraic closure of {\displaystyle K}, there is a short exact sequence
{\displaystyle 0\xrightarrow {} {\overline {K}}^{\times }[m]\xrightarrow {} {\overline {K}}^{\times }\xrightarrow {z\mapsto z^{m}} {\overline {K}}^{\times }\xrightarrow {} 0}
Choosing an extension {\displaystyle L/K} and taking {\displaystyle \mathrm {Gal} ({\overline {K}}/L)}-cohomology one obtains the sequence
{\displaystyle 0\xrightarrow {} L^{\times }/(L^{\times })^{m}\xrightarrow {} H^{1}\left(L,{\overline {K}}^{\times }[m]\right)\xrightarrow {} H^{1}\left(L,{\overline {K}}^{\times }\right)[m]\xrightarrow {} 0}
By Hilbert's Theorem 90, {\displaystyle H^{1}\left(L,{\overline {K}}^{\times }\right)=0}, and hence we get an isomorphism {\displaystyle \delta :L^{\times }/\left(L^{\times }\right)^{m}\xrightarrow {\sim } H^{1}\left(L,{\overline {K}}^{\times }[m]\right)}
. This is the Kummer map. A version of this map also exists when all
{\displaystyle m} are considered simultaneously. Namely, since {\displaystyle L^{\times }/(L^{\times })^{m}=L^{\times }\otimes m^{-1}\mathbb {Z} /\mathbb {Z} }
, taking the direct limit over {\displaystyle m} yields an isomorphism {\displaystyle \delta :L^{\times }\otimes \mathbb {Q} /\mathbb {Z} \xrightarrow {\sim } H^{1}\left(L,{\overline {K}}_{tors}\right)}, where {\displaystyle {\overline {K}}_{tors}} denotes the torsion subgroup of {\displaystyle {\overline {K}}^{\times }}, i.e., the group of all roots of unity.
== For Elliptic Curves ==
Kummer theory is often used in the context of elliptic curves. Let {\displaystyle E/K} be an elliptic curve. There is a short exact sequence {\displaystyle 0\xrightarrow {} E[m]\xrightarrow {} E\xrightarrow {P\mapsto m\cdot P} E\xrightarrow {} 0},
where the multiplication by {\displaystyle m} map is surjective since {\displaystyle E} is divisible. Choosing an algebraic extension {\displaystyle L/K} and taking cohomology, we obtain the Kummer sequence for {\displaystyle E}:
{\displaystyle 0\xrightarrow {} E(L)/mE(L)\xrightarrow {} H^{1}(L,E[m])\xrightarrow {} H^{1}(L,E)[m]\xrightarrow {} 0}.
The computation of the weak Mordell-Weil group {\displaystyle E(L)/mE(L)} is a key part of the proof of the Mordell-Weil theorem. The failure of {\displaystyle H^{1}(L,E)} to vanish adds a key complexity to the theory.
== Generalizations ==
Suppose that G is a profinite group acting on a module A with a surjective homomorphism π from the G-module A to itself. Suppose also that G acts trivially on the kernel C of π and that the first cohomology group H^1(G,A) is trivial. Then the exact sequence of group cohomology shows that there is an isomorphism between A^G/π(A^G) and Hom(G,C).
Kummer theory is the special case of this when A is the multiplicative group of the separable closure of a field k, G is the Galois group, π is the nth power map, and C the group of nth roots of unity. Artin–Schreier theory is the special case when A is the additive group of the separable closure of a field k of positive characteristic p, G is the Galois group, π is the Frobenius map minus the identity, and C the finite field of order p. Taking A to be a ring of truncated Witt vectors gives Witt's generalization of Artin–Schreier theory to extensions of exponent dividing p^n.
== See also ==
Quadratic field
== References ==
"Kummer extension", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Bryan Birch, "Cyclotomic fields and Kummer extensions", in J.W.S. Cassels and A. Frohlich (edd), Algebraic number theory, Academic Press, 1973. Chap.III, pp. 85–93. | Wikipedia/Kummer_theory |
In mathematics, an Artin L-function is a type of Dirichlet series associated to a linear representation ρ of a Galois group G. These functions were introduced in 1923 by Emil Artin, in connection with his research into class field theory. Their fundamental properties, in particular the Artin conjecture described below, have turned out to be resistant to easy proof. One of the aims of proposed non-abelian class field theory is to incorporate the complex-analytic nature of Artin L-functions into a larger framework, such as is provided by automorphic forms and the Langlands program. So far, only a small part of such a theory has been put on a firm basis.
== Definition ==
Given {\displaystyle \rho }, a representation of {\displaystyle G} on a finite-dimensional complex vector space {\displaystyle V}, where {\displaystyle G} is the Galois group of the finite extension {\displaystyle L/K} of number fields, the Artin {\displaystyle L}-function {\displaystyle L(\rho ,s)}
is defined by an Euler product. For each prime ideal
{\displaystyle {\mathfrak {p}}} in {\displaystyle K}'s ring of integers, there is an Euler factor, which is easiest to define in the case where {\displaystyle {\mathfrak {p}}} is unramified in {\displaystyle L} (true for almost all {\displaystyle {\mathfrak {p}}}). In that case, the Frobenius element {\displaystyle \mathbf {Frob} ({\mathfrak {p}})} is defined as a conjugacy class in {\displaystyle G}. Therefore, the characteristic polynomial of {\displaystyle \rho (\mathbf {Frob} ({\mathfrak {p}}))} is well-defined. The Euler factor for {\displaystyle {\mathfrak {p}}} is a slight modification of the characteristic polynomial, equally well-defined, {\displaystyle \operatorname {charpoly} (\rho (\mathbf {Frob} ({\mathfrak {p}})))^{-1}=\operatorname {det} \left[I-t\rho (\mathbf {Frob} ({\mathfrak {p}}))\right]^{-1},} as a rational function in t, evaluated at {\displaystyle t=N({\mathfrak {p}})^{-s}}, with {\displaystyle s} a complex variable in the usual Riemann zeta function notation. (Here N is the field norm of an ideal.)
When {\displaystyle {\mathfrak {p}}} is ramified, and I is the inertia group (a subgroup of G), a similar construction is applied, but to the subspace of V fixed (pointwise) by I.
The Artin L-function {\displaystyle L(\rho ,s)} is then the infinite product over all prime ideals {\displaystyle {\mathfrak {p}}}
of these factors. As Artin reciprocity shows, when G is an abelian group these L-functions have a second description (as Dirichlet L-functions when K is the rational number field, and as Hecke L-functions in general). Novelty comes in with non-abelian G and their representations.
One application is to give factorisations of Dedekind zeta-functions, for example in the case of a number field that is Galois over the rational numbers. In accordance with the decomposition of the regular representation into irreducible representations, such a zeta-function splits into a product of Artin L-functions, for each irreducible representation of G. For example, the simplest case is when G is the symmetric group on three letters. Since G has an irreducible representation of degree 2, an Artin L-function for such a representation occurs, squared, in the factorisation of the Dedekind zeta-function for such a number field, in a product with the Riemann zeta-function (for the trivial representation) and an L-function of Dirichlet's type for the signature representation.
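The abelian prototype of such a factorisation can be checked numerically. The example below (an illustrative choice, not taken from the text above) uses K = Q(i), where the Dedekind zeta factors as ζ_{Q(i)}(s) = ζ(s)·L(s, χ4) with χ4 the nontrivial character mod 4; at s = 2 the Dirichlet L-value L(2, χ4) is Catalan's constant, and the sum over nonzero Gaussian integers (divided by the 4 units) matches the product.

```python
import math

s = 2
R = 200  # truncation radius for the Gaussian-integer lattice sum

# zeta_{Q(i)}(s) = (1/4) * sum over nonzero (a, b) of (a^2 + b^2)^(-s);
# the 1/4 accounts for the four units of Z[i].
lattice = sum((a * a + b * b) ** -s
              for a in range(-R, R + 1)
              for b in range(-R, R + 1)
              if (a, b) != (0, 0)) / 4

zeta2 = math.pi ** 2 / 6  # zeta(2)
# L(2, chi4) = Catalan's constant, via its alternating series
catalan = sum((-1) ** n / (2 * n + 1) ** 2 for n in range(100000))

# Factorisation zeta_{Q(i)} = zeta * L(chi4), evaluated at s = 2:
assert abs(lattice - zeta2 * catalan) < 1e-3
```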
More precisely, for {\displaystyle L/K} a Galois extension of degree n, the factorization {\displaystyle \zeta _{L}(s)=L(s,\rho _{\text{regular}})=\prod _{\rho {\text{ Irr rep }}{\text{Gal}}(L/K)}L(\rho ,s)^{\deg(\rho )}}
follows from
{\displaystyle L(\rho ,s)=\prod _{{\mathfrak {p}}\in K}{\frac {1}{\det \left[I-N({\mathfrak {p}})^{-s}\rho (\mathbf {Frob} _{\mathfrak {p}}){|V_{{\mathfrak {p}},\rho }}\right]}}}
{\displaystyle -\log \det \left[I-N({\mathfrak {p}})^{-s}\rho \left(\mathbf {Frob} _{\mathfrak {p}}\right)\right]=\sum _{m=1}^{\infty }{\frac {{\text{tr}}(\rho (\mathbf {Frob} _{\mathfrak {p}})^{m})}{m}}N({\mathfrak {p}})^{-sm}}
{\displaystyle \sum _{\rho {\text{ Irr}}}\deg(\rho ){\text{tr}}(\rho (\sigma ))={\begin{cases}n&\sigma =1\\0&\sigma \neq 1\end{cases}}}
{\displaystyle -\sum _{\rho {\text{ Irr}}}\deg(\rho )\log \det \left[I-N({\mathfrak {p}})^{-s}\rho \left(\mathbf {Frob} _{\mathfrak {p}}\right)\right]=n\sum _{m=1}^{\infty }{\frac {N({\mathfrak {p}})^{-sfm}}{fm}}=-\log \left[\left(1-N({\mathfrak {p}})^{-sf}\right)^{\frac {n}{f}}\right]}
where {\displaystyle \deg(\rho )} is the multiplicity of the irreducible representation in the regular representation, f is the order of {\displaystyle \mathbf {Frob} _{\mathfrak {p}}} and n is replaced by n/e at the ramified primes.
Since characters are an orthonormal basis of the class functions, after showing some analytic properties of the {\displaystyle L(\rho ,s)} we obtain the Chebotarev density theorem as a generalization of Dirichlet's theorem on arithmetic progressions.
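The key identity above, Σρ deg(ρ)·tr(ρ(σ)) = n for σ = 1 and 0 otherwise, can be sketched for G = S3 using its character table (irreducible degrees 1, 1, 2):

```python
# Character table of S3: rows are the irreducible characters (trivial, sign,
# standard 2-dimensional); columns are the classes of the identity, a
# transposition, and a 3-cycle.
chars = {
    'trivial':  {'e': 1, '(12)': 1,  '(123)': 1},
    'sign':     {'e': 1, '(12)': -1, '(123)': 1},
    'standard': {'e': 2, '(12)': 0,  '(123)': -1},
}
degrees = {name: chi['e'] for name, chi in chars.items()}
n = sum(d * d for d in degrees.values())  # |S3| = 1 + 1 + 4 = 6
assert n == 6

def regular_character(cls):
    # character of the regular representation:
    # sum over irreducibles of deg(rho) * tr(rho(sigma))
    return sum(degrees[name] * chi[cls] for name, chi in chars.items())

assert regular_character('e') == 6      # sigma = 1 gives n
assert regular_character('(12)') == 0   # every other class gives 0
assert regular_character('(123)') == 0
```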
== Functional equation ==
Artin L-functions satisfy a functional equation. The function {\displaystyle L(\rho ,s)} is related in its values to {\displaystyle L(\rho ^{*},1-s)}, where {\displaystyle \rho ^{*}} denotes the complex conjugate representation. More precisely L is replaced by {\displaystyle \Lambda (\rho ,s)}, which is L multiplied by certain gamma factors, and then there is an equation of meromorphic functions {\displaystyle \Lambda (\rho ,s)=W(\rho )\Lambda (\rho ^{*},1-s)},
with a certain complex number W(ρ) of absolute value 1. It is the Artin root number. It has been studied deeply with respect to two types of properties. Firstly Robert Langlands and Pierre Deligne established a factorisation into Langlands–Deligne local constants; this is significant in relation to conjectural relationships to automorphic representations. Also the case of ρ and ρ* being equivalent representations is exactly the one in which the functional equation has the same L-function on each side. It is, algebraically speaking, the case when ρ is a real representation or quaternionic representation. The Artin root number is, then, either +1 or −1. The question of which sign occurs is linked to Galois module theory.
== The Artin conjecture ==
The Artin conjecture on Artin L-functions (also known as Artin's holomorphy conjecture) states that the Artin L-function {\displaystyle L(\rho ,s)} of a non-trivial irreducible representation ρ is analytic in the whole complex plane.
This is known for one-dimensional representations, the L-functions being then associated to Hecke characters — and in particular for Dirichlet L-functions. More generally Artin showed that the Artin conjecture is true for all representations induced from 1-dimensional representations. If the Galois group is supersolvable or more generally monomial, then all representations are of this form so the Artin conjecture holds.
André Weil proved the Artin conjecture in the case of function fields.
Two-dimensional representations are classified by the nature of the image subgroup: it may be cyclic, dihedral, tetrahedral, octahedral, or icosahedral. The Artin conjecture for the cyclic or dihedral case follows easily from Erich Hecke's work. Langlands used the base change lifting to prove the tetrahedral case, and Jerrold Tunnell extended his work to cover the octahedral case; Andrew Wiles used these cases in his proof of the Modularity conjecture. Richard Taylor and others have made some progress on the (non-solvable) icosahedral case; this is an active area of research. The Artin conjecture for odd, irreducible, two-dimensional representations follows from the proof of Serre's modularity conjecture, regardless of projective image subgroup.
Brauer's theorem on induced characters implies that all Artin L-functions are products of positive and negative integral powers of Hecke L-functions, and are therefore meromorphic in the whole complex plane.
Langlands (1970) pointed out that the Artin conjecture follows from strong enough results from the Langlands philosophy, relating to the L-functions associated to automorphic representations for GL(n) for all {\displaystyle n\geq 1}
. More precisely, the Langlands conjectures associate an automorphic representation of the adelic group GL_n(A_Q) to every n-dimensional irreducible representation of the Galois group, which is a cuspidal representation if the Galois representation is irreducible, such that the Artin L-function of the Galois representation is the same as the automorphic L-function of the automorphic representation. The Artin conjecture then follows immediately from the known fact that the L-functions of cuspidal automorphic representations are holomorphic. This was one of the major motivations for Langlands' work.
== The Dedekind conjecture ==
A weaker conjecture (sometimes known as the Dedekind conjecture) states that
if M/K is an extension of number fields, then the quotient
{\displaystyle s\mapsto \zeta _{M}(s)/\zeta _{K}(s)}
of their Dedekind zeta functions is entire.
The Aramata-Brauer theorem states that the conjecture holds if M/K is Galois.
More generally, let N be the Galois closure of M over K, and G the Galois group of N/K. The quotient {\displaystyle s\mapsto \zeta _{M}(s)/\zeta _{K}(s)} is equal to the Artin L-function associated to the natural representation of G on the K-embeddings of M into the complex numbers, with one copy of the trivial representation removed. Thus the Artin conjecture implies the Dedekind conjecture.
The conjecture was proven when G is a solvable group, independently by Koji Uchida and R. W. van der Waall in 1975.
== See also ==
Equivariant L-function
== Notes ==
== References ==
== Bibliography == | Wikipedia/Artin_L-function |
In abstract algebra, an abelian group {\displaystyle (G,+)} is called finitely generated if there exist finitely many elements {\displaystyle x_{1},\dots ,x_{s}} in {\displaystyle G} such that every {\displaystyle x} in {\displaystyle G} can be written in the form {\displaystyle x=n_{1}x_{1}+n_{2}x_{2}+\cdots +n_{s}x_{s}} for some integers {\displaystyle n_{1},\dots ,n_{s}}. In this case, we say that the set {\displaystyle \{x_{1},\dots ,x_{s}\}} is a generating set of {\displaystyle G} or that {\displaystyle x_{1},\dots ,x_{s}} generate {\displaystyle G}. So, finitely generated abelian groups can be thought of as a generalization of cyclic groups.
Every finite abelian group is finitely generated. The finitely generated abelian groups can be completely classified.
== Examples ==
The integers, {\displaystyle \left(\mathbb {Z} ,+\right)}, are a finitely generated abelian group.
The integers modulo {\displaystyle n}, {\displaystyle \left(\mathbb {Z} /n\mathbb {Z} ,+\right)}, are a finite (hence finitely generated) abelian group.
Any direct sum of finitely many finitely generated abelian groups is again a finitely generated abelian group.
Every lattice forms a finitely generated free abelian group.
There are no other examples (up to isomorphism). In particular, the group (Q, +) of rational numbers is not finitely generated: if x1, …, xn are rational numbers, pick a natural number k coprime to all the denominators; then 1/k cannot be generated by x1, …, xn. The group (Q∗, ⋅) of non-zero rational numbers is also not finitely generated. The groups of real numbers under addition (R, +) and non-zero real numbers under multiplication (R∗, ⋅) are also not finitely generated.
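The denominator argument for (Q, +) can be made computational. As a hedged sketch (helper names are our own), the additive subgroup of Q generated by finitely many rationals, written over a common denominator L, is exactly (g/L)·Z with g the gcd of the numerators, so membership reduces to a divisibility test:

```python
from fractions import Fraction
from math import gcd, lcm

def in_generated_subgroup(target, generators):
    """Decide whether `target` lies in the additive subgroup of Q
    generated by the given rationals.

    Writing each generator as a_i / L over the common denominator
    L = lcm of the denominators, the subgroup is (g/L)*Z, where
    g = gcd of the numerators a_i.
    """
    L = lcm(*(x.denominator for x in generators))
    g = gcd(*(x.numerator * (L // x.denominator) for x in generators))
    step = Fraction(g, L)
    return (target / step).denominator == 1  # integer multiple of the step

gens = [Fraction(1, 6), Fraction(3, 10)]              # L = 30, g = gcd(5, 9) = 1
print(in_generated_subgroup(Fraction(1, 30), gens))   # True
print(in_generated_subgroup(Fraction(1, 7), gens))    # False: 7 is coprime to 30
```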
== Classification ==
The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of finite abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations.
=== Primary decomposition ===
The primary decomposition formulation states that every finitely generated abelian group G is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. That is, every finitely generated abelian group is isomorphic to a group of the form
Z^n ⊕ Z/q1Z ⊕ ⋯ ⊕ Z/qtZ,
where n ≥ 0 is the rank, and the numbers q1, ..., qt are powers of (not necessarily distinct) prime numbers. In particular, G is finite if and only if n = 0. The values of n, q1, ..., qt are (up to rearranging the indices) uniquely determined by G, that is, there is one and only one way to represent G as such a decomposition.
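For illustration (a sketch in our own notation, not from the article), the primary decomposition of the torsion part of a direct sum of cyclic groups can be computed by splitting each cyclic order into its prime-power factors, as the Chinese remainder theorem allows:

```python
def prime_power_factors(m):
    """Split m into its prime-power factors, e.g. 360 -> [8, 9, 5]."""
    factors = []
    p = 2
    while p * p <= m:
        if m % p == 0:
            q = 1
            while m % p == 0:
                m //= p
                q *= p
            factors.append(q)
        p += 1
    if m > 1:
        factors.append(m)
    return factors

def primary_decomposition(orders):
    """Orders of cyclic summands -> sorted orders of primary cyclic summands."""
    result = []
    for m in orders:
        result.extend(prime_power_factors(m))
    return sorted(result)

# Z/180Z ≅ Z/4Z ⊕ Z/9Z ⊕ Z/5Z by the Chinese remainder theorem
print(primary_decomposition([180]))    # [4, 5, 9]
print(primary_decomposition([6, 12]))  # [2, 3, 3, 4]
```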
The proof of this statement uses the basis theorem for finite abelian groups: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of G as tG. Then G/tG is a torsion-free abelian group and thus it is free abelian. tG is a direct summand of G, which means there exists a subgroup F of G such that G = tG ⊕ F, where F ≅ G/tG. Then F is also free abelian. Since tG is finitely generated and each element of tG has finite order, tG is finite. By the basis theorem for finite abelian groups, tG can be written as a direct sum of primary cyclic groups.
=== Invariant factor decomposition ===
We can also write any finitely generated abelian group G as a direct sum of the form
Z^n ⊕ Z/k1Z ⊕ ⋯ ⊕ Z/kuZ,
where k1 divides k2, which divides k3 and so on up to ku. Again, the rank n and the invariant factors k1, ..., ku are uniquely determined by G (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism.
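The invariant factors can be recovered from a primary decomposition by a standard greedy grouping: the largest factor ku collects the highest power of each prime, the next factor the second-highest powers, and so on. A sketch (function names are our own):

```python
from collections import defaultdict

def smallest_prime_factor(q):
    p = 2
    while q % p:
        p += 1
    return p

def invariant_factors(primary_orders):
    """Combine primary cyclic orders (prime powers) into invariant
    factors k1 | k2 | ... | ku."""
    by_prime = defaultdict(list)
    for q in primary_orders:
        by_prime[smallest_prime_factor(q)].append(q)
    for p in by_prime:
        by_prime[p].sort(reverse=True)   # highest power of each prime first
    factors = []
    i = 0
    while any(i < len(v) for v in by_prime.values()):
        k = 1
        for p in by_prime:
            if i < len(by_prime[p]):
                k *= by_prime[p][i]
        factors.append(k)
        i += 1
    return sorted(factors)  # ascending, so k1 | k2 | ... | ku

# Z/4 ⊕ Z/3 ⊕ Z/3 ⊕ Z/2 has invariant factors 6 | 12
print(invariant_factors([4, 3, 3, 2]))  # [6, 12]
```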
=== Equivalence ===
These statements are equivalent as a result of the Chinese remainder theorem, which implies that Z_jk ≅ Z_j ⊕ Z_k if and only if j and k are coprime.
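This isomorphism criterion can be verified by brute force for small j and k; the sketch below (our own illustration) tests that the natural map x ↦ (x mod j, x mod k) is a bijective homomorphism exactly in the coprime case:

```python
def crt_map_is_isomorphism(j, k):
    """Check that x -> (x mod j, x mod k) is a group isomorphism
    Z/jkZ -> Z/jZ ⊕ Z/kZ by testing bijectivity and additivity directly."""
    images = {(x % j, x % k) for x in range(j * k)}
    bijective = len(images) == j * k
    additive = all(
        ((x + y) % j, (x + y) % k) == ((x % j + y % j) % j, (x % k + y % k) % k)
        for x in range(j * k) for y in range(j * k)
    )
    return bijective and additive

print(crt_map_is_isomorphism(3, 4))  # True: gcd(3, 4) = 1
print(crt_map_is_isomorphism(2, 4))  # False: the map is not injective
```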
=== History ===
The history and credit for the fundamental theorem is complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. The finitely presented case is solved by Smith normal form, and hence frequently credited to (Smith 1861), though the finitely generated case is sometimes instead credited to Poincaré in 1900; details follow.
Group theorist László Fuchs states:
As far as the fundamental theorem on finite abelian groups is concerned, it is not clear how far back in time one needs to go to trace its origin. ... it took a long time to formulate and prove the fundamental theorem in its present form ...
The fundamental theorem for finite abelian groups was proven by Leopold Kronecker in 1870, using a group-theoretic proof, though without stating it in group-theoretic terms; a modern presentation of Kronecker's proof is given in (Stillwell 2012), 5.2.2 Kronecker's Theorem, 176–177. This generalized an earlier result of Carl Friedrich Gauss from Disquisitiones Arithmeticae (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882.
The fundamental theorem for finitely presented abelian groups was proven by Henry John Stephen Smith in (Smith 1861), as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups.
The fundamental theorem for finitely generated abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). This was done in the context of computing the homology of a complex, specifically the Betti number and torsion coefficients of each dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part.
Kronecker's proof was generalized to finitely generated abelian groups by Emmy Noether in 1926.
== Corollaries ==
Stated differently, the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of which is unique up to isomorphism. The finite abelian group is just the torsion subgroup of G. The rank of G is defined as the rank of the torsion-free part of G; this is just the number n in the above formulas.
A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here: Q is torsion-free but not free abelian.
Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms, form an abelian category which is a Serre subcategory of the category of abelian groups.
== Non-finitely generated abelian groups ==
Note that not every abelian group of finite rank is finitely generated; the rank-1 group Q is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of Z_2 is another one.
== See also ==
The composition series in the Jordan–Hölder theorem is a non-abelian generalization.
== Notes ==
== References ==
In mathematics, non-abelian class field theory is a catchphrase, meaning the extension of the results of class field theory, the relatively complete and classical set of results on abelian extensions of any number field K, to the general Galois extension L/K. While class field theory was essentially known by 1930, the corresponding non-abelian theory has never been formulated in a definitive and accepted sense.
== History ==
A presentation of class field theory in terms of group cohomology was carried out by Claude Chevalley, Emil Artin and others, mainly in the 1940s. This resulted in a formulation of the central results by means of the group cohomology of the idele class group. The theorems of the cohomological approach are independent of whether or not the Galois group G of L/K is abelian. This theory has never been regarded as the sought-after non-abelian theory. The first reason that can be cited for that is that it did not provide fresh information on the splitting of prime ideals in a Galois extension; a common way to explain the objective of a non-abelian class field theory is that it should provide a more explicit way to express such patterns of splitting.
The cohomological approach therefore was of limited use in even formulating non-abelian class field theory. Behind the history was the wish of Chevalley to write proofs for class field theory without using Dirichlet series: in other words to eliminate L-functions. The first wave of proofs of the central theorems of class field theory was structured as consisting of two 'inequalities' (the same structure as in the proofs now given of the fundamental theorem of Galois theory, though much more complex). One of the two inequalities involved an argument with L-functions.
In a later reversal of this development, it was realised that to generalize Artin reciprocity to the non-abelian case, it was essential in fact to seek a new way of expressing Artin L-functions. The contemporary formulation of this ambition is by means of the Langlands program: in which grounds are given for believing Artin L-functions are also L-functions of automorphic representations. As of the early twenty-first century, this is the formulation of the notion of non-abelian class field theory that has widest expert acceptance.
== See also ==
Anabelian geometry
Frobenioid
Langlands correspondences
== Notes == | Wikipedia/Non-abelian_class_field_theory |
The Riemann zeta function or Euler–Riemann zeta function, denoted by the Greek letter ζ (zeta), is a mathematical function of a complex variable defined as
ζ(s) = Σ_{n=1}^∞ 1/n^s = 1/1^s + 1/2^s + 1/3^s + ⋯
for Re(s) > 1, and its analytic continuation elsewhere.
The Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics, probability theory, and applied statistics.
Leonhard Euler first introduced and studied the function over the reals in the first half of the eighteenth century. Bernhard Riemann's 1859 article "On the Number of Primes Less Than a Given Magnitude" extended the Euler definition to a complex variable, proved its meromorphic continuation and functional equation, and established a relation between its zeros and the distribution of prime numbers. This paper also contained the Riemann hypothesis, a conjecture about the distribution of complex zeros of the Riemann zeta function that many mathematicians consider the most important unsolved problem in pure mathematics.
The values of the Riemann zeta function at even positive integers were computed by Euler. The first of them, ζ(2), provides a solution to the Basel problem. In 1979 Roger Apéry proved the irrationality of ζ(3). The values at negative integer points, also found by Euler, are rational numbers and play an important role in the theory of modular forms. Many generalizations of the Riemann zeta function, such as Dirichlet series, Dirichlet L-functions and L-functions, are known.
== Definition ==
The Riemann zeta function ζ(s) is a function of a complex variable s = σ + it, where σ and t are real numbers. (The notation s, σ, and t is used traditionally in the study of the zeta function, following Riemann.) When Re(s) = σ > 1, the function can be written as a converging summation or as an integral:
ζ(s) = Σ_{n=1}^∞ 1/n^s = (1/Γ(s)) ∫_0^∞ x^{s−1}/(e^x − 1) dx,
where
Γ(s) = ∫_0^∞ x^{s−1} e^{−x} dx
is the gamma function. The Riemann zeta function is defined for other complex values via analytic continuation of the function defined for σ > 1.
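The integral representation can be checked numerically for real s ≥ 2 (where the integrand has a finite limit at 0). The sketch below is our own; the cutoff and step count are arbitrary choices, with a negligible tail beyond the cutoff.

```python
import math

def zeta_via_integral(s, upper=50.0, n=200_000):
    """Approximate ζ(s) = (1/Γ(s)) ∫_0^∞ x^(s-1)/(e^x - 1) dx for real s >= 2
    by the trapezoidal rule on [0, upper].  The integrand tends to 1 at x = 0
    when s = 2 and to 0 when s > 2; expm1 keeps it accurate near 0."""
    h = upper / n

    def f(x):
        if x == 0.0:
            return 1.0 if s == 2 else 0.0
        return x ** (s - 1) / math.expm1(x)

    total = 0.5 * (f(0.0) + f(upper))
    for i in range(1, n):
        total += f(i * h)
    return h * total / math.gamma(s)

print(zeta_via_integral(2))  # ≈ π²/6 ≈ 1.6449
print(zeta_via_integral(3))  # ≈ 1.2021 (Apéry's constant)
```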
Leonhard Euler considered the above series in 1740 for positive integer values of s, and later Chebyshev extended the definition to Re(s) > 1.
The above series is a prototypical Dirichlet series that converges absolutely to an analytic function for s such that σ > 1 and diverges for all other values of s. Riemann showed that the function defined by the series on the half-plane of convergence can be continued analytically to all complex values s ≠ 1. For s = 1, the series is the harmonic series which diverges to +∞, and
lim_{s→1} (s − 1) ζ(s) = 1.
Thus the Riemann zeta function is a meromorphic function on the whole complex plane, which is holomorphic everywhere except for a simple pole at s = 1 with residue 1.
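The pole behavior can be observed numerically. The sketch below (our own; term counts are arbitrary) evaluates ζ for real s > 0 through the alternating Dirichlet eta series, ζ(s) = η(s)/(1 − 2^{1−s}), and shows (s − 1)ζ(s) approaching the residue 1:

```python
import math

def zeta_real(s, n_terms=200_000):
    """Riemann zeta for real s > 0, s != 1, via the alternating series
    ζ(s) = (1 - 2^(1-s))^(-1) Σ (-1)^(n+1) n^(-s).  Averaging two
    consecutive partial sums accelerates convergence."""
    partial = 0.0
    sign = 1.0
    for n in range(1, n_terms + 1):
        partial += sign * n ** (-s)
        sign = -sign
    nxt = partial + sign * (n_terms + 1) ** (-s)
    eta = 0.5 * (partial + nxt)
    return eta / (1.0 - 2.0 ** (1.0 - s))

print(zeta_real(2.0), math.pi ** 2 / 6)   # both ≈ 1.644934
# Simple pole at s = 1 with residue 1: (s - 1) ζ(s) -> 1 as s -> 1
for eps in (0.1, 0.01, 0.001):
    print(eps * zeta_real(1 + eps))       # approaches 1
```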
== Euler's product formula ==
In 1737, the connection between the zeta function and prime numbers was discovered by Euler, who proved the identity
Σ_{n=1}^∞ 1/n^s = Π_{p prime} 1/(1 − p^{−s}),
where, by definition, the left hand side is ζ(s) and the infinite product on the right hand side extends over all prime numbers p (such expressions are called Euler products):
Π_{p prime} 1/(1 − p^{−s}) = 1/(1 − 2^{−s}) · 1/(1 − 3^{−s}) · 1/(1 − 5^{−s}) · 1/(1 − 7^{−s}) · 1/(1 − 11^{−s}) ⋯ 1/(1 − p^{−s}) ⋯
Both sides of the Euler product formula converge for Re(s) > 1. The proof of Euler's identity uses only the formula for the geometric series and the fundamental theorem of arithmetic. Since the harmonic series, obtained when s = 1, diverges, Euler's formula (which becomes Π_p p/(p − 1)) implies that there are infinitely many primes. Since the logarithm of p/(p − 1) is approximately 1/p, the formula can also be used to prove the stronger result that the sum of the reciprocals of the primes is infinite. On the other hand, combining that with the sieve of Eratosthenes shows that the density of the set of primes within the set of positive integers is zero.
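As a quick numerical illustration (our own sketch; the prime cutoff is an arbitrary choice), the truncated Euler product over small primes already approximates ζ(2) = π²/6 well:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

s = 2.0
product = 1.0
for p in primes_up_to(10_000):
    product *= 1.0 / (1.0 - p ** (-s))

print(product, math.pi ** 2 / 6)  # truncated product vs ζ(2)
```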
The Euler product formula can be used to calculate the asymptotic probability that s randomly selected integers are set-wise coprime. Intuitively, the probability that any single number is divisible by a prime (or any integer) p is 1/p. Hence the probability that s numbers are all divisible by this prime is 1/ps, and the probability that at least one of them is not is 1 − 1/ps. Now, for distinct primes, these divisibility events are mutually independent because the candidate divisors are coprime (a number is divisible by coprime divisors n and m if and only if it is divisible by nm, an event which occurs with probability 1/nm). Thus the asymptotic probability that s numbers are coprime is given by a product over all primes,
Π_{p prime} (1 − 1/p^s) = (Π_{p prime} 1/(1 − p^{−s}))^{−1} = 1/ζ(s).
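The s = 2 case (two random integers are coprime with probability 1/ζ(2) = 6/π² ≈ 0.6079) can be checked empirically. A hedged sketch; sample size, range, and seed are our own choices:

```python
import random
from math import gcd, pi

random.seed(42)                      # fixed seed for reproducibility
trials = 200_000
coprime = sum(
    1 for _ in range(trials)
    if gcd(random.randint(1, 10**6), random.randint(1, 10**6)) == 1
)
observed = coprime / trials
predicted = 6 / pi**2                # 1/ζ(2)
print(observed, predicted)           # both ≈ 0.608
```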
== Riemann's functional equation ==
This zeta function satisfies the functional equation
ζ(s) = 2^s π^{s−1} sin(πs/2) Γ(1 − s) ζ(1 − s),
where Γ(s) is the gamma function. This is an equality of meromorphic functions valid on the whole complex plane. The equation relates values of the Riemann zeta function at the points s and 1 − s, in particular relating even positive integers with odd negative integers. Owing to the zeros of the sine function, the functional equation implies that ζ(s) has a simple zero at each even negative integer s = −2n, known as the trivial zeros of ζ(s). When s is an even positive integer, the product sin( π s / 2 ) Γ(1 − s) on the right is non-zero because Γ(1 − s) has a simple pole, which cancels the simple zero of the sine factor.
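The functional equation can be verified numerically for a real point in the critical strip, where both ζ(s) and ζ(1 − s) are computable from the alternating (Dirichlet eta) series. A sketch with our own helper and term counts:

```python
import math

def zeta_real(s, n_terms=400_000):
    """ζ(s) for real s > 0, s != 1, from the alternating series
    ζ(s) = (1 - 2^(1-s))^(-1) Σ (-1)^(n+1) n^(-s),
    with consecutive partial sums averaged for faster convergence."""
    partial = 0.0
    sign = 1.0
    for n in range(1, n_terms + 1):
        partial += sign * n ** (-s)
        sign = -sign
    nxt = partial + sign * (n_terms + 1) ** (-s)
    eta = 0.5 * (partial + nxt)
    return eta / (1.0 - 2.0 ** (1.0 - s))

s = 0.3
lhs = zeta_real(s)
rhs = (2 ** s * math.pi ** (s - 1) * math.sin(math.pi * s / 2)
       * math.gamma(1 - s) * zeta_real(1 - s))
print(lhs, rhs)  # the two sides agree
```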
The functional equation was established by Riemann in his 1859 paper "On the Number of Primes Less Than a Given Magnitude" and used to construct the analytic continuation in the first place.
== Riemann's Xi function ==
Riemann also found a symmetric version of the functional equation by setting
ξ(s) = (s(s − 1)/2) π^{−s/2} Γ(s/2) ζ(s) = (s − 1) π^{−s/2} Γ(s/2 + 1) ζ(s),
which satisfies:
ξ(s) = ξ(1 − s).
Returning to the functional equation's derivation in the previous section, we have
ξ(s) = 1/2 + (s(s − 1)/2) ∫_1^∞ (x^{−s/2 − 1/2} + x^{s/2 − 1}) ψ(x) dx
Using integration by parts,
ξ(s) = 1/2 − [(s x^{(1−s)/2} + (1 − s) x^{s/2}) ψ(x)]_1^∞ + ∫_1^∞ (s x^{(1−s)/2} + (1 − s) x^{s/2}) ψ′(x) dx
ξ(s) = 1/2 + ψ(1) + ∫_1^∞ (s x^{(1−s)/2} + (1 − s) x^{s/2}) ψ′(x) dx
Using integration by parts again with a factorization of x^{3/2},
ξ(s) = 1/2 + ψ(1) − 2[x^{3/2} ψ′(x) (x^{(s−1)/2} + x^{−s/2})]_1^∞ + 2 ∫_1^∞ (x^{(s−1)/2} + x^{−s/2}) (d/dx)[x^{3/2} ψ′(x)] dx
ξ(s) = 1/2 + ψ(1) + 4ψ′(1) + 2 ∫_1^∞ (d/dx)[x^{3/2} ψ′(x)] (x^{(s−1)/2} + x^{−s/2}) dx
As 1/2 + ψ(1) + 4ψ′(1) = 0,
ξ(s) = 2 ∫_1^∞ (d/dx)[x^{3/2} ψ′(x)] (x^{(s−1)/2} + x^{−s/2}) dx
Remove a factor of x^{−1/4} to make the exponents in the remainder opposites.
ξ(s) = 2 ∫_1^∞ (d/dx)[x^{3/2} ψ′(x)] x^{−1/4} (x^{(s − 1/2)/2} + x^{(1/2 − s)/2}) dx
Using the hyperbolic functions, namely cos(x) = cosh(ix) = (e^{ix} + e^{−ix})/2, and letting s = 1/2 + it gives
ξ(s) = 4 ∫_1^∞ (d/dx)[x^{3/2} ψ′(x)] x^{−1/4} cos((t/2) log x) dx
and by separating the integral and using the power series for cos,
ξ(s) = Σ_{n=0}^∞ a_{2n} t^{2n},
which led Riemann to his famous hypothesis.
== Zeros, the critical line, and the Riemann hypothesis ==
The functional equation shows that the Riemann zeta function has zeros at −2, −4,.... These are called the trivial zeros. They are trivial in the sense that their existence is relatively easy to prove, for example, from sin πs/2 being 0 in the functional equation. The non-trivial zeros have captured far more attention because their distribution not only is far less understood but, more importantly, their study yields important results concerning prime numbers and related objects in number theory. It is known that any non-trivial zero lies in the open strip
{s ∈ C : 0 < Re(s) < 1}, which is called the critical strip. The set {s ∈ C : Re(s) = 1/2} is called the critical line. The Riemann hypothesis, considered one of the greatest unsolved problems in mathematics, asserts that all non-trivial zeros are on the critical line. In 1989, Conrey proved that more than 40% of the non-trivial zeros of the Riemann zeta function are on the critical line. This has since been improved to 41.7%.
For the Riemann zeta function on the critical line, see Z-function.
=== Number of zeros in the critical strip ===
Let N(T) be the number of zeros of ζ(s) in the critical strip 0 < Re(s) < 1 whose imaginary parts are in the interval 0 < Im(s) < T.
Timothy Trudgian proved that, if T > e, then
|N(T) − (T/2π) log(T/2πe)| ≤ 0.112 log T + 0.278 log log T + 3.385 + 0.2/T.
=== The Hardy–Littlewood conjectures ===
In 1914, G. H. Hardy proved that ζ (1/2 + it) has infinitely many real zeros.
Hardy and J. E. Littlewood formulated two conjectures on the density and distance between the zeros of ζ (1/2 + it) on intervals of large positive real numbers. In the following, N(T) is the total number of real zeros and N0(T) the total number of zeros of odd order of the function ζ (1/2 + it) lying in the interval (0, T].
These two conjectures opened up new directions in the investigation of the Riemann zeta function.
=== Zero-free region ===
The location of the Riemann zeta function's zeros is of great importance in number theory. The prime number theorem is equivalent to the fact that there are no zeros of the zeta function on the Re(s) = 1 line. It is also known that zeros do not exist in certain regions slightly to the left of the Re(s) = 1 line, known as zero-free regions. For instance, Korobov and Vinogradov independently showed via the Vinogradov's mean-value theorem that for sufficiently large
|t|,
ζ(σ + it) ≠ 0 for σ ≥ 1 − c/(log |t|)^{2/3 + ε}
for any ε > 0 and a number c > 0 depending on ε. Asymptotically, this is the largest known zero-free region for the zeta function.
Explicit zero-free regions are also known. Platt and Trudgian verified computationally that ζ(σ + it) ≠ 0 if σ ≠ 1/2 and |t| ≤ 3·10^12. Mossinghoff, Trudgian and Yang proved that zeta has no zeros in the region
σ ≥ 1 − 1/(5.558691 log |t|)
for |t| ≥ 2, which is the largest known zero-free region in the critical strip for 3·10^12 < |t| < e^{64.1} ≈ 7·10^27 (for previous results see).
Yang showed that ζ(σ + it) ≠ 0 if
σ ≥ 1 − (log log |t|)/(21.233 log |t|)
and |t| ≥ 3, which is the largest known zero-free region for e^{170.2} < |t| < e^{4.8·10^5}.
Bellotti proved (building on the work of Ford) the zero-free region
σ ≥ 1 − 1/(53.989 (log |t|)^{2/3} (log log |t|)^{1/3})
for |t| ≥ 3. This is the largest known zero-free region for fixed |t| ≥ exp(4.8·10^5).
Bellotti also showed that for sufficiently large |t|, the following better result is known: ζ(σ + it) ≠ 0 for
σ ≥ 1 − 1/(48.0718 (log |t|)^{2/3} (log log |t|)^{1/3}).
The strongest result of this kind one can hope for is the truth of the Riemann hypothesis, which would have many profound consequences in the theory of numbers.
=== Other results ===
It is known that there are infinitely many zeros on the critical line. Littlewood showed that if the sequence (γn) contains the imaginary parts of all zeros in the upper half-plane in ascending order, then
lim_{n→∞} (γ_{n+1} − γ_n) = 0.
The critical line theorem asserts that a positive proportion of the nontrivial zeros lies on the critical line. (The Riemann hypothesis would imply that this proportion is 1.)
In the critical strip, the zero with smallest non-negative imaginary part is 1/2 + 14.13472514...i (OEIS: A058303). The fact that
ζ(s̄) is the complex conjugate of ζ(s) for all complex s ≠ 1 implies that the zeros of the Riemann zeta function are symmetric about the real axis. Combining this symmetry with the functional equation, furthermore, one sees that the non-trivial zeros are symmetric about the critical line Re(s) = 1/2.
It is also known that no zeros lie on the line with real part 1.
== Specific values ==
For any positive even integer 2n,
ζ(2n) = |B_{2n}| (2π)^{2n} / (2(2n)!),
where B2n is the 2n-th Bernoulli number.
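With B_2 = 1/6 and B_4 = −1/30, the formula yields the classical values ζ(2) = π²/6 and ζ(4) = π⁴/90, which can be checked against direct partial sums. A sketch (Bernoulli numbers hard-coded for illustration):

```python
import math

# Known small Bernoulli numbers, hard-coded for this illustration
B = {2: 1.0 / 6.0, 4: -1.0 / 30.0}

def zeta_even(two_n):
    """Evaluate ζ(2n) = |B_2n| (2π)^2n / (2 (2n)!)."""
    return abs(B[two_n]) * (2 * math.pi) ** two_n / (2 * math.factorial(two_n))

def zeta_sum(s, n_terms=100_000):
    """Direct partial sum of Σ 1/n^s for comparison."""
    return sum(n ** (-s) for n in range(1, n_terms + 1))

print(zeta_even(2), zeta_sum(2))   # both ≈ π²/6 ≈ 1.644934
print(zeta_even(4), zeta_sum(4))   # both ≈ π⁴/90 ≈ 1.082323
```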
For odd positive integers, no such simple expression is known, although these values are thought to be related to the algebraic K-theory of the integers; see Special values of L-functions.
For nonpositive integers, one has
ζ(−n) = −B_{n+1}/(n + 1)
for n ≥ 0 (using the convention that B1 = 1/2).
In particular, ζ vanishes at the negative even integers because Bm = 0 for all odd m other than 1. These are the so-called "trivial zeros" of the zeta function.
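These rational values and the trivial zeros can be reproduced exactly with integer arithmetic, computing Bernoulli numbers from the standard recurrence Σ_{k=0}^{m} C(m+1, k) B_k = 0. A sketch (our own function names; note the sign convention adjustment for B_1):

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m via the recurrence
    B_n = -(1/(n+1)) Σ_{k<n} C(n+1, k) B_k, then switching to the
    convention B_1 = +1/2 used in the text above."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1)
    if m >= 1:
        B[1] = Fraction(1, 2)   # the recurrence gives B_1 = -1/2
    return B

def zeta_neg(n):
    """ζ(-n) = -B_{n+1}/(n+1) for n >= 0."""
    B = bernoulli(n + 1)
    return -B[n + 1] / (n + 1)

print(zeta_neg(0))                # -1/2
print(zeta_neg(1))                # -1/12
print(zeta_neg(2), zeta_neg(4))   # 0 0  (trivial zeros)
```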
Via analytic continuation, one can show that
ζ(−1) = −1/12.
This gives a pretext for assigning a finite value to the divergent series 1 + 2 + 3 + 4 + ⋯, which has been used in certain contexts (Ramanujan summation) such as string theory. Analogously, the particular value
ζ(0) = −1/2
can be viewed as assigning a finite result to the divergent series 1 + 1 + 1 + 1 + ⋯.
The value
ζ(1/2) = −1.46035450880958681288…
is employed in calculating kinetic boundary layer problems of linear kinetic equations.
Although
ζ(1) = 1 + 1/2 + 1/3 + ⋯
diverges, its Cauchy principal value
lim_{ε→0} (ζ(1 + ε) + ζ(1 − ε))/2
exists and is equal to the Euler–Mascheroni constant γ = 0.5772….
The demonstration of the particular value
ζ(2) = 1 + 1/2² + 1/3² + ⋯ = π²/6
is known as the Basel problem. The reciprocal of this sum answers the question: What is the probability that two numbers selected at random are relatively prime?
The value
ζ(3) = 1 + 1/2³ + 1/3³ + ⋯ = 1.202056903159594285399…
is Apéry's constant.
Taking the limit s → +∞ through the real numbers, one obtains ζ(+∞) = 1. But at complex infinity on the Riemann sphere the zeta function has an essential singularity.
== Various properties ==
For sums involving the zeta function at integer and half-integer values, see rational zeta series.
=== Reciprocal ===
The reciprocal of the zeta function may be expressed as a Dirichlet series over the Möbius function μ(n):
1/ζ(s) = Σ_{n=1}^∞ μ(n)/n^s
for every complex number s with real part greater than 1. There are a number of similar relations involving various well-known multiplicative functions; these are given in the article on the Dirichlet series.
The Riemann hypothesis is equivalent to the claim that this expression is valid when the real part of s is greater than 1/2.
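The Möbius series can be checked numerically at s = 2, where it should reproduce 1/ζ(2) = 6/π². A sketch (sieve implementation and truncation length are our own choices):

```python
import math

def mobius_up_to(n):
    """μ(0..n) by a sieve: multiply by -1 for each prime factor,
    then zero out every multiple of a square p²."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0
    return mu

N = 100_000
mu = mobius_up_to(N)
approx = sum(mu[k] / k**2 for k in range(1, N + 1))
print(approx, 6 / math.pi**2)   # both ≈ 0.607927
```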
=== Universality ===
The critical strip of the Riemann zeta function has the remarkable property of universality. This zeta function universality states that there exists some location on the critical strip that approximates any holomorphic function arbitrarily well. Since holomorphic functions are very general, this property is quite remarkable. The first proof of universality was provided by Sergei Mikhailovitch Voronin in 1975. More recent work has included effective versions of Voronin's theorem and extending it to Dirichlet L-functions.
=== Estimates of the maximum of the modulus of the zeta function ===
Let the functions F(T;H) and G(s0;Δ) be defined by the equalities
F(T; H) = max_{|t−T|≤H} |ζ(1/2 + it)|,  G(s0; Δ) = max_{|s−s0|≤Δ} |ζ(s)|.
Here T is a sufficiently large positive number, 0 < H ≪ log log T, s0 = σ0 + iT, 1/2 ≤ σ0 ≤ 1, 0 < Δ < 1/3. Estimating the values F and G from below shows how large (in modulus) the values ζ(s) can be on short intervals of the critical line, or in small neighborhoods of points lying in the critical strip 0 ≤ Re(s) ≤ 1.
The case H ≫ log log T was studied by Kanakanahalli Ramachandra; the case Δ > c, where c is a sufficiently large constant, is trivial.
Anatolii Karatsuba proved, in particular, that if the values H and Δ exceed certain sufficiently small constants, then the estimates
F(T; H) ≥ T^{−c1},  G(s0; Δ) ≥ T^{−c2},
hold, where c1 and c2 are certain absolute constants.
=== The argument of the Riemann zeta function ===
The function
S(t) = (1/π) arg ζ(1/2 + it)
is called the argument of the Riemann zeta function. Here arg ζ(1/2 + it) is the increment of an arbitrary continuous branch of arg ζ(s) along the broken line joining the points 2, 2 + it and 1/2 + it.
There are some theorems on properties of the function S(t). Among those results are the mean value theorems for S(t) and its first integral
S1(t) = ∫_0^t S(u) du
on intervals of the real line, and also the theorem claiming that every interval (T, T + H] for
{\displaystyle H\geq T^{{\frac {27}{82}}+\varepsilon }}
contains at least
{\displaystyle H{\sqrt[{3}]{\ln T}}e^{-c{\sqrt {\ln \ln T}}}}
points where the function S(t) changes sign. Earlier similar results were obtained by Atle Selberg for the case
{\displaystyle H\geq T^{{\frac {1}{2}}+\varepsilon }.}
== Representations ==
=== Dirichlet series ===
An extension of the area of convergence can be obtained by rearranging the original series. The series
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{n=1}^{\infty }\left({\frac {n}{(n+1)^{s}}}-{\frac {n-s}{n^{s}}}\right)}
converges for Re(s) > 0, while
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{n=1}^{\infty }{\frac {n(n+1)}{2}}\left({\frac {2n+3+s}{(n+1)^{s+2}}}-{\frac {2n-1-s}{n^{s+2}}}\right)}
converges even for Re(s) > −1. In this way, the area of convergence can be extended to Re(s) > −k for any negative integer −k.
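As a quick numerical illustration (a sketch, not part of the article's sources), the first rearranged series can be summed directly; its n-th term behaves like s(s+1)/(2n^(1+s)), so a few thousand terms already give good accuracy for s = 2:

```python
def zeta_rearranged(s, terms=10_000):
    # zeta(s) = 1/(s-1) * sum_{n>=1} ( n/(n+1)^s - (n-s)/n^s ), valid for Re(s) > 0, s != 1
    total = 0.0
    for n in range(1, terms + 1):
        total += n / (n + 1) ** s - (n - s) / n ** s
    return total / (s - 1)

print(zeta_rearranged(2.0))  # ~ pi^2/6 = 1.6449...
```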
The recurrence connection is clearly visible from the following expression, valid for Re(s) > −2, which enables further extension by repeated integration by parts.
{\displaystyle {\begin{aligned}\zeta (s)=&1+{\frac {1}{s-1}}-{\frac {s}{2!}}[\zeta (s+1)-1]\\-&{\frac {s(s+1)}{3!}}[\zeta (s+2)-1]\\&-{\frac {s(s+1)(s+2)}{3!}}\sum _{n=1}^{\infty }\int _{0}^{1}{\frac {t^{3}dt}{(n+t)^{s+3}}}\end{aligned}}}
=== Mellin-type integrals ===
The Mellin transform of a function f(x) is defined as
{\displaystyle \int _{0}^{\infty }f(x)x^{s}\,{\frac {\mathrm {d} x}{x}}}
in the region where the integral is defined. There are various expressions for the zeta function as Mellin transform-like integrals. If the real part of s is greater than one, we have
{\displaystyle \Gamma (s)\zeta (s)=\int _{0}^{\infty }{\frac {x^{s-1}}{e^{x}-1}}\,\mathrm {d} x\quad }
and
{\displaystyle \quad \Gamma (s)\zeta (s)={\frac {1}{2s}}\int _{0}^{\infty }{\frac {x^{s}}{\cosh(x)-1}}\,\mathrm {d} x},
where Γ denotes the gamma function. By modifying the contour, Riemann showed that
{\displaystyle 2\sin(\pi s)\Gamma (s)\zeta (s)=i\oint _{H}{\frac {(-x)^{s-1}}{e^{x}-1}}\,\mathrm {d} x}
for all s (where H denotes the Hankel contour).
We can also find expressions which relate to prime numbers and the prime number theorem. If π(x) is the prime-counting function, then
{\displaystyle \ln \zeta (s)=s\int _{0}^{\infty }{\frac {\pi (x)}{x(x^{s}-1)}}\,\mathrm {d} x,}
for values with Re(s) > 1.
A similar Mellin transform involves the Riemann function J(x), which counts prime powers pn with a weight of 1/n, so that
{\displaystyle J(x)=\sum {\frac {\pi \left(x^{\frac {1}{n}}\right)}{n}}.}
Now
{\displaystyle \ln \zeta (s)=s\int _{0}^{\infty }J(x)x^{-s-1}\,\mathrm {d} x.}
These expressions can be used to prove the prime number theorem by means of the inverse Mellin transform. Riemann's prime-counting function is easier to work with, and π(x) can be recovered from it by Möbius inversion.
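To make the Möbius inversion concrete, here is a small self-contained Python sketch (an illustration, not from the article's sources): it computes π(x) with a sieve, builds J(x) from it, and then recovers π(x) from J by Möbius inversion.

```python
def prime_pi(x):
    """pi(x): the number of primes <= x, by a simple sieve of Eratosthenes."""
    n = int(x)
    if n < 2:
        return 0
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def riemann_J(x):
    # J(x) = sum_{n>=1} pi(x^(1/n)) / n; the sum is finite since pi(y) = 0 for y < 2
    total, n = 0.0, 1
    while x ** (1.0 / n) >= 2:
        total += prime_pi(x ** (1.0 / n)) / n
        n += 1
    return total

def pi_from_J(x):
    # Moebius inversion: pi(x) = sum_{n>=1} mu(n)/n * J(x^(1/n))
    total, n = 0.0, 1
    while x ** (1.0 / n) >= 2:
        total += mobius(n) / n * riemann_J(x ** (1.0 / n))
        n += 1
    return round(total)

print(riemann_J(100), pi_from_J(100))  # 28.5333..., 25
```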
=== Theta functions ===
The Riemann zeta function can be given by a Mellin transform
{\displaystyle 2\pi ^{-{\frac {s}{2}}}\Gamma \left({\frac {s}{2}}\right)\zeta (s)=\int _{0}^{\infty }{\bigl (}\theta (it)-1{\bigr )}t^{{\frac {s}{2}}-1}\,\mathrm {d} t,}
in terms of Jacobi's theta function
{\displaystyle \theta (\tau )=\sum _{n=-\infty }^{\infty }e^{\pi in^{2}\tau }.}
However, this integral only converges if the real part of s is greater than 1, but it can be regularized. This gives the following expression for the zeta function, which is well defined for all s except 0 and 1:
{\displaystyle \pi ^{-{\frac {s}{2}}}\Gamma \left({\frac {s}{2}}\right)\zeta (s)={\frac {1}{s-1}}-{\frac {1}{s}}+{\frac {1}{2}}\int _{0}^{1}\left(\theta (it)-t^{-{\frac {1}{2}}}\right)t^{{\frac {s}{2}}-1}\,\mathrm {d} t+{\frac {1}{2}}\int _{1}^{\infty }{\bigl (}\theta (it)-1{\bigr )}t^{{\frac {s}{2}}-1}\,\mathrm {d} t.}
=== Laurent series ===
The Riemann zeta function is meromorphic with a single pole of order one at s = 1. It can therefore be expanded as a Laurent series about s = 1; the series development is then
{\displaystyle \zeta (s)={\frac {1}{s-1}}+\sum _{n=0}^{\infty }{\frac {\gamma _{n}}{n!}}(1-s)^{n}.}
The constants γn here are called the Stieltjes constants and can be defined by the limit
{\displaystyle \gamma _{n}=\lim _{m\rightarrow \infty }{\left(\left(\sum _{k=1}^{m}{\frac {(\ln k)^{n}}{k}}\right)-{\frac {(\ln m)^{n+1}}{n+1}}\right)}.}
The constant term γ0 is the Euler–Mascheroni constant.
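As a sanity check (a sketch, not from the article), the limit definition with n = 0 can be evaluated directly; the partial sums approach the Euler–Mascheroni constant at rate roughly 1/(2m):

```python
import math

def stieltjes_gamma0(m=1_000_000):
    # gamma_0 = lim_{m->inf} ( sum_{k=1}^m 1/k - ln(m) )
    harmonic = sum(1.0 / k for k in range(1, m + 1))
    return harmonic - math.log(m)

print(stieltjes_gamma0())  # ~ 0.577216 (Euler–Mascheroni constant, 0.5772156649...)
```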
=== Integral ===
For all s ∈ ℂ, s ≠ 1, the integral relation (cf. Abel–Plana formula)
{\displaystyle \zeta (s)={\frac {1}{s-1}}+{\frac {1}{2}}+2\int _{0}^{\infty }{\frac {\sin(s\arctan t)}{\left(1+t^{2}\right)^{s/2}\left(e^{2\pi t}-1\right)}}\,\mathrm {d} t}
holds true, which may be used for a numerical evaluation of the zeta function.
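The formula above lends itself directly to numerical work. A minimal Python sketch (composite Simpson quadrature; the cutoff and step count are ad-hoc choices, justified because the integrand decays like e^(−2πt)):

```python
import math

def zeta_abel_plana(s, cutoff=10.0, steps=10_000):
    # zeta(s) = 1/(s-1) + 1/2 + 2*I(s), where
    # I(s) = int_0^inf sin(s*arctan t) / ((1+t^2)^(s/2) * (e^(2 pi t) - 1)) dt
    def f(t):
        if t == 0.0:
            return s / (2 * math.pi)  # limiting value of the integrand at t = 0
        return math.sin(s * math.atan(t)) / (
            (1 + t * t) ** (s / 2) * math.expm1(2 * math.pi * t)
        )

    # composite Simpson's rule with an even number of panels
    h = cutoff / steps
    acc = f(0.0) + f(cutoff)
    acc += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    acc += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
    integral = acc * h / 3
    return 1 / (s - 1) + 0.5 + 2 * integral

print(zeta_abel_plana(2.0))   # ~ 1.6449341 = pi^2/6
print(zeta_abel_plana(-1.0))  # ~ -0.0833333 = -1/12
```

Note that, in line with the stated validity for all s ≠ 1, the same code evaluates the zeta function at negative arguments.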
=== Rising factorial ===
Another series development using the rising factorial valid for the entire complex plane is
{\displaystyle \zeta (s)={\frac {s}{s-1}}-\sum _{n=1}^{\infty }{\bigl (}\zeta (s+n)-1{\bigr )}{\frac {s(s+1)\cdots (s+n-1)}{(n+1)!}}.}
This can be used recursively to extend the Dirichlet series definition to all complex numbers.
The Riemann zeta function also appears in a form similar to the Mellin transform in an integral over the Gauss–Kuzmin–Wirsing operator acting on xs − 1; that context gives rise to a series expansion in terms of the falling factorial.
=== Hadamard product ===
On the basis of Weierstrass's factorization theorem, Hadamard gave the infinite product expansion
{\displaystyle \zeta (s)={\frac {e^{\left(\log(2\pi )-1-{\frac {\gamma }{2}}\right)s}}{2(s-1)\Gamma \left(1+{\frac {s}{2}}\right)}}\prod _{\rho }\left(1-{\frac {s}{\rho }}\right)e^{\frac {s}{\rho }},}
where the product is over the non-trivial zeros ρ of ζ and the letter γ again denotes the Euler–Mascheroni constant. A simpler infinite product expansion is
{\displaystyle \zeta (s)=\pi ^{\frac {s}{2}}{\frac {\prod _{\rho }\left(1-{\frac {s}{\rho }}\right)}{2(s-1)\Gamma \left(1+{\frac {s}{2}}\right)}}.}
This form clearly displays the simple pole at s = 1, the trivial zeros at −2, −4, ... due to the gamma function term in the denominator, and the non-trivial zeros at s = ρ. (To ensure convergence in the latter formula, the product should be taken over "matching pairs" of zeros, i.e. the factors for a pair of zeros of the form ρ and 1 − ρ should be combined.)
=== Globally convergent series ===
A globally convergent series for the zeta function, valid for all complex numbers s except s = 1 + 2πin/ln 2 for some integer n, was conjectured by Konrad Knopp in 1926 and proven by Helmut Hasse in 1930 (cf. Euler summation):
{\displaystyle \zeta (s)={\frac {1}{1-2^{1-s}}}\sum _{n=0}^{\infty }{\frac {1}{2^{n+1}}}\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{(k+1)^{s}}}.}
The series appeared in an appendix to Hasse's paper, and was published for the second time by Jonathan Sondow in 1994.
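Because the series converges for every admissible s, it can evaluate the zeta function even where the Dirichlet series diverges. A small Python sketch (an illustration; 50 outer terms are far more than needed at these points, since the outer sum decays like 2^(−n)):

```python
from math import comb

def zeta_hasse(s, terms=50):
    # zeta(s) = 1/(1-2^(1-s)) * sum_{n>=0} 2^-(n+1) * sum_{k=0}^n C(n,k) (-1)^k (k+1)^-s
    total = 0.0
    for n in range(terms):
        inner = sum(comb(n, k) * (-1) ** k * (k + 1) ** (-s) for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta_hasse(2.0))   # ~ 1.6449341 (pi^2/6)
print(zeta_hasse(-1.0))  # ~ -0.0833333 = -1/12
```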
Hasse also proved the globally converging series
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{n=0}^{\infty }{\frac {1}{n+1}}\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{(k+1)^{s-1}}}}
in the same publication. Research by Iaroslav Blagouchine has found that a similar, equivalent series was published by Joseph Ser in 1926.
In 1997 K. Maślanka gave another globally convergent (except s = 1) series for the Riemann zeta function:
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{k=0}^{\infty }{\biggl (}\prod _{i=1}^{k}(i-{\frac {s}{2}}){\biggl )}{\frac {A_{k}}{k!}}={\frac {1}{s-1}}\sum _{k=0}^{\infty }{\biggl (}1-{\frac {s}{2}}{\biggl )}_{k}{\frac {A_{k}}{k!}}}
where the real coefficients {\displaystyle A_{k}} are given by:
{\displaystyle A_{k}=\sum _{j=0}^{k}(-1)^{j}{\binom {k}{j}}(2j+1)\zeta (2j+2)=\sum _{j=0}^{k}{\binom {k}{j}}{\frac {B_{2j+2}\pi ^{2j+2}}{\left(2\right)_{j}\left({\frac {1}{2}}\right)_{j}}}}
Here {\displaystyle B_{n}} are the Bernoulli numbers and {\displaystyle (x)_{k}} denotes the Pochhammer symbol.
Note that this representation of the zeta function is essentially an interpolation whose nodes are the points {\displaystyle s=2,4,6,\ldots }, i.e. exactly those where the zeta values are precisely known, as Euler showed. An elegant and very short proof of this representation of the zeta function, based on Carlson's theorem, was presented by Philippe Flajolet in 2006.
The asymptotic behavior of the coefficients {\displaystyle A_{k}} is rather curious: for growing {\displaystyle k}, we observe regular oscillations with a nearly exponentially decreasing amplitude and slowly decreasing frequency (roughly as {\displaystyle k^{-2/3}}). Using the saddle point method, we can show that
{\displaystyle A_{k}\sim {\frac {4\pi ^{3/2}}{\sqrt {3\kappa }}}\exp {\biggl (}-{\frac {3\kappa }{2}}+{\frac {\pi ^{2}}{4\kappa }}{\biggl )}\cos {\biggl (}{\frac {4\pi }{3}}-{\frac {3{\sqrt {3}}\kappa }{2}}+{\frac {{\sqrt {3}}\pi ^{2}}{4\kappa }}{\biggl )}}
where {\displaystyle \kappa :={\sqrt[{3}]{\pi ^{2}k}}}.
On the basis of this representation, in 2003 Luis Báez-Duarte provided a new criterion for the Riemann hypothesis. Namely, if we define the coefficients {\displaystyle c_{k}} as
{\displaystyle c_{k}:=\sum _{j=0}^{k}(-1)^{j}{\binom {k}{j}}{\frac {1}{\zeta (2j+2)}}}
then the Riemann hypothesis is equivalent to
{\displaystyle c_{k}={\mathcal {O}}{\biggl (}k^{-3/4+\varepsilon }{\biggl )}\qquad (\forall \varepsilon >0)}
=== Rapidly convergent series ===
Peter Borwein developed an algorithm that applies Chebyshev polynomials to the Dirichlet eta function to produce a very rapidly convergent series suitable for high precision numerical calculations.
=== Series representation at positive integers via the primorial ===
{\displaystyle \zeta (k)={\frac {2^{k}}{2^{k}-1}}+\sum _{r=2}^{\infty }{\frac {(p_{r-1}\#)^{k}}{J_{k}(p_{r}\#)}}\qquad k=2,3,\ldots .}
Here pn# is the primorial sequence and Jk is Jordan's totient function.
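Each term simplifies nicely for computation: since J_k(p_r#) = (p_r#)^k ∏_{p≤p_r}(1 − p^(−k)), the ratio (p_{r−1}#)^k / J_k(p_r#) equals p_r^(−k) ∏_{p≤p_r}(1 − p^(−k))^(−1), which avoids huge primorials. A Python sketch of the series using this simplification (the prime limit is an arbitrary choice; primes up to 10^4 give roughly 2·10^(−5) accuracy for k = 2):

```python
def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def zeta_primorial(k, limit=10_000):
    # zeta(k) = 2^k/(2^k - 1) + sum_{r>=2} (p_{r-1}#)^k / J_k(p_r#),
    # with (p_{r-1}#)^k / J_k(p_r#) = p_r^-k * prod_{p <= p_r} (1 - p^-k)^-1
    ps = primes_upto(limit)
    total = 2 ** k / (2 ** k - 1)
    prod = 1 / (1 - 2.0 ** -k)      # running product over p <= p_r
    for p in ps[1:]:                # p_r = 3, 5, 7, ...
        prod *= 1 / (1 - float(p) ** -k)
        total += prod / p ** k
    return total

print(zeta_primorial(2))  # ~ 1.64493 (pi^2/6)
print(zeta_primorial(3))  # ~ 1.20206
```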
=== Series representation by the incomplete poly-Bernoulli numbers ===
The function ζ can be represented, for Re(s) > 1, by the infinite series
{\displaystyle \zeta (s)=\sum _{n=0}^{\infty }B_{n,\geq 2}^{(s)}{\frac {(W_{k}(-1))^{n}}{n!}},}
where k ∈ {−1, 0}, W_k is the k-th branch of the Lambert W-function, and {\displaystyle B_{n,\geq 2}^{(\mu )}} is an incomplete poly-Bernoulli number.
=== The Mellin transform of the Engel map ===
The function
{\displaystyle g(x)=x\left(1+\left\lfloor x^{-1}\right\rfloor \right)-1}
is iterated to find the coefficients appearing in Engel expansions.
The Mellin transform of the map {\displaystyle g(x)} is related to the Riemann zeta function by the formula
{\displaystyle {\begin{aligned}\int _{0}^{1}g(x)x^{s-1}\,dx&=\sum _{n=1}^{\infty }\int _{\frac {1}{n+1}}^{\frac {1}{n}}(x(n+1)-1)x^{s-1}\,dx\\[6pt]&=\sum _{n=1}^{\infty }{\frac {n^{-s}(s-1)+(n+1)^{-s-1}(n^{2}+2n+1)+n^{-s-1}s-n^{1-s}}{(s+1)s(n+1)}}\\[6pt]&={\frac {\zeta (s+1)}{s+1}}-{\frac {1}{s(s+1)}}\end{aligned}}}
=== Thue–Morse sequence ===
Certain linear combinations of Dirichlet series whose coefficients are terms of the Thue–Morse sequence give rise to identities involving the Riemann zeta function. For instance:
{\displaystyle {\begin{aligned}\sum _{n\geq 1}{\frac {5t_{n-1}+3t_{n}}{n^{2}}}&=4\zeta (2)={\frac {2\pi ^{2}}{3}},\\\sum _{n\geq 1}{\frac {9t_{n-1}+7t_{n}}{n^{3}}}&=8\zeta (3),\end{aligned}}}
where {\displaystyle t_{n}} is the {\displaystyle n^{\rm {th}}} term of the Thue–Morse sequence {\displaystyle (t_{n})_{n\geq 0}}. In fact, for all {\displaystyle s} with real part greater than {\displaystyle 1}, we have
{\displaystyle (2^{s}+1)\sum _{n\geq 1}{\frac {t_{n-1}}{n^{s}}}+(2^{s}-1)\sum _{n\geq 1}{\frac {t_{n}}{n^{s}}}=2^{s}\zeta (s).}
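These identities are easy to test numerically. A Python sketch (the truncation point is an arbitrary choice; the partial sums converge only like 1/N):

```python
def t(n):
    # Thue–Morse sequence: parity of the number of 1-bits in the binary expansion of n
    return bin(n).count("1") % 2

N = 200_000
s2 = sum((5 * t(n - 1) + 3 * t(n)) / n ** 2 for n in range(1, N))
s3 = sum((9 * t(n - 1) + 7 * t(n)) / n ** 3 for n in range(1, N))
print(s2)  # ~ 4*zeta(2) = 2*pi^2/3 = 6.5797...
print(s3)  # ~ 8*zeta(3) = 9.6165...
```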
== Numerical algorithms ==
A classical algorithm, in use prior to about 1930, proceeds by applying the Euler-Maclaurin formula to obtain, for n and m positive integers,
{\displaystyle \zeta (s)=\sum _{j=1}^{n-1}j^{-s}+{\tfrac {1}{2}}n^{-s}+{\frac {n^{1-s}}{s-1}}+\sum _{k=1}^{m}T_{k,n}(s)+E_{m,n}(s)}
where, letting {\displaystyle B_{2k}} denote the indicated Bernoulli number,
{\displaystyle T_{k,n}(s)={\frac {B_{2k}}{(2k)!}}n^{1-s-2k}\prod _{j=0}^{2k-2}(s+j)}
and the error satisfies
{\displaystyle |E_{m,n}(s)|<\left|{\frac {s+2m+1}{\sigma +2m+1}}T_{m+1,n}(s)\right|,}
with σ = Re(s).
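A direct implementation of this classical algorithm (a sketch; the first five Bernoulli numbers B_2, …, B_10 are hard-coded, and already n = 10, m = 5 give roughly 13 correct digits at s = 2):

```python
import math

B2K = [1 / 6, -1 / 30, 1 / 42, -1 / 30, 5 / 66]  # B_2, B_4, B_6, B_8, B_10

def zeta_euler_maclaurin(s, n=10, m=5):
    # zeta(s) ~ sum_{j=1}^{n-1} j^-s + n^-s/2 + n^(1-s)/(s-1) + sum_{k=1}^m T_{k,n}(s)
    result = sum(j ** -s for j in range(1, n)) + 0.5 * n ** -s + n ** (1 - s) / (s - 1)
    for k in range(1, m + 1):
        prod = 1.0
        for j in range(2 * k - 1):  # product over j = 0, ..., 2k-2
            prod *= s + j
        result += B2K[k - 1] / math.factorial(2 * k) * n ** (1.0 - s - 2 * k) * prod
    return result

print(zeta_euler_maclaurin(2.0))  # ~ 1.6449340668482264 (pi^2/6)
```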
A modern numerical algorithm is the Odlyzko–Schönhage algorithm.
== Applications ==
The zeta function occurs in applied statistics including Zipf's law, Zipf–Mandelbrot law, and Lotka's law.
Zeta function regularization is used as one possible means of regularization of divergent series and divergent integrals in quantum field theory. In one notable example, the Riemann zeta function shows up explicitly in one method of calculating the Casimir effect. The zeta function is also useful for the analysis of dynamical systems.
=== Musical tuning ===
In the theory of musical tunings, the zeta function can be used to find equal divisions of the octave (EDOs) that closely approximate the intervals of the harmonic series. For increasing values of
{\displaystyle t\in \mathbb {R} }
, the value of
{\displaystyle \left\vert \zeta \left({\frac {1}{2}}+{\frac {2\pi {i}}{\ln {(2)}}}t\right)\right\vert }
peaks near integers that correspond to such EDOs. Examples include popular choices such as 12, 19, and 53.
=== Infinite series ===
The zeta function evaluated at equidistant positive integers appears in infinite series representations of a number of constants.
{\displaystyle \sum _{n=2}^{\infty }{\bigl (}\zeta (n)-1{\bigr )}=1}
In fact the even and odd terms give the two sums
{\displaystyle \sum _{n=1}^{\infty }{\bigl (}\zeta (2n)-1{\bigr )}={\frac {3}{4}}}
and
{\displaystyle \sum _{n=1}^{\infty }{\bigl (}\zeta (2n+1)-1{\bigr )}={\frac {1}{4}}}
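The even and odd sums above are easy to verify numerically. A Python sketch (a truncated Dirichlet series with the leading Euler–Maclaurin tail term N^(1−s)/(s−1) added; truncation limits are ad-hoc):

```python
def zeta(s, n=1000):
    # partial Dirichlet series plus the leading tail correction n^(1-s)/(s-1)
    return sum(k ** -s for k in range(1, n + 1)) + n ** (1 - s) / (s - 1)

even = sum(zeta(2 * n) - 1 for n in range(1, 40))
odd = sum(zeta(2 * n + 1) - 1 for n in range(1, 40))
print(even, odd, even + odd)  # ~ 0.75, ~ 0.25, ~ 1.0
```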
Parametrized versions of the above sums are given by
{\displaystyle \sum _{n=1}^{\infty }(\zeta (2n)-1)\,t^{2n}={\frac {t^{2}}{t^{2}-1}}+{\frac {1}{2}}\left(1-\pi t\cot(t\pi )\right)}
and
{\displaystyle \sum _{n=1}^{\infty }(\zeta (2n+1)-1)\,t^{2n}={\frac {t^{2}}{t^{2}-1}}-{\frac {1}{2}}\left(\psi ^{0}(t)+\psi ^{0}(-t)\right)-\gamma }
with {\displaystyle |t|<2} and where {\displaystyle \psi } and {\displaystyle \gamma } are the polygamma function and Euler's constant, respectively, as well as
{\displaystyle \sum _{n=1}^{\infty }{\frac {\zeta (2n)-1}{n}}\,t^{2n}=\log \left({\dfrac {1-t^{2}}{\operatorname {sinc} (\pi \,t)}}\right)}
all of which are continuous at {\displaystyle t=1}. Other sums include
{\displaystyle \sum _{n=2}^{\infty }{\frac {\zeta (n)-1}{n}}=1-\gamma }
{\displaystyle \sum _{n=1}^{\infty }{\frac {\zeta (2n)-1}{n}}=\ln 2}
{\displaystyle \sum _{n=2}^{\infty }{\frac {\zeta (n)-1}{n}}\left(\left({\tfrac {3}{2}}\right)^{n-1}-1\right)={\frac {1}{3}}\ln \pi }
{\displaystyle \sum _{n=1}^{\infty }{\bigl (}\zeta (4n)-1{\bigr )}={\frac {7}{8}}-{\frac {\pi }{4}}\left({\frac {e^{2\pi }+1}{e^{2\pi }-1}}\right).}
{\displaystyle \sum _{n=2}^{\infty }{\frac {\zeta (n)-1}{n}}\Im {\bigl (}(1+i)^{n}-1-i^{n}{\bigr )}={\frac {\pi }{4}}}
where {\displaystyle \Im } denotes the imaginary part of a complex number.
Another interesting series, which relates to the natural logarithm of the lemniscate constant, is the following:
{\displaystyle \sum _{n=2}^{\infty }\left[{\frac {2(-1)^{n}\zeta (n)}{4^{n}n}}-{\frac {(-1)^{n}\zeta (n)}{2^{n}n}}\right]=\ln \left({\frac {\varpi }{2{\sqrt {2}}}}\right)}
There are yet more formulas in the article Harmonic number.
== Generalizations ==
There are a number of related zeta functions that can be considered to be generalizations of the Riemann zeta function. These include the Hurwitz zeta function
{\displaystyle \zeta (s,q)=\sum _{k=0}^{\infty }{\frac {1}{(k+q)^{s}}}}
(the convergent series representation was given by Helmut Hasse in 1930, cf. Hurwitz zeta function), which coincides with the Riemann zeta function when q = 1 (the lower limit of summation in the Hurwitz zeta function is 0, not 1), the Dirichlet L-functions and the Dedekind zeta function. For other related functions see the articles zeta function and L-function.
The polylogarithm is given by
{\displaystyle \operatorname {Li} _{s}(z)=\sum _{k=1}^{\infty }{\frac {z^{k}}{k^{s}}}}
which coincides with the Riemann zeta function when z = 1.
The Clausen function Cls(θ) can be chosen as the real or imaginary part of Lis(eiθ).
The Lerch transcendent is given by
{\displaystyle \Phi (z,s,q)=\sum _{k=0}^{\infty }{\frac {z^{k}}{(k+q)^{s}}}}
which coincides with the Riemann zeta function when z = 1 and q = 1 (the lower limit of summation in the Lerch transcendent is 0, not 1).
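A quick numerical check of both specializations (a sketch; plain truncation, so convergence at z = 1 is only of order 1/N, while |z| < 1 converges geometrically):

```python
def polylog(z, s, terms=200_000):
    # Li_s(z) = sum_{k>=1} z^k / k^s
    return sum(z ** k / k ** s for k in range(1, terms + 1))

def lerch(z, s, q, terms=200_000):
    # Phi(z, s, q) = sum_{k>=0} z^k / (k+q)^s
    return sum(z ** k / (k + q) ** s for k in range(terms))

print(polylog(1.0, 2.0))     # ~ zeta(2) = 1.644934...
print(lerch(1.0, 2.0, 1.0))  # ~ zeta(2) as well
print(polylog(0.5, 2.0))     # ~ 0.5822405 = pi^2/12 - ln(2)^2/2
```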
The multiple zeta functions are defined by
{\displaystyle \zeta (s_{1},s_{2},\ldots ,s_{n})=\sum _{k_{1}>k_{2}>\cdots >k_{n}>0}{k_{1}}^{-s_{1}}{k_{2}}^{-s_{2}}\cdots {k_{n}}^{-s_{n}}.}
One can analytically continue these functions to the n-dimensional complex space. The special values taken by these functions at positive integer arguments are called multiple zeta values by number theorists and have been connected to many different branches in mathematics and physics.
== See also ==
1 + 2 + 3 + 4 + ···
Arithmetic zeta function
Dirichlet eta function
Generalized Riemann hypothesis
Lehmer pair
Particular values of the Riemann zeta function
Prime zeta function
Renormalization
Riemann–Siegel theta function
ZetaGrid
== References ==
== Sources ==
== External links ==
Media related to Riemann zeta function at Wikimedia Commons
"Zeta-function". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Riemann Zeta Function, in Wolfram Mathworld — an explanation with a more mathematical approach
Tables of selected zeros Archived 17 May 2009 at the Wayback Machine
Prime Numbers Get Hitched A general, non-technical description of the significance of the zeta function in relation to prime numbers.
X-Ray of the Zeta Function Visually oriented investigation of where zeta is real or purely imaginary.
Formulas and identities for the Riemann Zeta function functions.wolfram.com
Riemann Zeta Function and Other Sums of Reciprocal Powers, section 23.2 of Abramowitz and Stegun
Frenkel, Edward. "Million Dollar Math Problem" (video). Brady Haran. Archived from the original on 11 December 2021. Retrieved 11 March 2014.
Mellin transform and the functional equation of the Riemann Zeta function—Computational examples of Mellin transform methods involving the Riemann Zeta Function
Visualizing the Riemann zeta function and analytic continuation a video from 3Blue1Brown | Wikipedia/Riemann_zeta_function |
Sieve theory is a set of general techniques in number theory, designed to count, or more realistically to estimate the size of, sifted sets of integers. The prototypical example of a sifted set is the set of prime numbers up to some prescribed limit X. Correspondingly, the prototypical example of a sieve is the sieve of Eratosthenes, or the more general Legendre sieve. The direct attack on prime numbers using these methods soon reaches apparently insuperable obstacles, in the way of the accumulation of error terms. In one of the major strands of number theory in the twentieth century, ways were found of avoiding some of the difficulties of a frontal attack with a naive idea of what sieving should be.
One successful approach is to approximate a specific sifted set of numbers (e.g. the set of prime numbers) by another, simpler set (e.g. the set of almost prime numbers), which is typically somewhat larger than the original set, and easier to analyze. More sophisticated sieves also do not work directly with sets per se, but instead count them according to carefully chosen weight functions on these sets (options for giving some elements of these sets more "weight" than others). Furthermore, in some modern applications, sieves are used not to estimate the size of a sifted set, but to produce a function that is large on the set and mostly small outside it, while being easier to analyze than the characteristic function of the set.
The term sieve was first used by the Norwegian mathematician Viggo Brun in 1915. However, Brun's work was inspired by the work of the French mathematician Jean Merlin, who died in World War I and of whose manuscripts only two survived.
== Basic sieve theory ==
For information on notation, see the end of this section. We follow the Ansatz from Opera de Cribro by John Friedlander and Henryk Iwaniec.
We start with some countable sequence of non-negative numbers {\displaystyle {\mathcal {A}}=(a_{n})}. In the most basic case this sequence is just the indicator function {\displaystyle a_{n}=1_{A}(n)} of some set {\displaystyle A=\{s:s\leq x\}} we want to sieve. However, this abstraction allows for more general situations. Next we introduce a general set of prime numbers called the sifting range {\displaystyle {\mathcal {P}}\subseteq \mathbb {P} } and their product up to {\displaystyle z} as a function
{\displaystyle P(z)=\prod \limits _{p\in {\mathcal {P}},p<z}p}.
The goal of sieve theory is to estimate the sifting function
{\displaystyle S({\mathcal {A}},{\mathcal {P}},z)=\sum \limits _{n\leq x,{\text{gcd}}(n,P(z))=1}a_{n}.}
In the case of {\displaystyle a_{n}=1_{A}(n)} this just counts the cardinality of a subset {\displaystyle A_{\operatorname {sift} }\subseteq A} of numbers that are coprime to the prime factors of {\displaystyle P(z)}.
=== The inclusion–exclusion principle ===
For {\displaystyle {\mathcal {P}}} define
{\displaystyle A_{\operatorname {sift} }:=\{a\in A|(a,p_{1}\cdots p_{k})=1\},\quad p_{1},\dots ,p_{k}\in {\mathcal {P}}}
and for each prime {\displaystyle p\in {\mathcal {P}}} denote the subset {\displaystyle E_{p}\subseteq A} of multiples {\displaystyle E_{p}:=\{pn:n\in \mathbb {N} \}} and let {\displaystyle |E_{p}|} be its cardinality.
We now introduce a way to calculate the cardinality of {\displaystyle A_{\operatorname {sift} }}. For this, the sifting range {\displaystyle {\mathcal {P}}} will be the concrete set of primes {\displaystyle {\mathcal {P}}:=\{2,3,5,7,11,13\dots \}}.
If one wants to calculate the cardinality of {\displaystyle A_{\operatorname {sift} }}, one can apply the inclusion–exclusion principle. This works as follows: first one removes from the cardinality {\displaystyle |A|} the cardinalities {\displaystyle |E_{2}|} and {\displaystyle |E_{3}|}. Since the numbers divisible by both {\displaystyle 2} and {\displaystyle 3} have now been removed twice, one has to add back the cardinality {\displaystyle |E_{6}|}. In the next step one removes {\displaystyle |E_{5}|} and adds {\displaystyle |E_{10}|} and {\displaystyle |E_{15}|} again. Additionally one now has to remove {\displaystyle |E_{30}|}, i.e. the cardinality of all numbers divisible by {\displaystyle 2,3} and {\displaystyle 5}. This leads to the inclusion–exclusion principle
{\displaystyle |A_{\operatorname {sift} }|=|A|-|E_{2}|-|E_{3}|+|E_{6}|-|E_{5}|+|E_{10}|+|E_{15}|-|E_{30}|+\cdots }
Notice that one can write this as
{\displaystyle |A_{\operatorname {sift} }|=\sum \limits _{d|P}\mu (d)|E_{d}|}
where {\displaystyle \mu } is the Möbius function, {\displaystyle P:=\prod \limits _{p\in {\mathcal {P}}}p} is the product of all primes in {\displaystyle {\mathcal {P}}}, and {\displaystyle E_{1}:=A}.
=== Legendre's identity ===
We can rewrite the sifting function with Legendre's identity
{\displaystyle S({\mathcal {A}},{\mathcal {P}},z)=\sum \limits _{d\mid P(z)}\mu (d)A_{d}(x)}
by using the Möbius function and the functions {\displaystyle A_{d}(x)} induced by the elements of {\displaystyle {\mathcal {P}}}:
d
(
x
)
=
∑
n
≤
x
,
n
≡
0
(
mod
d
)
a
n
.
{\displaystyle A_{d}(x)=\sum \limits _{n\leq x,n\equiv 0{\pmod {d}}}a_{n}.}
==== Example ====
Let {\displaystyle z=7} and {\displaystyle {\mathcal {P}}=\mathbb {P} }. The Möbius function is negative for every prime, so we get
{\displaystyle {\begin{aligned}S({\mathcal {A}},\mathbb {P} ,7)&=A_{1}(x)-A_{2}(x)-A_{3}(x)-A_{5}(x)+A_{6}(x)+A_{10}(x)+A_{15}(x)-A_{30}(x).\end{aligned}}}
=== Approximation of the congruence sum ===
One assumes then that {\displaystyle A_{d}(x)} can be written as
{\displaystyle A_{d}(x)=g(d)X+r_{d}(x)}
where {\displaystyle g(d)} is a density, meaning a multiplicative function such that
{\displaystyle g(1)=1,\qquad 0\leq g(p)<1\qquad p\in \mathbb {P} }
and {\displaystyle X} is an approximation of {\displaystyle A_{1}(x)} and {\displaystyle r_{d}(x)} is some remainder term. The sifting function becomes
{\displaystyle S({\mathcal {A}},{\mathcal {P}},z)=X\sum \limits _{d\mid P(z)}\mu (d)g(d)+\sum \limits _{d\mid P(z)}\mu (d)r_{d}(x)}
or in short
{\displaystyle S({\mathcal {A}},{\mathcal {P}},z)=XG(x,z)+R(x,z).}
One tries then to estimate the sifting function {\displaystyle S} by finding upper and lower bounds for {\displaystyle G} and {\displaystyle R}, respectively.
The partial sum of the sifting function alternately over- and undercounts, so the remainder term will be huge. Brun's idea to improve this was to replace {\displaystyle \mu (d)} in the sifting function with a weight sequence {\displaystyle (\lambda _{d})} consisting of restricted Möbius functions. Choosing two appropriate sequences {\displaystyle (\lambda _{d}^{-})} and {\displaystyle (\lambda _{d}^{+})} and denoting the corresponding sifting functions by {\displaystyle S^{-}} and {\displaystyle S^{+}}, one can get lower and upper bounds for the original sifting function:
{\displaystyle S^{-}\leq S\leq S^{+}.}
Since {\displaystyle g} is multiplicative, one can also work with the identity
{\displaystyle \sum \limits _{d\mid n}\mu (d)g(d)=\prod \limits _{p\mid n,\;p\in \mathbb {P} }(1-g(p)),\quad \forall \;n\in \mathbb {N} .}
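For a concrete instance of this identity (a sketch, with a choice of g not taken from the article): take g(d) = 1/d, so the left side is Σ_{d|n} μ(d)/d and the right side is ∏_{p|n}(1 − 1/p) = φ(n)/n:

```python
def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def prime_factors(n):
    """Set of distinct prime factors of n."""
    ps, m, p = set(), n, 2
    while p * p <= m:
        if m % p == 0:
            ps.add(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        ps.add(m)
    return ps

n = 60
lhs = sum(mobius(d) / d for d in range(1, n + 1) if n % d == 0)
rhs = 1.0
for p in prime_factors(n):
    rhs *= 1 - 1 / p
print(lhs, rhs)  # both 4/15 = 0.2666...
```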
Notation: a word of caution regarding the notation. In the literature one often identifies the sequence {\displaystyle {\mathcal {A}}} with the set {\displaystyle A} itself, writing {\displaystyle {\mathcal {A}}=\{s:s\leq x\}} to define the sequence {\displaystyle {\mathcal {A}}=(a_{n})}. Also, the sum {\displaystyle A_{d}(x)} is sometimes notated as the cardinality {\displaystyle |A_{d}(x)|} of some set {\displaystyle A_{d}(x)}, whereas here {\displaystyle A_{d}(x)} already denotes that cardinality. We used {\displaystyle \mathbb {P} } to denote the set of primes and {\displaystyle (a,b)} for the greatest common divisor of {\displaystyle a} and {\displaystyle b}.
== Types of sieving ==
Modern sieves include the Brun sieve, the Selberg sieve, the Turán sieve, the large sieve, the larger sieve and the Goldston–Pintz–Yıldırım sieve. One of the original purposes of sieve theory was to try to prove conjectures in number theory such as the twin prime conjecture. While the original broad aims of sieve theory still are largely unachieved, there have been some partial successes, especially in combination with other number theoretic tools. Highlights include:
Brun's theorem, which shows that the sum of the reciprocals of the twin primes converges (whereas the sum of the reciprocals of all primes diverges);
Chen's theorem, which shows that there are infinitely many primes p such that p + 2 is either a prime or a semiprime (the product of two primes); a closely related theorem of Chen Jingrun asserts that every sufficiently large even number is the sum of a prime and another number which is either a prime or a semiprime. These can be considered to be near-misses to the twin prime conjecture and the Goldbach conjecture respectively.
The fundamental lemma of sieve theory, which asserts that if one is sifting a set of N numbers, then one can accurately estimate the number of elements left in the sieve after {\displaystyle N^{\varepsilon }} iterations provided that {\displaystyle \varepsilon } is sufficiently small (fractions such as 1/10 are quite typical here). This lemma is usually too weak to sieve out primes (which generally require something like {\displaystyle N^{1/2}} iterations), but can be enough to obtain results regarding almost primes.
The Friedlander–Iwaniec theorem, which asserts that there are infinitely many primes of the form {\displaystyle a^{2}+b^{4}}.
Zhang's theorem (Zhang 2014), which shows that there are infinitely many pairs of primes within a bounded distance. The Maynard–Tao theorem (Maynard 2015) generalizes Zhang's theorem to arbitrarily long sequences of primes.
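The inclusion–exclusion principle underlying these sieves can be illustrated with the simplest case, the Legendre (Eratosthenes–Legendre) sieve. The following Python sketch is illustrative only and not drawn from the article's sources; the helper names are hypothetical. It counts the integers up to x with no prime factor at most z by summing μ(d)⌊x/d⌋ over the squarefree divisors d of the product of the primes up to z:

```python
def small_primes(z):
    """Primes up to z, by the sieve of Eratosthenes."""
    is_prime = [True] * (z + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(z ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, z + 1, p):
                is_prime[m] = False
    return [p for p in range(2, z + 1) if is_prime[p]]

def legendre_sieve(x, z):
    """Count n <= x with no prime factor <= z by inclusion-exclusion:
    S = sum over squarefree d dividing P(z) of mu(d) * floor(x/d),
    enumerating each subset of the primes up to z as a bitmask."""
    primes = small_primes(z)
    count = 0
    for mask in range(1 << len(primes)):
        d, bits = 1, 0
        for i, p in enumerate(primes):
            if mask >> i & 1:
                d *= p
                bits += 1
        count += (-1) ** bits * (x // d)   # mu(d) = (-1)^bits
    return count
```

For example, `legendre_sieve(100, 10)` counts 1 together with the primes between 10 and 100; the exponential number of terms is exactly the blow-up that the modern sieves listed above are designed to control.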
== Techniques of sieve theory ==
The techniques of sieve theory can be quite powerful, but they seem to be limited by an obstacle known as the parity problem, which roughly speaking asserts that sieve theory methods have extreme difficulty distinguishing between numbers with an odd number of prime factors and numbers with an even number of prime factors. This parity problem is still not very well understood.
Compared with other methods in number theory, sieve theory is comparatively elementary, in the sense that it does not necessarily require sophisticated concepts from either algebraic number theory or analytic number theory. Nevertheless, the more advanced sieves can still get very intricate and delicate (especially when combined with other deep techniques in number theory), and entire textbooks have been devoted to this single subfield of number theory; a classic reference is (Halberstam & Richert 1974) and a more modern text is (Iwaniec & Friedlander 2010).
The sieve methods discussed in this article are not closely related to the integer factorization sieve methods such as the quadratic sieve and the general number field sieve. Those factorization methods use the idea of the sieve of Eratosthenes to determine efficiently which members of a list of numbers can be completely factored into small primes.
== Literature ==
Cojocaru, Alina Carmen; Murty, M. Ram (2006), An introduction to sieve methods and their applications, London Mathematical Society Student Texts, vol. 66, Cambridge University Press, ISBN 0-521-84816-4, MR 2200366
Motohashi, Yoichi (1983), Lectures on Sieve Methods and Prime Number Theory, Tata Institute of Fundamental Research Lectures on Mathematics and Physics, vol. 72, Berlin: Springer-Verlag, ISBN 3-540-12281-8, MR 0735437
Greaves, George (2001), Sieves in number theory, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), vol. 43, Berlin: Springer-Verlag, doi:10.1007/978-3-662-04658-6, ISBN 3-540-41647-1, MR 1836967
Harman, Glyn (2007). Prime-detecting sieves. London Mathematical Society Monographs. Vol. 33. Princeton, NJ: Princeton University Press. ISBN 978-0-691-12437-7. MR 2331072. Zbl 1220.11118.
Halberstam, Heini; Richert, Hans-Egon (1974). Sieve Methods. London Mathematical Society Monographs. Vol. 4. London-New York: Academic Press. ISBN 0-12-318250-6. MR 0424730.
Iwaniec, Henryk; Friedlander, John (2010), Opera de cribro, American Mathematical Society Colloquium Publications, vol. 57, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4970-5, MR 2647984
Hooley, Christopher (1976), Applications of sieve methods to the theory of numbers, Cambridge Tracts in Mathematics, vol. 70, Cambridge-New York-Melbourne: Cambridge University Press, ISBN 0-521-20915-3, MR 0404173
Maynard, James (2015). "Small gaps between primes". Annals of Mathematics. 181 (1): 383–413. arXiv:1311.4600. doi:10.4007/annals.2015.181.1.7. MR 3272929.
Tenenbaum, Gérald (1995), Introduction to Analytic and Probabilistic Number Theory, Cambridge studies in advanced mathematics, vol. 46, Translated from the second French edition (1995) by C. B. Thomas, Cambridge University Press, pp. 56–79, ISBN 0-521-41261-7, MR 1342300
Zhang, Yitang (2014). "Bounded gaps between primes". Annals of Mathematics. 179 (3): 1121–1174. doi:10.4007/annals.2014.179.3.7. MR 3171761.
== External links ==
Bredikhin, B.M. (2001) [1994], "Sieve method", Encyclopedia of Mathematics, EMS Press
== References ==
In mathematics, the ideal class group (or class group) of an algebraic number field {\displaystyle K} is the quotient group {\displaystyle J_{K}/P_{K}} where {\displaystyle J_{K}} is the group of fractional ideals of the ring of integers of {\displaystyle K}, and {\displaystyle P_{K}} is its subgroup of principal ideals. The class group is a measure of the extent to which unique factorization fails in the ring of integers of {\displaystyle K}. The order of the group, which is finite, is called the class number of {\displaystyle K}.
The theory extends to Dedekind domains and their fields of fractions, for which the multiplicative properties are intimately tied to the structure of the class group. For example, the class group of a Dedekind domain is trivial if and only if the ring is a unique factorization domain.
== History and origin of the ideal class group ==
Ideal class groups (or, rather, what were effectively ideal class groups) were studied some time before the idea of an ideal was formulated. These groups appeared in the theory of quadratic forms: in the case of binary integral quadratic forms, as put into something like a final form by Carl Friedrich Gauss, a composition law was defined on certain equivalence classes of forms. This gave a finite abelian group, as was recognised at the time.
Later Ernst Kummer was working towards a theory of cyclotomic fields. It had been realised (probably by several people) that failure to complete proofs in the general case of Fermat's Last Theorem by factorisation using the roots of unity was for a very good reason: the failure of unique factorization – i.e., the fundamental theorem of arithmetic – to hold in the rings generated by those roots of unity was a major obstacle. Out of Kummer's work came, for the first time, a study of the obstruction to the factorization. We now recognise this as part of the ideal class group: in fact Kummer had isolated the {\displaystyle p}-torsion in that group for the field of {\displaystyle p}-th roots of unity, for any prime number {\displaystyle p}, as the reason for the failure of the standard method of attack on the Fermat problem (see regular prime).
Somewhat later again Richard Dedekind formulated the concept of an ideal, Kummer having worked in a different way. At this point the existing examples could be unified. It was shown that while rings of algebraic integers do not always have unique factorization into primes (because they need not be principal ideal domains), they do have the property that every proper ideal admits a unique factorization as a product of prime ideals (that is, every ring of algebraic integers is a Dedekind domain). The size of the ideal class group can be considered as a measure for the deviation of a ring from being a principal ideal domain; a ring is a principal ideal domain if and only if it has a trivial ideal class group.
== Definition ==
If {\displaystyle R} is an integral domain, define a relation {\displaystyle \sim } on nonzero fractional ideals of {\displaystyle R} by {\displaystyle I\sim J} whenever there exist nonzero elements {\displaystyle a} and {\displaystyle b} of {\displaystyle R} such that {\displaystyle (a)I=(b)J}. It is easily shown that this is an equivalence relation. The equivalence classes are called the ideal classes of {\displaystyle R}.
Ideal classes can be multiplied: if {\displaystyle [I]} denotes the equivalence class of the ideal {\displaystyle I}, then the multiplication {\displaystyle [I][J]=[IJ]} is well-defined and commutative. The principal ideals form the ideal class {\displaystyle [R]}, which serves as an identity element for this multiplication. Thus a class {\displaystyle [I]} has an inverse {\displaystyle [J]} if and only if there is an ideal {\displaystyle J} such that {\displaystyle IJ} is a principal ideal. In general, such a {\displaystyle J} may not exist, and consequently the set of ideal classes of {\displaystyle R} may only be a monoid.
However, if {\displaystyle R} is the ring of algebraic integers in an algebraic number field, or more generally a Dedekind domain, the multiplication defined above turns the set of fractional ideal classes into an abelian group, the ideal class group of {\displaystyle R}. The group property of existence of inverse elements follows easily from the fact that, in a Dedekind domain, every non-zero ideal (except {\displaystyle R}) is a product of prime ideals.
== Properties ==
The ideal class group is trivial (i.e. has only one element) if and only if all ideals of {\displaystyle R} are principal. In this sense, the ideal class group measures how far {\displaystyle R} is from being a principal ideal domain, and hence from satisfying unique prime factorization (Dedekind domains are unique factorization domains if and only if they are principal ideal domains).
The number of ideal classes—the class number of {\displaystyle R}—may be infinite in general. In fact, every abelian group is isomorphic to the ideal class group of some Dedekind domain. But if {\displaystyle R} is a ring of algebraic integers, then the class number is always finite. This is one of the main results of classical algebraic number theory.
Computation of the class group is hard, in general; it can be done by hand for the ring of integers in an algebraic number field of small discriminant, using Minkowski's bound. This result gives a bound, depending on the ring, such that every ideal class contains an ideal of norm less than the bound. In general the bound is not sharp enough to make the calculation practical for fields with large discriminant, but computers are well suited to the task.
The mapping from rings of integers {\displaystyle R} to their corresponding class groups is functorial, and the class group can be subsumed under the heading of algebraic K-theory, with {\displaystyle K_{0}(R)} being the functor assigning to {\displaystyle R} its ideal class group; more precisely, {\displaystyle K_{0}(R)=\mathbb {Z} \times C(R)}, where {\displaystyle C(R)} is the class group. Higher {\displaystyle K}-groups can also be employed and interpreted arithmetically in connection to rings of integers.
== Relation with the group of units ==
It was remarked above that the ideal class group provides part of the answer to the question of how much ideals in a Dedekind domain behave like elements. The other part of the answer is provided by the group of units of the Dedekind domain, since passage from principal ideals to their generators requires the use of units (and this is the rest of the reason for introducing the concept of fractional ideal, as well).
Define a map from {\displaystyle K^{\times }}, the multiplicative group of the field of fractions of {\displaystyle R}, to the set of all nonzero fractional ideals of {\displaystyle R} by sending every element to the principal (fractional) ideal it generates. This is a group homomorphism; its kernel is the group of units of {\displaystyle R}, and its cokernel is the ideal class group of {\displaystyle R}. The failure of these groups to be trivial is a measure of the failure of the map to be an isomorphism: that is the failure of ideals to act like ring elements, that is to say, like numbers.
== Examples of ideal class groups ==
The rings {\displaystyle \mathbb {Z} ,\mathbb {Z} [i]}, and {\displaystyle \mathbb {Z} [\omega ]}, respectively the integers, Gaussian integers, and Eisenstein integers, are all principal ideal domains (and in fact are all Euclidean domains), and so have class number 1: i.e., they have trivial ideal class groups.
If {\displaystyle K} is a field, then the polynomial ring {\displaystyle K[x_{1},x_{2},x_{3},\dots ]} is an integral domain. It has a countably infinite set of ideal classes.
=== Class numbers of quadratic fields ===
If {\displaystyle d} is a square-free integer (a product of distinct primes) other than 1, then {\displaystyle \mathbb {Q} ({\sqrt {d}})} is a quadratic extension of {\displaystyle \mathbb {Q} }. If {\displaystyle d<0}, then the class number of the ring {\displaystyle R} of algebraic integers of {\displaystyle \mathbb {Q} ({\sqrt {d}})} is equal to 1 for precisely the following values of {\displaystyle d}: {\displaystyle d=-1,-2,-3,-7,-11,-19,-43,-67,-163}. This result was first conjectured by Gauss and proven by Kurt Heegner, although Heegner's proof was not believed until Harold Stark gave a later proof in 1967 (see Stark–Heegner theorem). This is a special case of the famous class number problem.
If, on the other hand, {\displaystyle d>0}, then it is unknown whether there are infinitely many fields {\displaystyle \mathbb {Q} ({\sqrt {d}})} with class number 1. Computational results indicate that there are a great many such fields. However, it is not even known if there are infinitely many number fields with class number 1.
For {\displaystyle d<0}, the ideal class group of {\displaystyle \mathbb {Q} ({\sqrt {d}})} is isomorphic to the class group of integral binary quadratic forms of discriminant equal to the discriminant of {\displaystyle \mathbb {Q} ({\sqrt {d}})}. For {\displaystyle d>0}, the ideal class group may be half the size, since the class group of integral binary quadratic forms is isomorphic to the narrow class group of {\displaystyle \mathbb {Q} ({\sqrt {d}})}.
For real quadratic integer rings, the class number is given in OEIS A003649; for the imaginary case, it is given in OEIS A000924.
==== Example of a non-trivial class group ====
The quadratic integer ring {\displaystyle R=\mathbb {Z} [{\sqrt {-5}}]} is the ring of integers of {\displaystyle \mathbb {Q} ({\sqrt {-5}})}. It does not possess unique factorization; in fact the class group of {\displaystyle R} is cyclic of order 2. Indeed, the ideal {\displaystyle J=(2,1+{\sqrt {-5}})} is not principal, which can be proved by contradiction as follows:
{\displaystyle R} has a multiplicative norm function defined by {\displaystyle N(a+b{\sqrt {-5}})=a^{2}+5b^{2}}, which satisfies {\displaystyle N(u)=1} if and only if {\displaystyle u} is a unit in {\displaystyle R}.
Firstly, {\displaystyle J\neq R}, because the quotient ring of {\displaystyle R} modulo the ideal {\displaystyle (1+{\sqrt {-5}})} is isomorphic to {\displaystyle \mathbb {Z} /6\mathbb {Z} }, so that the quotient ring of {\displaystyle R} modulo {\displaystyle J} is isomorphic to {\displaystyle \mathbb {Z} /2\mathbb {Z} } (the image of {\displaystyle J} in {\displaystyle \mathbb {Z} /6\mathbb {Z} } is the ideal generated by 2). Now if {\displaystyle J=(a)} were principal (that is, generated by an element {\displaystyle a} of {\displaystyle R}), then {\displaystyle a} would divide both {\displaystyle 2} and {\displaystyle 1+{\sqrt {-5}}}. Then the norm {\displaystyle N(a)} would divide both {\displaystyle N(2)=4} and {\displaystyle N(1+{\sqrt {-5}})=6}, so {\displaystyle N(a)} would divide 2. If {\displaystyle N(a)=1} then {\displaystyle a} is a unit and so {\displaystyle J=R}, a contradiction. But {\displaystyle N(a)} cannot be 2 either, because {\displaystyle R} has no elements of norm 2: the Diophantine equation {\displaystyle b^{2}+5c^{2}=2} has no solutions in integers, as it has no solutions modulo 5.
One also computes that {\displaystyle J^{2}=(2)}, which is principal, so the class of {\displaystyle J} in the ideal class group has order two. Showing that there aren't any other ideal classes requires more effort.
The fact that this {\displaystyle J} is not principal is also related to the fact that the element {\displaystyle 6} has two distinct factorisations into irreducibles: {\displaystyle 6=2\times 3=(1+{\sqrt {-5}})(1-{\sqrt {-5}})}.
== Connections to class field theory ==
Class field theory is a branch of algebraic number theory which seeks to classify all the abelian extensions of a given algebraic number field, meaning Galois extensions with abelian Galois group. A particularly beautiful example is found in the Hilbert class field of a number field, which can be defined as the maximal unramified abelian extension of such a field. The Hilbert class field L of a number field K is unique and has the following properties:
Every ideal of the ring of integers of K becomes principal in L, i.e., if I is an integral ideal of K then the image of I is a principal ideal in L.
L is a Galois extension of K with Galois group isomorphic to the ideal class group of K.
Neither property is particularly easy to prove.
== See also ==
Class number formula
Class number problem
Brauer–Siegel theorem—an asymptotic formula for the class number
List of number fields with class number one
Principal ideal domain
Algebraic K-theory
Galois theory
Fermat's Last Theorem
Narrow class group
Picard group—a generalisation of the class group appearing in algebraic geometry
Arakelov class group
== Notes ==
== References ==
Claborn, Luther (1966), "Every abelian group is a class group", Pacific Journal of Mathematics, 18 (2): 219–222, doi:10.2140/pjm.1966.18.219
Fröhlich, Albrecht; Taylor, Martin (1993), Algebraic number theory, Cambridge Studies in Advanced Mathematics, vol. 27, Cambridge University Press, ISBN 978-0-521-43834-6, MR 1215934
Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
In number theory, an arithmetic, arithmetical, or number-theoretic function is generally any function whose domain is the set of positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n". There is a larger class of number-theoretic functions that do not fit this definition, for example, the prime-counting functions. This article provides links to functions of both classes.
An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n.
Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum.
== Multiplicative and additive functions ==
An arithmetic function a is
completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n;
completely multiplicative if a(1) = 1 and a(mn) = a(m)a(n) for all natural numbers m and n;
Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them.
Then an arithmetic function a is
additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n;
multiplicative if a(1) = 1 and a(mn) = a(m)a(n) for all coprime natural numbers m and n.
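A small computation makes the distinction concrete. In the sketch below (illustrative helper names, not from the article), the divisor-counting function serves as an example of a function that is multiplicative but not completely multiplicative, while the count of prime factors with multiplicity (the function Ω described later in this article) is completely additive:

```python
def d(n):
    """Number of positive divisors of n (brute force)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, p = 0, 2
    while n > 1:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count

# d is multiplicative: d(mn) = d(m)d(n) whenever gcd(m, n) = 1 ...
assert d(4 * 9) == d(4) * d(9)
# ... but not completely multiplicative: d(4) = 3, while d(2)d(2) = 4
assert d(2 * 2) != d(2) * d(2)

# big_omega is completely additive: no coprimality assumption is needed
assert big_omega(12 * 18) == big_omega(12) + big_omega(18)
```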
== Notation ==
In this article, {\textstyle \sum _{p}f(p)} and {\textstyle \prod _{p}f(p)} mean that the sum or product is over all prime numbers:
{\displaystyle \sum _{p}f(p)=f(2)+f(3)+f(5)+\cdots }
and
{\displaystyle \prod _{p}f(p)=f(2)f(3)f(5)\cdots .}
Similarly, {\textstyle \sum _{p^{k}}f(p^{k})} and {\textstyle \prod _{p^{k}}f(p^{k})} mean that the sum or product is over all prime powers with strictly positive exponent (so k = 0 is not included):
{\displaystyle \sum _{p^{k}}f(p^{k})=\sum _{p}\sum _{k>0}f(p^{k})=f(2)+f(3)+f(4)+f(5)+f(7)+f(8)+f(9)+\cdots .}
The notations {\textstyle \sum _{d\mid n}f(d)} and {\textstyle \prod _{d\mid n}f(d)} mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if n = 12, then
{\displaystyle \prod _{d\mid 12}f(d)=f(1)f(2)f(3)f(4)f(6)f(12).}
The notations can be combined: {\textstyle \sum _{p\mid n}f(p)} and {\textstyle \prod _{p\mid n}f(p)} mean that the sum or product is over all prime divisors of n. For example, if n = 18, then
{\displaystyle \sum _{p\mid 18}f(p)=f(2)+f(3),}
and similarly {\textstyle \sum _{p^{k}\mid n}f(p^{k})} and {\textstyle \prod _{p^{k}\mid n}f(p^{k})} mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then
{\displaystyle \prod _{p^{k}\mid 24}f(p^{k})=f(2)f(3)f(4)f(8).}
== Ω(n), ω(n), νp(n) – prime power decomposition ==
The fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes:
{\displaystyle n=p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}}
where p1 < p2 < ... < pk are primes and the aj are positive integers. (1 is given by the empty product.)
It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the p-adic valuation νp(n) to be the exponent of the highest power of the prime p that divides n. That is, if p is one of the pi then νp(n) = ai, otherwise it is zero. Then
{\displaystyle n=\prod _{p}p^{\nu _{p}(n)}.}
In terms of the above, the prime omega functions ω and Ω are defined by ω(n) = k, the number of distinct primes dividing n, and Ω(n) = a1 + a2 + ⋯ + ak, the number of prime factors of n counted with multiplicity.
To avoid repetition, formulas for the functions listed in this article are, whenever possible, given in terms of n and the corresponding pi, ai, ω, and Ω.
== Multiplicative functions ==
=== σk(n), τ(n), d(n) – divisor sums ===
σk(n) is the sum of the kth powers of the positive divisors of n, including 1 and n, where k is a complex number.
σ1(n), the sum of the (positive) divisors of n, is usually denoted by σ(n).
Since a positive number to the zero power is one, σ0(n) is the number of (positive) divisors of n; it is usually denoted by d(n) or τ(n) (from the German Teiler, "divisors").
{\displaystyle \sigma _{k}(n)=\prod _{i=1}^{\omega (n)}{\frac {p_{i}^{(a_{i}+1)k}-1}{p_{i}^{k}-1}}=\prod _{i=1}^{\omega (n)}\left(1+p_{i}^{k}+p_{i}^{2k}+\cdots +p_{i}^{a_{i}k}\right).}
Setting k = 0 in the second product gives
{\displaystyle \tau (n)=d(n)=(1+a_{1})(1+a_{2})\cdots (1+a_{\omega (n)}).}
=== φ(n) – Euler totient function ===
φ(n), the Euler totient function, is the number of positive integers not greater than n that are coprime to n.
{\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right)=n\left({\frac {p_{1}-1}{p_{1}}}\right)\left({\frac {p_{2}-1}{p_{2}}}\right)\cdots \left({\frac {p_{\omega (n)}-1}{p_{\omega (n)}}}\right).}
=== Jk(n) – Jordan totient function ===
Jk(n), the Jordan totient function, is the number of k-tuples of positive integers all less than or equal to n that form a coprime (k + 1)-tuple together with n. It is a generalization of Euler's totient, φ(n) = J1(n).
{\displaystyle J_{k}(n)=n^{k}\prod _{p\mid n}\left(1-{\frac {1}{p^{k}}}\right)=n^{k}\left({\frac {p_{1}^{k}-1}{p_{1}^{k}}}\right)\left({\frac {p_{2}^{k}-1}{p_{2}^{k}}}\right)\cdots \left({\frac {p_{\omega (n)}^{k}-1}{p_{\omega (n)}^{k}}}\right).}
=== μ(n) – Möbius function ===
μ(n), the Möbius function, is important because of the Möbius inversion formula. See § Dirichlet convolution, below.
{\displaystyle \mu (n)={\begin{cases}(-1)^{\omega (n)}=(-1)^{\Omega (n)}&{\text{if }}\;\omega (n)=\Omega (n)\\0&{\text{if }}\;\omega (n)\neq \Omega (n).\end{cases}}}
This implies that μ(1) = 1. (Because Ω(1) = ω(1) = 0.)
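The identity that drives Möbius inversion — the sum of μ(d) over the divisors of n vanishes unless n = 1 — can be verified numerically. The sketch below is illustrative (helper name hypothetical):

```python
def mobius(n):
    """mu(n): (-1)^(number of prime factors) if n is squarefree, else 0."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:        # p^2 divided the original n: not squarefree
                return 0
            result = -result
        p += 1
    if n > 1:                     # leftover prime factor
        result = -result
    return result

# Fundamental identity: sum of mu(d) over divisors d of n is 1 iff n = 1
for n in range(1, 200):
    total = sum(mobius(d) for d in range(1, n + 1) if n % d == 0)
    assert total == (1 if n == 1 else 0)
```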
=== τ(n) – Ramanujan tau function ===
τ(n), the Ramanujan tau function, is defined by its generating function identity:
{\displaystyle \sum _{n\geq 1}\tau (n)q^{n}=q\prod _{n\geq 1}(1-q^{n})^{24}.}
Although it is hard to say exactly what "arithmetical property of n" it "expresses" (τ(n) is (2π)^{−12} times the nth Fourier coefficient in the q-expansion of the modular discriminant function), it is included among the arithmetical functions because it is multiplicative and it occurs in identities involving certain σk(n) and rk(n) functions (because these are also coefficients in the expansion of modular forms).
=== cq(n) – Ramanujan's sum ===
cq(n), Ramanujan's sum, is the sum of the nth powers of the primitive qth roots of unity:
{\displaystyle c_{q}(n)=\sum _{\stackrel {1\leq a\leq q}{\gcd(a,q)=1}}e^{2\pi i{\tfrac {a}{q}}n}.}
Even though it is defined as a sum of complex numbers (irrational for most values of q), it is an integer. For a fixed value of n it is multiplicative in q:
If q and r are coprime, then
{\displaystyle c_{q}(n)c_{r}(n)=c_{qr}(n).}
=== ψ(n) – Dedekind psi function ===
The Dedekind psi function, used in the theory of modular functions, is defined by the formula
{\displaystyle \psi (n)=n\prod _{p|n}\left(1+{\frac {1}{p}}\right).}
== Completely multiplicative functions ==
=== λ(n) – Liouville function ===
λ(n), the Liouville function, is defined by
{\displaystyle \lambda (n)=(-1)^{\Omega (n)}.}
=== χ(n) – characters ===
All Dirichlet characters χ(n) are completely multiplicative. Two characters have special notations:
The principal character (mod n) is denoted by χ0(a) (or χ1(a)). It is defined as
{\displaystyle \chi _{0}(a)={\begin{cases}1&{\text{if }}\gcd(a,n)=1,\\0&{\text{if }}\gcd(a,n)\neq 1.\end{cases}}}
The quadratic character (mod n) is denoted by the Jacobi symbol for odd n (it is not defined for even n):
{\displaystyle \left({\frac {a}{n}}\right)=\left({\frac {a}{p_{1}}}\right)^{a_{1}}\left({\frac {a}{p_{2}}}\right)^{a_{2}}\cdots \left({\frac {a}{p_{\omega (n)}}}\right)^{a_{\omega (n)}}.}
In this formula {\displaystyle ({\tfrac {a}{p}})} is the Legendre symbol, defined for all integers a and all odd primes p by
{\displaystyle \left({\frac {a}{p}}\right)={\begin{cases}\;\;\,0&{\text{if }}a\equiv 0{\pmod {p}},\\+1&{\text{if }}a\not \equiv 0{\pmod {p}}{\text{ and for some integer }}x,\;a\equiv x^{2}{\pmod {p}}\\-1&{\text{if there is no such }}x.\end{cases}}}
Following the normal convention for the empty product,
{\displaystyle \left({\frac {a}{1}}\right)=1.}
== Additive functions ==
=== ω(n) – distinct prime divisors ===
ω(n), defined above as the number of distinct primes dividing n, is additive (see Prime omega function).
== Completely additive functions ==
=== Ω(n) – prime divisors ===
Ω(n), defined above as the number of prime factors of n counted with multiplicities, is completely additive (see Prime omega function).
=== νp(n) – p-adic valuation of an integer n ===
For a fixed prime p, νp(n), defined above as the exponent of the largest power of p dividing n, is completely additive.
=== Logarithmic derivative ===
{\displaystyle \operatorname {ld} (n)={\frac {D(n)}{n}}=\sum _{\stackrel {p\mid n}{p{\text{ prime}}}}{\frac {v_{p}(n)}{p}}}, where {\displaystyle D(n)} is the arithmetic derivative.
== Neither multiplicative nor additive ==
=== π(x), Π(x), ϑ(x), ψ(x) – prime-counting functions ===
These important functions (which are not arithmetic functions) are defined for non-negative real arguments, and are used in the various statements and proofs of the prime number theorem. They are summation functions (see the main section just below) of arithmetic functions which are neither multiplicative nor additive.
π(x), the prime-counting function, is the number of primes not exceeding x. It is the summation function of the characteristic function of the prime numbers.
{\displaystyle \pi (x)=\sum _{p\leq x}1}
A related function counts prime powers with weight 1 for primes, 1/2 for their squares, 1/3 for cubes, etc. It is the summation function of the arithmetic function which takes the value 1/k on integers which are the kth power of some prime number, and the value 0 on other integers.
{\displaystyle \Pi (x)=\sum _{p^{k}\leq x}{\frac {1}{k}}.}
ϑ(x) and ψ(x), the Chebyshev functions, are defined as sums of the natural logarithms of the primes not exceeding x.
{\displaystyle \vartheta (x)=\sum _{p\leq x}\log p,}
{\displaystyle \psi (x)=\sum _{p^{k}\leq x}\log p.}
The second Chebyshev function ψ(x) is the summation function of the von Mangoldt function just below.
=== Λ(n) – von Mangoldt function ===
Λ(n), the von Mangoldt function, is 0 unless the argument n is a prime power pk, in which case it is the natural logarithm of the prime p:
{\displaystyle \Lambda (n)={\begin{cases}\log p&{\text{if }}n=2,3,4,5,7,8,9,11,13,16,\ldots =p^{k}{\text{ is a prime power}}\\0&{\text{if }}n=1,6,10,12,14,15,18,20,21,\dots \;\;\;\;{\text{ is not a prime power}}.\end{cases}}}
=== p(n) – partition function ===
p(n), the partition function, is the number of ways of representing n as a sum of positive integers, where two representations with the same summands in a different order are not counted as being different:
{\displaystyle p(n)=\left|\left\{(a_{1},a_{2},\dots a_{k}):0<a_{1}\leq a_{2}\leq \cdots \leq a_{k}\;\land \;n=a_{1}+a_{2}+\cdots +a_{k}\right\}\right|.}
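Because the summands are counted without regard to order, p(n) can be computed by a short dynamic program that admits parts of each size in turn. The sketch below is illustrative:

```python
def partitions(n):
    """p(n) by dynamic programming: ways[m] counts the partitions of m
    using only parts of size at most k, updated as k grows."""
    ways = [1] + [0] * n          # ways[0] = 1: the empty partition
    for k in range(1, n + 1):     # now also allow parts of size k
        for m in range(k, n + 1):
            ways[m] += ways[m - k]
    return ways[n]

assert [partitions(n) for n in range(8)] == [1, 1, 2, 3, 5, 7, 11, 15]
```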
=== λ(n) – Carmichael function ===
λ(n), the Carmichael function, is the smallest positive number such that
{\displaystyle a^{\lambda (n)}\equiv 1{\pmod {n}}}
for all a coprime to n. Equivalently, it is the least common multiple of the orders of the elements of the multiplicative group of integers modulo n.
For powers of odd primes and for 2 and 4, λ(n) is equal to the Euler totient function of n; for powers of 2 greater than 4 it is equal to one half of the Euler totient function of n:
{\displaystyle \lambda (n)={\begin{cases}\;\;\phi (n)&{\text{if }}n=2,3,4,5,7,9,11,13,17,19,23,25,27,\dots \\{\tfrac {1}{2}}\phi (n)&{\text{if }}n=8,16,32,64,\dots \end{cases}}}
and for general n it is the least common multiple of λ of each of the prime power factors of n:
{\displaystyle \lambda (p_{1}^{a_{1}}p_{2}^{a_{2}}\dots p_{\omega (n)}^{a_{\omega (n)}})=\operatorname {lcm} [\lambda (p_{1}^{a_{1}}),\;\lambda (p_{2}^{a_{2}}),\dots ,\lambda (p_{\omega (n)}^{a_{\omega (n)}})].}
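The prime-power cases and the lcm rule translate directly into a short computation. A minimal sketch (helper names ours), using the fact that λ(p^k) = φ(p^k) = (p−1)p^(k−1) except for powers of 2 above 4, where it is half of that:

```python
from math import gcd

def _factor(n):
    """Prime factorization of n as a dict {p: exponent}, by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def _lcm(a, b):
    return a * b // gcd(a, b)

def carmichael(n: int) -> int:
    """λ(n) via λ of each prime-power factor, combined with lcm."""
    result = 1
    for p, k in _factor(n).items():
        if p == 2 and k >= 3:
            lam = 2 ** (k - 2)            # half the totient for 8, 16, 32, ...
        else:
            lam = (p - 1) * p ** (k - 1)  # the totient φ(p^k)
        result = _lcm(result, lam)
    return result
```

So `carmichael(15)` is lcm(λ(3), λ(5)) = lcm(2, 4) = 4, while φ(15) = 8.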
=== h(n) – class number ===
h(n), the class number function, is the order of the ideal class group of an algebraic extension of the rationals with discriminant n. The notation is ambiguous, as there are in general many extensions with the same discriminant. See quadratic field and cyclotomic field for classical examples.
=== rk(n) – sum of k squares ===
rk(n) is the number of ways n can be represented as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the square roots are counted as different.
{\displaystyle r_{k}(n)=\left|\left\{(a_{1},a_{2},\dots ,a_{k}):n=a_{1}^{2}+a_{2}^{2}+\cdots +a_{k}^{2}\right\}\right|}
=== D(n) – Arithmetic derivative ===
Using the Heaviside notation for the derivative, the arithmetic derivative D(n) is a function such that
{\displaystyle D(n)=1}
if n is prime, and
{\displaystyle D(mn)=mD(n)+D(m)n}
(the product rule)
== Summation functions ==
Given an arithmetic function a(n), its summation function A(x) is defined by
{\displaystyle A(x):=\sum _{n\leq x}a(n).}
A can be regarded as a function of a real variable. Given a positive integer m, A is constant along open intervals m < x < m + 1, and has a jump discontinuity at each integer for which a(m) ≠ 0.
Since such functions are often represented by series and integrals, to achieve pointwise convergence it is usual to define the value at the discontinuities as the average of the values to the left and right:
{\displaystyle A_{0}(m):={\frac {1}{2}}\left(\sum _{n<m}a(n)+\sum _{n\leq m}a(n)\right)=A(m)-{\frac {1}{2}}a(m).}
Individual values of arithmetic functions may fluctuate wildly – as in most of the above examples. Summation functions "smooth out" these fluctuations. In some cases it may be possible to find asymptotic behaviour for the summation function for large x.
A classical example of this phenomenon is given by the divisor summatory function, the summation function of d(n), the number of divisors of n:
{\displaystyle \liminf _{n\to \infty }d(n)=2}
{\displaystyle \limsup _{n\to \infty }{\frac {\log d(n)\log \log n}{\log n}}=\log 2}
{\displaystyle \lim _{n\to \infty }{\frac {d(1)+d(2)+\cdots +d(n)}{\log(1)+\log(2)+\cdots +\log(n)}}=1.}
An average order of an arithmetic function is some simpler or better-understood function which has the same summation function asymptotically, and hence takes the same values "on average". We say that g is an average order of f if
{\displaystyle \sum _{n\leq x}f(n)\sim \sum _{n\leq x}g(n)}
as x tends to infinity. The example above shows that d(n) has the average order log(n).
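The convergence in the last limit is slow but visible numerically. A brute-force sketch (names ours) that compares the two summation functions up to x:

```python
from math import log

def divisor_count(n: int) -> int:
    """d(n) by trial division; fine for small n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def average_order_ratio(x: int) -> float:
    """Ratio of the summation functions of d(n) and of log(n) up to x."""
    num = sum(divisor_count(n) for n in range(1, x + 1))
    den = sum(log(n) for n in range(1, x + 1))
    return num / den
```

The ratio drifts toward 1 as x grows (it is about 1.27 at x = 200 and about 1.18 at x = 2000), consistent with d(n) having average order log n.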
== Dirichlet convolution ==
Given an arithmetic function a(n), let Fa(s), for complex s, be the function defined by the corresponding Dirichlet series (where it converges):
{\displaystyle F_{a}(s):=\sum _{n=1}^{\infty }{\frac {a(n)}{n^{s}}}.}
Fa(s) is called a generating function of a(n). The simplest such series, corresponding to the constant function a(n) = 1 for all n, is ζ(s) the Riemann zeta function.
The generating function of the Möbius function is the inverse of the zeta function:
{\displaystyle \zeta (s)\,\sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}=1,\;\;\Re s>1.}
Consider two arithmetic functions a and b and their respective generating functions Fa(s) and Fb(s). The product Fa(s)Fb(s) can be computed as follows:
{\displaystyle F_{a}(s)F_{b}(s)=\left(\sum _{m=1}^{\infty }{\frac {a(m)}{m^{s}}}\right)\left(\sum _{n=1}^{\infty }{\frac {b(n)}{n^{s}}}\right).}
It is a straightforward exercise to show that if c(n) is defined by
{\displaystyle c(n):=\sum _{ij=n}a(i)b(j)=\sum _{i\mid n}a(i)b\left({\frac {n}{i}}\right),}
then
{\displaystyle F_{c}(s)=F_{a}(s)F_{b}(s).}
This function c is called the Dirichlet convolution of a and b, and is denoted by {\displaystyle a*b}.
A particularly important case is convolution with the constant function a(n) = 1 for all n, corresponding to multiplying the generating function by the zeta function:
{\displaystyle g(n)=\sum _{d\mid n}f(d).}
Multiplying by the inverse of the zeta function gives the Möbius inversion formula:
{\displaystyle f(n)=\sum _{d\mid n}\mu \left({\frac {n}{d}}\right)g(d).}
If f is multiplicative, then so is g. If f is completely multiplicative, then g is multiplicative, but may or may not be completely multiplicative.
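Dirichlet convolution and Möbius inversion can be checked mechanically for small arguments. A minimal sketch (function names ours): convolve an arbitrary f with the constant function 1 to get g, then convolve μ with g to recover f.

```python
def dirichlet(a, b, n: int) -> int:
    """(a * b)(n) = sum over divisors i of n of a(i) b(n/i)."""
    return sum(a(i) * b(n // i) for i in range(1, n + 1) if n % i == 0)

def mobius(n: int) -> int:
    """μ(n) by trial division: 0 on squared factors, else (-1)^(number of primes)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # squared prime factor
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

f = lambda n: n * n                       # an arbitrary test function
g = lambda n: dirichlet(f, lambda _: 1, n)   # g = f * 1
recovered = lambda n: dirichlet(mobius, g, n)  # μ * g should equal f
```

For every n the inversion returns the original values, f(n) = Σ_{d|n} μ(n/d) g(d).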
== Relations among the functions ==
There are a great many formulas connecting arithmetical functions with each other and with the functions of analysis, especially powers, roots, and the exponential and log functions. The page divisor sum identities contains many more generalized and related examples of identities involving arithmetic functions.
Here are a few examples:
=== Dirichlet convolutions ===
{\displaystyle \sum _{\delta \mid n}\mu (\delta )=\sum _{\delta \mid n}\lambda \left({\frac {n}{\delta }}\right)|\mu (\delta )|={\begin{cases}1&{\text{if }}n=1\\0&{\text{if }}n\neq 1\end{cases}}}
where λ is the Liouville function.
{\displaystyle \sum _{\delta \mid n}\varphi (\delta )=n.}
{\displaystyle \varphi (n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\delta =n\sum _{\delta \mid n}{\frac {\mu (\delta )}{\delta }}.}
Möbius inversion
{\displaystyle \sum _{d\mid n}J_{k}(d)=n^{k}.}
{\displaystyle J_{k}(n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\delta ^{k}=n^{k}\sum _{\delta \mid n}{\frac {\mu (\delta )}{\delta ^{k}}}.}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}\delta ^{s}J_{r}(\delta )J_{s}\left({\frac {n}{\delta }}\right)=J_{r+s}(n)}
{\displaystyle \sum _{\delta \mid n}\varphi (\delta )d\left({\frac {n}{\delta }}\right)=\sigma (n).}
{\displaystyle \sum _{\delta \mid n}|\mu (\delta )|=2^{\omega (n)}.}
{\displaystyle |\mu (n)|=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)2^{\omega (\delta )}.}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}2^{\omega (\delta )}=d(n^{2}).}
{\displaystyle 2^{\omega (n)}=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)d(\delta ^{2}).}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}d(\delta ^{2})=d^{2}(n).}
{\displaystyle d(n^{2})=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)d^{2}(\delta ).}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}d\left({\frac {n}{\delta }}\right)2^{\omega (\delta )}=d^{2}(n).}
{\displaystyle \sum _{\delta \mid n}\lambda (\delta )={\begin{cases}&1{\text{ if }}n{\text{ is a square }}\\&0{\text{ if }}n{\text{ is not square.}}\end{cases}}}
where λ is the Liouville function.
{\displaystyle \sum _{\delta \mid n}\Lambda (\delta )=\log n.}
{\displaystyle \Lambda (n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\log(\delta ).}
Möbius inversion
=== Sums of squares ===
For all
{\displaystyle k\geq 4,\;\;\;r_{k}(n)>0.}
(Lagrange's four-square theorem).
{\displaystyle r_{2}(n)=4\sum _{d\mid n}\left({\frac {-4}{d}}\right),}
where the Kronecker symbol has the values
{\displaystyle \left({\frac {-4}{n}}\right)={\begin{cases}+1&{\text{if }}n\equiv 1{\pmod {4}}\\-1&{\text{if }}n\equiv 3{\pmod {4}}\\\;\;\;0&{\text{if }}n{\text{ is even}}.\\\end{cases}}}
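The r₂ formula can be verified against a brute-force count of lattice points on the circle of radius √n. A sketch (names ours), with the Kronecker symbol implemented straight from the case split above:

```python
def kronecker_m4(n: int) -> int:
    """The Kronecker symbol (-4/n): +1, -1, or 0 as in the cases above."""
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

def r2_brute(n: int) -> int:
    """Number of (a, b) in Z^2 with a^2 + b^2 = n, counting signs and order."""
    bound = int(n ** 0.5) + 1
    return sum(1 for a in range(-bound, bound + 1)
                 for b in range(-bound, bound + 1)
                 if a * a + b * b == n)

def r2_formula(n: int) -> int:
    """r2(n) = 4 * sum of (-4/d) over the divisors d of n."""
    return 4 * sum(kronecker_m4(d) for d in range(1, n + 1) if n % d == 0)
```

For example n = 5 has divisors 1 and 5, both ≡ 1 (mod 4), giving 4·2 = 8, matching the eight points (±1, ±2), (±2, ±1).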
There is a formula for r3 in the section on class numbers below.
{\displaystyle r_{4}(n)=8\sum _{\stackrel {d\mid n}{4\,\nmid \,d}}d=8(2+(-1)^{n})\sum _{\stackrel {d\mid n}{2\,\nmid \,d}}d={\begin{cases}8\sigma (n)&{\text{if }}n{\text{ is odd }}\\24\sigma \left({\frac {n}{2^{\nu }}}\right)&{\text{if }}n{\text{ is even }}\end{cases}},}
where ν = ν2(n).
{\displaystyle r_{6}(n)=16\sum _{d\mid n}\chi \left({\frac {n}{d}}\right)d^{2}-4\sum _{d\mid n}\chi (d)d^{2},}
where
{\displaystyle \chi (n)=\left({\frac {-4}{n}}\right).}
Define the function σk*(n) as
{\displaystyle \sigma _{k}^{*}(n)=(-1)^{n}\sum _{d\mid n}(-1)^{d}d^{k}={\begin{cases}\sum _{d\mid n}d^{k}=\sigma _{k}(n)&{\text{if }}n{\text{ is odd }}\\\sum _{\stackrel {d\mid n}{2\,\mid \,d}}d^{k}-\sum _{\stackrel {d\mid n}{2\,\nmid \,d}}d^{k}&{\text{if }}n{\text{ is even}}.\end{cases}}}
That is, if n is odd, σk*(n) is the sum of the kth powers of the divisors of n, that is, σk(n), and if n is even it is the sum of the kth powers of the even divisors of n minus the sum of the kth powers of the odd divisors of n.
{\displaystyle r_{8}(n)=16\sigma _{3}^{*}(n).}
Adopt the convention that Ramanujan's τ(x) = 0 if x is not an integer.
{\displaystyle r_{24}(n)={\frac {16}{691}}\sigma _{11}^{*}(n)+{\frac {128}{691}}\left\{(-1)^{n-1}259\tau (n)-512\tau \left({\frac {n}{2}}\right)\right\}}
=== Divisor sum convolutions ===
Here "convolution" does not mean "Dirichlet convolution" but instead refers to the formula for the coefficients of the product of two power series:
{\displaystyle \left(\sum _{n=0}^{\infty }a_{n}x^{n}\right)\left(\sum _{n=0}^{\infty }b_{n}x^{n}\right)=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }a_{i}b_{j}x^{i+j}=\sum _{n=0}^{\infty }\left(\sum _{i=0}^{n}a_{i}b_{n-i}\right)x^{n}=\sum _{n=0}^{\infty }c_{n}x^{n}.}
The sequence
{\displaystyle c_{n}=\sum _{i=0}^{n}a_{i}b_{n-i}}
is called the convolution or the Cauchy product of the sequences an and bn.
These formulas may be proved analytically (see Eisenstein series) or by elementary methods.
{\displaystyle \sigma _{3}(n)={\frac {1}{5}}\left\{6n\sigma _{1}(n)-\sigma _{1}(n)+12\sum _{0<k<n}\sigma _{1}(k)\sigma _{1}(n-k)\right\}.}
{\displaystyle \sigma _{5}(n)={\frac {1}{21}}\left\{10(3n-1)\sigma _{3}(n)+\sigma _{1}(n)+240\sum _{0<k<n}\sigma _{1}(k)\sigma _{3}(n-k)\right\}.}
{\displaystyle {\begin{aligned}\sigma _{7}(n)&={\frac {1}{20}}\left\{21(2n-1)\sigma _{5}(n)-\sigma _{1}(n)+504\sum _{0<k<n}\sigma _{1}(k)\sigma _{5}(n-k)\right\}\\&=\sigma _{3}(n)+120\sum _{0<k<n}\sigma _{3}(k)\sigma _{3}(n-k).\end{aligned}}}
{\displaystyle {\begin{aligned}\sigma _{9}(n)&={\frac {1}{11}}\left\{10(3n-2)\sigma _{7}(n)+\sigma _{1}(n)+480\sum _{0<k<n}\sigma _{1}(k)\sigma _{7}(n-k)\right\}\\&={\frac {1}{11}}\left\{21\sigma _{5}(n)-10\sigma _{3}(n)+5040\sum _{0<k<n}\sigma _{3}(k)\sigma _{5}(n-k)\right\}.\end{aligned}}}
{\displaystyle \tau (n)={\frac {65}{756}}\sigma _{11}(n)+{\frac {691}{756}}\sigma _{5}(n)-{\frac {691}{3}}\sum _{0<k<n}\sigma _{5}(k)\sigma _{5}(n-k),}
where τ(n) is Ramanujan's function.
Since σk(n) (for natural number k) and τ(n) are integers, the above formulas can be used to prove congruences for the functions. See Ramanujan tau function for some examples.
Extend the domain of the partition function by setting p(0) = 1.
{\displaystyle p(n)={\frac {1}{n}}\sum _{1\leq k\leq n}\sigma (k)p(n-k).}
This recurrence can be used to compute p(n).
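A direct implementation of the recurrence (names ours). The sum σ(k)p(n−k) is always divisible by n, so integer division is exact:

```python
def sigma(n: int) -> int:
    """σ(n), the sum of the divisors of n, by trial division."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def partitions(limit: int):
    """List [p(0), ..., p(limit)] via p(n) = (1/n) Σ σ(k) p(n−k), p(0) = 1."""
    p = [1] + [0] * limit
    for n in range(1, limit + 1):
        p[n] = sum(sigma(k) * p[n - k] for k in range(1, n + 1)) // n
    return p
```

The first values 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42 match the usual partition numbers.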
=== Class number related ===
Peter Gustav Lejeune Dirichlet discovered formulas that relate the class number h of quadratic number fields to the Jacobi symbol.
An integer D is called a fundamental discriminant if it is the discriminant of a quadratic number field. This is equivalent to D ≠ 1 and either a) D is squarefree and D ≡ 1 (mod 4) or b) D ≡ 0 (mod 4), D/4 is squarefree, and D/4 ≡ 2 or 3 (mod 4).
Extend the Jacobi symbol to accept even numbers in the "denominator" by defining the Kronecker symbol:
{\displaystyle \left({\frac {a}{2}}\right)={\begin{cases}\;\;\,0&{\text{ if }}a{\text{ is even}}\\(-1)^{\frac {a^{2}-1}{8}}&{\text{ if }}a{\text{ is odd. }}\end{cases}}}
Then if D < −4 is a fundamental discriminant
{\displaystyle {\begin{aligned}h(D)&={\frac {1}{D}}\sum _{r=1}^{|D|}r\left({\frac {D}{r}}\right)\\&={\frac {1}{2-\left({\tfrac {D}{2}}\right)}}\sum _{r=1}^{|D|/2}\left({\frac {D}{r}}\right).\end{aligned}}}
There is also a formula relating r3 and h. Again, let D be a fundamental discriminant, D < −4. Then
{\displaystyle r_{3}(|D|)=12\left(1-\left({\frac {D}{2}}\right)\right)h(D).}
=== Prime-count related ===
Let
{\displaystyle H_{n}=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{n}}}
be the nth harmonic number. Then
{\displaystyle \sigma (n)\leq H_{n}+e^{H_{n}}\log H_{n}}
is true for every natural number n if and only if the Riemann hypothesis is true.
The Riemann hypothesis is also equivalent to the statement that, for all n > 5040,
{\displaystyle \sigma (n)<e^{\gamma }n\log \log n}
(where γ is the Euler–Mascheroni constant). This is Robin's theorem.
{\displaystyle \sum _{p}\nu _{p}(n)=\Omega (n).}
{\displaystyle \psi (x)=\sum _{n\leq x}\Lambda (n).}
{\displaystyle \Pi (x)=\sum _{n\leq x}{\frac {\Lambda (n)}{\log n}}.}
{\displaystyle e^{\theta (x)}=\prod _{p\leq x}p.}
{\displaystyle e^{\psi (x)}=\operatorname {lcm} [1,2,\dots ,\lfloor x\rfloor ].}
=== Menon's identity ===
In 1965, P. Kesava Menon proved
{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\gcd(k-1,n)=\varphi (n)d(n).}
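Menon's identity is easy to test exhaustively for small n. A brute-force sketch (names ours):

```python
from math import gcd

def phi(n: int) -> int:
    """Euler's totient, counted directly."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def num_divisors(n: int) -> int:
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def menon_lhs(n: int) -> int:
    """Σ gcd(k−1, n) over 1 ≤ k ≤ n with gcd(k, n) = 1."""
    return sum(gcd(k - 1, n) for k in range(1, n + 1) if gcd(k, n) == 1)
```

For instance n = 4: the units are k = 1, 3, giving gcd(0, 4) + gcd(2, 4) = 4 + 2 = 6 = φ(4)·d(4) = 2·3.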
This has been generalized by a number of mathematicians. For example,
B. Sury
{\displaystyle \sum _{\stackrel {1\leq k_{1},k_{2},\dots ,k_{s}\leq n}{\gcd(k_{1},n)=1}}\gcd(k_{1}-1,k_{2},\dots ,k_{s},n)=\varphi (n)\sigma _{s-1}(n).}
N. Rao
{\displaystyle \sum _{\stackrel {1\leq k_{1},k_{2},\dots ,k_{s}\leq n}{\gcd(k_{1},k_{2},\dots ,k_{s},n)=1}}\gcd(k_{1}-a_{1},k_{2}-a_{2},\dots ,k_{s}-a_{s},n)^{s}=J_{s}(n)d(n),}
where a1, a2, ..., as are integers, gcd(a1, a2, ..., as, n) = 1.
László Fejes Tóth
{\displaystyle \sum _{\stackrel {1\leq k\leq m}{\gcd(k,m)=1}}\gcd(k^{2}-1,m_{1})\gcd(k^{2}-1,m_{2})=\varphi (n)\sum _{\stackrel {d_{1}\mid m_{1}}{d_{2}\mid m_{2}}}\varphi (\gcd(d_{1},d_{2}))2^{\omega (\operatorname {lcm} (d_{1},d_{2}))},}
where m1 and m2 are odd, m = lcm(m1, m2).
In fact, if f is any arithmetical function
{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}f(\gcd(k-1,n))=\varphi (n)\sum _{d\mid n}{\frac {(\mu *f)(d)}{\varphi (d)}},}
where {\displaystyle *} stands for Dirichlet convolution.
=== Miscellaneous ===
Let m and n be distinct, odd, and positive. Then the Jacobi symbol satisfies the law of quadratic reciprocity:
{\displaystyle \left({\frac {m}{n}}\right)\left({\frac {n}{m}}\right)=(-1)^{(m-1)(n-1)/4}.}
Let D(n) be the arithmetic derivative. Then the logarithmic derivative
{\displaystyle {\frac {D(n)}{n}}=\sum _{\stackrel {p\mid n}{p{\text{ prime}}}}{\frac {v_{p}(n)}{p}}.}
See Arithmetic derivative for details.
Let λ(n) be Liouville's function. Then
{\displaystyle |\lambda (n)|\mu (n)=\lambda (n)|\mu (n)|=\mu (n),}
and
{\displaystyle \lambda (n)\mu (n)=|\mu (n)|=\mu ^{2}(n).}
Let λ(n) be Carmichael's function. Then
{\displaystyle \lambda (n)\mid \phi (n).}
Further,
{\displaystyle \lambda (n)=\phi (n){\text{ if and only if }}n={\begin{cases}1,2,4;\\3,5,7,9,11,\ldots {\text{ (that is, }}p^{k}{\text{, where }}p{\text{ is an odd prime)}};\\6,10,14,18,\ldots {\text{ (that is, }}2p^{k}{\text{, where }}p{\text{ is an odd prime)}}.\end{cases}}}
See Multiplicative group of integers modulo n and Primitive root modulo n.
{\displaystyle 2^{\omega (n)}\leq d(n)\leq 2^{\Omega (n)}.}
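These bounds follow from d(n) = Π(aᵢ + 1) over the prime factorization n = Π pᵢ^aᵢ, since 2 ≤ aᵢ + 1 ≤ 2^aᵢ. A sketch verifying them exhaustively (names ours):

```python
def prime_exponents(n: int):
    """Yield the exponent of each distinct prime dividing n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            yield e
        d += 1
    if n > 1:
        yield 1

def check_divisor_bounds(n: int) -> bool:
    """Verify 2^ω(n) ≤ d(n) ≤ 2^Ω(n) for one n."""
    exps = list(prime_exponents(n))
    small_omega, big_omega = len(exps), sum(exps)   # ω(n) and Ω(n)
    divisors = sum(1 for k in range(1, n + 1) if n % k == 0)
    return 2 ** small_omega <= divisors <= 2 ** big_omega
```

Equality on the left holds exactly for squarefree n, and on the right when every exponent is 1 as well, so both are attained at squarefree n.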
{\displaystyle {\frac {6}{\pi ^{2}}}<{\frac {\phi (n)\sigma (n)}{n^{2}}}<1.}
{\displaystyle {\begin{aligned}c_{q}(n)&={\frac {\mu \left({\frac {q}{\gcd(q,n)}}\right)}{\phi \left({\frac {q}{\gcd(q,n)}}\right)}}\phi (q)\\&=\sum _{\delta \mid \gcd(q,n)}\mu \left({\frac {q}{\delta }}\right)\delta .\end{aligned}}}
Note that
{\displaystyle \phi (q)=\sum _{\delta \mid q}\mu \left({\frac {q}{\delta }}\right)\delta .}
{\displaystyle c_{q}(1)=\mu (q).}
{\displaystyle c_{q}(q)=\phi (q).}
{\displaystyle \sum _{\delta \mid n}d^{3}(\delta )=\left(\sum _{\delta \mid n}d(\delta )\right)^{2}.}
Compare this with 1³ + 2³ + 3³ + ⋯ + n³ = (1 + 2 + 3 + ⋯ + n)².
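A brute-force check of the d³ identity over the divisors of n (names ours):

```python
def num_divisors(n: int) -> int:
    """d(n) by trial division."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def check_d_cubed_identity(n: int) -> bool:
    """Verify Σ_{δ|n} d(δ)^3 = (Σ_{δ|n} d(δ))^2 for one n."""
    divisors = [k for k in range(1, n + 1) if n % k == 0]
    return (sum(num_divisors(k) ** 3 for k in divisors)
            == sum(num_divisors(k) for k in divisors) ** 2)
```

For a prime power p^a the divisor values d(1), d(p), …, d(p^a) are 1, 2, …, a+1, so the identity reduces to the classical sum-of-cubes formula; multiplicativity extends it to all n.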
{\displaystyle d(uv)=\sum _{\delta \mid \gcd(u,v)}\mu (\delta )d\left({\frac {u}{\delta }}\right)d\left({\frac {v}{\delta }}\right).}
{\displaystyle \sigma _{k}(u)\sigma _{k}(v)=\sum _{\delta \mid \gcd(u,v)}\delta ^{k}\sigma _{k}\left({\frac {uv}{\delta ^{2}}}\right).}
{\displaystyle \tau (u)\tau (v)=\sum _{\delta \mid \gcd(u,v)}\delta ^{11}\tau \left({\frac {uv}{\delta ^{2}}}\right),}
where τ(n) is Ramanujan's function.
== First 100 values of some arithmetic functions ==
== Notes ==
== References ==
Tom M. Apostol (1976), Introduction to Analytic Number Theory, Springer Undergraduate Texts in Mathematics, ISBN 0-387-90163-9
Apostol, Tom M. (1989), Modular Functions and Dirichlet Series in Number Theory (2nd Edition), New York: Springer, ISBN 0-387-97127-0
Bateman, Paul T.; Diamond, Harold G. (2004), Analytic number theory, an introduction, World Scientific, ISBN 978-981-238-938-1
Cohen, Henri (1993), A Course in Computational Algebraic Number Theory, Berlin: Springer, ISBN 3-540-55640-0
Edwards, Harold (1977). Fermat's Last Theorem. New York: Springer. ISBN 0-387-90230-9.
Hardy, G. H. (1999), Ramanujan: Twelve Lectures on Subjects Suggested by his Life and work, Providence RI: AMS / Chelsea, hdl:10115/1436, ISBN 978-0-8218-2023-0
Hardy, G. H.; Wright, E. M. (1979) [1938]. An Introduction to the Theory of Numbers (5th ed.). Oxford: Clarendon Press. ISBN 0-19-853171-0. MR 0568909. Zbl 0423.10001.
Jameson, G. J. O. (2003), The Prime Number Theorem, Cambridge University Press, ISBN 0-521-89110-8
Koblitz, Neal (1984), Introduction to Elliptic Curves and Modular Forms, New York: Springer, ISBN 0-387-97966-2
Landau, Edmund (1966), Elementary Number Theory, New York: Chelsea
William J. LeVeque (1996), Fundamentals of Number Theory, Courier Dover Publications, ISBN 0-486-68906-9
Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company, LCCN 77-171950
Elliott Mendelson (1987), Introduction to Mathematical Logic, CRC Press, ISBN 0-412-80830-7
Nagell, Trygve (1964), Introduction to number theory (2nd Edition), Chelsea, ISBN 978-0-8218-2833-5
Niven, Ivan M.; Zuckerman, Herbert S. (1972), An introduction to the theory of numbers (3rd Edition), John Wiley & Sons, ISBN 0-471-64154-5
Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall, LCCN 77-81766
Ramanujan, Srinivasa (2000), Collected Papers, Providence RI: AMS / Chelsea, ISBN 978-0-8218-2076-6
Williams, Kenneth S. (2011), Number theory in the spirit of Liouville, London Mathematical Society Student Texts, vol. 76, Cambridge: Cambridge University Press, ISBN 978-0-521-17562-3, Zbl 1227.11002
== Further reading ==
Schwarz, Wolfgang; Spilker, Jürgen (1994), Arithmetical Functions. An introduction to elementary and analytic properties of arithmetic functions and to some of their almost-periodic properties, London Mathematical Society Lecture Note Series, vol. 184, Cambridge University Press, ISBN 0-521-42725-8, Zbl 0807.11001
== External links ==
"Arithmetic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Matthew Holden, Michael Orrison, Michael Varble Yet another Generalization of Euler's Totient Function
Huard, Ou, Spearman, and Williams. Elementary Evaluation of Certain Convolution Sums Involving Divisor Functions
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine
László Tóth, Menon's Identity and arithmetical sums representing functions of several variables | Wikipedia/Arithmetic_function |
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates the sum of two or more unknowns, with coefficients, to a constant. An exponential Diophantine equation is one in which unknowns can appear in exponents.
Diophantine problems have fewer equations than unknowns and involve finding integers that solve all equations simultaneously. Because such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations, beyond the case of linear and quadratic equations, was an achievement of the twentieth century.
== Examples ==
In the following Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants:
== Linear Diophantine equations ==
=== One equation ===
The simplest linear Diophantine equation takes the form
{\displaystyle ax+by=c,}
where a, b and c are given integers. The solutions are described by the following theorem:
This Diophantine equation has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have
{\displaystyle {\begin{aligned}a(x+kv)+b(y-ku)&=ax+by+k(av-bu)\\&=ax+by+k(udv-vdu)\\&=ax+by,\end{aligned}}}
showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that
{\displaystyle ax_{1}+by_{1}=ax_{2}+by_{2}=c,}
one deduces that
{\displaystyle u(x_{2}-x_{1})+v(y_{2}-y_{1})=0.}
As u and v are coprime, Euclid's lemma shows that v divides x2 − x1, and thus that there exists an integer k such that both
{\displaystyle x_{2}-x_{1}=kv,\quad y_{2}-y_{1}=-ku.}
Therefore,
{\displaystyle x_{2}=x_{1}+kv,\quad y_{2}=y_{1}-ku,}
which completes the proof.
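The constructive half of the proof is exactly the extended Euclidean algorithm: compute the Bézout coefficients e and f with ae + bf = d, then scale by h = c/d. A sketch (names chosen to match the proof's notation):

```python
def extended_gcd(a: int, b: int):
    """Return (g, e, f) with a*e + b*f = g = gcd(a, b) (Bézout's identity)."""
    if b == 0:
        return (a, 1, 0)
    g, e, f = extended_gcd(b, a % b)
    return (g, f, e - (a // b) * f)

def solve_linear_diophantine(a: int, b: int, c: int):
    """One integer solution (x, y) of ax + by = c, or None if c is not
    a multiple of gcd(a, b)."""
    g, e, f = extended_gcd(a, b)
    if c % g != 0:
        return None       # no solution, by the theorem above
    h = c // g
    return (e * h, f * h)
```

All other solutions are then (x + kv, y − ku) with u = a/d, v = b/d, as the theorem states.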
=== Chinese remainder theorem ===
The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let
{\displaystyle n_{1},\dots ,n_{k}}
be k pairwise coprime integers greater than one,
{\displaystyle a_{1},\dots ,a_{k}}
be k arbitrary integers, and N be the product
{\displaystyle n_{1}\cdots n_{k}.}
The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution
{\displaystyle (x,x_{1},\dots ,x_{k})}
such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N:
{\displaystyle {\begin{aligned}x&=a_{1}+n_{1}\,x_{1}\\&\;\;\vdots \\x&=a_{k}+n_{k}\,x_{k}\end{aligned}}}
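The unique solution modulo N can be built incrementally: fold in one congruence at a time, solving for the multiplier with a modular inverse. A minimal sketch (name ours; `pow(N, -1, n)` for the modular inverse requires Python 3.8+):

```python
def crt(residues, moduli):
    """Smallest x ≥ 0 with x ≡ a_i (mod n_i), for pairwise coprime moduli."""
    x, N = 0, 1
    for a, n in zip(residues, moduli):
        # Solve x + N*t ≡ a (mod n) for t, using the inverse of N mod n
        # (which exists because the moduli are pairwise coprime).
        t = ((a - x) * pow(N, -1, n)) % n
        x += N * t
        N *= n
    return x
```

For example, x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7) gives x = 23, and every other solution differs from 23 by a multiple of N = 105.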
=== System of linear Diophantine equations ===
More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation every system of linear Diophantine equations may be written
{\displaystyle AX=C,}
where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers.
The computation of the Smith normal form of A provides two unimodular matrices (that is matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix
{\displaystyle B=[b_{i,j}]=UAV}
is such that bi,i is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as
{\displaystyle B(V^{-1}X)=UC.}
Calling yi the entries of V−1X and di those of D = UC, this leads to the system
{\displaystyle {\begin{aligned}&b_{i,i}y_{i}=d_{i},\quad 1\leq i\leq k\\&0y_{i}=d_{i},\quad k<i\leq n.\end{aligned}}}
This system is equivalent to the given one in the following sense: A column matrix of integers x is a solution of the given system if and only if x = Vy for some column matrix of integers y such that By = D.
It follows that the system has a solution if and only if bi,i divides di for i ≤ k and di = 0 for i > k. If this condition is fulfilled, the solutions of the given system are
{\displaystyle V\,{\begin{bmatrix}{\frac {d_{1}}{b_{1,1}}}\\\vdots \\{\frac {d_{k}}{b_{k,k}}}\\h_{k+1}\\\vdots \\h_{n}\end{bmatrix}}\,,}
where hk+1, …, hn are arbitrary integers.
Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form."
Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that include also inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.
== Homogeneous equations ==
A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem
{\displaystyle x^{d}+y^{d}-z^{d}=0.}
As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface.
Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent with testing if a rational number is the dth power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved.
For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem).
For degree three, there are general solving methods that work on almost all equations encountered in practice, but no algorithm is known that works for every cubic equation.
=== Degree two ===
Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.
For proving that there is no solution, one may reduce the equation modulo some integer, typically a prime or a prime power. For example, the Diophantine equation
{\displaystyle x^{2}+y^{2}=3z^{2},}
does not have any other solution than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius
{\displaystyle {\sqrt {3}}}, centered at the origin.
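The finite case analysis behind this argument can be checked mechanically. The sketch below (plain Python, no libraries) enumerates the possible residues of each side modulo 4:

```python
# Squares modulo 4: only the residues 0 and 1 occur.
squares_mod4 = {x * x % 4 for x in range(4)}
assert squares_mod4 == {0, 1}

# Possible residues of the left-hand side x^2 + y^2 and the
# right-hand side 3z^2 modulo 4.
lhs = {(a + b) % 4 for a in squares_mod4 for b in squares_mod4}
rhs = {3 * c % 4 for c in squares_mod4}

# The two sides can only agree at residue 0, which forces x, y, z
# all even, contradicting the coprimality assumption.
assert lhs == {0, 1, 2} and rhs == {0, 3}
assert lhs & rhs == {0}
```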
More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists.
If a non-trivial integer solution is known, one may produce all other solutions in the following way.
==== Geometric interpretation ====
Let
{\displaystyle Q(x_{1},\ldots ,x_{n})=0}
be a homogeneous Diophantine equation, where
{\displaystyle Q(x_{1},\ldots ,x_{n})}
is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all
{\displaystyle x_{i}}
are zero. If
{\displaystyle (a_{1},\ldots ,a_{n})}
is a non-trivial integer solution of this equation, then
{\displaystyle \left(a_{1},\ldots ,a_{n}\right)}
are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if
{\textstyle \left({\frac {p_{1}}{q}},\ldots ,{\frac {p_{n}}{q}}\right)}
are homogeneous coordinates of a rational point of this hypersurface, where
{\displaystyle q,p_{1},\ldots ,p_{n}}
are integers, then
{\displaystyle \left(p_{1},\ldots ,p_{n}\right)}
is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form
{\displaystyle \left(k{\frac {p_{1}}{d}},\ldots ,k{\frac {p_{n}}{d}}\right),}
where k is any integer, and d is the greatest common divisor of the
{\displaystyle p_{i}.}
It follows that solving the Diophantine equation
{\displaystyle Q(x_{1},\ldots ,x_{n})=0}
is completely reduced to finding the rational points of the corresponding projective hypersurface.
==== Parameterization ====
Let now
{\displaystyle A=\left(a_{1},\ldots ,a_{n}\right)}
be an integer solution of the equation
{\displaystyle Q(x_{1},\ldots ,x_{n})=0.}
As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.
More precisely, one may proceed as follows.
By permuting the indices, one may suppose, without loss of generality, that
{\displaystyle a_{n}\neq 0.}
Then one may pass to the affine case by considering the affine hypersurface defined by
{\displaystyle q(x_{1},\ldots ,x_{n-1})=Q(x_{1},\ldots ,x_{n-1},1),}
which has the rational point
{\displaystyle R=(r_{1},\ldots ,r_{n-1})=\left({\frac {a_{1}}{a_{n}}},\ldots ,{\frac {a_{n-1}}{a_{n}}}\right).}
If this rational point is a singular point, that is, if all partial derivatives are zero at R, then all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables
{\displaystyle y_{i}=x_{i}-r_{i}}
does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.
If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case.
In the general case, consider the parametric equation of a line passing through R:
{\displaystyle {\begin{aligned}x_{2}&=r_{2}+t_{2}(x_{1}-r_{1})\\&\;\;\vdots \\x_{n-1}&=r_{n-1}+t_{n-1}(x_{1}-r_{1}).\end{aligned}}}
Substituting this in q, one gets a polynomial of degree two in x1, that is zero for x1 = r1. It is thus divisible by x1 − r1. The quotient is linear in x1, and may be solved for expressing x1 as a quotient of two polynomials of degree at most two in
{\displaystyle t_{2},\ldots ,t_{n-1},}
with integer coefficients:
{\displaystyle x_{1}={\frac {f_{1}(t_{2},\ldots ,t_{n-1})}{f_{n}(t_{2},\ldots ,t_{n-1})}}.}
Substituting this in the expressions for
{\displaystyle x_{2},\ldots ,x_{n-1},}
one gets, for i = 1, …, n − 1,
{\displaystyle x_{i}={\frac {f_{i}(t_{2},\ldots ,t_{n-1})}{f_{n}(t_{2},\ldots ,t_{n-1})}},}
where
{\displaystyle f_{1},\ldots ,f_{n}}
are polynomials of degree at most two with integer coefficients.
Then, one can return to the homogeneous case. Let, for i = 1, …, n,
{\displaystyle F_{i}(t_{1},\ldots ,t_{n-1})=t_{1}^{2}f_{i}\left({\frac {t_{2}}{t_{1}}},\ldots ,{\frac {t_{n-1}}{t_{1}}}\right),}
be the homogenization of
{\displaystyle f_{i}.}
These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q:
{\displaystyle {\begin{aligned}x_{1}&=F_{1}(t_{1},\ldots ,t_{n-1})\\&\;\;\vdots \\x_{n}&=F_{n}(t_{1},\ldots ,t_{n-1}).\end{aligned}}}
A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of
{\displaystyle t_{1},\ldots ,t_{n-1}.}
As
{\displaystyle F_{1},\ldots ,F_{n}}
are homogeneous polynomials, the point is not changed if all ti are multiplied by the same rational number. Thus, one may suppose that
{\displaystyle t_{1},\ldots ,t_{n-1}}
are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences
{\displaystyle (x_{1},\ldots ,x_{n})}
where, for i = 1, ..., n,
{\displaystyle x_{i}=k\,{\frac {F_{i}(t_{1},\ldots ,t_{n-1})}{d}},}
where k is an integer,
{\displaystyle t_{1},\ldots ,t_{n-1}}
are coprime integers, and d is the greatest common divisor of the n integers
{\displaystyle F_{i}(t_{1},\ldots ,t_{n-1}).}
One could hope that the coprimality of the ti could imply that d = 1. Unfortunately, this is not the case, as shown in the next section.
==== Example of Pythagorean triples ====
The equation
{\displaystyle x^{2}+y^{2}-z^{2}=0}
is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.
For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope:
{\displaystyle y=t(x+1).}
Putting this in the circle equation
{\displaystyle x^{2}+y^{2}-1=0,}
one gets
{\displaystyle x^{2}-1+t^{2}(x+1)^{2}=0.}
Dividing by x + 1 results in
{\displaystyle x-1+t^{2}(x+1)=0,}
which is easy to solve in x:
{\displaystyle x={\frac {1-t^{2}}{1+t^{2}}}.}
It follows that
{\displaystyle y=t(x+1)={\frac {2t}{1+t^{2}}}.}
Homogenizing as described above one gets all solutions as
{\displaystyle {\begin{aligned}x&=k\,{\frac {s^{2}-t^{2}}{d}}\\y&=k\,{\frac {2st}{d}}\\z&=k\,{\frac {s^{2}+t^{2}}{d}},\end{aligned}}}
where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even.
The primitive triples are the solutions where k = 1 and s > t > 0.
This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y.
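The parameterization above translates directly into a short program. This sketch (a hypothetical helper, not a library function) enumerates the primitive triples obtained with k = 1:

```python
from math import gcd

def primitive_triples(bound):
    """Primitive solutions of x^2 + y^2 = z^2 from the parameterization
    x = (s^2 - t^2)/d, y = 2st/d, z = (s^2 + t^2)/d with s > t > 0 coprime."""
    out = []
    for s in range(2, bound):
        for t in range(1, s):
            if gcd(s, t) != 1:
                continue
            x, y, z = s * s - t * t, 2 * s * t, s * s + t * t
            d = gcd(gcd(x, y), z)  # d = 2 when s and t are both odd, else d = 1
            out.append((x // d, y // d, z // d))
    return out
```

For s = 2, t = 1 this gives (3, 4, 5); for s = 3, t = 1 the common divisor d = 2 appears, and (8, 6, 10) is reduced to (4, 3, 5).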
== Diophantine analysis ==
=== Typical questions ===
The questions asked in Diophantine analysis include:
Are there any solutions?
Are there any solutions beyond some that are easily found by inspection?
Are there finitely or infinitely many solutions?
Can all solutions be found in theory?
Can one in practice compute a full list of solutions?
These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treating them as puzzles.
=== Typical problem ===
The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is not any other solution with A and B positive integers less than 10.
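Since A and B are single digits, the inspection mentioned above can be made exhaustive; a quick Python check:

```python
# Digits of the father's age AB: A is 1..9 (leading digit), B is 0..9.
# The puzzle reduces to 19B - 8A = 1.
solutions = [(A, B) for A in range(1, 10) for B in range(10)
             if 19 * B - 8 * A == 1]
assert solutions == [(7, 3)]  # the unique solution in digits

father, son = 10 * 7 + 3, 10 * 3 + 7
assert father == 2 * son - 1  # 73 is 1 less than twice 37
```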
Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem, and the monkey and the coconuts.
=== 17th and 18th centuries ===
In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation an + bn = cn has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.
In 1657, Fermat attempted to solve the Diophantine equation 61x2 + 1 = y2 (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method).
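The fundamental solution of such a Pell-type equation can be computed from the continued fraction expansion of √D. A sketch of the standard convergent-based algorithm (assuming D is a positive non-square integer):

```python
import math

def pell(D):
    """Smallest positive solution (x, y) of x^2 - D*y^2 = 1,
    found by running through convergents of the continued fraction
    of sqrt(D). Assumes D is a positive non-square integer."""
    a0 = math.isqrt(D)
    m, d, a = 0, 1, a0
    num1, num = 1, a0  # convergent numerators
    den1, den = 0, 1   # convergent denominators
    while num * num - D * den * den != 1:
        # next continued-fraction term
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        # advance the convergents
        num1, num = num, a * num + num1
        den1, den = den, a * den + den1
    return num, den
```

Writing Fermat's equation 61x² + 1 = y² as y² − 61x² = 1, `pell(61)` recovers y = 1766319049 and x = 226153980, the solution quoted above.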
=== Hilbert's tenth problem ===
In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.
=== Diophantine geometry ===
Diophantine geometry is the application of techniques from algebraic geometry to equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations that is a vector with coordinates in a prescribed field K, when K is not algebraically closed.
=== Modern research ===
The oldest general method for solving a Diophantine equation, or for proving that there is no solution, is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements, these methods cannot solve most Diophantine equations.
The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist.
During the 20th century, a new approach was deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates.
This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.
=== Infinite Diophantine equations ===
An example of an infinite Diophantine equation is:
{\displaystyle n=a^{2}+2b^{2}+3c^{2}+4d^{2}+5e^{2}+\cdots ,}
which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to:
{\displaystyle n=a^{2}+4b^{2}+9c^{2}+16d^{2}+25e^{2}+\cdots ,}
which does not always have a solution for positive n.
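After truncating at coefficients not exceeding n (larger coefficients force their variable to be zero), the number of representations can be counted by dynamic programming. This sketch counts solutions in non-negative integers, which is one convention; the theta-function count instead ranges over all integer values:

```python
def count_reps(n, coeff):
    """Number of ways to write n as coeff(1)*x1^2 + coeff(2)*x2^2 + ...
    with non-negative integers x1, x2, ...; only the finitely many
    terms with coeff(k) <= n can be non-zero."""
    ways = [1] + [0] * n   # ways[m] = representations of m so far
    k = 1
    while coeff(k) <= n:
        c = coeff(k)
        new = [0] * (n + 1)
        for m in range(n + 1):
            if ways[m]:
                x = 0
                while m + c * x * x <= n:
                    new[m + c * x * x] += ways[m]
                    x += 1
        ways = new
        k += 1
    return ways[n]
```

With this convention, `count_reps(n, lambda k: k)` is positive for every positive n (take the single term n·1²), while `count_reps(2, lambda k: k * k)` is 0: the integer 2 is not a sum a² + 4b² + 9c² + ⋯.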
== Exponential Diophantine equations ==
If a Diophantine equation has an additional variable or variables occurring as exponents, it is an exponential Diophantine equation. Examples include:
the Ramanujan–Nagell equation, 2n − 7 = x2
the equation of the Fermat–Catalan conjecture and Beal's conjecture, am + bn = ck with inequality restrictions on the exponents
the Erdős–Moser equation, 1k + 2k + ⋯ + (m − 1)k = mk
A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error.
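For a concrete instance, the Ramanujan–Nagell equation listed above can be searched by brute force; a classical theorem asserts that the solutions found below are in fact all of them:

```python
from math import isqrt

# Solve 2^n - 7 = x^2 for 3 <= n < 100 by testing whether
# 2^n - 7 is a perfect square (it is negative for n < 3).
hits = [n for n in range(3, 100) if isqrt(2**n - 7) ** 2 == 2**n - 7]
assert hits == [3, 4, 5, 7, 15]  # x = 1, 3, 5, 11, 181
```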
== See also ==
Kuṭṭaka, Aryabhata's algorithm for solving linear Diophantine equations in two unknowns
== Notes ==
== References ==
Mordell, L. J. (1969). Diophantine equations. Pure and Applied Mathematics. Vol. 30. Academic Press. ISBN 0-12-506250-8. Zbl 0188.34503.
Schmidt, Wolfgang M. (1991). Diophantine approximations and Diophantine equations. Lecture Notes in Mathematics. Vol. 1467. Berlin: Springer-Verlag. ISBN 3-540-54058-X. Zbl 0754.11020.
Shorey, T. N.; Tijdeman, R. (1986). Exponential Diophantine equations. Cambridge Tracts in Mathematics. Vol. 87. Cambridge University Press. ISBN 0-521-26826-5. Zbl 0606.10011.
Smart, Nigel P. (1998). The algorithmic resolution of Diophantine equations. London Mathematical Society Student Texts. Vol. 41. Cambridge University Press. ISBN 0-521-64156-X. Zbl 0907.11001.
Stillwell, John (2004). Mathematics and its History (Second ed.). Springer Science + Business Media Inc. ISBN 0-387-95336-1.
== Further reading ==
Bachmakova, Isabelle (1966). "Diophante et Fermat". Revue d'Histoire des Sciences et de Leurs Applications. 19 (4): 289–306. doi:10.3406/rhs.1966.2507. JSTOR 23905707.
Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhauser, Basel/ Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC. 1997.
Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré" Historia Mathematica 8 (1981), 393–416.
Bashmakova, Izabella G., Slavutin, E. I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka 1984 [in Russian].
Bashmakova, Izabella G. "Diophantine Equations and the Evolution of Algebra", American Mathematical Society Translations 147 (2), 1990, pp. 85–100. Translated by A. Shenitzer and H. Grant.
Dickson, Leonard Eugene (2005) [1920]. History of the Theory of Numbers. Volume II: Diophantine analysis. Mineola, NY: Dover Publications. ISBN 978-0-486-44233-4. MR 0245500. Zbl 1214.11002.
Bogdan Grechuk (2024). Polynomial Diophantine Equations: A Systematic Approach, Springer.
Rashed, Roshdi; Houzel, Christian (2013). Les "Arithmétiques" de Diophante. doi:10.1515/9783110336481. ISBN 978-3-11-033593-4.
Rashed, Roshdi, Histoire de l'analyse diophantienne classique : D'Abū Kāmil à Fermat, Berlin, New York : Walter de Gruyter.
== External links ==
Diophantine Equation. From MathWorld at Wolfram Research.
"Diophantine equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Dario Alpern's Online Calculator. Retrieved 18 March 2009 | Wikipedia/Diophantine_equation |
In algebraic geometry, the function field of an algebraic variety V consists of objects that are interpreted as rational functions on V. In classical algebraic geometry they are ratios of polynomials; in complex geometry these are meromorphic functions and their higher-dimensional analogues; in modern algebraic geometry they are elements of some quotient ring's field of fractions.
== Definition for complex manifolds ==
In complex geometry the objects of study are complex analytic varieties, on which we have a local notion of complex analysis, through which we may define meromorphic functions. The function field of a variety is then the set of all meromorphic functions on the variety. (Like all meromorphic functions, these take their values in
{\displaystyle \mathbb {C} \cup \{\infty \}}.) Together with the operations of addition and multiplication of functions, this is a field in the sense of algebra.
For the Riemann sphere, which is the variety
{\displaystyle \mathbb {P} ^{1}}
over the complex numbers, the global meromorphic functions are exactly the rational functions (that is, the ratios of complex polynomial functions).
== Construction in algebraic geometry ==
In classical algebraic geometry, we generalize the second point of view. For the Riemann sphere, above, the notion of a polynomial is not defined globally, but simply with respect to an affine coordinate chart, namely that consisting of the complex plane (all but the north pole of the sphere). On a general variety V, we say that a rational function on an open affine subset U is defined as the ratio of two polynomials in the affine coordinate ring of U, and that a rational function on all of V consists of such local data as agree on the intersections of open affines. We may define the function field of V to be the field of fractions of the affine coordinate ring of any open affine subset, since all such subsets are dense.
== Generalization to arbitrary scheme ==
In the most general setting, that of modern scheme theory, the latter point of view above is taken as a point of departure. Namely, if X is an integral scheme, then for every open affine subset U of X the ring of sections {\displaystyle {\mathcal {O}}_{X}(U)} on U is an integral domain and, hence, has a field of fractions. Furthermore, it can be verified that these are all the same, and are all equal to the stalk of the generic point of X. Thus the function field of X is just the stalk of its generic point.
== Geometry of the function field ==
If V is a variety defined over a field K, then the function field K(V) is a finitely generated field extension of the ground field K; its transcendence degree is equal to the dimension of the variety. All extensions of K that are finitely generated as fields over K arise in this way from some algebraic variety. These field extensions are also known as algebraic function fields over K.
Properties of the variety V that depend only on the function field are studied in birational geometry.
== Examples ==
The function field of a point over K is K.
The function field of the affine line over K is isomorphic to the field K(t) of rational functions in one variable. This is also the function field of the projective line.
Consider the affine algebraic plane curve defined by the equation {\displaystyle y^{2}=x^{5}+1}. Its function field is the field K(x, y), generated by elements x and y that are transcendental over K and satisfy the algebraic relation {\displaystyle y^{2}=x^{5}+1}.
== See also ==
Algebraic function field
Cartier divisor
== References ==
David M. Goldschmidt (2002). Algebraic Functions and Projective Curves. Graduate Texts in Mathematics. Vol. 215. Springer-Verlag. ISBN 0-387-95432-5. | Wikipedia/Function_field_of_an_algebraic_variety |
In number theory, Tate's thesis is the 1950 PhD thesis of John Tate (1950) completed under the supervision of Emil Artin at Princeton University. In it, Tate used a translation invariant integration on the locally compact group of ideles to lift the zeta function twisted by a Hecke character, i.e. a Hecke L-function, of a number field to a zeta integral and study its properties. Using harmonic analysis, more precisely the Poisson summation formula, he proved the functional equation and meromorphic continuation of the zeta integral and the Hecke L-function. He also located the poles of the twisted zeta function. His work can be viewed as an elegant and powerful reformulation of a work of Erich Hecke on the proof of the functional equation of the Hecke L-function. Erich Hecke used a generalized theta series associated to an algebraic number field and a lattice in its ring of integers.
== Iwasawa–Tate theory ==
Kenkichi Iwasawa independently discovered essentially the same method (without an analog of the local theory in Tate's thesis) during the Second World War and announced it in his 1950 International Congress of Mathematicians paper and his letter to Jean Dieudonné written in 1952. Hence this theory is often called Iwasawa–Tate theory. Iwasawa in his letter to Dieudonné derived on several pages not only the meromorphic continuation and functional equation of the L-function, he also proved finiteness of the class number and Dirichlet's theorem on units as immediate byproducts of the main computation. The theory in positive characteristic was developed one decade earlier by Ernst Witt, Wilfried Schmid, and Oswald Teichmüller.
Iwasawa–Tate theory uses several structures which come from class field theory, however it does not use any deep result of class field theory.
== Generalisations ==
Iwasawa–Tate theory was extended to the general linear group GL(n) over an algebraic number field and automorphic representations of its adelic group by Roger Godement and Hervé Jacquet in 1972, which formed the foundations of the Langlands correspondence. Tate's thesis can be viewed as the GL(1) case of the work by Godement–Jacquet.
== See also ==
Basic Number Theory
== References ==
Godement, Roger; Jacquet, Hervé (1972), Zeta functions of simple algebras, Lect. Notes Math., vol. 260, Springer
Goldfeld, Dorian; Hundley, Joseph (2011), Automorphic representations of L-functions for the general linear group, Cambridge University Press
Iwasawa, Kenkichi (1952), "A note on L-functions", Proceedings of the International Congress of Mathematicians, Cambridge, Mass., 1950, vol. 1, Providence, R.I.: American Mathematical Society, p. 322, MR 0044534, archived from the original on 2011-10-03
Iwasawa, Kenkichi (1992) [1952], "Letter to J. Dieudonné", in Kurokawa, Nobushige; Sunada., T. (eds.), Zeta functions in geometry (Tokyo, 1990), Adv. Stud. Pure Math., vol. 21, Tokyo: Kinokuniya, pp. 445–450, ISBN 978-4-314-10078-6, MR 1210798
Kudla, Stephen S. (2003), "Tate's thesis", in Bernstein, Joseph; Gelbart, Stephen (eds.), An introduction to the Langlands program (Jerusalem, 2001), Boston, MA: Birkhäuser Boston, pp. 109–131, ISBN 978-0-8176-3211-3, MR 1990377
Ramakrishnan, Dinakar; Valenza, Robert J. (1999). Fourier analysis on number fields. Graduate Texts in Mathematics. Vol. 186. New York: Springer-Verlag. doi:10.1007/978-1-4757-3085-2. ISBN 0-387-98436-4. MR 1680912.
Tate, John T. (1950), "Fourier analysis in number fields, and Hecke's zeta-functions", Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), Thompson, Washington, D.C., pp. 305–347, ISBN 978-0-9502734-2-6, MR 0217026 {{citation}}: ISBN / Date incompatibility (help) | Wikipedia/Iwasawa–Tate_theory |
In mathematics, Arakelov theory (or Arakelov geometry) is an approach to Diophantine geometry, named for Suren Arakelov. It is used to study Diophantine equations in higher dimensions.
== Background ==
The main motivation behind Arakelov geometry is that there is a correspondence between prime ideals {\displaystyle {\mathfrak {p}}\in {\text{Spec}}(\mathbb {Z} )} and finite places {\displaystyle v_{p}:\mathbb {Q} ^{*}\to \mathbb {R} }, but there also exists a place at infinity {\displaystyle v_{\infty }}, given by the Archimedean valuation, which doesn't have a corresponding prime ideal. Arakelov geometry gives a technique for compactifying {\displaystyle {\text{Spec}}(\mathbb {Z} )} into a complete space {\displaystyle {\overline {{\text{Spec}}(\mathbb {Z} )}}} which has a prime lying at infinity. Arakelov's original construction studies one such theory, where a definition of divisors is constructed for a scheme {\displaystyle {\mathfrak {X}}} of relative dimension 1 over {\displaystyle {\text{Spec}}({\mathcal {O}}_{K})} such that it extends to a Riemann surface {\displaystyle X_{\infty }={\mathfrak {X}}(\mathbb {C} )} for every valuation at infinity. In addition, he equips these Riemann surfaces with Hermitian metrics on holomorphic vector bundles over X(C), the complex points of {\displaystyle X}. This extra Hermitian structure is applied as a substitute for the failure of the scheme Spec(Z) to be a complete variety.
Note that other techniques exist for constructing a complete space extending {\displaystyle {\text{Spec}}(\mathbb {Z} )}, which is the basis of F1 geometry.
=== Original definition of divisors ===
Let {\displaystyle K} be a field, {\displaystyle {\mathcal {O}}_{K}} its ring of integers, and {\displaystyle X} a genus {\displaystyle g} curve over {\displaystyle K} with a non-singular model {\displaystyle {\mathfrak {X}}\to {\text{Spec}}({\mathcal {O}}_{K})}, called an arithmetic surface. Also, let {\displaystyle \infty :K\to \mathbb {C} } be an inclusion of fields (which is supposed to represent a place at infinity), and let {\displaystyle X_{\infty }} be the associated Riemann surface from the base change to {\displaystyle \mathbb {C} }. Using this data, one can define a c-divisor as a formal linear combination
{\displaystyle D=\sum _{i}k_{i}C_{i}+\sum _{\infty }\lambda _{\infty }X_{\infty }}
where {\displaystyle C_{i}} is an irreducible closed subset of {\displaystyle {\mathfrak {X}}} of codimension 1, {\displaystyle k_{i}\in \mathbb {Z} }, and {\displaystyle \lambda _{\infty }\in \mathbb {R} }, and the sum {\displaystyle \sum _{\infty }\lambda _{\infty }X_{\infty }} represents the sum over every real embedding of {\displaystyle K\to \mathbb {R} } and over one embedding for each pair of complex embeddings {\displaystyle K\to \mathbb {C} }. The set of c-divisors forms a group {\displaystyle {\text{Div}}_{c}({\mathfrak {X}})}.
== Results ==
Arakelov (1974, 1975) defined an intersection theory on the arithmetic surfaces attached to smooth projective curves over number fields, with the aim of proving certain results, known in the case of function fields, in the case of number fields. Gerd Faltings (1984) extended Arakelov's work by establishing results such as a Riemann–Roch theorem, a Noether formula, a Hodge index theorem and the nonnegativity of the self-intersection of the dualizing sheaf in this context.
Arakelov theory was used by Paul Vojta (1991) to give a new proof of the Mordell conjecture, and by Gerd Faltings (1991) in his proof of Serge Lang's generalization of the Mordell conjecture.
Pierre Deligne (1987) developed a more general framework to define the intersection pairing defined on an arithmetic surface over the spectrum of a ring of integers by Arakelov. Shou-Wu Zhang (1992) developed a theory of positive line bundles and proved a Nakai–Moishezon type theorem for arithmetic surfaces. Further developments in the theory of positive line bundles by Zhang (1993, 1995a, 1995b) and Lucien Szpiro, Emmanuel Ullmo, and Zhang (1997) culminated in a proof of the Bogomolov conjecture by Ullmo (1998) and Zhang (1998).
Arakelov's theory was generalized by Henri Gillet and Christophe Soulé to higher dimensions. That is, Gillet and Soulé defined an intersection pairing on an arithmetic variety. One of the main results of Gillet and Soulé is the arithmetic Riemann–Roch theorem of Gillet & Soulé (1992), an extension of the Grothendieck–Riemann–Roch theorem to arithmetic varieties.
For this one defines arithmetic Chow groups CHp(X) of an arithmetic variety X, and defines Chern classes for Hermitian vector bundles over X taking values in the arithmetic Chow groups.
The arithmetic Riemann–Roch theorem then describes how the Chern class behaves under pushforward of vector bundles under a proper map of arithmetic varieties. A complete proof of this theorem was only published recently by Gillet, Rössler and Soulé.
Arakelov's intersection theory for arithmetic surfaces was developed further by Jean-Benoît Bost (1999). The theory of Bost is based on the use of Green functions which, up to logarithmic singularities, belong to the Sobolev space
{\displaystyle L_{1}^{2}}. In this context, Bost obtains an arithmetic Hodge index theorem and uses this to obtain Lefschetz theorems for arithmetic surfaces.
== Arithmetic Chow groups ==
An arithmetic cycle of codimension p is a pair (Z, g) where $Z \in Z^{p}(X)$ is a p-cycle on X and g is a Green current for Z, a higher-dimensional generalization of a Green function. The arithmetic Chow group $\widehat{\mathrm{CH}}_p(X)$ of codimension p is the quotient of the group of arithmetic cycles of codimension p by the subgroup generated by certain "trivial" cycles.
== The arithmetic Riemann–Roch theorem ==
The usual Grothendieck–Riemann–Roch theorem describes how the Chern character ch behaves under pushforward of sheaves: $\mathrm{ch}(f_{*}(E)) = f_{*}(\mathrm{ch}(E)\,\mathrm{Td}_{X/Y})$, where f is a proper morphism from X to Y and E is a vector bundle over X. The arithmetic Riemann–Roch theorem is similar, except that the Todd class gets multiplied by a certain power series.
The arithmetic Riemann–Roch theorem states
$${\hat{\mathrm{ch}}}(f_{*}([E])) = f_{*}\left({\hat{\mathrm{ch}}}(E)\,{\widehat{\mathrm{Td}}}^{R}(T_{X/Y})\right)$$
where
X and Y are regular projective arithmetic schemes,
f is a smooth proper map from X to Y,
E is an arithmetic vector bundle over X,
$\hat{\mathrm{ch}}$ is the arithmetic Chern character,
$T_{X/Y}$ is the relative tangent bundle,
$\hat{\mathrm{Td}}$ is the arithmetic Todd class, and
$\widehat{\mathrm{Td}}^{R}(E) = \hat{\mathrm{Td}}(E)\,(1 - \epsilon(R(E))),$ where
R(X) is the additive characteristic class associated to the formal power series
$$\sum_{\substack{m>0 \\ m\ \mathrm{odd}}} \frac{X^{m}}{m!}\left[2\zeta'(-m) + \zeta(-m)\left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{m}\right)\right].$$
== See also ==
Hodge–Arakelov theory
Hodge theory
P-adic Hodge theory
Adelic group
== Notes ==
== References ==
Arakelov, Suren J. (1974), "Intersection theory of divisors on an arithmetic surface", Math. USSR Izv., 8 (6): 1167–1180, doi:10.1070/IM1974v008n06ABEH002141, Zbl 0355.14002
Arakelov, Suren J. (1975), "Theory of intersections on an arithmetic surface", Proc. Internat. Congr. Mathematicians Vancouver, vol. 1, Amer. Math. Soc., pp. 405–408, Zbl 0351.14003
Bost, Jean-Benoît (1999), "Potential theory and Lefschetz theorems for arithmetic surfaces" (PDF), Annales Scientifiques de l'École Normale Supérieure, Série 4, 32 (2): 241–312, doi:10.1016/s0012-9593(99)80015-9, ISSN 0012-9593, Zbl 0931.14014
Deligne, P. (1987), "Le déterminant de la cohomologie", Current trends in arithmetical algebraic geometry (Arcata, Calif., 1985) [The determinant of the cohomology], Contemporary Mathematics, vol. 67, Providence, RI: American Mathematical Society, pp. 93–177, doi:10.1090/conm/067/902592, MR 0902592
Faltings, Gerd (1984), "Calculus on Arithmetic Surfaces", Annals of Mathematics, Second Series, 119 (2): 387–424, doi:10.2307/2007043, JSTOR 2007043
Faltings, Gerd (1991), "Diophantine Approximation on Abelian Varieties", Annals of Mathematics, Second Series, 133 (3): 549–576, doi:10.2307/2944319, JSTOR 2944319
Faltings, Gerd (1992), Lectures on the arithmetic Riemann–Roch theorem, Annals of Mathematics Studies, vol. 127, Princeton, NJ: Princeton University Press, doi:10.1515/9781400882472, ISBN 0-691-08771-7, MR 1158661
Gillet, Henri; Soulé, Christophe (1992), "An arithmetic Riemann–Roch Theorem", Inventiones Mathematicae, 110: 473–543, doi:10.1007/BF01231343
Kawaguchi, Shu; Moriwaki, Atsushi; Yamaki, Kazuhiko (2002), "Introduction to Arakelov geometry", Algebraic geometry in East Asia (Kyoto, 2001), River Edge, NJ: World Sci. Publ., pp. 1–74, doi:10.1142/9789812705105_0001, ISBN 978-981-238-265-8, MR 2030448
Lang, Serge (1988), Introduction to Arakelov theory, New York: Springer-Verlag, doi:10.1007/978-1-4612-1031-3, ISBN 0-387-96793-1, MR 0969124, Zbl 0667.14001
Manin, Yu. I.; Panchishkin, A. A. (2007). Introduction to Modern Number Theory. Encyclopaedia of Mathematical Sciences. Vol. 49 (Second ed.). ISBN 978-3-540-20364-3. ISSN 0938-0396. Zbl 1079.11002.
Soulé, Christophe (2001) [1994], "Arakelov theory", Encyclopedia of Mathematics, EMS Press
Soulé, C.; with the collaboration of D. Abramovich, J.-F. Burnol and J. Kramer (1992), Lectures on Arakelov geometry, Cambridge Studies in Advanced Mathematics, vol. 33, Cambridge: Cambridge University Press, pp. viii+177, doi:10.1017/CBO9780511623950, ISBN 0-521-41669-8, MR 1208731
Szpiro, Lucien; Ullmo, Emmanuel; Zhang, Shou-Wu (1997), "Equirépartition des petits points", Inventiones Mathematicae, 127 (2): 337–347, Bibcode:1997InMat.127..337S, doi:10.1007/s002220050123, S2CID 119668209.
Ullmo, Emmanuel (1998), "Positivité et Discrétion des Points Algébriques des Courbes", Annals of Mathematics, 147 (1): 167–179, arXiv:alg-geom/9606017, doi:10.2307/120987, Zbl 0934.14013
Vojta, Paul (1991), "Siegel's Theorem in the Compact Case", Annals of Mathematics, 133 (3), Annals of Mathematics, Vol. 133, No. 3: 509–548, doi:10.2307/2944318, JSTOR 2944318
Zhang, Shou-Wu (1992), "Positive line bundles on arithmetic surfaces", Annals of Mathematics, 136 (3): 569–587, doi:10.2307/2946601.
Zhang, Shou-Wu (1993), "Admissible pairing on a curve", Inventiones Mathematicae, 112 (1): 421–432, Bibcode:1993InMat.112..171Z, doi:10.1007/BF01232429, S2CID 120229374.
Zhang, Shou-Wu (1995a), "Small points and adelic metrics", Journal of Algebraic Geometry, 8 (1): 281–300.
Zhang, Shou-Wu (1995b), "Positive line bundles on arithmetic varieties", Journal of the American Mathematical Society, 136 (3): 187–221, doi:10.1090/S0894-0347-1995-1254133-7.
Zhang, Shou-Wu (1996), "Heights and reductions of semi-stable varieties", Compositio Mathematica, 104 (1): 77–105.
Zhang, Shou-Wu (1998), "Equidistribution of small points on abelian varieties", Annals of Mathematics, 147 (1): 159–165, doi:10.2307/120986, JSTOR 120986.
== External links ==
Original paper
Arakelov geometry preprint archive
In number theory, Dirichlet's theorem on Diophantine approximation, also called Dirichlet's approximation theorem, states that for any real numbers $\alpha$ and $N$, with $1 \leq N$, there exist integers $p$ and $q$ such that $1 \leq q \leq N$ and
$$\left|q\alpha - p\right| \leq \frac{1}{\lfloor N \rfloor + 1} < \frac{1}{N}.$$
Here $\lfloor N \rfloor$ represents the integer part of $N$.
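Since the bound is smaller than 1/2, the integer $p$ for the successful $q$ must be the nearest integer to $q\alpha$, so the theorem can be checked numerically by a simple scan over denominators. A sketch (illustrative helper, not part of the original statement; floating-point α is assumed accurate enough for the chosen N):

```python
from math import floor, sqrt

def dirichlet_pair(alpha, N):
    """Find integers p, q with 1 <= q <= N and
    |q*alpha - p| <= 1/(floor(N) + 1), as Dirichlet's theorem guarantees."""
    bound = 1 / (floor(N) + 1)
    for q in range(1, floor(N) + 1):
        p = round(q * alpha)  # the nearest integer minimizes |q*alpha - p|
        if abs(q * alpha - p) <= bound:
            return p, q

p, q = dirichlet_pair(sqrt(2), 10)  # p = 7, q = 5: |5*sqrt(2) - 7| ≈ 0.071 <= 1/11
```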
This is a fundamental result in Diophantine approximation, showing that any real number has a sequence of good rational approximations: in fact an immediate consequence is that for a given irrational α, the inequality
$$\left|\alpha - \frac{p}{q}\right| < \frac{1}{q^{2}}$$
is satisfied by infinitely many integers p and q. This shows that any irrational number has irrationality measure at least 2.
The Thue–Siegel–Roth theorem says that, for algebraic irrational numbers, the exponent of 2 in the corollary to Dirichlet's approximation theorem is the best we can do: such numbers cannot be approximated by any exponent greater than 2. The Thue–Siegel–Roth theorem uses advanced techniques of number theory, but many simpler numbers such as the golden ratio $(1+{\sqrt{5}})/2$ can be much more easily verified to be inapproximable beyond exponent 2.
== Simultaneous version ==
The simultaneous version of Dirichlet's approximation theorem states that given real numbers $\alpha_{1}, \ldots, \alpha_{d}$ and a natural number $N$, there are integers $p_{1}, \ldots, p_{d}, q \in \mathbb{Z}$ with $1 \leq q \leq N^{d}$ such that
$$\left|\alpha_{i} - \frac{p_{i}}{q}\right| \leq \frac{1}{qN}.$$
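As an illustration of the simultaneous statement, the guaranteed denominator can be found by brute force: scan q up to $N^d$ and round each $q\alpha_i$ to the nearest integer. This is a sketch for small d and N only (the function name is ours, not standard), since the search space grows as $N^d$:

```python
from math import sqrt

def simultaneous_approx(alphas, N):
    """Search for q <= N**len(alphas) and integers p_i with
    |alpha_i - p_i/q| <= 1/(q*N) for every i, as the theorem guarantees."""
    d = len(alphas)
    for q in range(1, N**d + 1):
        ps = [round(q * a) for a in alphas]  # nearest integers to q*alpha_i
        if all(abs(a - p / q) <= 1 / (q * N) for a, p in zip(alphas, ps)):
            return ps, q

# One denominator q approximates sqrt(2) and sqrt(3) simultaneously.
ps, q = simultaneous_approx([sqrt(2), sqrt(3)], 4)
```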
== Method of proof ==
=== Proof by the pigeonhole principle ===
This theorem is a consequence of the pigeonhole principle. Peter Gustav Lejeune Dirichlet, who proved the result, used the same principle in other contexts (for example, the Pell equation) and, by naming the principle (in German), popularized its use, though its status as a standard textbook item came later. The method extends to simultaneous approximation.
Proof outline: Let $\alpha$ be an irrational number and $N$ be an integer. For every $k = 0, 1, \ldots, N$ we can write $k\alpha = m_{k} + x_{k}$ such that $m_{k}$ is an integer and $0 \leq x_{k} < 1$.
One can divide the interval $[0,1)$ into $N$ smaller intervals of measure $\frac{1}{N}$. Now, we have $N+1$ numbers $x_{0}, x_{1}, \ldots, x_{N}$ and $N$ intervals. Therefore, by the pigeonhole principle, at least two of them are in the same interval. We can call those $x_{i}, x_{j}$ such that $i < j$. Now:
$$|(j-i)\alpha - (m_{j} - m_{i})| = |j\alpha - m_{j} - (i\alpha - m_{i})| = |x_{j} - x_{i}| < \frac{1}{N}$$
Dividing both sides by $j - i$ gives:
$$\left|\alpha - \frac{m_{j} - m_{i}}{j - i}\right| < \frac{1}{(j-i)N} \leq \frac{1}{(j-i)^{2}},$$
where the final inequality holds because $j - i \leq N$. This proves the theorem.
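The proof is constructive, and the bin-collision argument translates directly into code. The sketch below (our naming; floating-point α assumed precise enough for the chosen N) drops the fractional parts into N bins and returns the pair produced by the first collision:

```python
from math import floor, sqrt

def pigeonhole_approx(alpha, N):
    """Place the fractional parts x_k of k*alpha (k = 0..N) into N bins of
    width 1/N; a collision at k > i yields q = k - i with |q*alpha - p| < 1/N."""
    seen = {}
    for k in range(N + 1):
        frac = k * alpha - floor(k * alpha)   # x_k in [0, 1)
        b = int(frac * N)                      # bin index in 0..N-1
        if b in seen:
            i = seen[b]
            p = floor(k * alpha) - floor(i * alpha)  # m_k - m_i
            return p, k - i
        seen[b] = k
    # unreachable: N+1 values in N bins must collide

p, q = pigeonhole_approx(sqrt(2), 10)  # |q*sqrt(2) - p| < 1/10
```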
=== Proof by Minkowski's theorem ===
Another simple proof of Dirichlet's approximation theorem is based on Minkowski's theorem applied to the set
$$S = \left\{(x,y)\in \mathbb{R}^{2} : -N-\tfrac{1}{2} \leq x \leq N+\tfrac{1}{2},\ |\alpha x - y| \leq \tfrac{1}{N}\right\}.$$
Since the volume of $S$ is greater than $4$, Minkowski's theorem establishes the existence of a non-trivial point with integral coordinates. This proof extends naturally to simultaneous approximations by considering the set
$$S = \left\{(x,y_{1},\dots,y_{d})\in \mathbb{R}^{1+d} : -N-\tfrac{1}{2} \leq x \leq N+\tfrac{1}{2},\ |\alpha_{i}x - y_{i}| \leq \tfrac{1}{N^{1/d}}\right\}.$$
== Related theorems ==
=== Legendre's theorem on continued fractions ===
In his Essai sur la théorie des nombres (1798), Adrien-Marie Legendre derives a necessary and sufficient condition for a rational number to be a convergent of the simple continued fraction of a given real number. A consequence of this criterion, often called Legendre's theorem within the study of continued fractions, is as follows:
Theorem. If α is a real number and p, q are positive integers such that
$$\left|\alpha - \frac{p}{q}\right| < \frac{1}{2q^{2}},$$
then p/q is a convergent of the continued fraction of α.
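To make the criterion concrete, here is a small sketch using exact rational arithmetic (the function name is ours, not a standard API): it computes the convergents via the standard recurrences $h_n = a_n h_{n-1} + h_{n-2}$, $k_n = a_n k_{n-1} + k_{n-2}$, and checks Legendre's criterion on an example.

```python
from fractions import Fraction
from math import floor

def convergents(alpha):
    """Convergents p/q of the simple continued fraction of a rational
    number alpha (given as a Fraction, so the expansion terminates)."""
    hm1, hm2 = 1, 0   # h_{n-1}, h_{n-2}
    km1, km2 = 0, 1   # k_{n-1}, k_{n-2}
    out = []
    x = alpha
    while True:
        a = floor(x)
        h = a * hm1 + hm2
        k = a * km1 + km2
        out.append(Fraction(h, k))
        hm1, hm2, km1, km2 = h, hm1, k, km1
        if x == a:
            break
        x = 1 / (x - a)
    return out

alpha = Fraction(415, 93)           # continued fraction [4; 2, 6, 7]
cs = convergents(alpha)             # [4, 9/2, 58/13, 415/93]
p, q = 9, 2
assert abs(alpha - Fraction(p, q)) < Fraction(1, 2 * q * q)  # Legendre hypothesis
assert Fraction(p, q) in cs         # ... so 9/2 must be a convergent
```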
This theorem forms the basis for Wiener's attack, a polynomial-time exploit of the RSA cryptographic protocol that can occur for an injudicious choice of public and private keys (specifically, this attack succeeds if the prime factors of the public key n = pq satisfy p < q < 2p and the private key d is less than (1/3)n^{1/4}).
== See also ==
Dirichlet's theorem on arithmetic progressions
Hurwitz's theorem (number theory)
Heilbronn set
Kronecker's theorem (generalization of Dirichlet's theorem)
== Notes ==
== References ==
Schmidt, Wolfgang M (1980). Diophantine Approximation. Lecture Notes in Mathematics. Vol. 785. Springer. doi:10.1007/978-3-540-38645-2. ISBN 978-3-540-38645-2.
Schmidt, Wolfgang M. (1991). Diophantine Approximations and Diophantine Equations. Lecture Notes in Mathematics book series. Vol. 1467. Springer. doi:10.1007/BFb0098246. ISBN 978-3-540-47374-9. S2CID 118143570.
== External links ==
Dirichlet's Approximation Theorem at PlanetMath.
In mathematics, Hodge–Arakelov theory of elliptic curves is an analogue of classical and p-adic Hodge theory for elliptic curves, carried out in the framework of Arakelov theory. It was introduced by Mochizuki (1999) and is named after the two mathematicians Suren Arakelov and W. V. D. Hodge.
The main comparison in his theory remains unpublished as of 2019.
Mochizuki's main comparison theorem in Hodge–Arakelov theory states (roughly) that the space of polynomial functions of degree less than d on the universal extension of a smooth elliptic curve in characteristic 0 is naturally isomorphic (via restriction) to the d2-dimensional space of functions on the d-torsion points.
It is called a 'comparison theorem' as it is an analogue for Arakelov theory of comparison theorems in cohomology relating de Rham cohomology to singular cohomology of complex varieties or étale cohomology of p-adic varieties.
In Mochizuki (1999) and Mochizuki (2002a), he pointed out that the arithmetic Kodaira–Spencer map and Gauss–Manin connection may give some important hints for Vojta's conjecture, the abc conjecture, and so on. In 2012, he published his inter-universal Teichmüller theory, which does not use Hodge–Arakelov theory but instead builds on the theory of Frobenioids, anabelioids, and mono-anabelian geometry.
== See also ==
Hodge theory
Arakelov theory
P-adic Hodge theory
Inter-universal Teichmüller theory
== References ==
Mochizuki, Shinichi (1999), The Hodge-Arakelov theory of elliptic curves: global discretization of local Hodge theories (PDF), Preprint No. 1255/1256, Res. Inst. Math. Sci., Kyoto Univ., Kyoto
Mochizuki, Shinichi (2002a), "A survey of the Hodge-Arakelov theory of elliptic curves. I", in Fried, Michael D.; Ihara, Yasutaka (eds.), Arithmetic fundamental groups and noncommutative algebra (Berkeley, CA, 1999) (PDF), Proc. Sympos. Pure Math., vol. 70, Providence, R.I.: American Mathematical Society, pp. 533–569, ISBN 978-0-8218-2036-0, MR 1935421
Mochizuki, Shinichi (2002b), "A survey of the Hodge-Arakelov theory of elliptic curves. II", Algebraic geometry 2000, Azumino (Hotaka) (PDF), Adv. Stud. Pure Math., vol. 36, Tokyo: Math. Soc. Japan, pp. 81–114, ISBN 978-4-931469-20-4, MR 1971513
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing.
Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City.
== History ==
Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second-largest academic publisher with 65 staff in 1872. In 1964, Springer expanded its business internationally, opening an office in New York City. Offices in Tokyo, Paris, Milan, Hong Kong, and Delhi soon followed.
In 1999, the academic publishing company BertelsmannSpringer was formed after the media and entertainment company Bertelsmann bought a majority stake in Springer-Verlag. In 2003, the British investment groups Cinven and Candover bought BertelsmannSpringer from Bertelsmann. They merged the company in 2004 with the Dutch publisher Kluwer Academic Publishers (successor of D. Reidel, Dr. W. Junk, Plenum Publishers, most of Chapman & Hall, and Baltzer Science Publishers) which they bought from Wolters Kluwer in 2002, to form Springer Science+Business Media.
In 2006, Springer acquired Humana Press.
Springer acquired the open-access publisher BioMed Central in October 2008 for an undisclosed amount.
In 2009, Cinven and Candover sold Springer to two private equity firms, EQT AB and Government of Singapore Investment Corporation, confirmed in February 2010 after the competition authorities in the US and in Europe approved the transfer.
In 2011, Springer acquired Pharma Marketing and Publishing Services (MPS) from Wolters Kluwer.
In 2013, the London-based private equity firm BC Partners acquired a majority stake in Springer from EQT and GIC for $4.4 billion.
In January 2015, Holtzbrinck Publishing Group / Nature Publishing Group and Springer Science+Business Media announced a merger. In May 2015, they concluded the transaction and formed a new joint venture company, Springer Nature, with Holtzbrinck holding a majority 53% share and BC Partners retaining a 47% interest in the company.
== Products ==
In 1996, Springer launched electronic book and journal content on its SpringerLink site.
SpringerImages was launched in 2008. In 2009, SpringerMaterials, a platform for accessing the Landolt-Börnstein database of research and information on materials and their properties, was launched.
AuthorMapper is a free online tool for visualizing scientific research that enables document discovery based on author locations and geographic maps, helping users explore patterns in scientific research, identify literature trends, discover collaborative relationships, and locate experts in several scientific/medical fields.
Springer Protocols contained a collection of laboratory protocols, recipes that provide step-by-step instructions for conducting experiments, which in 2018 was made available in SpringerLink instead.
Book publications include major reference works, textbooks, monographs and book series; more than 168,000 titles are available as e-books in 24 subject collections.
=== Open access ===
Springer is a member of the Open Access Scholarly Publishers Association. For some of its journals, Springer does not require its authors to transfer their copyrights, and allows them to decide whether their articles are published under an open-access license or in the traditional restricted-license model. While open-access publishing typically requires the author to pay a fee for copyright retention, this fee is sometimes covered by a third party. For example, a national institution in Poland allows authors to publish in open-access journals without incurring any personal cost, using public funds instead.
== Controversies ==
In 1938, Springer-Verlag was pressed to apply Nazi principles on the journal Zentralblatt MATH. Tullio Levi-Civita, who was Jewish, was forced out from the editorial board, and Otto Neugebauer resigned in protest along with most of the rest of the board.
In 2014, it was revealed that 16 papers in conference proceedings published by Springer had been computer-generated using SCIgen. Springer subsequently retracted all papers from these proceedings. IEEE had removed more than 100 fake papers from its conference proceedings.
In 2015, Springer retracted 64 papers from 10 of its journals it had published after a fraudulent peer review process was uncovered.
=== Manipulation of bibliometrics ===
According to Goodhart's law and concerned academics like the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics like the journal impact factor, which is often used as a proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics.
Seven Springer Nature journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total.
== Selected imprints ==
== Selected publications ==
Cellular Oncology
Encyclopaedia of Mathematics
Ergebnisse der Mathematik und ihrer Grenzgebiete (book series)
Graduate Texts in Mathematics (book series)
Grothendieck's Séminaire de géométrie algébrique
The International Journal of Advanced Manufacturing Technology
Lecture Notes in Computer Science
Undergraduate Texts in Mathematics (book series)
Zentralblatt MATH
MRS Bulletin
== See also ==
Category:Springer Science+Business Media academic journals
List of publishers
Media concentration
== References ==
== External links ==
Official website
Mary H. Munroe (2004). "Springer Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on 2014-10-20 – via Northern Illinois University.
In mathematics, a ring is an algebraic structure consisting of a set with two binary operations called addition and multiplication, which obey the same basic laws as addition and multiplication of integers, except that multiplication in a ring does not need to be commutative. Ring elements may be numbers such as integers or complex numbers, but they may also be non-numerical objects such as polynomials, square matrices, functions, and power series.
A ring may be defined as a set that is endowed with two binary operations called addition and multiplication such that the ring is an abelian group with respect to the addition operator, and the multiplication operator is associative, is distributive over the addition operation, and has a multiplicative identity element. (Some authors apply the term ring to a further generalization, often called a rng, that omits the requirement for a multiplicative identity, and instead call the structure defined above a ring with identity. See § Variations on terminology.)
Whether a ring is commutative (that is, its multiplication is a commutative operation) has profound implications on its properties. Commutative algebra, the theory of commutative rings, is a major branch of ring theory. Its development has been greatly influenced by problems and ideas of algebraic number theory and algebraic geometry.
Examples of commutative rings include every field, the integers, the polynomials in one or several variables with coefficients in another ring, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field. Examples of noncommutative rings include the ring of n × n real square matrices with n ≥ 2, group rings in representation theory, operator algebras in functional analysis, rings of differential operators, and cohomology rings in topology.
The conceptualization of rings spanned the 1870s to the 1920s, with key contributions by Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. They later proved useful in other branches of mathematics such as geometry and analysis.
Rings appear in the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields
== Definition ==
A ring is a set R equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms:
R is an abelian group under addition, meaning that:
(a + b) + c = a + (b + c) for all a, b, c in R (that is, + is associative).
a + b = b + a for all a, b in R (that is, + is commutative).
There is an element 0 in R such that a + 0 = a for all a in R (that is, 0 is the additive identity).
For each a in R there exists −a in R such that a + (−a) = 0 (that is, −a is the additive inverse of a).
R is a monoid under multiplication, meaning that:
(a · b) · c = a · (b · c) for all a, b, c in R (that is, ⋅ is associative).
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R (that is, 1 is the multiplicative identity).
Multiplication is distributive with respect to addition, meaning that:
a · (b + c) = (a · b) + (a · c) for all a, b, c in R (left distributivity).
(b + c) · a = (b · a) + (c · a) for all a, b, c in R (right distributivity).
In notation, the multiplication symbol · is often omitted, in which case a · b is written as ab.
=== Variations on terminology ===
In the terminology of this article, a ring is defined to have a multiplicative identity, while a structure with the same axiomatic definition but without the requirement for a multiplicative identity is instead called a "rng" with a missing "i". For example, the set of even integers with the usual + and ⋅ is a rng, but not a ring. As explained in § History below, many authors apply the term "ring" without requiring a multiplicative identity.
Although ring addition is commutative, ring multiplication is not required to be commutative: ab need not necessarily equal ba. Rings that also satisfy commutativity for multiplication (such as the ring of integers) are called commutative rings. Books on commutative algebra or algebraic geometry often adopt the convention that ring means commutative ring, to simplify terminology.
In a ring, multiplicative inverses are not required to exist. A nonzero commutative ring in which every nonzero element has a multiplicative inverse is called a field.
The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms. The proof makes use of the "1", and does not work in a rng. (For a rng, omitting the axiom of commutativity of addition leaves it inferable from the remaining rng assumptions only for elements that are products: ab + cd = cd + ab.)
There are a few authors who use the term "ring" to refer to structures in which there is no requirement for multiplication to be associative. For these authors, every algebra is a "ring".
== Illustration ==
The most familiar example of a ring is the set of all integers $\mathbb{Z}$, consisting of the numbers
$$\dots, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, \dots$$
The axioms of a ring were elaborated as a generalization of familiar properties of addition and multiplication of integers.
=== Some properties ===
Some basic properties of a ring follow immediately from the axioms:
The additive identity is unique.
The additive inverse of each element is unique.
The multiplicative identity is unique.
For any element x in a ring R, one has x0 = 0 = 0x (zero is an absorbing element with respect to multiplication) and (–1)x = –x.
If 0 = 1 in a ring R (or more generally, 0 is a unit element), then R has only one element, and is called the zero ring.
If a ring R contains the zero ring as a subring, then R itself is the zero ring.
The binomial formula holds for any x and y satisfying xy = yx.
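The first statements about zero follow from distributivity alone; a short derivation of the absorbing property and of $(-1)x = -x$ (standard, included here for completeness):

```latex
x \cdot 0 = x \cdot (0 + 0) = x \cdot 0 + x \cdot 0
\;\Longrightarrow\; x \cdot 0 = 0
\quad \text{(add $-(x \cdot 0)$ to both sides),}
\\
x + (-1)x = 1 \cdot x + (-1) \cdot x = \bigl(1 + (-1)\bigr)x = 0 \cdot x = 0
\;\Longrightarrow\; (-1)x = -x.
```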
=== Example: Integers modulo 4 ===
Equip the set $\mathbb{Z}/4\mathbb{Z} = \left\{\overline{0}, \overline{1}, \overline{2}, \overline{3}\right\}$
with the following operations:
The sum $\overline{x} + \overline{y}$ in $\mathbb{Z}/4\mathbb{Z}$ is the remainder when the integer x + y is divided by 4 (as x + y is always smaller than 8, this remainder is either x + y or x + y − 4). For example, $\overline{2} + \overline{3} = \overline{1}$ and $\overline{3} + \overline{3} = \overline{2}$.
The product $\overline{x} \cdot \overline{y}$ in $\mathbb{Z}/4\mathbb{Z}$ is the remainder when the integer xy is divided by 4. For example, $\overline{2} \cdot \overline{3} = \overline{2}$ and $\overline{3} \cdot \overline{3} = \overline{1}$.
Then $\mathbb{Z}/4\mathbb{Z}$ is a ring: each axiom follows from the corresponding axiom for $\mathbb{Z}$.
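Because the ring is finite, the axioms can also be verified exhaustively by machine. The sketch below (illustrative only; it represents $\overline{x}$ by the integer x in 0..3) checks every axiom over all elements:

```python
from itertools import product

R = range(4)                        # representatives of Z/4Z
add = lambda x, y: (x + y) % 4      # addition modulo 4
mul = lambda x, y: (x * y) % 4      # multiplication modulo 4

for a, b, c in product(R, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))           # + associative
    assert add(a, b) == add(b, a)                           # + commutative
    assert mul(mul(a, b), c) == mul(a, mul(b, c))           # * associative
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))   # left distributive
    assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))   # right distributive
for a in R:
    assert add(a, 0) == a                                   # additive identity
    assert mul(a, 1) == a == mul(1, a)                      # multiplicative identity
    assert add(a, (-a) % 4) == 0                            # additive inverse
```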
If x is an integer, the remainder of x when divided by 4 may be considered as an element of $\mathbb{Z}/4\mathbb{Z}$, and this element is often denoted by "x mod 4" or $\overline{x}$, which is consistent with the notation for 0, 1, 2, 3. The additive inverse of any $\overline{x}$ in $\mathbb{Z}/4\mathbb{Z}$ is $-\overline{x} = \overline{-x}$. For example, $-\overline{3} = \overline{-3} = \overline{1}$.
Since any subring of $\mathbb{Z}/4\mathbb{Z}$ must contain the multiplicative identity $\overline{1}$, which generates the whole ring additively, $\mathbb{Z}/4\mathbb{Z}$ has no subrings other than itself; the same argument shows that $\mathbb{Z}/p\mathbb{Z}$ for $p$ prime has no proper subrings.
=== Example: 2-by-2 matrices ===
The set of 2-by-2 square matrices with entries in a field F is
$$\operatorname{M}_2(F) = \left\{\left.{\begin{pmatrix} a & b \\ c & d \end{pmatrix}}\,\right|\ a, b, c, d \in F\right\}.$$
With the operations of matrix addition and matrix multiplication, $\operatorname{M}_2(F)$ satisfies the above ring axioms. The element $\left({\begin{smallmatrix}1&0\\0&1\end{smallmatrix}}\right)$ is the multiplicative identity of the ring. If $A = \left({\begin{smallmatrix}0&1\\1&0\end{smallmatrix}}\right)$ and $B = \left({\begin{smallmatrix}0&1\\0&0\end{smallmatrix}}\right)$, then $AB = \left({\begin{smallmatrix}0&0\\0&1\end{smallmatrix}}\right)$ while $BA = \left({\begin{smallmatrix}1&0\\0&0\end{smallmatrix}}\right)$; this example shows that the ring is noncommutative.
More generally, for any ring R, commutative or not, and any nonnegative integer n, the square n × n matrices with entries in R form a ring; see Matrix ring.
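The noncommutativity of the matrix product above is easy to confirm directly; a minimal sketch with plain Python lists (no libraries assumed, helper name ours):

```python
def matmul(X, Y):
    """Product of two 2-by-2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 0]]
B = [[0, 1], [0, 0]]

assert matmul(A, B) == [[0, 0], [0, 1]]   # AB
assert matmul(B, A) == [[1, 0], [0, 0]]   # BA
assert matmul(A, B) != matmul(B, A)       # the ring M_2(F) is not commutative
```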
== History ==
=== Dedekind ===
The study of rings originated from the theory of polynomial rings and the theory of algebraic integers. In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. In this context, he introduced the terms "ideal" (inspired by Ernst Kummer's notion of ideal number) and "module" and studied their properties. Dedekind did not use the term "ring" and did not define the concept of a ring in a general setting.
=== Hilbert ===
The term "Zahlring" (number ring) was coined by David Hilbert in 1892 and published in 1897. In 19th century German, the word "Ring" could mean "association", which is still used today in English in a limited sense (for example, spy ring), so if that were the etymology then it would be similar to the way "group" entered mathematics by being a non-technical word for "collection of related things". According to Harvey Cohn, Hilbert used the term for a ring that had the property of "circling directly back" to an element of itself (in the sense of an equivalence). Specifically, in a ring of algebraic integers, all high powers of an algebraic integer can be written as an integral combination of a fixed set of lower powers, and thus the powers "cycle back". For instance, if $a^{3} - 4a + 1 = 0$ then:
$${\begin{aligned}a^{3}&=4a-1,\\a^{4}&=4a^{2}-a,\\a^{5}&=-a^{2}+16a-4,\\a^{6}&=16a^{2}-8a+1,\\a^{7}&=-8a^{2}+65a-16,\\&\;\;\vdots \end{aligned}}$$
and so on; in general, an is going to be an integral linear combination of 1, a, and a2.
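This "cycling back" can be mechanized: the sketch below (function name ours, for illustration) rewrites $a^n$ as an integral combination of 1, a, and a² by repeatedly applying the relation $a^3 = 4a - 1$:

```python
def power_coeffs(n):
    """Return (c0, c1, c2) with a**n = c0 + c1*a + c2*a**2,
    using the relation a**3 = 4*a - 1 to reduce the degree."""
    c0, c1, c2 = 1, 0, 0                 # a**0 = 1
    for _ in range(n):
        # multiply by a: the term c2*a**3 becomes c2*(4a - 1)
        c0, c1, c2 = -c2, c0 + 4 * c2, c1
    return c0, c1, c2

assert power_coeffs(3) == (-1, 4, 0)     # a^3 = 4a - 1
assert power_coeffs(7) == (-16, 65, -8)  # a^7 = -8a^2 + 65a - 16
```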
=== Fraenkel and Noether ===
The first axiomatic definition of a ring was given by Adolf Fraenkel in 1915, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave a modern axiomatic definition of commutative rings (with and without 1) and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen.
=== Multiplicative identity and the term "ring" ===
Fraenkel applied the term "ring" to structures with axioms that included a multiplicative identity, whereas Noether applied it to structures that did not.
Most or all books on algebra up to around 1960 followed Noether's convention of not requiring a 1 for a "ring". Starting in the 1960s, it became increasingly common to see books including the existence of 1 in the definition of "ring", especially in advanced books by notable authors such as Artin, Bourbaki, Eisenbud, and Lang. There are also books published as late as 2022 that use the term without the requirement for a 1. Likewise, the Encyclopedia of Mathematics does not require unit elements in rings. In a research article, the authors often specify which definition of ring they use in the beginning of that article.
Gardner and Wiegandt assert that, when dealing with several objects in the category of rings (as opposed to working with a fixed ring), if one requires all rings to have a 1, then some consequences include the lack of existence of infinite direct sums of rings, and that proper direct summands of rings are not subrings. They conclude that "in many, maybe most, branches of ring theory the requirement of the existence of a unity element is not sensible, and therefore unacceptable." Poonen makes the counterargument that the natural notion for rings would be the direct product rather than the direct sum. However, his main argument is that rings without a multiplicative identity are not totally associative, in the sense that they do not contain the product of any finite sequence of ring elements, including the empty sequence.
Authors who follow either convention for the use of the term "ring" may use one of the following terms to refer to objects satisfying the other convention:
to include a requirement for a multiplicative identity: "unital ring", "unitary ring", "unit ring", "ring with unity", "ring with identity", "ring with a unit", or "ring with 1".
to omit a requirement for a multiplicative identity: "rng" or "pseudo-ring", although the latter may be confusing because it also has other meanings.
== Basic examples ==
=== Commutative rings ===
The prototypical example is the ring of integers with the two operations of addition and multiplication.
The rational, real and complex numbers are commutative rings of a type called fields.
A unital associative algebra over a commutative ring R is itself a ring as well as an R-module. Some examples:
The algebra R[X] of polynomials with coefficients in R.
The algebra R[[X₁, …, Xₙ]] of formal power series with coefficients in R.
The set of all continuous real-valued functions defined on the real line forms a commutative ℝ-algebra. The operations are pointwise addition and multiplication of functions.
Let X be a set, and let R be a ring. Then the set of all functions from X to R forms a ring, which is commutative if R is commutative.
The ring of quadratic integers, the integral closure of ℤ in a quadratic extension of ℚ. It is a subring of the ring of all algebraic integers.
The ring of profinite integers Ẑ, the (infinite) product of the rings of p-adic integers ℤₚ over all prime numbers p.
The Hecke ring, the ring generated by Hecke operators.
If S is a set, then the power set of S becomes a ring if we define addition to be the symmetric difference of sets and multiplication to be intersection. This is an example of a Boolean ring.
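The Boolean-ring example is easy to verify exhaustively for a small set. The following check (illustrative code, with S = {1, 2, 3} chosen arbitrarily) confirms the ring identities for symmetric difference as addition and intersection as multiplication:

```python
# Sketch: the power set of S = {1, 2, 3} as a Boolean ring, with symmetric
# difference as addition and intersection as multiplication.
from itertools import chain, combinations

S = {1, 2, 3}
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

add = lambda a, b: a ^ b   # symmetric difference
mul = lambda a, b: a & b   # intersection

for a in subsets:
    assert add(a, a) == frozenset()     # every element is its own negative
    assert mul(a, a) == a               # every element is idempotent (Boolean)
    assert mul(a, frozenset(S)) == a    # S itself is the multiplicative identity
    for b in subsets:
        for c in subsets:
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
print("all Boolean ring identities hold")
```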
=== Noncommutative rings ===
For any ring R and any natural number n, the set of all square n-by-n matrices with entries from R forms a ring with matrix addition and matrix multiplication as operations. For n = 1, this matrix ring is isomorphic to R itself. For n > 1 (and R not the zero ring), this matrix ring is noncommutative.
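A concrete witness to the noncommutativity for n = 2 (a minimal sketch, with the two matrices chosen for illustration):

```python
# Sketch: 2-by-2 integer matrices form a noncommutative ring; here is a pair
# of matrices with AB != BA.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

print(matmul(A, B))  # [[1, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 1]]
```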
If G is an abelian group, then the endomorphisms of G form a ring, the endomorphism ring End(G) of G. The operations in this ring are addition and composition of endomorphisms. More generally, if V is a left module over a ring R, then the set of all R-linear maps forms a ring, also called the endomorphism ring and denoted by EndR(V).
The endomorphism ring of an elliptic curve. It is a commutative ring if the elliptic curve is defined over a field of characteristic zero.
If G is a group and R is a ring, the group ring of G over R is a free module over R having G as basis. Multiplication is defined by the rules that the elements of G commute with the elements of R and multiply together as they do in the group G.
The ring of differential operators (depending on the context). In fact, many rings that appear in analysis are noncommutative. For example, most Banach algebras are noncommutative.
=== Non-rings ===
The set of natural numbers ℕ with the usual operations is not a ring, since (ℕ, +) is not even a group (not all the elements are invertible with respect to addition – for instance, there is no natural number which can be added to 3 to get 0 as a result). There is a natural way to enlarge it to a ring, by including negative numbers to produce the ring of integers ℤ. The natural numbers (including 0) form an algebraic structure known as a semiring (which has all of the axioms of a ring excluding that of an additive inverse).
Let R be the set of all continuous functions on the real line that vanish outside a bounded interval that depends on the function, with addition as usual but with multiplication defined as convolution:
{\displaystyle (f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy.}
Then R is a rng, but not a ring: the Dirac delta function has the property of a multiplicative identity, but it is not a function and hence is not an element of R.
== Basic concepts ==
=== Products and powers ===
For each nonnegative integer n, given a sequence (a₁, …, aₙ) of n elements of R, one can define the product Pₙ = ∏ᵢ₌₁ⁿ aᵢ recursively: let P₀ = 1 and let Pₘ = Pₘ₋₁aₘ for 1 ≤ m ≤ n.
As a special case, one can define nonnegative integer powers of an element a of a ring: a⁰ = 1 and aⁿ = aⁿ⁻¹a for n ≥ 1. Then aᵐ⁺ⁿ = aᵐaⁿ for all m, n ≥ 0.
=== Elements in a ring ===
A left zero divisor of a ring R is an element a in the ring such that there exists a nonzero element b of R such that ab = 0. A right zero divisor is defined similarly.
A nilpotent element is an element a such that aⁿ = 0 for some n > 0. One example of a nilpotent element is a nilpotent matrix. A nilpotent element in a nonzero ring is necessarily a zero divisor.
An idempotent e is an element such that e² = e. One example of an idempotent element is a projection in linear algebra.
A unit is an element a having a multiplicative inverse; in this case the inverse is unique, and is denoted by a⁻¹. The set of units of a ring is a group under ring multiplication; this group is denoted by R×, R∗, or U(R). For example, if R is the ring of all square matrices of size n over a field, then R× consists of the set of all invertible matrices of size n, and is called the general linear group.
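For a finite example, the units of ℤ/12ℤ can be computed directly (a sketch; the choice of modulus 12 is arbitrary). An element a is a unit exactly when gcd(a, 12) = 1:

```python
# Sketch: the unit group (Z/12Z)^x under multiplication mod 12.
from math import gcd

n = 12
units = [a for a in range(n) if gcd(a, n) == 1]
print(units)  # [1, 5, 7, 11]

# Group structure: products of units are units, and every unit has an inverse.
for a in units:
    assert any(a * b % n == 1 for b in units)
    for b in units:
        assert (a * b) % n in units
```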
=== Subring ===
A subset S of R is called a subring if any one of the following equivalent conditions holds:
the addition and multiplication of R restrict to give operations S × S → S making S a ring with the same multiplicative identity as R.
1 ∈ S; and for all x, y in S, the elements xy, x + y, and −x are in S.
S can be equipped with operations making it a ring such that the inclusion map S → R is a ring homomorphism.
For example, the ring ℤ of integers is a subring of the field of real numbers and also a subring of the ring of polynomials ℤ[X] (in both cases, ℤ contains 1, which is the multiplicative identity of the larger rings). On the other hand, the subset of even integers 2ℤ does not contain the identity element 1 and thus does not qualify as a subring of ℤ; one could call 2ℤ a subrng, however.
An intersection of subrings is a subring. Given a subset E of R, the smallest subring of R containing E is the intersection of all subrings of R containing E, and it is called the subring generated by E.
For a ring R, the smallest subring of R is called the characteristic subring of R. It can be generated through addition of copies of 1 and −1. It is possible that n · 1 = 1 + 1 + ... + 1 (n times) is zero. If n is the smallest positive integer such that this occurs, then n is called the characteristic of R. In some rings, n · 1 is never zero for any positive integer n, and those rings are said to have characteristic zero.
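The definition of the characteristic translates directly into a loop (a minimal sketch for R = ℤ/nℤ; the helper name is illustrative, not standard):

```python
# Sketch: the characteristic of Z/nZ, found by adding copies of 1 until the
# running sum is zero.

def characteristic_mod(n):
    """Smallest k > 0 with k * 1 == 0 in Z/nZ."""
    total, k = 1 % n, 1
    while total != 0:
        total = (total + 1) % n
        k += 1
    return k

print(characteristic_mod(12))  # 12
```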
Given a ring R, let Z(R) denote the set of all elements x in R such that x commutes with every element in R: xy = yx for any y in R. Then Z(R) is a subring of R, called the center of R. More generally, given a subset X of R, let S be the set of all elements in R that commute with every element in X. Then S is a subring of R, called the centralizer (or commutant) of X. The center is the centralizer of the entire ring R. Elements or subsets of the center are said to be central in R; they (each individually) generate a subring of the center.
=== Ideal ===
Let R be a ring. A left ideal of R is a nonempty subset I of R such that for any x, y in I and r in R, the elements x + y and rx are in I. If RI denotes the R-span of I, that is, the set of finite sums
r₁x₁ + ⋯ + rₙxₙ  such that rᵢ ∈ R and xᵢ ∈ I,
then I is a left ideal if RI ⊆ I. Similarly, a right ideal is a subset I such that IR ⊆ I. A subset I is said to be a two-sided ideal or simply ideal if it is both a left ideal and right ideal. A one-sided or two-sided ideal is then an additive subgroup of R. If E is a subset of R, then RE is a left ideal, called the left ideal generated by E; it is the smallest left ideal containing E. Similarly, one can consider the right ideal or the two-sided ideal generated by a subset of R.
If x is in R, then Rx and xR are left ideals and right ideals, respectively; they are called the principal left ideals and right ideals generated by x. The principal ideal RxR is written as (x). For example, the set of all positive and negative multiples of 2 along with 0 form an ideal of the integers, and this ideal is generated by the integer 2. In fact, every ideal of the ring of integers is principal.
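The fact that every ideal of ℤ is principal can be checked in a finite window (an illustrative sketch; the bounds are arbitrary): the ideal generated by 12 and 18, i.e. all ℤ-linear combinations 12a + 18b, coincides with the principal ideal generated by gcd(12, 18) = 6.

```python
# Sketch: the ideal (12, 18) in Z equals the principal ideal (6),
# compared inside the window [-40, 40].
from math import gcd

bound = 40
combos = {12 * a + 18 * b for a in range(-20, 21) for b in range(-20, 21)}
ideal_12_18 = {x for x in combos if -bound <= x <= bound}
principal_6 = {x for x in range(-bound, bound + 1) if x % 6 == 0}

assert ideal_12_18 == principal_6
print(gcd(12, 18))  # 6
```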
Like a group, a ring is said to be simple if it is nonzero and it has no proper nonzero two-sided ideals. A commutative simple ring is precisely a field.
Rings are often studied with special conditions set upon their ideals. For example, a ring in which there is no strictly increasing infinite chain of left ideals is called a left Noetherian ring. A ring in which there is no strictly decreasing infinite chain of left ideals is called a left Artinian ring. It is a somewhat surprising fact that a left Artinian ring is left Noetherian (the Hopkins–Levitzki theorem). The integers, however, form a Noetherian ring which is not Artinian.
For commutative rings, the ideals generalize the classical notion of divisibility and decomposition of an integer into prime numbers. A proper ideal P of R is called a prime ideal if for any elements x, y ∈ R we have that xy ∈ P implies either x ∈ P or y ∈ P.
Equivalently, P is prime if for any ideals I, J we have that IJ ⊆ P implies either I ⊆ P or J ⊆ P. This latter formulation illustrates the idea of ideals as generalizations of elements.
=== Homomorphism ===
A homomorphism from a ring (R, +, ⋅) to a ring (S, ‡, ∗) is a function f from R to S that preserves the ring operations; namely, such that, for all a, b in R the following identities hold:
{\displaystyle {\begin{aligned}&f(a+b)=f(a)\ddagger f(b)\\&f(a\cdot b)=f(a)*f(b)\\&f(1_{R})=1_{S}\end{aligned}}}
If one is working with rngs, then the third condition is dropped.
A ring homomorphism f is said to be an isomorphism if there exists an inverse homomorphism to f (that is, a ring homomorphism that is an inverse function), or equivalently if it is bijective.
Examples:
The function that maps each integer x to its remainder modulo 4 (a number in {0, 1, 2, 3}) is a homomorphism from the ring ℤ to the quotient ring ℤ/4ℤ ("quotient ring" is defined below).
If u is a unit element in a ring R, then R → R, x ↦ uxu⁻¹ is a ring homomorphism, called an inner automorphism of R.
Let R be a commutative ring of prime characteristic p. Then x ↦ xᵖ is a ring endomorphism of R called the Frobenius homomorphism.
The Galois group of a field extension L / K is the set of all automorphisms of L whose restrictions to K are the identity.
For any ring R, there are a unique ring homomorphism ℤ → R and a unique ring homomorphism R → 0.
An epimorphism (that is, right-cancelable morphism) of rings need not be surjective. For example, the unique map ℤ → ℚ is an epimorphism.
An algebra homomorphism from a k-algebra to the endomorphism algebra of a vector space over k is called a representation of the algebra.
Given a ring homomorphism f : R → S, the set of all elements mapped to 0 by f is called the kernel of f. The kernel is a two-sided ideal of R. The image of f, on the other hand, is not always an ideal, but it is always a subring of S.
To give a ring homomorphism from a commutative ring R to a ring A with image contained in the center of A is the same as to give a structure of an algebra over R to A (which in particular gives a structure of an A-module).
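The mod-4 example above, together with the kernel, can be checked over a finite range (an illustrative sketch, not part of the article):

```python
# Sketch: f(x) = x mod 4 is a ring homomorphism Z -> Z/4Z; its kernel is 4Z.

f = lambda x: x % 4

for a in range(-20, 21):
    for b in range(-20, 21):
        assert f(a + b) == (f(a) + f(b)) % 4   # preserves addition
        assert f(a * b) == (f(a) * f(b)) % 4   # preserves multiplication
assert f(1) == 1                               # sends 1 to 1

kernel = [x for x in range(-20, 21) if f(x) == 0]
print(kernel)  # the multiples of 4 in the range
```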
=== Quotient ring ===
The notion of quotient ring is analogous to the notion of a quotient group. Given a ring (R, +, ⋅) and a two-sided ideal I of (R, +, ⋅), view I as subgroup of (R, +); then the quotient ring R / I is the set of cosets of I together with the operations
{\displaystyle {\begin{aligned}&(a+I)+(b+I)=(a+b)+I,\\&(a+I)(b+I)=(ab)+I.\end{aligned}}}
for all a, b in R. The ring R / I is also called a factor ring.
As with a quotient group, there is a canonical homomorphism p : R → R / I, given by x ↦ x + I. It is surjective and satisfies the following universal property:
If f : R → S is a ring homomorphism such that f(I) = 0, then there is a unique homomorphism f̄ : R/I → S such that f = f̄ ∘ p.
For any ring homomorphism f : R → S, invoking the universal property with I = ker f produces a homomorphism f̄ : R/ker f → S that gives an isomorphism from R/ker f to the image of f.
== Modules ==
The concept of a module over a ring generalizes the concept of a vector space (over a field) by generalizing from multiplication of vectors with elements of a field (scalar multiplication) to multiplication with elements of a ring. More precisely, given a ring R, an R-module M is an abelian group equipped with an operation R × M → M (associating an element of M to every pair of an element of R and an element of M) that satisfies certain axioms. This operation is commonly denoted by juxtaposition and called multiplication. The axioms of modules are the following: for all a, b in R and all x, y in M,
M is an abelian group under addition.
{\displaystyle {\begin{aligned}&a(x+y)=ax+ay\\&(a+b)x=ax+bx\\&1x=x\\&(ab)x=a(bx)\end{aligned}}}
When the ring is noncommutative these axioms define left modules; right modules are defined similarly by writing xa instead of ax. This is not only a change of notation, as the last axiom of right modules (that is x(ab) = (xa)b) becomes (ab)x = b(ax), if left multiplication (by ring elements) is used for a right module.
Basic examples of modules are ideals, including the ring itself.
Although similarly defined, the theory of modules is much more complicated than that of vector spaces, mainly because, unlike vector spaces, modules are not characterized (up to an isomorphism) by a single invariant (the dimension of a vector space). In particular, not all modules have a basis.
The axioms of modules imply that (−1)x = −x, where the first minus denotes the additive inverse in the ring and the second minus the additive inverse in the module. Using this and denoting repeated addition by a multiplication by a positive integer allows identifying abelian groups with modules over the ring of integers.
Any ring homomorphism induces a structure of a module: if f : R → S is a ring homomorphism, then S is a left module over R by the multiplication: rs = f(r)s. If R is commutative or if f(R) is contained in the center of S, the ring S is called an R-algebra. In particular, every ring is an algebra over the integers.
== Constructions ==
=== Direct product ===
Let R and S be rings. Then the product R × S can be equipped with the following natural ring structure:
{\displaystyle {\begin{aligned}&(r_{1},s_{1})+(r_{2},s_{2})=(r_{1}+r_{2},s_{1}+s_{2})\\&(r_{1},s_{1})\cdot (r_{2},s_{2})=(r_{1}\cdot r_{2},s_{1}\cdot s_{2})\end{aligned}}}
for all r₁, r₂ in R and s₁, s₂ in S. The ring R × S with the above operations of addition and multiplication and the multiplicative identity (1, 1) is called the direct product of R with S. The same construction also works for an arbitrary family of rings: if Rᵢ are rings indexed by a set I, then {\textstyle \prod _{i\in I}R_{i}} is a ring with componentwise addition and multiplication.
Let R be a commutative ring and 𝔞₁, …, 𝔞ₙ be ideals such that 𝔞ᵢ + 𝔞ⱼ = (1) whenever i ≠ j. Then the Chinese remainder theorem says there is a canonical ring isomorphism:
{\displaystyle R/{\textstyle \bigcap _{i=1}^{n}{{\mathfrak {a}}_{i}}}\simeq \prod _{i=1}^{n}{R/{\mathfrak {a}}_{i}},\qquad x{\bmod {\textstyle \bigcap _{i=1}^{n}{\mathfrak {a}}_{i}}}\mapsto (x{\bmod {\mathfrak {a}}}_{1},\ldots ,x{\bmod {\mathfrak {a}}}_{n}).}
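The smallest nontrivial case, R = ℤ with ideals (2) and (3), can be verified exhaustively (a sketch; all names are illustrative):

```python
# Sketch: the Chinese remainder theorem isomorphism
# Z/6Z -> Z/2Z x Z/3Z, x mod 6 -> (x mod 2, x mod 3).

phi = lambda x: (x % 2, x % 3)
images = [phi(x) for x in range(6)]

# Bijective: the six residues hit all six pairs exactly once.
assert len(set(images)) == 6

# Ring homomorphism, with componentwise operations on the right-hand side:
for a in range(6):
    for b in range(6):
        pa, pb = phi(a), phi(b)
        assert phi((a + b) % 6) == ((pa[0] + pb[0]) % 2, (pa[1] + pb[1]) % 3)
        assert phi((a * b) % 6) == ((pa[0] * pb[0]) % 2, (pa[1] * pb[1]) % 3)
print(images)
```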
A "finite" direct product may also be viewed as a direct sum of ideals. Namely, let
R
i
,
1
≤
i
≤
n
{\displaystyle R_{i},1\leq i\leq n}
be rings,
R
i
→
R
=
∏
R
i
{\textstyle R_{i}\to R=\prod R_{i}}
the inclusions with the images
a
i
{\displaystyle {\mathfrak {a}}_{i}}
(in particular
a
i
{\displaystyle {\mathfrak {a}}_{i}}
are rings though not subrings). Then
a
i
{\displaystyle {\mathfrak {a}}_{i}}
are ideals of R and
R
=
a
1
⊕
⋯
⊕
a
n
,
a
i
a
j
=
0
,
i
≠
j
,
a
i
2
⊆
a
i
{\displaystyle R={\mathfrak {a}}_{1}\oplus \cdots \oplus {\mathfrak {a}}_{n},\quad {\mathfrak {a}}_{i}{\mathfrak {a}}_{j}=0,i\neq j,\quad {\mathfrak {a}}_{i}^{2}\subseteq {\mathfrak {a}}_{i}}
as a direct sum of abelian groups (because for abelian groups finite products are the same as direct sums). Clearly the direct sum of such ideals also defines a product of rings that is isomorphic to R. Equivalently, the above can be done through central idempotents. Assume that R has the above decomposition. Then we can write
{\displaystyle 1=e_{1}+\cdots +e_{n},\quad e_{i}\in {\mathfrak {a}}_{i}.}
By the conditions on the 𝔞ᵢ, one has that the eᵢ are central idempotents and eᵢeⱼ = 0 for i ≠ j (orthogonal). Again, one can reverse the construction. Namely, if one is given a partition of 1 into orthogonal central idempotents, then let
𝔞ᵢ = Reᵢ, which are two-sided ideals. If each eᵢ is not a sum of orthogonal central idempotents, then their direct sum is isomorphic to R.
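A tiny commutative instance of this decomposition (an illustrative sketch): in ℤ/6ℤ, the elements e₁ = 3 and e₂ = 4 are orthogonal central idempotents with e₁ + e₂ = 1, splitting the ring into the ideals 3ℤ/6ℤ and 4ℤ/6ℤ.

```python
# Sketch: orthogonal central idempotents 3 and 4 in Z/6Z.

n = 6
e1, e2 = 3, 4
assert (e1 * e1) % n == e1 and (e2 * e2) % n == e2   # idempotent
assert (e1 * e2) % n == 0                            # orthogonal
assert (e1 + e2) % n == 1                            # partition of 1

# The corresponding ideals R*e1 and R*e2:
a1 = sorted({(x * e1) % n for x in range(n)})
a2 = sorted({(x * e2) % n for x in range(n)})
print(a1, a2)  # [0, 3] [0, 2, 4]
```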
An important application of an infinite direct product is the construction of a projective limit of rings (see below). Another application is a restricted product of a family of rings (cf. adele ring).
=== Polynomial ring ===
Given a symbol t (called a variable) and a commutative ring R, the set of polynomials
{\displaystyle R[t]=\left\{a_{n}t^{n}+a_{n-1}t^{n-1}+\dots +a_{1}t+a_{0}\mid n\geq 0,a_{j}\in R\right\}}
forms a commutative ring with the usual addition and multiplication, containing R as a subring. It is called the polynomial ring over R. More generally, the set
R[t₁, …, tₙ] of all polynomials in variables t₁, …, tₙ forms a commutative ring, containing R[tᵢ] as subrings.
If R is an integral domain, then R[t] is also an integral domain; its field of fractions is the field of rational functions. If R is a Noetherian ring, then R[t] is a Noetherian ring. If R is a unique factorization domain, then R[t] is a unique factorization domain. Finally, R is a field if and only if R[t] is a principal ideal domain.
Let R ⊆ S be commutative rings. Given an element x of S, one can consider the ring homomorphism R[t] → S, f ↦ f(x) (that is, the substitution). If S = R[t] and x = t, then f(t) = f. Because of this, the polynomial f is often also denoted by f(t). The image of the map f ↦ f(x) is denoted by R[x]; it is the same thing as the subring of S generated by R and x.
Example: k[t², t³] denotes the image of the homomorphism k[x, y] → k[t], f ↦ f(t², t³). In other words, it is the subalgebra of k[t] generated by t² and t³.
Example: let f be a polynomial in one variable, that is, an element of a polynomial ring R. Then f(x + h) is an element of R[h], and f(x + h) − f(x) is divisible by h in that ring. The result of substituting zero for h in (f(x + h) − f(x))/h is f′(x), the derivative of f at x.
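This purely algebraic recipe for the derivative can be carried out with coefficient lists (a sketch under the assumption R = ℤ; the function name is illustrative): expand f(x + h) as a polynomial in h, drop the constant term, divide by h, and set h = 0.

```python
# Sketch: computing f'(x) by forming (f(x+h) - f(x)) / h in R[h] and then
# substituting h = 0.
from math import comb

def derivative_at(coeffs, x):
    """f given by coefficients [c0, c1, ...]; returns f'(x)."""
    h_poly = [0] * max(len(coeffs), 2)  # coefficients of f(x + h) in powers of h
    for k, ck in enumerate(coeffs):
        for j in range(k + 1):
            h_poly[j] += ck * comb(k, j) * x ** (k - j)
    # f(x + h) - f(x) removes the h^0 term; dividing by h shifts the rest
    # down, and substituting h = 0 leaves exactly the coefficient of h^1.
    return h_poly[1]

# f(t) = t^3 - 4t + 1, so f'(t) = 3t^2 - 4 and f'(2) = 8:
print(derivative_at([1, -4, 0, 1], 2))  # 8
```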
The substitution is a special case of the universal property of a polynomial ring. The property states: given a ring homomorphism ϕ : R → S and an element x in S, there exists a unique ring homomorphism ϕ̄ : R[t] → S such that ϕ̄(t) = x and ϕ̄ restricts to ϕ. For example, choosing a basis, a symmetric algebra satisfies the universal property and so is a polynomial ring.
To give an example, let S be the ring of all functions from R to itself; the addition and the multiplication are those of functions. Let x be the identity function. Each r in R defines a constant function, giving rise to the homomorphism R → S. The universal property says that this map extends uniquely to
R[t] → S, f ↦ f̄ (t maps to x), where f̄ is the polynomial function defined by f. The resulting map is injective if and only if R is infinite.
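The failure of injectivity over a finite ring has a classical witness (a sketch for k = 𝔽₃): by Fermat's little theorem, the nonzero polynomial tᵖ − t defines the zero function on 𝔽ₚ.

```python
# Sketch: over F_3, the nonzero polynomial t^3 - t gives the zero polynomial
# function, so polynomial -> polynomial function is not injective.

p = 3
f = lambda x: (x ** p - x) % p

assert all(f(x) == 0 for x in range(p))
print([f(x) for x in range(p)])  # [0, 0, 0]
```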
Given a non-constant monic polynomial f in R[t], there exists a ring S containing R such that f is a product of linear factors in S[t].
Let k be an algebraically closed field. Hilbert's Nullstellensatz (theorem of zeros) states that there is a natural one-to-one correspondence between the set of all prime ideals in k[t₁, …, tₙ] and the set of closed subvarieties of kⁿ. In particular, many local problems in algebraic geometry may be attacked through the study of the generators of an ideal in a polynomial ring. (cf. Gröbner basis.)
There are some other related constructions. A formal power series ring R[[t]] consists of formal power series
{\displaystyle \sum _{0}^{\infty }a_{i}t^{i},\quad a_{i}\in R}
together with multiplication and addition that mimic those for convergent series. It contains R[t] as a subring. A formal power series ring does not have the universal property of a polynomial ring; a series may not converge after a substitution. The important advantage of a formal power series ring over a polynomial ring is that it is local (in fact, complete).
=== Matrix ring and endomorphism ring ===
Let R be a ring (not necessarily commutative). The set of all square matrices of size n with entries in R forms a ring with entry-wise addition and the usual matrix multiplication. It is called the matrix ring and is denoted by Mn(R). Given a right R-module U, the set of all R-linear maps from U to itself forms a ring with pointwise addition and with composition of functions as multiplication; it is called the endomorphism ring of U and is denoted by EndR(U).
As in linear algebra, a matrix ring may be canonically interpreted as an endomorphism ring:
{\displaystyle \operatorname {End} _{R}(R^{n})\simeq \operatorname {M} _{n}(R).}
This is a special case of the following fact: if f : ⊕₁ⁿU → ⊕₁ⁿU is an R-linear map, then f may be written as a matrix with entries fᵢⱼ in S = EndR(U), resulting in the ring isomorphism:
{\displaystyle \operatorname {End} _{R}(\oplus _{1}^{n}U)\to \operatorname {M} _{n}(S),\quad f\mapsto (f_{ij}).}
Any ring homomorphism R → S induces Mn(R) → Mn(S).
Schur's lemma says that if U is a simple right R-module, then EndR(U) is a division ring. If
{\displaystyle U=\bigoplus _{i=1}^{r}U_{i}^{\oplus m_{i}}}
is a direct sum of mᵢ copies of simple R-modules Uᵢ, then
{\displaystyle \operatorname {End} _{R}(U)\simeq \prod _{i=1}^{r}\operatorname {M} _{m_{i}}(\operatorname {End} _{R}(U_{i})).}
The Artin–Wedderburn theorem states any semisimple ring (cf. below) is of this form.
A ring R and the matrix ring Mn(R) over it are Morita equivalent: the category of right modules of R is equivalent to the category of right modules over Mn(R). In particular, two-sided ideals in R correspond one-to-one to two-sided ideals in Mn(R).
=== Limits and colimits of rings ===
Let Rᵢ be a sequence of rings such that Rᵢ is a subring of Rᵢ₊₁ for all i. Then the union (or filtered colimit) of the Rᵢ is the ring {\displaystyle \varinjlim R_{i}} defined as follows: it is the disjoint union of all the Rᵢ modulo the equivalence relation x ~ y if and only if x = y in Rᵢ for sufficiently large i.
Examples of colimits:
A polynomial ring in infinitely many variables:
{\displaystyle R[t_{1},t_{2},\cdots ]=\varinjlim R[t_{1},t_{2},\cdots ,t_{m}].}
The algebraic closure of finite fields of the same characteristic
{\displaystyle {\overline {\mathbf {F} }}_{p}=\varinjlim \mathbf {F} _{p^{m}}.}
The field of formal Laurent series over a field k:
{\displaystyle k(\!(t)\!)=\varinjlim t^{-m}k[\![t]\!]}
(it is the field of fractions of the formal power series ring k[[t]].)
The function field of an algebraic variety over a field k is {\displaystyle \varinjlim k[U]} where the limit runs over all the coordinate rings k[U] of nonempty open subsets U (more succinctly, it is the stalk of the structure sheaf at the generic point).
Any commutative ring is the colimit of finitely generated subrings.
A projective limit (or a filtered limit) of rings is defined as follows. Suppose we are given a family of rings Rᵢ, i running over positive integers, say, and ring homomorphisms Rⱼ → Rᵢ, j ≥ i, such that Rᵢ → Rᵢ are all the identities and Rₖ → Rⱼ → Rᵢ is Rₖ → Rᵢ whenever k ≥ j ≥ i. Then {\displaystyle \varprojlim R_{i}} is the subring of ∏ Rᵢ consisting of those (xₙ) such that xⱼ maps to xᵢ under Rⱼ → Rᵢ, j ≥ i.
For an example of a projective limit, see § Completion.
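The compatibility condition is easy to exhibit numerically (an illustrative sketch): in the projective limit of the rings ℤ/2ⁿℤ (the 2-adic integers), the element −1 corresponds to the compatible sequence (1, 3, 7, 15, …).

```python
# Sketch: the element -1 in the projective limit of Z/2^n Z, represented as
# a sequence compatible with the transition maps Z/2^j Z -> Z/2^i Z, j >= i.

levels = range(1, 8)
minus_one = [(-1) % 2 ** n for n in levels]
print(minus_one)  # [1, 3, 7, 15, 31, 63, 127]

for j, xj in zip(levels, minus_one):
    for i, xi in zip(levels, minus_one):
        if j >= i:
            assert xj % 2 ** i == xi  # compatibility under the transition maps
```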
=== Localization ===
The localization generalizes the construction of the field of fractions of an integral domain to an arbitrary ring and modules. Given a (not necessarily commutative) ring R and a subset S of R, there exists a ring R[S⁻¹] together with the ring homomorphism R → R[S⁻¹] that "inverts" S; that is, the homomorphism maps elements in S to unit elements in R[S⁻¹], and, moreover, any ring homomorphism from R that "inverts" S uniquely factors through R[S⁻¹]. The ring R[S⁻¹] is called the localization of R with respect to S. For example, if R is a commutative ring and f an element in R, then the localization R[f⁻¹] consists of elements of the form r/fⁿ with r ∈ R and n ≥ 0 (to be precise, R[f⁻¹] = R[t]/(tf − 1)).
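For R = ℤ and f = 2, the localization ℤ[1/2] can be modeled directly (a sketch; since ℤ is a domain, equality of fractions reduces to plain cross-multiplication, without the extra factor from S needed over general rings):

```python
# Sketch: elements of Z[1/2] as pairs (r, n) standing for r / 2^n, with
# r / 2^n == r' / 2^m iff r * 2^m == r' * 2^n.

def eq(frac1, frac2):
    (r1, n1), (r2, n2) = frac1, frac2
    return r1 * 2 ** n2 == r2 * 2 ** n1

def add(frac1, frac2):
    (r1, n1), (r2, n2) = frac1, frac2
    return (r1 * 2 ** n2 + r2 * 2 ** n1, n1 + n2)

# 3/4 + 1/2 = 5/4, and unreduced representatives are recognized as equal:
assert eq(add((3, 2), (1, 1)), (5, 2))
assert eq((6, 3), (3, 2))  # 6/8 == 3/4
```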
The localization is frequently applied to a commutative ring R with respect to the complement of a prime ideal (or a union of prime ideals) in R. In that case S = R − 𝔭, and one often writes R𝔭 for R[S⁻¹]. R𝔭 is then a local ring with the maximal ideal 𝔭R𝔭. This is the reason for the terminology "localization". The field of fractions of an integral domain R is the localization of R at the prime ideal zero. If 𝔭 is a prime ideal of a commutative ring R, then the field of fractions of R/𝔭 is the same as the residue field of the local ring R𝔭 and is denoted by k(𝔭).
If M is a left R-module, then the localization of M with respect to S is given by a change of rings:
{\displaystyle M\left[S^{-1}\right]=R\left[S^{-1}\right]\otimes _{R}M.}
The most important properties of localization are the following: when R is a commutative ring and S a multiplicatively closed subset,
𝔭 ↦ 𝔭[S⁻¹] is a bijection between the set of all prime ideals in R disjoint from S and the set of all prime ideals in R[S⁻¹].
{\displaystyle R\left[S^{-1}\right]=\varinjlim R\left[f^{-1}\right],} with f running over elements in S with partial ordering given by divisibility.
The localization is exact:
{\displaystyle 0\to M'\left[S^{-1}\right]\to M\left[S^{-1}\right]\to M''\left[S^{-1}\right]\to 0}
is exact over R[S⁻¹] whenever 0 → M′ → M → M″ → 0 is exact over R.
Conversely, if 0 → M′_𝔪 → M_𝔪 → M″_𝔪 → 0 is exact for every maximal ideal 𝔪, then 0 → M′ → M → M″ → 0 is exact.
A remark: localization is no help in proving global existence. One instance of this is that if two modules are isomorphic at all prime ideals, it does not follow that they are isomorphic. (One way to explain this is that localization allows one to view a module as a sheaf over prime ideals, and a sheaf is inherently a local notion.)
In category theory, a localization of a category amounts to making some morphisms isomorphisms. An element in a commutative ring R may be thought of as an endomorphism of any R-module. Thus, categorically, a localization of R with respect to a subset S of R is a functor from the category of R-modules to itself that sends elements of S viewed as endomorphisms to automorphisms and is universal with respect to this property. (Of course, R then maps to
R[S⁻¹] and R-modules map to R[S⁻¹]-modules.)
=== Completion ===
Let R be a commutative ring, and let I be an ideal of R.
The completion of R at I is the projective limit R̂ = lim_← R/Iⁿ; it is a commutative ring. The canonical homomorphisms from R to the quotients R/Iⁿ induce a homomorphism R → R̂.
The latter homomorphism is injective if R is a Noetherian integral domain and I is a proper ideal, or if R is a Noetherian local ring with maximal ideal I, by Krull's intersection theorem. The construction is especially useful when I is a maximal ideal.
The basic example is the completion of ℤ at the principal ideal (p) generated by a prime number p; it is called the ring of p-adic integers and is denoted ℤ_p. The completion can in this case be constructed also from the p-adic absolute value on ℚ.
The p-adic absolute value on ℚ is a map x ↦ |x| from ℚ to ℝ given by |n|_p = p^(−v_p(n)), where v_p(n) denotes the exponent of p in the prime factorization of a nonzero integer n (we also put |0|_p = 0 and |m/n|_p = |m|_p / |n|_p). It defines a distance function on ℚ, and the completion of ℚ as a metric space is denoted by ℚ_p.
It is again a field, since the field operations extend to the completion. The subring of ℚ_p consisting of the elements x with |x|_p ≤ 1 is isomorphic to ℤ_p.
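As an illustration (not part of the article), the formulas above can be computed directly: a minimal Python sketch of the p-adic valuation v_p and absolute value |·|_p on the rationals, using exact arithmetic.

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x: the exponent of p in the
    prime factorization (negative when p divides the denominator)."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("v_p(0) is undefined (conventionally +infinity)")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def abs_p(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)), with |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    return Fraction(1, p) ** vp(x, p)  # exact rational power of p
```

Small integers are p-adically "large" in the sense that |x|_p ≤ 1 exactly when x lies in ℤ_p, and the metric satisfies the ultrametric inequality |x + y|_p ≤ max(|x|_p, |y|_p).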
Similarly, the formal power series ring R[[t]] is the completion of R[t] at (t) (see also Hensel's lemma).
A complete ring has a much simpler structure than a general commutative ring. This is due to the Cohen structure theorem, which says, roughly, that a complete local ring tends to look like a formal power series ring or a quotient of one. On the other hand, the interaction between integral closure and completion has been among the most important aspects that distinguish modern commutative ring theory from the classical theory developed by the likes of Noether. Pathological examples found by Nagata led to a reexamination of the role of Noetherian rings and motivated, among other things, the definition of an excellent ring.
=== Rings with generators and relations ===
The most general way to construct a ring is by specifying generators and relations. Let F be a free ring (that is, free algebra over the integers) with the set X of symbols, that is, F consists of polynomials with integral coefficients in noncommuting variables that are elements of X. A free ring satisfies the universal property: any function from the set X to a ring R factors through F so that F → R is the unique ring homomorphism. Just as in the group case, every ring can be represented as a quotient of a free ring.
Now, we can impose relations among symbols in X by taking a quotient. Explicitly, if E is a subset of F, then the quotient ring of F by the ideal generated by E is called the ring with generators X and relations E. If we used a ring, say, A as a base ring instead of
ℤ, then the resulting ring will be over A. For example, if E = {xy − yx | x, y ∈ X}, then the resulting ring is the usual polynomial ring with coefficients in A in the variables that are elements of X (it is also the same thing as the symmetric algebra over A with symbols X).
In category-theoretic terms, the formation S ↦ (the free ring generated by the set S) is the left adjoint of the forgetful functor from the category of rings to Set (and it is often called the free ring functor).
Let A, B be algebras over a commutative ring R. Then the tensor product of R-modules
A ⊗_R B is an R-algebra with the multiplication characterized by (x ⊗ u)(y ⊗ v) = xy ⊗ uv.
== Special kinds of rings ==
=== Domains ===
A nonzero ring with no nonzero zero-divisors is called a domain. A commutative domain is called an integral domain. The most important integral domains are principal ideal domains (PIDs for short) and fields. A principal ideal domain is an integral domain in which every ideal is principal. An important class of integral domains containing the PIDs is that of unique factorization domains (UFDs), integral domains in which every nonunit element is a product of prime elements (an element is prime if it generates a prime ideal). The fundamental question in algebraic number theory concerns the extent to which the ring of (generalized) integers in a number field, where an "ideal" admits prime factorization, fails to be a PID.
Among theorems concerning a PID, the most important one is the structure theorem for finitely generated modules over a principal ideal domain. The theorem may be illustrated by the following application to linear algebra. Let V be a finite-dimensional vector space over a field k and f : V → V a linear map with minimal polynomial q. Then, since k[t] is a unique factorization domain, q factors into powers of distinct irreducible polynomials (that is, prime elements):
q = p_1^(e_1) ⋯ p_s^(e_s).
Letting t · v = f(v),
we make V a k[t]-module. The structure theorem then says V is a direct sum of cyclic modules, each of which is isomorphic to the module of the form
k[t]/(p_i^(k_j)).
Now, if p_i(t) = t − λ_i,
then such a cyclic module (for pi) has a basis in which the restriction of f is represented by a Jordan matrix. Thus, if, say, k is algebraically closed, then all pi's are of the form t – λi and the above decomposition corresponds to the Jordan canonical form of f.
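As an aside (not from the article), the correspondence between a cyclic module k[t]/((t − λ)^e) and a Jordan block can be checked concretely: for a 2×2 Jordan block J with eigenvalue λ, the nilpotent part J − λI is nonzero but squares to zero, exactly as the minimal polynomial (t − λ)² predicts. The helper names below are illustrative; matrices are plain nested lists.

```python
def matmul(A, B):
    # naive square-matrix product over the integers
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def jordan_block(lam, n=2):
    # n-by-n Jordan block: lam on the diagonal, 1 on the superdiagonal
    return [[lam if i == j else (1 if j == i + 1 else 0) for j in range(n)]
            for i in range(n)]

lam = 5
J = jordan_block(lam)
# nilpotent part N = J - lam*I
N = [[J[i][j] - (lam if i == j else 0) for j in range(2)] for i in range(2)]
N2 = matmul(N, N)  # (J - lam*I)^2, which should vanish
```

Here N ≠ 0 while N² = 0, so the minimal polynomial of J is (t − λ)² and k² is the cyclic k[t]-module k[t]/((t − λ)²).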
In algebraic geometry, UFDs arise because of smoothness. More precisely, a point in a variety (over a perfect field) is smooth if the local ring at the point is a regular local ring. A regular local ring is a UFD.
The following is a chain of class inclusions that describes the relationship between rings, domains and fields:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields
=== Division ring ===
A division ring is a ring such that every nonzero element is a unit. A commutative division ring is a field. A prominent example of a division ring that is not a field is the ring of quaternions. Any centralizer in a division ring is itself a division ring. In particular, the center of a division ring is a field. It turns out that every finite domain (in particular, every finite division ring) is commutative, and hence a field (Wedderburn's little theorem).
Every module over a division ring is a free module (has a basis); consequently, much of linear algebra can be carried out over a division ring instead of a field.
The study of conjugacy classes figures prominently in the classical theory of division rings; see, for example, the Cartan–Brauer–Hua theorem.
A cyclic algebra, introduced by L. E. Dickson, is a generalization of a quaternion algebra.
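As a concrete illustration (not from the article), the quaternions can be modeled as 4-tuples (a, b, c, d) standing for a + bi + cj + dk, multiplied by Hamilton's relations i² = j² = k² = ijk = −1. Every nonzero quaternion q is a unit, with inverse conj(q)/|q|², which is what makes the quaternions a division ring rather than merely a domain; the sketch below uses exact rational arithmetic.

```python
from fractions import Fraction

def qmul(p, q):
    # quaternion product of p = a + bi + cj + dk and q = e + fi + gj + hk
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qinv(q):
    # inverse conj(q) / |q|^2; defined for every nonzero quaternion
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d  # squared norm
    return (Fraction(a, n), Fraction(-b, n), Fraction(-c, n), Fraction(-d, n))
```

Note that ij = k while ji = −k, so this division ring is noncommutative, unlike a field.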
=== Semisimple rings ===
A semisimple module is a direct sum of simple modules. A semisimple ring is a ring that is semisimple as a left module (or right module) over itself.
==== Examples ====
A division ring is semisimple (and simple).
For any division ring D and positive integer n, the matrix ring Mn(D) is semisimple (and simple).
For a field k and finite group G, the group ring kG is semisimple if and only if the characteristic of k does not divide the order of G (Maschke's theorem).
Clifford algebras are semisimple.
The Weyl algebra over a field is a simple ring, but it is not semisimple. The same holds for a ring of differential operators in many variables.
==== Properties ====
Any module over a semisimple ring is semisimple. (Proof: A free module over a semisimple ring is semisimple and any module is a quotient of a free module.)
For a ring R, the following are equivalent:
R is semisimple.
R is artinian and semiprimitive.
R is a finite direct product
∏_{i=1}^r M_{n_i}(D_i), where each n_i is a positive integer and each D_i is a division ring (Artin–Wedderburn theorem).
Semisimplicity is closely related to separability. A unital associative algebra A over a field k is said to be separable if the base extension
A ⊗_k F is semisimple for every field extension F/k. If A happens to be a field, then this is equivalent to the usual definition in field theory (cf. separable extension).
=== Central simple algebra and Brauer group ===
For a field k, a k-algebra is central if its center is k and is simple if it is a simple ring. Since the center of a simple k-algebra is a field, any simple k-algebra is a central simple algebra over its center. In this section, a central simple algebra is assumed to have finite dimension. Also, we mostly fix the base field; thus, an algebra refers to a k-algebra. The matrix ring of size n over a ring R will be denoted by Rn.
The Skolem–Noether theorem states any automorphism of a central simple algebra is inner.
Two central simple algebras A and B are said to be similar if there are integers n and m such that
A ⊗_k k_n ≈ B ⊗_k k_m.
Since k_n ⊗_k k_m ≃ k_(nm),
the similarity is an equivalence relation. The similarity classes [A] with the multiplication
[A][B] = [A ⊗_k B]
form an abelian group called the Brauer group of k and is denoted by Br(k). By the Artin–Wedderburn theorem, a central simple algebra is the matrix ring of a division ring; thus, each similarity class is represented by a unique division ring.
For example, Br(k) is trivial if k is a finite field or an algebraically closed field (more generally quasi-algebraically closed field; cf. Tsen's theorem).
Br(ℝ) has order 2 (a special case of the theorem of Frobenius). Finally, if k is a nonarchimedean local field (for example, ℚ_p), then Br(k) = ℚ/ℤ through the invariant map.
Now, if F is a field extension of k, then the base extension
− ⊗_k F induces a map Br(k) → Br(F). Its kernel is denoted by Br(F/k). It consists of the classes [A] such that A ⊗_k F is a matrix ring over F (that is, A is split by F). If the extension is finite and Galois, then Br(F/k) is canonically isomorphic to H²(Gal(F/k), k*).
Azumaya algebras generalize the notion of central simple algebras to a commutative local ring.
=== Valuation ring ===
If K is a field, a valuation v is a group homomorphism from the multiplicative group K∗ to a totally ordered abelian group G such that, for any f, g in K with f + g nonzero, v(f + g) ≥ min{v(f), v(g)}. The valuation ring of v is the subring of K consisting of zero and all nonzero f such that v(f) ≥ 0.
Examples:
The field of formal Laurent series
k((t)) over a field k comes with the valuation v such that v(f) is the least degree of a nonzero term in f; the valuation ring of v is the formal power series ring k[[t]].
More generally, given a field k and a totally ordered abelian group G, let
k((G))
be the set of all functions from G to k whose supports (the sets of points at which the functions are nonzero) are well ordered. It is a field with the multiplication given by convolution:
(f ∗ g)(t) = ∑_{s∈G} f(s) g(t − s).
It also comes with the valuation v such that v(f) is the least element in the support of f. The subring consisting of elements with finite support is called the group ring of G (which makes sense even if G is not commutative). If G is the additive group of integers, then we recover the previous example (by identifying f with the series whose nth coefficient is f(n)).
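As an illustration (not from the article), the G = ℤ case can be sketched with finite-support series stored as dicts {exponent: coefficient}: multiplication is the convolution above, and the valuation behaves like v(fg) = v(f) + v(g) and v(f + g) ≥ min(v(f), v(g)).

```python
def v(f):
    # valuation: least exponent with a nonzero coefficient
    # (raises for the zero series, whose valuation is undefined/infinite)
    return min(n for n, c in f.items() if c != 0)

def mul(f, g):
    # convolution product (f * g)(t) = sum_s f(s) g(t - s)
    h = {}
    for m, a in f.items():
        for n, b in g.items():
            h[m + n] = h.get(m + n, 0) + a * b
    return h

def add(f, g):
    # coefficient-wise sum
    h = dict(f)
    for n, b in g.items():
        h[n] = h.get(n, 0) + b
    return h
```

The inequality for sums can be strict when leading terms cancel, which is exactly the ultrametric behavior of a valuation.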
== Rings with extra structure ==
A ring may be viewed as an abelian group (by using the addition operation), with extra structure: namely, ring multiplication. In the same way, there are other mathematical objects which may be considered as rings with extra structure. For example:
An associative algebra is a ring that is also a vector space over a field K such that the scalar multiplication is compatible with the ring multiplication. For instance, the set of n-by-n matrices over the real field ℝ has dimension n² as a real vector space.
A ring R is a topological ring if its set of elements R is given a topology which makes the addition map + : R × R → R and the multiplication map ⋅ : R × R → R both continuous as maps between topological spaces (where X × X inherits the product topology or any other product in the category). For example, n-by-n matrices over the real numbers could be given either the Euclidean topology or the Zariski topology, and in either case one would obtain a topological ring.
A λ-ring is a commutative ring R together with operations λn: R → R that are like nth exterior powers:
λⁿ(x + y) = ∑_{i=0}^n λⁱ(x) λ^(n−i)(y).
For example, ℤ is a λ-ring with λⁿ(x) = (x choose n), the binomial coefficient. The notion plays a central role in the algebraic approach to the Riemann–Roch theorem.
A totally ordered ring is a ring with a total ordering that is compatible with ring operations.
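As an aside (not from the article), the λ-ring axiom for ℤ with λⁿ(x) = (x choose n) is exactly the Vandermonde identity C(x + y, n) = Σᵢ C(x, i)·C(y, n − i), which a short Python check makes concrete (for nonnegative arguments, where `math.comb` applies).

```python
from math import comb

def lam(n, x):
    # lambda^n on the lambda-ring Z: the binomial coefficient C(x, n)
    return comb(x, n)

def lam_sum(n, x, y):
    # right-hand side of the axiom: sum_i lambda^i(x) * lambda^(n-i)(y)
    return sum(lam(i, x) * lam(n - i, y) for i in range(n + 1))
```

For integer x this matches the exterior-power intuition: λⁿ applied to a sum of x line elements counts n-element subsets.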
== Some examples of the ubiquity of rings ==
Many different kinds of mathematical objects can be fruitfully analyzed in terms of some associated ring.
=== Cohomology ring of a topological space ===
To any topological space X one can associate its integral cohomology ring
H*(X, ℤ) = ⊕_{i=0}^∞ Hⁱ(X, ℤ),
a graded ring. There are also homology groups
H_i(X, ℤ)
of a space, and indeed these were defined first, as a useful tool for distinguishing between certain pairs of topological spaces, like the spheres and tori, for which the methods of point-set topology are not well-suited. Cohomology groups were later defined in terms of homology groups in a way which is roughly analogous to the dual of a vector space. To know each individual integral homology group is essentially the same as knowing each individual integral cohomology group, because of the universal coefficient theorem. However, the advantage of the cohomology groups is that there is a natural product, which is analogous to the observation that one can multiply pointwise a k-multilinear form and an l-multilinear form to get a (k + l)-multilinear form.
The ring structure in cohomology provides the foundation for characteristic classes of fiber bundles, intersection theory on manifolds and algebraic varieties, Schubert calculus and much more.
=== Burnside ring of a group ===
To any group is associated its Burnside ring which uses a ring to describe the various ways the group can act on a finite set. The Burnside ring's additive group is the free abelian group whose basis is the set of transitive actions of the group and whose addition is the disjoint union of the action. Expressing an action in terms of the basis is decomposing an action into its transitive constituents. The multiplication is easily expressed in terms of the representation ring: the multiplication in the Burnside ring is formed by writing the tensor product of two permutation modules as a permutation module. The ring structure allows a formal way of subtracting one action from another. Since the Burnside ring is contained as a finite index subring of the representation ring, one can pass easily from one to the other by extending the coefficients from integers to the rational numbers.
=== Representation ring of a group ring ===
To any group ring or Hopf algebra is associated its representation ring or "Green ring". The representation ring's additive group is the free abelian group whose basis are the indecomposable modules and whose addition corresponds to the direct sum. Expressing a module in terms of the basis is finding an indecomposable decomposition of the module. The multiplication is the tensor product. When the algebra is semisimple, the representation ring is just the character ring from character theory, which is more or less the Grothendieck group given a ring structure.
=== Function field of an irreducible algebraic variety ===
To any irreducible algebraic variety is associated its function field. The points of an algebraic variety correspond to valuation rings contained in the function field and containing the coordinate ring. The study of algebraic geometry makes heavy use of commutative algebra to study geometric concepts in terms of ring-theoretic properties. Birational geometry studies maps between the subrings of the function field.
=== Face ring of a simplicial complex ===
Every simplicial complex has an associated face ring, also called its Stanley–Reisner ring. This ring reflects many of the combinatorial properties of the simplicial complex, so it is of particular interest in algebraic combinatorics. In particular, the algebraic geometry of the Stanley–Reisner ring was used to characterize the numbers of faces in each dimension of simplicial polytopes.
== Category-theoretic description ==
Every ring can be thought of as a monoid in Ab, the category of abelian groups (thought of as a monoidal category under the tensor product of
ℤ-modules). The monoid action of a ring R on an abelian group is simply an R-module. Essentially, an R-module is a generalization of the notion of a vector space – where rather than a vector space over a field, one has a "vector space over a ring".
Let (A, +) be an abelian group and let End(A) be its endomorphism ring (see above). Note that, essentially, End(A) is the set of all morphisms of A, where if f is in End(A), and g is in End(A), the following rules may be used to compute f + g and f ⋅ g:
(f + g)(x) = f(x) + g(x)
(f ⋅ g)(x) = f(g(x)),
where + as in f(x) + g(x) is addition in A, and function composition is denoted from right to left. Therefore, associated to any abelian group is a ring. Conversely, given any ring (R, +, ⋅), (R, +) is an abelian group. Furthermore, for every r in R, right (or left) multiplication by r gives rise to a morphism of (R, +), by right (or left) distributivity. Let A = (R, +). Consider those endomorphisms of A that "factor through" right (or left) multiplication of R. In other words, let End_R(A) be the set of all morphisms m of A having the property that m(r ⋅ x) = r ⋅ m(x). It was seen that every r in R gives rise to a morphism of A: right multiplication by r. It is in fact true that this association of any element of R to a morphism of A, as a function from R to End_R(A), is an isomorphism of rings. In this sense, therefore, any ring can be viewed as the endomorphism ring of some abelian X-group (by X-group, it is meant a group with X being its set of operators). In essence, the most general form of a ring is the endomorphism ring of some abelian X-group.
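As a small illustration (not from the article), the endomorphism-ring picture can be checked for the abelian group A = ℤ/n: every endomorphism is "multiply by r", pointwise addition of maps matches addition of the multipliers, and composition matches their product, recovering the ring ℤ/n itself.

```python
n = 12  # illustrative modulus; End(Z/12) is isomorphic to Z/12

def end(r):
    # the endomorphism x -> r*x (mod n) of the abelian group Z/n
    return lambda x: (r * x) % n

def add_end(f, g):
    # ring addition in End(A): (f + g)(x) = f(x) + g(x)
    return lambda x: (f(x) + g(x)) % n

def comp_end(f, g):
    # ring multiplication in End(A): (f . g)(x) = f(g(x))
    return lambda x: f(g(x))
```

The tests below verify that end(3) + end(5) acts as end(8) and end(3) ∘ end(5) acts as end(15 mod 12) = end(3), i.e. the map r ↦ end(r) is a ring homomorphism.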
Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms.
== Generalization ==
Algebraists have defined structures more general than rings by weakening or dropping some of ring axioms.
=== Rng ===
A rng is the same as a ring, except that the existence of a multiplicative identity is not assumed.
=== Nonassociative ring ===
A nonassociative ring is an algebraic structure that satisfies all of the ring axioms except the associative property and the existence of a multiplicative identity. A notable example is a Lie algebra. There exists some structure theory for such algebras that generalizes the analogous results for Lie algebras and associative algebras.
=== Semiring ===
A semiring (sometimes rig) is obtained by weakening the assumption that (R, +) is an abelian group to the assumption that (R, +) is a commutative monoid, and adding the axiom that 0 ⋅ a = a ⋅ 0 = 0 for all a in R (since it no longer follows from the other axioms).
Examples:
the non-negative integers
{0, 1, 2, …} with ordinary addition and multiplication;
the tropical semiring.
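As an illustration (not from the article), the tropical semiring can be taken to be the min-plus semiring on ℝ ∪ {+∞}: semiring "addition" is min, "multiplication" is ordinary +, the additive identity is +∞, and the multiplicative identity is 0. The absorbing axiom 0 ⋅ a = a ⋅ 0 = 0 from the text here reads +∞ + a = +∞.

```python
INF = float("inf")  # additive identity of the min-plus semiring

def t_add(a, b):
    # tropical "addition" is minimum
    return min(a, b)

def t_mul(a, b):
    # tropical "multiplication" is ordinary addition
    return a + b
```

Note there are no additive inverses (min cannot be undone), which is precisely why this structure is a semiring and not a ring.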
== Other ring-like objects ==
=== Ring object in a category ===
Let C be a category with finite products. Let pt denote a terminal object of C (an empty product). A ring object in C is an object R equipped with morphisms
a : R × R → R (addition), m : R × R → R (multiplication), 0 : pt → R (additive identity), i : R → R (additive inverse), and 1 : pt → R (multiplicative identity) satisfying the usual ring axioms. Equivalently, a ring object is an object R equipped with a factorization of its functor of points h_R = Hom(−, R) : C^op → Sets through the category of rings: C^op → Rings → Sets, the second functor being the forgetful functor.
=== Ring scheme ===
In algebraic geometry, a ring scheme over a base scheme S is a ring object in the category of S-schemes. One example is the ring scheme Wn over
Spec ℤ, which for any commutative ring A returns the ring W_n(A) of p-isotypic Witt vectors of length n over A.
=== Ring spectrum ===
In algebraic topology, a ring spectrum is a spectrum X together with a multiplication
μ : X ∧ X → X
and a unit map S → X from the sphere spectrum S, such that the ring axiom diagrams commute up to homotopy. In practice, it is common to define a ring spectrum as a monoid object in a good category of spectra such as the category of symmetric spectra.
== See also ==
Special types of rings:
== Notes ==
== Citations ==
== References ==
=== General references ===
=== Special references ===
=== Primary sources ===
=== Historical references ===
In mathematics, p-adic Hodge theory is a theory that provides a way to classify and study p-adic Galois representations of characteristic 0 local fields with residual characteristic p (such as Qp). The theory has its beginnings in Jean-Pierre Serre and John Tate's study of Tate modules of abelian varieties and the notion of Hodge–Tate representation. Hodge–Tate representations are related to certain decompositions of p-adic cohomology theories analogous to the Hodge decomposition, hence the name p-adic Hodge theory. Further developments were inspired by properties of p-adic Galois representations arising from the étale cohomology of varieties. Jean-Marc Fontaine introduced many of the basic concepts of the field.
== General classification of p-adic representations ==
Let K be a local field with residue field k of characteristic p. In this article, a p-adic representation of K (or of G_K, the absolute Galois group of K) will be a continuous representation ρ : G_K → GL(V), where V is a finite-dimensional vector space over ℚ_p. The collection of all p-adic representations of K forms an abelian category denoted Rep_{ℚ_p}(K) in this article. p-adic Hodge theory provides subcollections of p-adic representations based on how nice they are, and also provides faithful functors to categories of linear algebraic objects that are easier to study. The basic classification is as follows:
Rep_crys(K) ⊊ Rep_ss(K) ⊊ Rep_dR(K) ⊊ Rep_HT(K) ⊊ Rep_{ℚ_p}(K),
where each collection is a full subcategory properly contained in the next. In order, these are the categories of crystalline representations, semistable representations, de Rham representations, Hodge–Tate representations, and all p-adic representations. In addition, two other categories of representations can be introduced, the potentially crystalline representations
Rep_pcrys(K) and the potentially semistable representations Rep_pss(K). The latter strictly contains the former, which in turn generally strictly contains Rep_crys(K); additionally, Rep_pss(K) generally strictly contains Rep_ss(K) and is contained in Rep_dR(K) (with equality when the residue field of K is finite, a statement called the p-adic monodromy theorem).
== Period rings and comparison isomorphisms in arithmetic geometry ==
The general strategy of p-adic Hodge theory, introduced by Fontaine, is to construct certain so-called period rings such as BdR, Bst, Bcris, and BHT which have both an action by GK and some linear algebraic structure and to consider so-called Dieudonné modules
D_B(V) = (B ⊗_{ℚ_p} V)^{G_K} (where B is a period ring and V is a p-adic representation), which no longer have a G_K-action but are endowed with linear algebraic structures inherited from the ring B. In particular, they are vector spaces over the fixed field E := B^{G_K}. This construction fits into the formalism of B-admissible representations introduced by Fontaine. For a period ring like the aforementioned ones B∗ (for ∗ = HT, dR, st, cris), the category of p-adic representations Rep∗(K) mentioned above is the category of B∗-admissible ones, i.e. those p-adic representations V for which dim_E D_{B∗}(V) = dim_{ℚ_p} V or, equivalently, for which the comparison morphism α_V : B∗ ⊗_E D_{B∗}(V) → B∗ ⊗_{ℚ_p} V is an isomorphism.
This formalism (and the name period ring) grew out of a few results and conjectures regarding comparison isomorphisms in arithmetic and complex geometry:
If X is a proper smooth scheme over C, there is a classical comparison isomorphism between the algebraic de Rham cohomology of X over C and the singular cohomology of X(C)
H*_dR(X/ℂ) ≅ H*(X(ℂ), ℚ) ⊗_ℚ ℂ.
This isomorphism can be obtained by considering a pairing obtained by integrating differential forms in the algebraic de Rham cohomology over cycles in the singular cohomology. The result of such an integration is called a period and is generally a complex number. This explains why the singular cohomology must be tensored with ℂ, and from this point of view, ℂ can be said to contain all the periods necessary to compare algebraic de Rham cohomology with singular cohomology, and could hence be called a period ring in this situation.
In the mid sixties, Tate conjectured that a similar isomorphism should hold for proper smooth schemes X over K between algebraic de Rham cohomology and p-adic étale cohomology (the Hodge–Tate conjecture, also called CHT). Specifically, let C_K be the completion of an algebraic closure of K, let C_K(i) denote C_K with the G_K-action twisted so that g·z = χ(g)^i g(z) (where χ is the p-adic cyclotomic character and i is an integer), and let
B_HT := ⊕_{i∈ℤ} C_K(i). Then there is a functorial isomorphism B_HT ⊗_K gr H*_dR(X/K) ≅ B_HT ⊗_{ℚ_p} H*_ét(X ×_K K̄, ℚ_p) of graded vector spaces with G_K-action (the de Rham cohomology is equipped with the Hodge filtration, and gr H*_dR
is its associated graded). This conjecture was proved by Gerd Faltings in the late eighties after partial results by several other mathematicians (including Tate himself).
For an abelian variety X with good reduction over a p-adic field K, Alexander Grothendieck reformulated a theorem of Tate's to say that the crystalline cohomology H1(X/W(k)) ⊗ Qp of the special fiber (with the Frobenius endomorphism on this group and the Hodge filtration on this group tensored with K) and the p-adic étale cohomology H1(X,Qp) (with the action of the Galois group of K) contained the same information. Both are equivalent to the p-divisible group associated to X, up to isogeny. Grothendieck conjectured that there should be a way to go directly from p-adic étale cohomology to crystalline cohomology (and back), for all varieties with good reduction over p-adic fields. This suggested relation became known as the mysterious functor.
To improve the Hodge–Tate conjecture to one involving the de Rham cohomology (not just its associated graded), Fontaine constructed a filtered ring BdR whose associated graded is BHT and conjectured the following (called CdR) for any smooth proper scheme X over K:
{\displaystyle B_{\mathrm {dR} }\otimes _{K}H_{\mathrm {dR} }^{\ast }(X/K)\cong B_{\mathrm {dR} }\otimes _{\mathbf {Q} _{p}}H_{\mathrm {{\acute {e}}t} }^{\ast }(X\times _{K}{\overline {K}},\mathbf {Q} _{p})}
as filtered vector spaces with GK-action. In this way, BdR could be said to contain all (p-adic) periods required to compare algebraic de Rham cohomology with p-adic étale cohomology, just as the complex numbers above were used with the comparison with singular cohomology. This is where BdR obtains its name of ring of p-adic periods.
Similarly, to formulate a conjecture explaining Grothendieck's mysterious functor, Fontaine introduced a ring Bcris with GK-action, a "Frobenius" φ, and a filtration after extending scalars from K0 to K. He conjectured the following (called Ccris) for any smooth proper scheme X over K with good reduction:
{\displaystyle B_{\mathrm {cris} }\otimes _{K_{0}}H_{\mathrm {dR} }^{\ast }(X/K)\cong B_{\mathrm {cris} }\otimes _{\mathbf {Q} _{p}}H_{\mathrm {{\acute {e}}t} }^{\ast }(X\times _{K}{\overline {K}},\mathbf {Q} _{p})}
as vector spaces with φ-action, GK-action, and filtration after extending scalars to K (here
{\displaystyle H_{\mathrm {dR} }^{\ast }(X/K)}
is given its structure as a K0-vector space with φ-action given by its comparison with crystalline cohomology). Both the CdR and the Ccris conjectures were proved by Faltings.
Upon comparing these two conjectures with the notion of B∗-admissible representations above, it is seen that if X is a proper smooth scheme over K (with good reduction) and V is the p-adic Galois representation obtained as its ith p-adic étale cohomology group, then
{\displaystyle D_{B_{\ast }}(V)=H_{\mathrm {dR} }^{i}(X/K).}
In other words, the Dieudonné modules should be thought of as giving the other cohomologies related to V.
In the late eighties, Fontaine and Uwe Jannsen formulated another comparison isomorphism conjecture, Cst, this time allowing X to have semi-stable reduction. Fontaine constructed a ring Bst with GK-action, a "Frobenius" φ, a filtration after extending scalars from K0 to K (and fixing an extension of the p-adic logarithm), and a "monodromy operator" N. When X has semi-stable reduction, the de Rham cohomology can be equipped with the φ-action and a monodromy operator by its comparison with the log-crystalline cohomology first introduced by Osamu Hyodo. The conjecture then states that
{\displaystyle B_{\mathrm {st} }\otimes _{K_{0}}H_{\mathrm {log-cris} }^{\ast }(X/K)\cong B_{\mathrm {st} }\otimes _{\mathbf {Q} _{p}}H_{\mathrm {{\acute {e}}t} }^{\ast }(X\times _{K}{\overline {K}},\mathbf {Q} _{p})}
as vector spaces with φ-action, GK-action, filtration after extending scalars to K, and monodromy operator N. This conjecture was proved in the late nineties by Takeshi Tsuji.
== See also ==
Hodge theory
Arakelov theory
Hodge-Arakelov theory
p-adic Teichmüller theory
== References ==
Tate, John (1967), "p-Divisible Groups", Proceedings of a Conference on Local Fields, Springer, pp. 158–183, doi:10.1007/978-3-642-87942-5_12, ISBN 978-3-642-87942-5
Faltings, Gerd (1988), "p-adic Hodge theory", Journal of the American Mathematical Society, 1 (1): 255–299, doi:10.2307/1990970, JSTOR 1990970, MR 0924705
Faltings, Gerd (1989), "Crystalline cohomology and p-adic Galois representations", in Igusa, Jun-Ichi (ed.), Algebraic analysis, geometry, and number theory, Baltimore, MD: Johns Hopkins University Press, pp. 25–80, ISBN 978-0-8018-3841-5, MR 1463696
Fontaine, Jean-Marc (1982), "Sur certains types de représentations p-adiques du groupe de Galois d'un corps local; construction d'un anneau de Barsotti–Tate", Annals of Mathematics, 115 (3): 529–577, doi:10.2307/2007012, JSTOR 2007012, MR 0657238
Grothendieck, Alexander (1971), "Groupes de Barsotti–Tate et cristaux", Actes du Congrès International des Mathématiciens (Nice, 1970), vol. 1, pp. 431–436, MR 0578496
Hyodo, Osamu (1991), "On the de Rham–Witt complex attached to a semi-stable family", Compositio Mathematica, 78 (3): 241–260, MR 1106296
Serre, Jean-Pierre (1967), "Résumé des cours, 1965–66", Annuaire du Collège de France, Paris, pp. 49–58
Tsuji, Takeshi (1999), "p-adic étale cohomology and crystalline cohomology in the semi-stable reduction case", Inventiones Mathematicae, 137 (2): 233–411, Bibcode:1999InMat.137..233T, doi:10.1007/s002220050330, MR 1705837, S2CID 121547567
Berger, Laurent (2004), "An introduction to the theory of p-adic representations", Geometric aspects of Dwork theory, vol. I, Berlin: Walter de Gruyter GmbH & Co. KG, arXiv:math/0210184, Bibcode:2002math.....10184B, ISBN 978-3-11-017478-6, MR 2023292
Brinon, Olivier; Conrad, Brian (2009), CMI Summer School notes on p-adic Hodge theory (PDF), retrieved 2010-02-05
Fontaine, Jean-Marc, ed. (1994), Périodes p-adiques, Astérisque, vol. 223, Paris: Société Mathématique de France, MR 1293969
Illusie, Luc (1990), "Cohomologie de de Rham et cohomologie étale p-adique (d'après G. Faltings, J.-M. Fontaine et al.) Exp. 726", Séminaire Bourbaki. Vol. 1989/90. Exposés 715–729, Astérisque, vol. 189–190, Paris: Société Mathématique de France, pp. 325–374, MR 1099881
In mathematics, an algebraic function field (often abbreviated as function field) of n variables over a field k is a finitely generated field extension K/k which has transcendence degree n over k. Equivalently, an algebraic function field of n variables over k may be defined as a finite field extension of the field K = k(x1,...,xn) of rational functions in n variables over k.
== Example ==
As an example, in the polynomial ring k[X,Y] consider the ideal generated by the irreducible polynomial Y^2 − X^3 and form the field of fractions of the quotient ring k[X,Y]/(Y^2 − X^3). This is a function field of one variable over k; it can also be written as
{\displaystyle k(X)({\sqrt {X^{3}}})} (with degree 2 over {\displaystyle k(X)}) or as {\displaystyle k(Y)({\sqrt[{3}]{Y^{2}}})} (with degree 3 over {\displaystyle k(Y)}). We see that the degree of an algebraic function field is not a well-defined notion.
== Category structure ==
The algebraic function fields over k form a category; the morphisms from function field K to L are the ring homomorphisms f : K → L with f(a) = a for all a in k. All these morphisms are injective. If K is a function field over k of n variables, and L is a function field in m variables, and n > m, then there are no morphisms from K to L.
== Function fields arising from varieties, curves and Riemann surfaces ==
The function field of an algebraic variety of dimension n over k is an algebraic function field of n variables over k.
Two varieties are birationally equivalent if and only if their function fields are isomorphic. (But note that non-isomorphic varieties may have the same function field!) Assigning to each variety its function field yields a duality (contravariant equivalence) between the category of varieties over k (with dominant rational maps as morphisms) and the category of algebraic function fields over k. (The varieties considered here are to be taken in the scheme sense; they need not have any k-rational points, like the curve X^2 + Y^2 + 1 = 0 defined over the reals, that is with k = R.)
The case n = 1 (irreducible algebraic curves in the scheme sense) is especially important, since every function field of one variable over k arises as the function field of a uniquely defined regular (i.e. non-singular) projective irreducible algebraic curve over k. In fact, the function field yields a duality between the category of regular projective irreducible algebraic curves (with dominant regular maps as morphisms) and the category of function fields of one variable over k.
The field M(X) of meromorphic functions defined on a connected Riemann surface X is a function field of one variable over the complex numbers C. In fact, M yields a duality (contravariant equivalence) between the category of compact connected Riemann surfaces (with non-constant holomorphic maps as morphisms) and function fields of one variable over C. A similar correspondence exists between compact connected Klein surfaces and function fields in one variable over R.
== Number fields and finite fields ==
The function field analogy states that almost all theorems on number fields have a counterpart on function fields of one variable over a finite field, and these counterparts are frequently easier to prove. (For example, see Analogue for irreducible polynomials over a finite field.) In the context of this analogy, both number fields and function fields over finite fields are usually called "global fields".
The study of function fields over a finite field has applications in cryptography and error correcting codes. For example, the function field of an elliptic curve over a finite field (an important mathematical tool for public key cryptography) is an algebraic function field.
Function fields over the field of rational numbers play also an important role in solving inverse Galois problems.
== Field of constants ==
Given any algebraic function field K over k, we can consider the set of elements of K which are algebraic over k. These elements form a field, known as the field of constants of the algebraic function field.
For instance, C(x) is a function field of one variable over R; its field of constants is C.
== Valuations and places ==
Key tools to study algebraic function fields are absolute values, valuations, places and their completions.
Given an algebraic function field K/k of one variable, we define the notion of a valuation ring of K/k: this is a subring O of K that contains k and is different from k and K, and such that for any x in K we have x ∈ O or x^{-1} ∈ O. Each such valuation ring is a discrete valuation ring and its maximal ideal is called a place of K/k.
A discrete valuation of K/k is a surjective function v : K → Z∪{∞} such that v(x) = ∞ iff x = 0, v(xy) = v(x) + v(y) and v(x + y) ≥ min(v(x),v(y)) for all x, y ∈ K, and v(a) = 0 for all a ∈ k \ {0}.
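As a concrete illustration, the discrete valuation of Q(X)/Q at the place X = 0 can be computed directly. The sketch below is not a standard library API: it uses a homemade dense-coefficient representation of polynomials (tuples of rational coefficients in increasing powers of X) and checks the valuation axioms on sample elements.

```python
from fractions import Fraction

def ord0(coeffs):
    """Order of vanishing at X = 0: index of the first nonzero coefficient."""
    for i, c in enumerate(coeffs):
        if c != 0:
            return i
    return float("inf")  # convention for the zero polynomial

def polymul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return tuple(out)

def polyadd(a, b):
    n = max(len(a), len(b))
    return tuple((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n))

def mul(f, g):
    return (polymul(f[0], g[0]), polymul(f[1], g[1]))

def add(f, g):
    return (polyadd(polymul(f[0], g[1]), polymul(g[0], f[1])),
            polymul(f[1], g[1]))

def v(f):
    """Discrete valuation at X = 0: v(num/den) = ord0(num) - ord0(den)."""
    return ord0(f[0]) - ord0(f[1])

f = ((Fraction(0), Fraction(0), Fraction(1), Fraction(1)), (Fraction(1),))  # X^2 + X^3
g = ((Fraction(0), Fraction(1)), (Fraction(3), Fraction(1)))                # X / (3 + X)

assert v(f) == 2 and v(g) == 1
assert v(mul(f, g)) == v(f) + v(g)               # v(xy) = v(x) + v(y)
assert v(add(f, g)) >= min(v(f), v(g))           # v(x + y) >= min(v(x), v(y))
assert v(((Fraction(5),), (Fraction(1),))) == 0  # nonzero constants of k have v = 0
```

The corresponding valuation ring O consists of the rational functions with v ≥ 0, and its maximal ideal (the place) of those with v > 0.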
There are natural bijective correspondences between the set of valuation rings of K/k, the set of places of K/k, and the set of discrete valuations of K/k. These sets can be given a natural topological structure: the Zariski–Riemann space of K/k.
== See also ==
function field of an algebraic variety
function field (scheme theory)
algebraic function
Drinfeld module
== References ==
In mathematics, an L-function is a meromorphic function on the complex plane, associated to one out of several categories of mathematical objects. An L-series is a Dirichlet series, usually convergent on a half-plane, that may give rise to an L-function via analytic continuation. The Riemann zeta function is an example of an L-function, and some important conjectures involving L-functions are the Riemann hypothesis and its generalizations.
The theory of L-functions has become a very substantial, and still largely conjectural, part of contemporary analytic number theory. In it, broad generalisations of the Riemann zeta function and the L-series for a Dirichlet character are constructed, and their general properties, in most cases still out of reach of proof, are set out in a systematic way. Because of the Euler product formula there is a deep connection between L-functions and the theory of prime numbers.
The mathematical field that studies L-functions is sometimes called analytic theory of L-functions.
== Construction ==
We distinguish at the outset between the L-series, an infinite series representation (for example the Dirichlet series for the Riemann zeta function), and the L-function, the function in the complex plane that is its analytic continuation. The general constructions start with an L-series, defined first as a Dirichlet series, and then by an expansion as an Euler product indexed by prime numbers. Estimates are required to prove that this converges in some right half-plane of the complex numbers. Then one asks whether the function so defined can be analytically continued to the rest of the complex plane (perhaps with some poles).
It is this (conjectural) meromorphic continuation to the complex plane which is called an L-function. In the classical cases, already, one knows that useful information is contained in the values and behaviour of the L-function at points where the series representation does not converge. The general term L-function here includes many known types of zeta functions. The Selberg class is an attempt to capture the core properties of L-functions in a set of axioms, thus encouraging the study of the properties of the class rather than of individual functions.
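The two representations mentioned above (Dirichlet series and Euler product) can be compared numerically in the simplest case, the Riemann zeta function. This is only a rough sketch with float truncations, not a serious computation of an L-function; both truncations approach ζ(2) = π²/6.

```python
import math

def zeta_series(s, terms=100000):
    """Truncated Dirichlet series: sum over n >= 1 of n^(-s)."""
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_euler(s, bound=100000):
    """Truncated Euler product over primes p <= bound of (1 - p^(-s))^(-1)."""
    sieve = [True] * (bound + 1)
    sieve[:2] = [False, False]
    prod = 1.0
    for p in range(2, bound + 1):
        if sieve[p]:
            prod /= 1.0 - p ** -s
            for m in range(p * p, bound + 1, p):
                sieve[m] = False
    return prod

# Both truncations agree with zeta(2) = pi^2 / 6 up to a small truncation error
print(abs(zeta_series(2) - math.pi ** 2 / 6))
print(abs(zeta_euler(2) - math.pi ** 2 / 6))
```

The agreement of the two truncations reflects the Euler product formula, which encodes unique factorization and is the source of the connection with prime numbers mentioned above.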
== Conjectural information ==
One can list characteristics of known examples of L-functions that one would wish to see generalized:
location of zeros and poles;
functional equation, with respect to some vertical line Re(s) = constant;
interesting values at integers related to quantities from algebraic K-theory.
Detailed work has produced a large body of plausible conjectures, for example about the exact type of functional equation that should apply. Since the Riemann zeta function connects through its values at positive even integers (and negative odd integers) to the Bernoulli numbers, one looks for an appropriate generalisation of that phenomenon. In that case results have been obtained for p-adic L-functions, which describe certain Galois modules.
The statistics of the zero distributions are of interest because of their connection to problems like the generalized Riemann hypothesis, distribution of prime numbers, etc. The connections with random matrix theory and quantum chaos are also of interest. The fractal structure of the distributions has been studied using rescaled range analysis. The self-similarity of the zero distribution is quite remarkable, and is characterized by a large fractal dimension of 1.9. This rather large fractal dimension is found over zeros covering at least fifteen orders of magnitude for the Riemann zeta function, and also for the zeros of other L-functions of different orders and conductors.
== Birch and Swinnerton-Dyer conjecture ==
One of the influential examples, both for the history of the more general L-functions and as a still-open research problem, is the conjecture developed by Bryan Birch and Peter Swinnerton-Dyer in the early part of the 1960s. It applies to an elliptic curve E, and the problem it attempts to solve is the prediction of the rank of the elliptic curve over the rational numbers (or another global field): i.e. the number of free generators of its group of rational points. Much previous work in the area began to be unified around a better knowledge of L-functions. This was something like a paradigm example of the nascent theory of L-functions.
== Rise of the general theory ==
This development preceded the Langlands program by a few years, and can be regarded as complementary to it: Langlands' work relates largely to Artin L-functions, which, like Hecke L-functions, were defined several decades earlier, and to L-functions attached to general automorphic representations.
Gradually it became clearer in what sense the construction of Hasse–Weil zeta functions might be made to work to provide valid L-functions, in the analytic sense: there should be some input from analysis, which meant automorphic analysis. The general case now unifies at a conceptual level a number of different research programs.
== See also ==
== References ==
Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
== External links ==
"LMFDB, the database of L-functions, modular forms, and related objects".
Lavrik, A.F. (2001) [1994]. "L-function". Encyclopedia of Mathematics. EMS Press.
Articles about a breakthrough third degree transcendental L-function
"Glimpses of a new (mathematical) world". Mathematics. Physorg.com. American Institute of Mathematics. March 13, 2008.
Rehmeyer, Julie (April 2, 2008). "Creeping Up on Riemann". Science News. Archived from the original on February 16, 2012. Retrieved August 5, 2008.
"Hunting the elusive L-function". Mathematics. Physorg.com. University of Bristol. August 6, 2008.
In number theory, the study of Diophantine approximation deals with the approximation of real numbers by rational numbers. It is named after Diophantus of Alexandria.
The first problem was to know how well a real number can be approximated by rational numbers. For this problem, a rational number p/q is a "good" approximation of a real number α if the absolute value of the difference between p/q and α may not decrease if p/q is replaced by another rational number with a smaller denominator. This problem was solved during the 18th century by means of simple continued fractions.
Knowing the "best" approximations of a given number, the main problem of the field is to find sharp upper and lower bounds of the above difference, expressed as a function of the denominator. It appears that these bounds depend on the nature of the real numbers to be approximated: the lower bound for the approximation of a rational number by another rational number is larger than the lower bound for algebraic numbers, which is itself larger than the lower bound for all real numbers. Thus a real number that may be better approximated than the bound for algebraic numbers is certainly a transcendental number.
This knowledge enabled Liouville, in 1844, to produce the first explicit transcendental number. Later, the proofs that π and e are transcendental were obtained by a similar method.
Diophantine approximations and transcendental number theory are very close areas that share many theorems and methods. Diophantine approximations also have important applications in the study of Diophantine equations.
The 2022 Fields Medal was awarded to James Maynard, in part for his work on Diophantine approximation.
== Best Diophantine approximations of a real number ==
Given a real number α, there are two ways to define a best Diophantine approximation of α. For the first definition, the rational number p/q is a best Diophantine approximation of α if
{\displaystyle \left|\alpha -{\frac {p}{q}}\right|<\left|\alpha -{\frac {p'}{q'}}\right|,}
for every rational number p'/q' different from p/q such that 0 < q′ ≤ q.
For the second definition, the above inequality is replaced by
{\displaystyle \left|q\alpha -p\right|<\left|q^{\prime }\alpha -p^{\prime }\right|.}
A best approximation for the second definition is also a best approximation for the first one, but the converse is not true in general.
The theory of continued fractions allows us to compute the best approximations of a real number: for the second definition, they are the convergents of its expression as a regular continued fraction. For the first definition, one has to consider also the semiconvergents.
For example, the constant e = 2.718281828459045235... has the (regular) continued fraction representation
{\displaystyle [2;1,2,1,1,4,1,1,6,1,1,8,1,\ldots \;].}
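The continued fraction pattern of e above can be checked with exact rational arithmetic. A minimal sketch, assuming a high-precision rational stand-in for e built from its Taylor series (the error, below 1/40!, is far too small to disturb the first partial quotients):

```python
from fractions import Fraction
from math import factorial

# High-precision rational stand-in for e (error < 1/40!)
e_approx = sum(Fraction(1, factorial(k)) for k in range(40))

def cf_terms(x, n):
    """First n partial quotients of the regular continued fraction of x."""
    terms = []
    for _ in range(n):
        a = x.numerator // x.denominator  # floor of a Fraction
        terms.append(a)
        x = x - a
        if x == 0:
            break
        x = 1 / x
    return terms

print(cf_terms(e_approx, 13))  # [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1]
```

The output exhibits the [2; 1, 2k, 1] pattern of e's expansion.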
Its best approximations for the second definition are
{\displaystyle 3,{\tfrac {8}{3}},{\tfrac {11}{4}},{\tfrac {19}{7}},{\tfrac {87}{32}},\ldots \,,}
while, for the first definition, they are
{\displaystyle 3,{\tfrac {5}{2}},{\tfrac {8}{3}},{\tfrac {11}{4}},{\tfrac {19}{7}},{\tfrac {49}{18}},{\tfrac {68}{25}},{\tfrac {87}{32}},{\tfrac {106}{39}},\ldots \,.}
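The second-definition list can be recovered by brute force: for each denominator q one takes the nearest numerator p and keeps p/q whenever |qα − p| sets a new record. A sketch with exact arithmetic (recovering the first definition's list would additionally require the semiconvergents, which this record search does not produce):

```python
from fractions import Fraction
from math import factorial

e_approx = sum(Fraction(1, factorial(k)) for k in range(40))  # e to ~45 digits

def best_approximations(alpha, qmax):
    """Record-setters for the second definition: keep p/q when |q*alpha - p|
    beats every smaller denominator."""
    best, records = None, []
    for q in range(1, qmax + 1):
        p = round(q * alpha)          # nearest integer numerator for this q
        err = abs(q * alpha - p)
        if best is None or err < best:
            best = err
            records.append(Fraction(p, q))
    return records

print(best_approximations(e_approx, 32))
# [Fraction(3, 1), Fraction(8, 3), Fraction(11, 4), Fraction(19, 7), Fraction(87, 32)]
```

The records occur exactly at the convergent denominators 1, 3, 4, 7, 32, matching the list above.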
== Measure of the accuracy of approximations ==
The obvious measure of the accuracy of a Diophantine approximation of a real number α by a rational number p/q is
{\textstyle \left|\alpha -{\frac {p}{q}}\right|.}
However, this quantity can always be made arbitrarily small by increasing the absolute values of p and q; thus the accuracy of the approximation is usually estimated by comparing this quantity to some function φ of the denominator q, typically a negative power of it.
For such a comparison, one may want upper bounds or lower bounds of the accuracy. A lower bound is typically described by a theorem like "for every element α of some subset of the real numbers and every rational number p/q, we have
{\textstyle \left|\alpha -{\frac {p}{q}}\right|>\phi (q)}
". In some cases, "every rational number" may be replaced by "all rational numbers except a finite number of them", which amounts to multiplying φ by some constant depending on α.
For upper bounds, one has to take into account that not all the "best" Diophantine approximations provided by the convergents may have the desired accuracy. Therefore, the theorems take the form "for every element α of some subset of the real numbers, there are infinitely many rational numbers p/q such that
{\textstyle \left|\alpha -{\frac {p}{q}}\right|<\phi (q)}
".
=== Badly approximable numbers ===
A badly approximable number is an x for which there is a positive constant c such that for all rational p/q we have
{\displaystyle \left|{x-{\frac {p}{q}}}\right|>{\frac {c}{q^{2}}}\ .}
The badly approximable numbers are precisely those with bounded partial quotients.
Equivalently, a number is badly approximable if and only if its Markov constant is finite or equivalently its simple continued fraction is bounded.
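The standard example of a badly approximable number is √2, whose partial quotients are all 2. A quick numerical sketch (floats only, finite range of denominators): the quantity q²|√2 − p/q| stays bounded away from 0, with liminf equal to the reciprocal of the Markov constant 2√2.

```python
import math

def markov_quantity(alpha, qmax):
    """min over 1 <= q <= qmax of q^2 * |alpha - p/q|, p the nearest integer."""
    return min(q * abs(q * alpha - round(q * alpha)) for q in range(1, qmax + 1))

m = markov_quantity(math.sqrt(2), 2000)
print(m)  # bounded below by 1/3 on this range; the liminf is 1/(2*sqrt(2)) ≈ 0.354
```

A Liouville-type number, by contrast, would drive this quantity toward 0.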
== Lower bounds for Diophantine approximations ==
=== Approximation of a rational by other rationals ===
A rational number {\textstyle \alpha ={\frac {a}{b}}} may be obviously and perfectly approximated by {\textstyle {\frac {p_{i}}{q_{i}}}={\frac {i\,a}{i\,b}}} for every positive integer i.
If {\textstyle {\frac {p}{q}}\not =\alpha ={\frac {a}{b}}\,,} we have
{\displaystyle \left|{\frac {a}{b}}-{\frac {p}{q}}\right|=\left|{\frac {aq-bp}{bq}}\right|\geq {\frac {1}{bq}},}
because {\displaystyle |aq-bp|} is a positive integer and is thus not lower than 1. Thus the accuracy of the approximation is bad relative to irrational numbers (see next sections).
It may be remarked that the preceding proof uses a variant of the pigeonhole principle: a non-negative integer that is not 0 is not smaller than 1. This apparently trivial remark is used in almost every proof of lower bounds for Diophantine approximations, even the most sophisticated ones.
In summary, a rational number is perfectly approximated by itself, but is badly approximated by any other rational number.
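The 1/(bq) lower bound above can be verified exhaustively for a concrete rational, here 22/7, using exact rational arithmetic (only numerators near 22q/7 need checking, since farther ones are even worse approximations):

```python
from fractions import Fraction

alpha = Fraction(22, 7)  # a/b with b = 7
b = alpha.denominator

# |a/b - p/q| >= 1/(b*q) for every p/q != a/b; checked exactly for q up to 100
for q in range(1, 101):
    center = (alpha.numerator * q) // b
    for p in range(center - 2, center + 3):
        r = Fraction(p, q)
        if r != alpha:
            assert abs(alpha - r) >= Fraction(1, b * q)

# The bound is attained exactly when |a*q - b*p| = 1, e.g. p/q = 3/1:
print(abs(alpha - 3) == Fraction(1, 7))  # True
```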
=== Approximation of algebraic numbers, Liouville's result ===
In the 1840s, Joseph Liouville obtained the first lower bound for the approximation of algebraic numbers: If x is an irrational algebraic number of degree n over the rational numbers, then there exists a constant c(x) > 0 such that
{\displaystyle \left|x-{\frac {p}{q}}\right|>{\frac {c(x)}{q^{n}}}}
holds for all integers p and q where q > 0.
This result allowed him to produce the first proven example of a transcendental number, the Liouville constant
{\displaystyle \sum _{j=1}^{\infty }10^{-j!}=0.110001000000000000000001000\ldots \,,}
which does not satisfy Liouville's theorem, whichever degree n is chosen.
This link between Diophantine approximations and transcendental number theory continues to the present day. Many of the proof techniques are shared between the two areas.
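The mechanism behind Liouville's example can be checked with exact arithmetic: the k-th partial sum is a rational p/q with q = 10^{k!}, and the remaining tail is roughly 10^{-(k+1)!}, far smaller than 1/q^k. A sketch using a deep partial sum as a stand-in for the full constant:

```python
from fractions import Fraction
from math import factorial

def liouville_partial(k):
    """Partial sum p/q of the Liouville constant, with q = 10^(k!)."""
    return sum(Fraction(1, 10 ** factorial(j)) for j in range(1, k + 1))

L = liouville_partial(6)  # stand-in for the full constant (error < 10^-5040)

for k in (2, 3, 4):
    q = 10 ** factorial(k)
    # the approximation already beats the Liouville bound with exponent n = k
    assert abs(L - liouville_partial(k)) < Fraction(1, q ** k)
```

Since k is unbounded, no inequality of Liouville type with a fixed degree n can hold, so the constant is transcendental.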
=== Approximation of algebraic numbers, Thue–Siegel–Roth theorem ===
Over more than a century, there were many efforts to improve Liouville's theorem: every improvement of the bound enables us to prove that more numbers are transcendental. The main improvements are due to Axel Thue (1909), Siegel (1921), Freeman Dyson (1947), and Klaus Roth (1955), leading finally to the Thue–Siegel–Roth theorem: If x is an irrational algebraic number and ε > 0, then there exists a positive real number c(x, ε) such that
{\displaystyle \left|x-{\frac {p}{q}}\right|>{\frac {c(x,\varepsilon )}{q^{2+\varepsilon }}}}
holds for every integer p and q such that q > 0.
In some sense, this result is optimal, as the theorem would be false with ε = 0. This is an immediate consequence of the upper bounds described below.
=== Simultaneous approximations of algebraic numbers ===
Subsequently, Wolfgang M. Schmidt generalized this to the case of simultaneous approximations, proving that: If x1, ..., xn are algebraic numbers such that 1, x1, ..., xn are linearly independent over the rational numbers and ε is any given positive real number, then there are only finitely many rational n-tuples (p1/q, ..., pn/q) such that
{\displaystyle \left|x_{i}-{\frac {p_{i}}{q}}\right|<q^{-(1+1/n+\varepsilon )},\quad i=1,\ldots ,n.}
Again, this result is optimal in the sense that one may not remove ε from the exponent.
=== Effective bounds ===
All preceding lower bounds are not effective, in the sense that the proofs do not provide any way to compute the constant implied in the statements. This means that one cannot use the results or their proofs to obtain bounds on the size of solutions of related Diophantine equations. However, these techniques and results can often be used to bound the number of solutions of such equations.
Nevertheless, a refinement of Baker's theorem by Feldman provides an effective bound: if x is an algebraic number of degree n over the rational numbers, then there exist effectively computable constants c(x) > 0 and 0 < d(x) < n such that
{\displaystyle \left|x-{\frac {p}{q}}\right|>{\frac {c(x)}{|q|^{d(x)}}}}
holds for all rational integers p and q with q ≠ 0.
However, as for every effective version of Baker's theorem, the constants d and 1/c are so large that this effective result cannot be used in practice.
== Upper bounds for Diophantine approximations ==
=== General upper bound ===
The first important result about upper bounds for Diophantine approximations is Dirichlet's approximation theorem, which implies that, for every irrational number α, there are infinitely many fractions
{\displaystyle {\tfrac {p}{q}}\;}
such that
{\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{q^{2}}}\,.}
This implies immediately that one cannot suppress the ε in the statement of the Thue–Siegel–Roth theorem.
Adolf Hurwitz (1891) strengthened this result, proving that for every irrational number α, there are infinitely many fractions
{\displaystyle {\tfrac {p}{q}}\;}
such that
{\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{{\sqrt {5}}q^{2}}}\,.}
Therefore,
{\displaystyle {\frac {1}{{\sqrt {5}}\,q^{2}}}}
is an upper bound for the Diophantine approximations of any irrational number.
The constant in this result may not be further improved without excluding some irrational numbers (see below).
Émile Borel (1903) showed that, in fact, given any irrational number α, and given three consecutive convergents of α, at least one must satisfy the inequality given in Hurwitz's Theorem.
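Both Dirichlet's bound and the Borel refinement can be checked exactly on the convergents of a concrete number. A sketch using the exact rational value of the float nearest π as a stand-in (its early convergents coincide with π's); the Hurwitz inequality is tested in the squared form 5q⁴(α − p/q)² < 1 to stay in integer arithmetic:

```python
from fractions import Fraction
import math

alpha = Fraction(math.pi)  # exact value of the float nearest pi

def convergents(x, n):
    """First n continued-fraction convergents of a positive Fraction x."""
    result = []
    h0, k0 = 1, 0                 # h_{-1}, k_{-1}
    a = x.numerator // x.denominator
    h1, k1 = a, 1                 # h_0, k_0
    x -= a
    result.append(Fraction(h1, k1))
    while len(result) < n and x != 0:
        x = 1 / x
        a = x.numerator // x.denominator
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        x -= a
        result.append(Fraction(h1, k1))
    return result

cs = convergents(alpha, 8)

# Dirichlet: every convergent p/q satisfies |alpha - p/q| < 1/q^2
for c in cs:
    assert (alpha - c) ** 2 * c.denominator ** 4 < 1

# Borel: among any three consecutive convergents, at least one satisfies
# |alpha - p/q| < 1/(sqrt(5) q^2), i.e. 5 q^4 (alpha - p/q)^2 < 1
for i in range(len(cs) - 2):
    assert any(5 * c.denominator ** 4 * (alpha - c) ** 2 < 1 for c in cs[i:i + 3])
```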
=== Equivalent real numbers ===
Definition: Two real numbers {\displaystyle x,y} are called equivalent if there are integers {\displaystyle a,b,c,d\;} with {\displaystyle ad-bc=\pm 1\;} such that:
{\displaystyle y={\frac {ax+b}{cx+d}}\,.}
So equivalence is defined by an integer Möbius transformation on the real numbers, or by a member of the Modular group {\displaystyle {\text{SL}}_{2}^{\pm }(\mathbb {Z} )}, the set of invertible 2 × 2 matrices over the integers. Each rational number is equivalent to 0; thus the rational numbers are an equivalence class for this relation.
The equivalence may be read on the regular continued fraction representation, as shown by the following theorem of Serret:
Theorem: Two irrational numbers x and y are equivalent if and only if there exist two positive integers h and k such that the regular continued fraction representations of x and y
{\displaystyle {\begin{aligned}x&=[u_{0};u_{1},u_{2},\ldots ]\,,\\y&=[v_{0};v_{1},v_{2},\ldots ]\,,\end{aligned}}}
satisfy
{\displaystyle u_{h+i}=v_{k+i}} for every non-negative integer i.
Thus, except for a finite initial sequence, equivalent numbers have the same continued fraction representation.
Equivalent numbers are approximable to the same degree, in the sense that they have the same Markov constant.
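Serret's theorem can be observed directly on an example. A sketch, assuming a 40-digit rational stand-in for √2 and an illustrative Möbius transformation with determinant −1 (accurate enough that the first dozen partial quotients are those of the true numbers):

```python
from fractions import Fraction
from math import isqrt

sqrt2 = Fraction(isqrt(2 * 10 ** 80), 10 ** 40)  # sqrt(2) to ~40 digits

# y = (3x + 2)/(2x + 1) with x = sqrt(2); determinant 3*1 - 2*2 = -1
y = (3 * sqrt2 + 2) / (2 * sqrt2 + 1)

def cf_terms(x, n):
    """First n partial quotients of the regular continued fraction of x."""
    terms = []
    for _ in range(n):
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

print(cf_terms(sqrt2, 12))  # [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
print(cf_terms(y, 12))      # [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2]
```

After a finite initial segment, both expansions share the same tail of 2s, as the theorem predicts.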
=== Lagrange spectrum ===
As said above, the constant in Borel's theorem may not be improved, as shown by Adolf Hurwitz in 1891.
Let {\displaystyle \phi ={\tfrac {1+{\sqrt {5}}}{2}}} be the golden ratio.
Then for any real constant c with {\displaystyle c>{\sqrt {5}}\;} there are only a finite number of rational numbers p/q such that
{\displaystyle \left|\phi -{\frac {p}{q}}\right|<{\frac {1}{c\,q^{2}}}.}
Hence an improvement can only be achieved if the numbers which are equivalent to {\displaystyle \phi } are excluded. More precisely:
For every irrational number {\displaystyle \alpha }, which is not equivalent to {\displaystyle \phi }, there are infinitely many fractions {\displaystyle {\tfrac {p}{q}}\;} such that
{\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{{\sqrt {8}}q^{2}}}.}
By successive exclusions of more and more equivalence classes (next one must exclude the numbers equivalent to {\displaystyle {\sqrt {2}}}), the lower bound can be further enlarged.
The values which may be generated in this way are Lagrange numbers, which are part of the Lagrange spectrum.
They converge to the number 3 and are related to the Markov numbers.
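The relation to the Markov numbers can be sketched numerically, using the classical formula L = √(9 − 4/m²) that attaches a Lagrange number to each Markov number m (the Markov numbers listed here are the first few of that sequence):

```python
from math import sqrt

markov = [1, 2, 5, 13, 29, 34, 89]  # first Markov numbers
lagrange = [sqrt(9 - 4 / m ** 2) for m in markov]

print(lagrange[:2])  # first two Lagrange numbers: sqrt(5) and sqrt(8)
# the sequence is increasing, stays below 3, and approaches 3
```

The first two values recover the constants √5 (Hurwitz) and √8 appearing above.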
== Khinchin's theorem on metric Diophantine approximation and extensions ==
Let {\displaystyle \psi } be a positive real-valued function on positive integers (i.e., a positive sequence) such that {\displaystyle q\psi (q)} is non-increasing. A real number x (not necessarily algebraic) is called {\displaystyle \psi }-approximable if there exist infinitely many rational numbers p/q such that
{\displaystyle \left|x-{\frac {p}{q}}\right|<{\frac {\psi (q)}{|q|}}.}
Aleksandr Khinchin proved in 1926 that if the series {\textstyle \sum _{q}\psi (q)} diverges, then almost every real number (in the sense of Lebesgue measure) is {\displaystyle \psi }-approximable, and if the series converges, then almost every real number is not {\displaystyle \psi }-approximable. The circle of ideas surrounding this theorem and its relatives is known as metric Diophantine approximation or the metric theory of Diophantine approximation (not to be confused with height "metrics" in Diophantine geometry) or metric number theory.
Duffin & Schaeffer (1941) proved a generalization of Khinchin's result, and posed what is now known as the Duffin–Schaeffer conjecture on the analogue of Khinchin's dichotomy for general, not necessarily decreasing, sequences {\displaystyle \psi }. Beresnevich & Velani (2006) proved that a Hausdorff measure analogue of the Duffin–Schaeffer conjecture is equivalent to the original Duffin–Schaeffer conjecture, which is a priori weaker.
In July 2019, Dimitris Koukoulopoulos and James Maynard announced a proof of the conjecture.
=== Hausdorff dimension of exceptional sets ===
An important example of a function {\displaystyle \psi } to which Khinchin's theorem can be applied is the function {\displaystyle \psi _{c}(q)=q^{-c}}, where c > 1 is a real number. For this function, the relevant series converges and so Khinchin's theorem tells us that almost every point is not {\displaystyle \psi _{c}}-approximable. Thus, the set of numbers which are {\displaystyle \psi _{c}}-approximable forms a subset of the real line of Lebesgue measure zero. The Jarník–Besicovitch theorem, due to V. Jarník and A. S. Besicovitch, states that the Hausdorff dimension of this set is equal to {\displaystyle 1/c}. In particular, the set of numbers which are {\displaystyle \psi _{c}}-approximable for some {\displaystyle c>1} (known as the set of very well approximable numbers) has Hausdorff dimension one, while the set of numbers which are {\displaystyle \psi _{c}}-approximable for all {\displaystyle c>1} (known as the set of Liouville numbers) has Hausdorff dimension zero.
Another important example is the function {\displaystyle \psi _{\varepsilon }(q)=\varepsilon q^{-1}}, where {\displaystyle \varepsilon >0} is a real number. For this function, the relevant series diverges and so Khinchin's theorem tells us that almost every number is {\displaystyle \psi _{\varepsilon }}-approximable. This is the same as saying that every such number is well approximable, where a number is called well approximable if it is not badly approximable. So an appropriate analogue of the Jarník–Besicovitch theorem should concern the Hausdorff dimension of the set of badly approximable numbers. And indeed, V. Jarník proved that the Hausdorff dimension of this set is equal to one. This result was improved by W. M. Schmidt, who showed that the set of badly approximable numbers is incompressible, meaning that if {\displaystyle f_{1},f_{2},\ldots } is a sequence of bi-Lipschitz maps, then the set of numbers x for which {\displaystyle f_{1}(x),f_{2}(x),\ldots } are all badly approximable has Hausdorff dimension one. Schmidt also generalized Jarník's theorem to higher dimensions, a significant achievement because Jarník's argument is essentially one-dimensional, depending on the apparatus of continued fractions.
== Uniform distribution ==
Another topic that has seen a thorough development is the theory of uniform distribution mod 1. Take a sequence a1, a2, ... of real numbers and consider their fractional parts. That is, more abstractly, look at the sequence in {\displaystyle \mathbb {R} /\mathbb {Z} }, which is a circle. For any interval I on the circle we look at the proportion of the sequence's elements that lie in it, up to some integer N, and compare it to the proportion of the circumference occupied by I. Uniform distribution means that in the limit, as N grows, the proportion of hits on the interval tends to the 'expected' value. Hermann Weyl proved a basic result showing that this was equivalent to bounds for exponential sums formed from the sequence. This showed that Diophantine approximation results were closely related to the general problem of cancellation in exponential sums, which occurs throughout analytic number theory in the bounding of error terms.
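Weyl's criterion is easy to watch in a small experiment: for α = √2, both the hit proportion of an interval and the averaged exponential sum behave as the theorem predicts. An illustrative Python sketch (the choices of α, interval and N are ours):

```python
import cmath
import math

alpha = math.sqrt(2)  # irrational, so n*alpha mod 1 equidistributes
N = 100_000

# proportion of n*alpha mod 1 falling in the interval [0, 0.3)
hits = sum(1 for n in range(1, N + 1) if (n * alpha) % 1.0 < 0.3)
prop = hits / N

# Weyl's criterion: the averaged exponential sum must tend to 0
S = sum(cmath.exp(2j * math.pi * n * alpha) for n in range(1, N + 1)) / N
```

Here `prop` comes out very close to 0.3, the length of the interval, while `|S|` is tiny.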
Related to uniform distribution is the topic of irregularities of distribution, which is of a combinatorial nature.
== Algorithms ==
Grötschel, Lovász and Schrijver describe algorithms for finding approximately-best Diophantine approximations, both for individual real numbers and for sets of real numbers. The latter problem is called simultaneous Diophantine approximation (Sec. 5.2).
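For a single real number, best approximations with bounded denominator can be found even by brute force; the point of the algorithms above is to do this efficiently, and to handle several numbers simultaneously via lattice basis reduction. A naive illustrative sketch in Python (our own toy routine, not their algorithm):

```python
from fractions import Fraction
import math

def best_approximation(x, Q):
    """Best rational approximation to x among denominators q <= Q,
    found by brute force (exponentially slower than the continued-
    fraction / lattice-reduction methods, but easy to read)."""
    best = Fraction(round(x), 1)
    for q in range(1, Q + 1):
        cand = Fraction(round(x * q), q)  # nearest p/q for this q
        if abs(x - cand) < abs(x - best):
            best = cand
    return best
```

For example, `best_approximation(math.pi, 120)` recovers the classical approximation 355/113.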
== Unsolved problems ==
There are still simply stated unsolved problems remaining in Diophantine approximation, for example the Littlewood conjecture and the lonely runner conjecture.
It is also unknown if there are algebraic numbers with unbounded coefficients in their continued fraction expansion.
== Recent developments ==
In his plenary address at the International Congress of Mathematicians in Kyoto (1990), Grigory Margulis outlined a broad program rooted in ergodic theory that allows one to prove number-theoretic results using the dynamical and ergodic properties of actions of subgroups of semisimple Lie groups. The work of D. Kleinbock, G. Margulis and their collaborators demonstrated the power of this novel approach to classical problems in Diophantine approximation. Among its notable successes are the proof of the decades-old Oppenheim conjecture by Margulis, with later extensions by Dani and Margulis and Eskin–Margulis–Mozes, and the proof of the Baker and Sprindzhuk conjectures in Diophantine approximation on manifolds by Kleinbock and Margulis. Various generalizations of the above results of Aleksandr Khinchin in metric Diophantine approximation have also been obtained within this framework.
== See also ==
Davenport–Schmidt theorem
Duffin–Schaeffer theorem
Heilbronn set
Low-discrepancy sequence
== Notes ==
== References ==
== External links ==
Diophantine Approximation: historical survey Archived 2012-02-14 at the Wayback Machine. From Introduction to Diophantine methods course by Michel Waldschmidt.
"Diophantine approximations", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Diophantine_approximation |
In number theory, Iwasawa theory is the study of objects of arithmetic interest over infinite towers of number fields. It began as a Galois module theory of ideal class groups, initiated by Kenkichi Iwasawa (1959) (岩澤 健吉), as part of the theory of cyclotomic fields. In the early 1970s, Barry Mazur considered generalizations of Iwasawa theory to abelian varieties. More recently (early 1990s), Ralph Greenberg has proposed an Iwasawa theory for motives.
== Formulation ==
Iwasawa worked with so-called {\displaystyle \mathbb {Z} _{p}}-extensions: infinite extensions of a number field {\displaystyle F} with Galois group {\displaystyle \Gamma } isomorphic to the additive group of p-adic integers for some prime p. (These were called {\displaystyle \Gamma }-extensions in early papers.) Every closed subgroup of {\displaystyle \Gamma } is of the form {\displaystyle \Gamma ^{p^{n}},} so by Galois theory, a {\displaystyle \mathbb {Z} _{p}}-extension {\displaystyle F_{\infty }/F} is the same thing as a tower of fields

{\displaystyle F=F_{0}\subset F_{1}\subset F_{2}\subset \cdots \subset F_{\infty }}

such that {\displaystyle \operatorname {Gal} (F_{n}/F)\cong \mathbb {Z} /p^{n}\mathbb {Z} .} Iwasawa studied classical Galois modules over {\displaystyle F_{n}} by asking questions about the structure of modules over {\displaystyle F_{\infty }.}
More generally, Iwasawa theory asks questions about the structure of Galois modules over extensions with Galois group a p-adic Lie group.
== Example ==
Let {\displaystyle p} be a prime number and let {\displaystyle K=\mathbb {Q} (\mu _{p})} be the field generated over {\displaystyle \mathbb {Q} } by the {\displaystyle p}th roots of unity. Iwasawa considered the following tower of number fields:

{\displaystyle K=K_{0}\subset K_{1}\subset \cdots \subset K_{\infty },}

where {\displaystyle K_{n}} is the field generated by adjoining to {\displaystyle K} the p^{n+1}-st roots of unity and {\displaystyle K_{\infty }=\bigcup K_{n}.}
The fact that {\displaystyle \operatorname {Gal} (K_{n}/K)\simeq \mathbb {Z} /p^{n}\mathbb {Z} } implies, by infinite Galois theory, that

{\displaystyle \operatorname {Gal} (K_{\infty }/K)\simeq \varprojlim _{n}\mathbb {Z} /p^{n}\mathbb {Z} =\mathbb {Z} _{p}.}
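The inverse limit appearing here can be made concrete: an element of {\displaystyle \mathbb {Z} _{p}} is a sequence of residues compatible under the reduction maps {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} \to \mathbb {Z} /p^{n-1}\mathbb {Z} }. A small Python illustration for the element −1 of Z_5 (our own toy example, not from the article):

```python
# A p-adic integer, concretely, is a compatible sequence of residues
# x_n in Z/p^n Z with x_n congruent to x_{n-1} mod p^{n-1}: an element
# of the inverse limit. Here: the element -1 of Z_5, whose residues are
# p^n - 1 (all base-5 digits equal to 4).
p = 5
seq = [(-1) % p**n for n in range(1, 8)]

# compatibility under the reduction maps
compatible = all(seq[n] % p**n == seq[n - 1] for n in range(1, len(seq)))
```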
In order to get an interesting Galois module, Iwasawa took the ideal class group of {\displaystyle K_{n}}, and let {\displaystyle I_{n}} be its p-torsion part. There are norm maps {\displaystyle I_{m}\to I_{n}} whenever {\displaystyle m>n}, and this gives us the data of an inverse system. If we set

{\displaystyle I=\varprojlim I_{n},}

then it is not hard to see from the inverse limit construction that {\displaystyle I} is a module over {\displaystyle \mathbb {Z} _{p}.}
In fact, {\displaystyle I} is a module over the Iwasawa algebra {\displaystyle \Lambda =\mathbb {Z} _{p}[[\Gamma ]]}. This is a 2-dimensional, regular local ring, and this makes it possible to describe modules over it. From this description it is possible to recover information about the p-part of the class group of {\displaystyle K.} The motivation here is that the p-torsion in the ideal class group of {\displaystyle K} had already been identified by Kummer as the main obstruction to the direct proof of Fermat's Last Theorem.
== Connections with p-adic analysis ==
From this beginning in the 1950s, a substantial theory has been built up. A fundamental connection was noticed between the module theory, and the p-adic L-functions that were defined in the 1960s by Kubota and Leopoldt. The latter begin from the Bernoulli numbers, and use interpolation to define p-adic analogues of the Dirichlet L-functions. It became clear that the theory had prospects of moving ahead finally from Kummer's century-old results on regular primes.
Iwasawa formulated the main conjecture of Iwasawa theory as an assertion that two methods of defining p-adic L-functions (by module theory, by interpolation) should coincide, as far as that was well-defined. This was proved by Mazur & Wiles (1984) for {\displaystyle \mathbb {Q} } and for all totally real number fields by Wiles (1990). These proofs were modeled upon Ken Ribet's proof of the converse to Herbrand's theorem (the so-called Herbrand–Ribet theorem).
Karl Rubin found a more elementary proof of the Mazur-Wiles theorem by using Kolyvagin's Euler systems, described in Lang (1990) and Washington (1997), and later proved other generalizations of the main conjecture for imaginary quadratic fields.
== Generalizations ==
The Galois group of the infinite tower, the starting field, and the sort of arithmetic module studied can all be varied. In each case, there is a main conjecture linking the tower to a p-adic L-function.
In 2002, Christopher Skinner and Eric Urban claimed a proof of a main conjecture for GL(2). In 2010, they posted a preprint (Skinner & Urban 2010).
== See also ==
Ferrero–Washington theorem
Tate module of a number field
== References ==
Sources
Coates, J.; Sujatha, R. (2006), Cyclotomic Fields and Zeta Values, Springer Monographs in Mathematics, Springer-Verlag, ISBN 978-3-540-33068-4, Zbl 1100.11002
Greenberg, Ralph (2001), "Iwasawa theory---past and present", in Miyake, Katsuya (ed.), Class field theory---its centenary and prospect (Tokyo, 1998), Adv. Stud. Pure Math., vol. 30, Tokyo: Math. Soc. Japan, pp. 335–385, ISBN 978-4-931469-11-2, MR 1846466, Zbl 0998.11054
Iwasawa, Kenkichi (1959), "On Γ-extensions of algebraic number fields", Bulletin of the American Mathematical Society, 65 (4): 183–226, doi:10.1090/S0002-9904-1959-10317-7, ISSN 0002-9904, MR 0124316, Zbl 0089.02402
Kato, Kazuya (2007), "Iwasawa theory and generalizations" (PDF), in Sanz-Solé, Marta; Soria, Javier; Varona, Juan Luis; et al. (eds.), International Congress of Mathematicians. Vol. I, vol. 1, Eur. Math. Soc., Zürich, pp. 335–357, doi:10.4171/022-1/14, ISBN 978-3-03719-022-7, MR 2334196, archived from the original (PDF) on 2017-09-22, retrieved 2011-05-08
Lang, Serge (1990), Cyclotomic fields I and II, Graduate Texts in Mathematics, vol. 121, With an appendix by Karl Rubin (Combined 2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-96671-7, Zbl 0704.11038
Mazur, Barry; Wiles, Andrew (1984), "Class fields of abelian extensions of Q", Inventiones Mathematicae, 76 (2): 179–330, Bibcode:1984InMat..76..179M, doi:10.1007/BF01388599, ISSN 0020-9910, MR 0742853, S2CID 122576427, Zbl 0545.12005
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2008), Cohomology of Number Fields, Grundlehren der Mathematischen Wissenschaften, vol. 323 (Second ed.), Berlin: Springer-Verlag, doi:10.1007/978-3-540-37889-1, ISBN 978-3-540-37888-4, MR 2392026, Zbl 1136.11001
Rubin, Karl (1991), "The 'main conjectures' of Iwasawa theory for imaginary quadratic fields", Inventiones Mathematicae, 103 (1): 25–68, Bibcode:1991InMat.103...25R, doi:10.1007/BF01239508, ISSN 0020-9910, S2CID 120179735, Zbl 0737.11030
Skinner, Chris; Urban, Éric (2010), The Iwasawa main conjectures for GL2 (PDF), p. 219
Washington, Lawrence C. (1997), Introduction to cyclotomic fields, Graduate Texts in Mathematics, vol. 83 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94762-4
Wiles, Andrew (1990), "The Iwasawa Conjecture for Totally Real Fields", Annals of Mathematics, 131 (3): 493–540, doi:10.2307/1971468, JSTOR 1971468, Zbl 0719.11071.
Citations
== Further reading ==
de Shalit, Ehud (1987), Iwasawa theory of elliptic curves with complex multiplication. p-adic L functions, Perspectives in Mathematics, vol. 3, Boston etc.: Academic Press, ISBN 978-0-12-210255-4, Zbl 0674.12004
Masato Kurihara, Kenichi Bannai, Tadashi Ochiai, Takeshi Tsuji (eds.): Development of Iwasawa Theory: The Centennial of K. Iwasawa's Birth, Mathematical Soc. of Japan (Advanced Studies in Pure Mathematics, Vol. 86), ISBN 978-4-86497092-1 (2020).
Tadashi Ochiai: Iwasawa Theory and Its Perspective, Vol.1, Amer. Math. Soc., (Mathematical Surveys and Monographs V.272), ISBN 978-1-4704-5672-6 (2023).
Tadashi Ochiai: Iwasawa Theory and Its Perspective, Vol.2, Amer. Math. Soc., (Mathematical Surveys and Monographs V.280), ISBN 978-1-4704-5673-3 (2024).
Tadashi Ochiai: Iwasawa Theory and Its Perspective, Vol.3, Amer. Math. Soc., (Mathematical Surveys and Monographs V.291), ISBN 978-1-4704-7732-5 (2025).
== External links ==
"Iwasawa theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Iwasawa_theory |
Arithmetic dynamics is a field that amalgamates two areas of mathematics, dynamical systems and number theory. Part of the inspiration comes from complex dynamics, the study of the iteration of self-maps of the complex plane or other complex algebraic varieties. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, p-adic, or algebraic points under repeated application of a polynomial or rational function. A fundamental goal is to describe arithmetic properties in terms of underlying geometric structures.
Global arithmetic dynamics is the study of analogues of classical diophantine geometry in the setting of discrete dynamical systems, while local arithmetic dynamics, also called p-adic or nonarchimedean dynamics, is an analogue of complex dynamics in which one replaces the complex numbers C by a p-adic field such as Qp or Cp and studies chaotic behavior and the Fatou and Julia sets.
The following table describes a rough correspondence between Diophantine equations, especially abelian varieties, and dynamical systems:
== Definitions and notation from discrete dynamics ==
Let S be a set and let F : S → S be a map from S to itself. The iterate of F with itself n times is denoted

{\displaystyle F^{(n)}=F\circ F\circ \cdots \circ F.}
A point P ∈ S is periodic if F(n)(P) = P for some n ≥ 1.
The point is preperiodic if F(k)(P) is periodic for some k ≥ 1.
The (forward) orbit of P is the set

{\displaystyle O_{F}(P)=\left\{P,F(P),F^{(2)}(P),F^{(3)}(P),\cdots \right\}.}
Thus P is preperiodic if and only if its orbit OF(P) is finite.
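These definitions translate directly into a cycle-detection routine. A Python sketch (illustrative; the function names and the sample map are ours) that computes an orbit and detects preperiodicity:

```python
def orbit(F, P, max_steps=50):
    """Forward orbit of P under F, stopping early if a value repeats,
    in which case P is preperiodic; returns (points, entry_index) where
    entry_index is the index at which the orbit re-enters itself,
    or None if no repetition was found within max_steps."""
    seen, pts = {}, []
    x = P
    for i in range(max_steps):
        if x in seen:
            return pts, seen[x]  # orbit is finite: P is preperiodic
        seen[x] = i
        pts.append(x)
        x = F(x)
    return pts, None

# x -> x^2 - 1 over Q: 0 -> -1 -> 0 -> ..., so 0 is periodic of period 2.
pts, entry = orbit(lambda x: x * x - 1, 0)
```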
== Number theoretic properties of preperiodic points ==
Let F(x) be a rational function of degree at least two with coefficients in Q. A theorem of Douglas Northcott says that F has only finitely many Q-rational preperiodic points, i.e., F has only finitely many preperiodic points in P1(Q). The uniform boundedness conjecture for preperiodic points of Patrick Morton and Joseph Silverman says that the number of preperiodic points of F in P1(Q) is bounded by a constant that depends only on the degree of F.
More generally, let F : PN → PN be a morphism of degree at least two defined over a number field K. Northcott's theorem says that F has only finitely many preperiodic points in
PN(K), and the general Uniform Boundedness Conjecture says that the number of preperiodic points in
PN(K) may be bounded solely in terms of N, the degree of F, and the degree of K over Q.
The Uniform Boundedness Conjecture is not known even for quadratic polynomials Fc(x) = x2 + c over the rational numbers Q. It is known in this case that Fc(x) cannot have periodic points of period four, five, or six, although the result for period six is contingent on the validity of the conjecture of Birch and Swinnerton-Dyer. Bjorn Poonen has conjectured that Fc(x) cannot have rational periodic points of any period strictly larger than three.
== Integer points in orbits ==
The orbit of a rational map may contain infinitely many integers. For example, if F(x) is a polynomial with integer coefficients and if a is an integer, then it is clear that the entire orbit OF(a) consists of integers. Similarly, if F(x) is a rational map and some iterate F(n)(x) is a polynomial with integer coefficients, then every n-th entry in the orbit is an integer. An example of this phenomenon is the map F(x) = x^{−d}, whose second iterate is the polynomial x^{d^2}. It turns out that this is the only way that an orbit can contain infinitely many integers.
Theorem. Let F(x) ∈ Q(x) be a rational function of degree at least two, and assume that no iterate of F is a polynomial. Let a ∈ Q. Then the orbit OF(a) contains only finitely many integers.
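The polynomial-iterate exception is easy to watch for d = 2: the second iterate of F(x) = x^{−2} is x^{4}, so an integer starting point yields an integer at every other step. An illustrative Python sketch:

```python
from fractions import Fraction

d = 2

def F(x):
    """F(x) = x^(-d); its second iterate F(F(x)) = x^(d^2) is a polynomial."""
    return 1 / Fraction(x) ** d

orbit = [Fraction(3)]
for _ in range(4):
    orbit.append(F(orbit[-1]))

# entries alternate between integers and non-integers:
# 3, 1/9, 81, 1/6561, 43046721, ...
integer_flags = [t.denominator == 1 for t in orbit]
```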
== Dynamically defined points lying on subvarieties ==
There are general conjectures due to Shouwu Zhang
and others concerning subvarieties that contain infinitely many periodic points or that intersect an orbit in infinitely many points. These are dynamical analogues of, respectively, the Manin–Mumford conjecture, proven by Michel Raynaud, and the Mordell–Lang conjecture, proven by Gerd Faltings. The following conjectures illustrate the general theory in the case that the subvariety is a curve.
Conjecture. Let F : PN → PN be a morphism and let C ⊂ PN be an irreducible algebraic curve. Suppose that there is a point P ∈ PN such that C contains infinitely many points in the orbit OF(P). Then C is periodic for F in the sense that there is some iterate F(k) of F that maps C to itself.
== p-adic dynamics ==
The field of p-adic (or nonarchimedean) dynamics is the study of classical dynamical questions over a field K that is complete with respect to a nonarchimedean absolute value. Examples of such fields are the field of p-adic rationals Qp and the completion of its algebraic closure Cp. The metric on K and the standard definition of equicontinuity leads to the usual definition of the Fatou and Julia sets of a rational map F(x) ∈ K(x). There are many similarities between the complex and the nonarchimedean theories, but also many differences. A striking difference is that in the nonarchimedean setting, the Fatou set is always nonempty, but the Julia set may be empty. This is the reverse of what is true over the complex numbers. Nonarchimedean dynamics has been extended to Berkovich space, which is a compact connected space that contains the totally disconnected non-locally compact field Cp.
== Generalizations ==
There are natural generalizations of arithmetic dynamics in which Q and Qp are replaced by number fields and their p-adic completions. Another natural generalization is to replace self-maps of P1 or PN with self-maps (morphisms) V → V of other affine or projective varieties.
== Other areas in which number theory and dynamics interact ==
There are many other problems of a number theoretic nature that appear in the setting of dynamical systems, including:
dynamics over finite fields.
dynamics over function fields such as C(x).
iteration of formal and p-adic power series.
dynamics on Lie groups.
arithmetic properties of dynamically defined moduli spaces.
equidistribution and invariant measures, especially on p-adic spaces.
dynamics on Drinfeld modules.
number-theoretic iteration problems that are not described by rational maps on varieties, for example, the Collatz problem.
symbolic codings of dynamical systems based on explicit arithmetic expansions of real numbers.
The Arithmetic Dynamics Reference List gives an extensive list of articles and books covering a wide range of arithmetical dynamical topics.
== See also ==
Arithmetic geometry
Arithmetic topology
Combinatorics and dynamical systems
Arboreal Galois representation
== Notes and references ==
== Further reading ==
Lecture Notes on Arithmetic Dynamics Arizona Winter School, March 13–17, 2010, Joseph H. Silverman
Chapter 15 of A first course in dynamics: with a panorama of recent developments, Boris Hasselblatt, A. B. Katok, Cambridge University Press, 2003, ISBN 978-0-521-58750-1
== External links ==
The Arithmetic of Dynamical Systems home page
Arithmetic dynamics bibliography
Analysis and dynamics on the Berkovich projective line
Book review of Joseph H. Silverman's "The Arithmetic of Dynamical Systems", reviewed by Robert L. Benedetto | Wikipedia/Arithmetic_dynamics |
In mathematics, local class field theory, introduced by Helmut Hasse, is the study of abelian extensions of local fields; here, "local field" means a field which is complete with respect to an absolute value or a discrete valuation with a finite residue field: hence every local field is isomorphic (as a topological field) to the real numbers R, the complex numbers C, a finite extension of the p-adic numbers Qp (where p is any prime number), or the field of formal Laurent series Fq((T)) over a finite field Fq.
== Approaches to local class field theory ==
Local class field theory gives a description of the Galois group G of the maximal abelian extension of a local field K via the reciprocity map which acts from the multiplicative group K×=K\{0}. For a finite abelian extension L of K the reciprocity map induces an isomorphism of the quotient group K×/N(L×) of K× by the norm group N(L×) of the extension L× to the Galois group Gal(L/K)
of the extension.
The existence theorem in local class field theory establishes a one-to-one correspondence between open subgroups of finite index in the multiplicative group K× and finite abelian extensions of the field K. For a finite abelian extension L of K the corresponding open subgroup of finite index is the norm group N(L×). The reciprocity map sends higher groups of units to higher ramification subgroups.
Using the local reciprocity map, one defines the Hilbert symbol and its generalizations. Finding explicit formulas for it is one of the subdirections of the theory of local fields; it has a long and rich history, see e.g. Sergei Vostokov's review.
There are cohomological approaches and non-cohomological approaches to local class field theory. Cohomological approaches tend to be non-explicit, since they use the cup product of the first Galois cohomology groups.
There are several approaches to local class field theory: the Hasse approach using the Brauer group, cohomological approaches, the explicit methods of Jürgen Neukirch and Michiel Hazewinkel, Lubin–Tate theory, and others.
== Generalizations of local class field theory ==
Generalizations of local class field theory to local fields with quasi-finite residue field were easy extensions of the theory, obtained by G. Whaples in the 1950s.
Explicit p-class field theory for local fields with perfect and imperfect residue fields which are not finite has to deal with the new issue of norm groups of infinite index. Appropriate theories were constructed by Ivan Fesenko.
Fesenko's noncommutative local class field theory for arithmetically profinite Galois extensions of local fields studies appropriate local reciprocity cocycle map and its properties. This arithmetic theory can be viewed as an alternative to the representation-theoretical local Langlands correspondence.
== Higher local class field theory ==
For a higher-dimensional local field {\displaystyle K} there is a higher local reciprocity map which describes abelian extensions of the field in terms of open subgroups of finite index in the Milnor K-group of the field. Namely, if {\displaystyle K} is an {\displaystyle n}-dimensional local field then one uses {\displaystyle \mathrm {K} _{n}^{\mathrm {M} }(K)} or its separated quotient endowed with a suitable topology. When {\displaystyle n=1} the theory becomes the usual local class field theory. Unlike the classical case, Milnor K-groups do not satisfy Galois module descent if {\displaystyle n>1}. General higher-dimensional local class field theory was developed by K. Kato and I. Fesenko.
Higher local class field theory is part of higher class field theory which studies abelian extensions (resp. abelian covers) of rational function fields of proper regular schemes flat over integers.
== References ==
== Further reading ==
Fesenko, Ivan; Vostokov, Sergey (2002), Local Fields and their Extensions (2nd ed.), American Mathematical Society, ISBN 978-0-19-504030-2
Fesenko, Ivan B.; Kurihara, Masato, eds. (2000), Invitation to Higher Local Fields, Geometry & Topology Monographs, vol. 3 (First ed.), University of Warwick: Mathematical Sciences Publishers, doi:10.2140/gtm.2000.3, ISSN 1464-8989, Zbl 0954.00026
Iwasawa, Kenkichi (1986), Local class field theory, Oxford Science Publications, The Clarendon Press Oxford University Press, ISBN 978-0-19-504030-2, MR 0863740
Neukirch, Jürgen (1986), Class field theory, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 280, Berlin, New York: Springer-Verlag, ISBN 978-3-540-15251-4, MR 0819231
Serre, Jean-Pierre (1967), "Local class field theory", in Cassels, John William Scott; Fröhlich, Albrecht (eds.), Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), Thompson, Washington, D.C., pp. 128–161, ISBN 978-0-9502734-2-6, MR 0220701
Serre, Jean-Pierre (1979) [1962], Corps Locaux (English translation: Local Fields), Graduate Texts in Mathematics, vol. 67, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90424-5, MR 0150130 | Wikipedia/Local_class_field_theory |
In mathematics, probabilistic number theory is a subfield of number theory that explicitly uses probability to answer questions about the integers and integer-valued functions. One basic idea underlying it is that different prime numbers are, in some serious sense, like independent random variables. This idea, however, does not have a unique useful formal expression.
The founders of the theory were Paul Erdős, Aurel Wintner and Mark Kac during the 1930s, one of the periods of investigation in analytic number theory. Foundational results include the Erdős–Wintner theorem, the Erdős–Kac theorem on additive functions and the DDT theorem.
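The flavor of these results is easy to sample numerically: the Erdős–Kac theorem says that the number of distinct prime factors ω(n) behaves like a normal random variable with mean and variance log log n. A small Python experiment (the range and helper names are ours):

```python
import math

def omega(n):
    """omega(n): the number of distinct prime factors of n."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

N = 10_000
vals = [omega(n) for n in range(2, N)]
mean = sum(vals) / len(vals)
# Hardy-Ramanujan / Erdos-Kac: omega(n) concentrates around log log n,
# with Gaussian fluctuations of size sqrt(log log n).
```

Even at this modest range the empirical mean sits near log log N ≈ 2.2 (it slightly exceeds it, reflecting the Mertens constant in the second-order term).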
== See also ==
Number theory
Analytic number theory
Areas of mathematics
List of number theory topics
List of probability topics
Probabilistic method
Probable prime
== References ==
Tenenbaum, Gérald (1995). Introduction to Analytic and Probabilistic Number Theory. Cambridge studies in advanced mathematics. Vol. 46. Cambridge University Press. ISBN 0-521-41261-7. Zbl 0831.11001.
== Further reading ==
Kubilius, J. (1964) [1962]. Probabilistic methods in the theory of numbers. Translations of mathematical monographs. Vol. 11. Providence, RI: American Mathematical Society. ISBN 0-8218-1561-X. Zbl 0133.30203. {{cite book}}: ISBN / Date incompatibility (help) | Wikipedia/Probabilistic_number_theory |
In algebra (in particular in algebraic geometry or algebraic number theory), a valuation is a function on a field that provides a measure of the size or multiplicity of elements of the field. It generalizes to commutative algebra the notion of size inherent in consideration of the degree of a pole or multiplicity of a zero in complex analysis, the degree of divisibility of a number by a prime number in number theory, and the geometrical concept of contact between two algebraic or analytic varieties in algebraic geometry. A field with a valuation on it is called a valued field.
== Definition ==
One starts with the following objects:
a field K and its multiplicative group K×,
an abelian totally ordered group (Γ, +, ≥).
The ordering and group law on Γ are extended to the set Γ ∪ {∞} by the rules
∞ ≥ α for all α ∈ Γ,
∞ + α = α + ∞ = ∞ + ∞ = ∞ for all α ∈ Γ.
Then a valuation of K is any map
v : K → Γ ∪ {∞}
that satisfies the following properties for all a, b in K:
v(a) = ∞ if and only if a = 0,
v(ab) = v(a) + v(b),
v(a + b) ≥ min(v(a), v(b)), with equality if v(a) ≠ v(b).
A valuation v is trivial if v(a) = 0 for all a in K×, otherwise it is non-trivial.
The second property asserts that any valuation is a group homomorphism on K×. The third property is a version of the triangle inequality on metric spaces adapted to an arbitrary Γ (see Multiplicative notation below). For valuations used in geometric applications, the first property implies that any non-empty germ of an analytic variety near a point contains that point.
The valuation can be interpreted as the order of the leading-order term. The third property then corresponds to the order of a sum being the order of the larger term, unless the two terms have the same order, in which case they may cancel and the sum may have larger order.
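The prototypical example is the p-adic valuation on the rationals, where v(x) is the exponent of p in the factorization of x; the three axioms can be checked directly. An illustrative Python sketch (our own helper, not from the article):

```python
from fractions import Fraction

def v_p(x, p):
    """The p-adic valuation on Q: v(0) = infinity, and
    v(p^k * a/b) = k for integers a, b coprime to p."""
    if x == 0:
        return float("inf")
    x = Fraction(x)
    n, d, k = x.numerator, x.denominator, 0
    while n % p == 0:
        n //= p
        k += 1
    while d % p == 0:
        d //= p
        k -= 1
    return k
```

For example, with p = 5 one has v(50) = 2 and v(1/5) = −1, and the product and ultrametric axioms hold (with equality in the third axiom since the two valuations differ).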
For many applications, Γ is an additive subgroup of the real numbers {\displaystyle \mathbb {R} }, in which case ∞ can be interpreted as +∞ in the extended real numbers; note that {\displaystyle \min(a,+\infty )=\min(+\infty ,a)=a} for any real number a, and thus +∞ is the unit under the binary operation of minimum. The real numbers (extended by +∞) with the operations of minimum and addition form a semiring, called the min tropical semiring, and a valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together.
=== Multiplicative notation and absolute values ===
The concept was developed by Emil Artin in his book Geometric Algebra, writing the group in multiplicative notation as (Γ, ·, ≥):
Instead of ∞, we adjoin a formal symbol O to Γ, with the ordering and group law extended by the rules
O ≤ α for all α ∈ Γ,
O · α = α · O = O for all α ∈ Γ.
Then a valuation of K is any map
| ⋅ |v : K → Γ ∪ {O}
satisfying the following properties for all a, b ∈ K:
|a|v = O if and only if a = 0,
|ab|v = |a|v · |b|v,
|a+b|v ≤ max(|a|v, |b|v), with equality if |a|v ≠ |b|v.
(Note that the directions of the inequalities are reversed from those in the additive notation.)
If Γ is a subgroup of the positive real numbers under multiplication, the last condition is the ultrametric inequality, a stronger form of the triangle inequality |a+b|v ≤ |a|v + |b|v, and | ⋅ |v is an absolute value. In this case, we may pass to the additive notation with value group
Γ+ ⊆ (ℝ, +) by taking v+(a) = −log |a|v.
Each valuation on K defines a corresponding linear preorder: a ≼ b ⇔ |a|v ≤ |b|v. Conversely, given a preorder "≼" satisfying the required properties, we can define a valuation by |a|v = {b : b ≼ a and a ≼ b} (the equivalence class of a), with the multiplication and ordering of values induced from K and ≼.
=== Terminology ===
In this article, we use the terms defined above, in the additive notation. However, some authors use alternative terms:
our "valuation" (satisfying the ultrametric inequality) is called an "exponential valuation" or "non-Archimedean absolute value" or "ultrametric absolute value";
our "absolute value" (satisfying the triangle inequality) is called a "valuation" or an "Archimedean absolute value".
=== Associated objects ===
There are several objects defined from a given valuation v : K → Γ ∪ {∞}:
the value group or valuation group Γv = v(K×), a subgroup of Γ (though v is usually surjective so that Γv = Γ);
the valuation ring Rv is the set of a ∈ K with v(a) ≥ 0,
the prime ideal mv is the set of a ∈ K with v(a) > 0 (it is in fact a maximal ideal of Rv),
the residue field kv = Rv/mv,
the place of K associated to v, the class of v under the equivalence defined below.
== Basic properties ==
=== Equivalence of valuations ===
Two valuations v1 and v2 of K with valuation groups Γ1 and Γ2, respectively, are said to be equivalent if there is an order-preserving group isomorphism φ : Γ1 → Γ2 such that v2(a) = φ(v1(a)) for all a in K×. This is an equivalence relation.
Two valuations of K are equivalent if and only if they have the same valuation ring.
An equivalence class of valuations of a field is called a place. Ostrowski's theorem gives a complete classification of places of the field of rational numbers ℚ: these are precisely the equivalence classes of valuations for the p-adic completions of ℚ.
=== Extension of valuations ===
Let v be a valuation of K and let L be a field extension of K. An extension of v (to L) is a valuation w of L such that the restriction of w to K is v. The set of all such extensions is studied in the ramification theory of valuations.
Let L/K be a finite extension and let w be an extension of v to L. The index of Γv in Γw, e(w/v) = [Γw : Γv], is called the reduced ramification index of w over v. It satisfies e(w/v) ≤ [L : K] (the degree of the extension L/K). The relative degree of w over v is defined to be f(w/v) = [Rw/mw : Rv/mv] (the degree of the extension of residue fields). It is also less than or equal to the degree of L/K. When L/K is separable, the ramification index of w over v is defined to be e(w/v)·p^i, where p^i is the inseparable degree of the extension Rw/mw over Rv/mv.
=== Complete valued fields ===
When the ordered abelian group Γ is the additive group of the integers, the associated valuation is equivalent to an absolute value, and hence induces a metric on the field K. If K is complete with respect to this metric, then it is called a complete valued field. If K is not complete, one can use the valuation to construct its completion, as in the examples below, and different valuations can define different completion fields.
In general, a valuation induces a uniform structure on K, and K is called a complete valued field if it is complete as a uniform space. There is a related property known as spherical completeness: it is equivalent to completeness if Γ = ℤ, but stronger in general.
== Examples ==
=== p-adic valuation ===
The most basic example is the p-adic valuation νp associated to a prime integer p, on the rational numbers K = ℚ, with valuation ring R = ℤ(p), where ℤ(p) is the localization of ℤ at the prime ideal (p). The valuation group is the additive integers Γ = ℤ. For an integer a ∈ R = ℤ, the valuation νp(a) measures the divisibility of a by powers of p:
νp(a) = max{e ∈ ℤ ∣ p^e divides a};
and for a fraction, νp(a/b) = νp(a) − νp(b).
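The definition above can be sketched directly in Python; `nu_p` is an illustrative helper (not part of any standard library) that computes νp on the rationals by counting powers of p in numerator and denominator.

```python
from fractions import Fraction
from math import inf

def nu_p(x, p):
    """p-adic valuation nu_p on the rationals (sketch of the definition above)."""
    x = Fraction(x)
    if x == 0:
        return inf                    # nu_p(0) = infinity by convention
    def v_int(n):                     # largest e with p**e dividing the integer n
        e = 0
        while n % p == 0:
            n //= p; e += 1
        return e
    # nu_p(a/b) = nu_p(a) - nu_p(b)
    return v_int(x.numerator) - v_int(x.denominator)

print(nu_p(63, 3))                    # 63 = 3**2 * 7       -> 2
print(nu_p(Fraction(5, 27), 3))       # 5 / 3**3            -> -3
print(nu_p(Fraction(18, 4), 2))       # reduces to 9/2      -> -1
```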
Writing this multiplicatively yields the p-adic absolute value, which conventionally has as base 1/p = p⁻¹, so |a|p := p^(−νp(a)).
The completion of ℚ with respect to νp is the field ℚp of p-adic numbers.
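To make the completion concrete: every rational in the valuation ring ℤ(p) has a base-p digit expansion that continues infinitely to the left, and ℚp contains all such digit strings. The helper below is an illustrative sketch (it assumes Python 3.8+ for the modular-inverse form of `pow`) that computes the first few p-adic digits of a rational with non-negative valuation.

```python
from fractions import Fraction

def padic_digits(x, p, n):
    """First n digits of the p-adic expansion of a rational x with non-negative
    valuation (illustrative sketch; requires p not to divide the denominator)."""
    x = Fraction(x)
    assert x.denominator % p != 0, "x must lie in the valuation ring Z_(p)"
    digits = []
    for _ in range(n):
        # the next digit is x mod p, via the inverse of the denominator mod p
        d = (x.numerator * pow(x.denominator, -1, p)) % p
        digits.append(d)
        x = (x - d) / p               # shift: strip the digit and divide by p
    return digits

print(padic_digits(-1, 5, 4))              # [4, 4, 4, 4]: -1 = 4 + 4*5 + 4*25 + ...
print(padic_digits(Fraction(1, 3), 5, 5))  # [2, 3, 1, 3, 1]
```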
=== Order of vanishing ===
Let K = F(x), the rational functions on the affine line X = F¹, and take a point a ∈ X. For a polynomial f(x) = a_k(x − a)^k + a_{k+1}(x − a)^{k+1} + ⋯ + a_n(x − a)^n with a_k ≠ 0, define va(f) = k, the order of vanishing at x = a; and va(f/g) = va(f) − va(g). Then the valuation ring R consists of rational functions with no pole at x = a, and the completion is the formal Laurent series ring F((x−a)). This can be generalized to the field of Puiseux series K{{t}} (fractional powers), the Levi-Civita field (its Cauchy completion), and the field of Hahn series, with valuation in all cases returning the smallest exponent of t appearing in the series.
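A minimal sketch of this valuation, assuming polynomials are given as coefficient lists with the highest-degree coefficient first; the multiplicity of the factor (x − a) is found by repeated synthetic division.

```python
from fractions import Fraction

def order_of_vanishing(coeffs, a):
    """v_a(f): the multiplicity k of the factor (x - a) in the polynomial
    f(x) = coeffs[0]*x**n + ... + coeffs[-1] (highest-degree coefficient first).
    Illustrative sketch of the valuation defined above."""
    cs = [Fraction(c) for c in coeffs]
    if all(c == 0 for c in cs):
        raise ValueError("v_a(0) = +infinity")
    a = Fraction(a)
    k = 0
    while True:
        # synthetic division by (x - a): the Horner accumulators form the
        # quotient, and the final accumulator is the remainder f(a)
        q, acc = [], Fraction(0)
        for c in cs:
            acc = acc * a + c
            q.append(acc)
        if q.pop() != 0:          # remainder f(a) non-zero: order found
            return k
        cs, k = q, k + 1

def v_a_rational(num, den, a):
    """v_a(f/g) = v_a(f) - v_a(g) for a rational function f/g."""
    return order_of_vanishing(num, a) - order_of_vanishing(den, a)

# f(x) = (x - 1)**2 * (x + 2) = x**3 - 3*x + 2 vanishes to order 2 at x = 1
print(order_of_vanishing([1, 0, -3, 2], 1))       # 2
print(v_a_rational([1, 0, -3, 2], [1, -1], 1))    # 2 - 1 = 1
```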
=== π-adic valuation ===
Generalizing the previous examples, let R be a principal ideal domain, K be its field of fractions, and π be an irreducible element of R. Since every principal ideal domain is a unique factorization domain, every non-zero element a of R can be written (essentially) uniquely as
a = π^(e_a) p_1^(e_1) p_2^(e_2) ⋯ p_n^(e_n)
where the e's are non-negative integers and the p_i are irreducible elements of R that are not associates of π. In particular, the integer e_a is uniquely determined by a.
The π-adic valuation of K is then given by
vπ(0) = ∞,
vπ(a/b) = e_a − e_b, for a, b ∈ R, a, b ≠ 0.
If π' is another irreducible element of R such that (π') = (π) (that is, they generate the same ideal in R), then the π-adic valuation and the π'-adic valuation are equal. Thus, the π-adic valuation can be called the P-adic valuation, where P = (π).
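As a concrete instance beyond ℤ, the Gaussian integers ℤ[i] form a principal ideal domain in which π = 1 + i is irreducible. The sketch below (illustrative; Gaussian integers are represented as integer pairs) computes the π-adic valuation by repeated exact division, using z/(1+i) = z(1−i)/2.

```python
def v_pi(z):
    """(1+i)-adic valuation on the Gaussian integers Z[i] (illustrative sketch):
    the number of times z = x + y*i is exactly divisible by pi = 1 + i."""
    x, y = z
    if (x, y) == (0, 0):
        raise ValueError("v(0) = +infinity")
    v = 0
    while True:
        # z / (1+i) = z*(1-i)/2 = ((x+y) + (y-x)i)/2, integral iff x+y is even
        if (x + y) % 2 != 0:
            return v
        x, y = (x + y) // 2, (y - x) // 2
        v += 1

print(v_pi((2, 0)))   # 2 = -i*(1+i)**2, so v = 2
print(v_pi((1, 1)))   # 1+i itself: v = 1
print(v_pi((3, 0)))   # 3 is coprime to 1+i: v = 0
```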
=== P-adic valuation on a Dedekind domain ===
The previous example can be generalized to Dedekind domains. Let R be a Dedekind domain, K its field of fractions, and let P be a non-zero prime ideal of R. Then, the localization of R at P, denoted RP, is a principal ideal domain whose field of fractions is K. The construction of the previous section applied to the prime ideal PRP of RP yields the P-adic valuation of K.
== Vector spaces over valuation fields ==
Suppose that Γ ∪ {0} is the set of non-negative real numbers under multiplication. Then we say that the valuation is non-discrete if its range (the valuation group) is infinite (and hence has an accumulation point at 0).
Suppose that X is a vector space over K and that A and B are subsets of X. Then we say that A absorbs B if there exists an α ∈ K such that λ ∈ K and |λ| ≥ |α| implies that B ⊆ λA. A is called radial or absorbing if A absorbs every finite subset of X. Radial subsets of X are invariant under finite intersection. Also, A is called circled if λ ∈ K and |λ| ≤ 1 implies λA ⊆ A. The set of circled subsets of X is invariant under arbitrary intersections. The circled hull of A is the intersection of all circled subsets of X containing A.
Suppose that X and Y are vector spaces over a non-discrete valuation field K, let A ⊆ X, B ⊆ Y, and let f : X → Y be a linear map. If B is circled or radial then so is f⁻¹(B). If A is circled then so is f(A), but if A is radial then f(A) will be radial under the additional condition that f is surjective.
== See also ==
Discrete valuation
Euclidean valuation
Field norm
Absolute value (algebra)
== Notes ==
== References ==
== External links ==
Danilov, V.I. (2001) [1994], "Valuation", Encyclopedia of Mathematics, EMS Press
Discrete valuation at PlanetMath.
Valuation at PlanetMath.
Weisstein, Eric W. "Valuation". MathWorld. | Wikipedia/Valuation_(algebra) |
Transcendental number theory is a branch of number theory that investigates transcendental numbers (numbers that are not solutions of any polynomial equation with rational coefficients), in both qualitative and quantitative ways.
== Transcendence ==
The fundamental theorem of algebra tells us that if we have a non-constant polynomial with rational coefficients (or equivalently, by clearing denominators, with integer coefficients) then that polynomial will have a root in the complex numbers. That is, for any non-constant polynomial P with rational coefficients there will be a complex number α such that P(α) = 0. Transcendence theory is concerned with the converse question: given a complex number α, is there a polynomial P with rational coefficients such that P(α) = 0? If no such polynomial exists then the number is called transcendental.
More generally the theory deals with algebraic independence of numbers. A set of numbers {α1, α2, …, αn} is called algebraically independent over a field K if there is no non-zero polynomial P in n variables with coefficients in K such that P(α1, α2, …, αn) = 0. So working out if a given number is transcendental is really a special case of algebraic independence where n = 1 and the field K is the field of rational numbers.
A related notion is whether there is a closed-form expression for a number, including exponentials and logarithms as well as algebraic operations. There are various definitions of "closed-form", and questions about closed-form can often be reduced to questions about transcendence.
== History ==
=== Approximation by rational numbers: Liouville to Roth ===
Use of the term transcendental to refer to an object that is not algebraic dates back to the seventeenth century, when Gottfried Leibniz proved that the sine function was not an algebraic function. The question of whether certain classes of numbers could be transcendental dates back to 1748 when Euler asserted that the number log_a b was not algebraic for rational numbers a and b provided b is not of the form b = a^c for some rational c.
Euler's assertion was not proved until the twentieth century, but almost a hundred years after his claim Joseph Liouville did manage to prove the existence of numbers that are not algebraic, something that until then had not been known for sure. His original papers on the matter in the 1840s sketched out arguments using simple continued fractions to construct transcendental numbers. Later, in the 1850s, he gave a necessary condition for a number to be algebraic, and thus a sufficient condition for a number to be transcendental. This transcendence criterion was not strong enough to be necessary too, and indeed it fails to detect that the number e is transcendental. But his work did provide a larger class of transcendental numbers, now known as Liouville numbers in his honour.
Liouville's criterion essentially said that algebraic numbers cannot be very well approximated by rational numbers. So if a number can be very well approximated by rational numbers then it must be transcendental. The exact meaning of "very well approximated" in Liouville's work relates to a certain exponent. He showed that if α is an algebraic number of degree d ≥ 2 and ε is any number greater than zero, then the expression
|α − p/q| < 1/q^(d+ε)
can be satisfied by only finitely many rational numbers p/q. Using this as a criterion for transcendence is not trivial, as one must check whether there are infinitely many solutions p/q for every d ≥ 2.
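A small numerical sketch of the criterion, using a deep truncation of Liouville's constant Σ 10^(−k!) as a stand-in for the real number (an assumption made here for illustration): its partial sums p/q, with q = 10^(n!), beat the bound 1/q^d for every exponent d as n grows, which is exactly the behaviour forbidden to algebraic numbers.

```python
from fractions import Fraction
from math import factorial

# Truncation of Liouville's constant sum_{k>=1} 10**(-k!), deep enough to
# behave like the true value for the small n tested below.
L = sum(Fraction(1, 10**factorial(k)) for k in range(1, 8))

for n in range(2, 6):
    q = 10**factorial(n)
    p = int(L * q)                   # numerator of the n-term partial sum
    err = abs(L - Fraction(p, q))
    # the approximation exponent grows without bound: |L - p/q| < 1/q**n,
    # so no fixed degree d can account for these approximations
    assert err < Fraction(1, q**n)
```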
In the twentieth century work by Axel Thue, Carl Siegel, and Klaus Roth reduced the exponent in Liouville's work from d + ε to d/2 + 1 + ε, and finally, in 1955, to 2 + ε. This result, known as the Thue–Siegel–Roth theorem, is ostensibly the best possible, since if the exponent 2 + ε is replaced by just 2 then the result is no longer true. However, Serge Lang conjectured an improvement of Roth's result; in particular he conjectured that q2+ε in the denominator of the right-hand side could be reduced to
q^2 (log q)^(1+ε).
Roth's work effectively ended the work started by Liouville, and his theorem allowed mathematicians to prove the transcendence of many more numbers, such as the Champernowne constant. The theorem is still not strong enough to detect all transcendental numbers, though, and many famous constants including e and π either are not or are not known to be very well approximable in the above sense.
=== Auxiliary functions: Hermite to Baker ===
Fortunately other methods were pioneered in the nineteenth century to deal with the algebraic properties of e, and consequently of π through Euler's identity. This work centred on use of the so-called auxiliary function. These are functions which typically have many zeros at the points under consideration. Here "many zeros" may mean many distinct zeros, or as few as one zero but with a high multiplicity, or even many zeros all with high multiplicity. Charles Hermite used auxiliary functions that approximated the functions e^(kx) for each natural number k in order to prove the transcendence of e in 1873. His work was built upon by Ferdinand von Lindemann in the 1880s in order to prove that e^α is transcendental for nonzero algebraic numbers α. In particular this proved that π is transcendental since e^(πi) is algebraic, and thus answered in the negative the problem of antiquity as to whether it was possible to square the circle. Karl Weierstrass developed their work yet further and eventually proved the Lindemann–Weierstrass theorem in 1885.
In 1900 David Hilbert posed his famous collection of problems. The seventh of these, and one of the hardest in Hilbert's estimation, asked about the transcendence of numbers of the form a^b where a and b are algebraic, a is not zero or one, and b is irrational. In the 1930s Alexander Gelfond and Theodor Schneider proved that all such numbers were indeed transcendental using a non-explicit auxiliary function whose existence was granted by Siegel's lemma. This result, the Gelfond–Schneider theorem, proved the transcendence of numbers such as e^π and the Gelfond–Schneider constant.
The next big result in this field occurred in the 1960s, when Alan Baker made progress on a problem posed by Gelfond on linear forms in logarithms. Gelfond himself had managed to find a non-trivial lower bound for the quantity
|β1 log α1 + β2 log α2|
where all four unknowns are algebraic, the αs being neither zero nor one and the βs being irrational. Finding similar lower bounds for the sum of three or more logarithms had eluded Gelfond, though. The proof of Baker's theorem contained such bounds, solving Gauss' class number problem for class number one in the process. This work won Baker the Fields medal for its uses in solving Diophantine equations. From a purely transcendental number theoretic viewpoint, Baker had proved that if α1, ..., αn are algebraic numbers, none of them zero or one, and β1, ..., βn are algebraic numbers such that 1, β1, ..., βn are linearly independent over the rational numbers, then the number
α1^β1 α2^β2 ⋯ αn^βn is transcendental.
=== Other techniques: Cantor and Zilber ===
In the 1870s, Georg Cantor started to develop set theory and, in 1874, published a paper proving that the algebraic numbers could be put in one-to-one correspondence with the set of natural numbers, and thus that the set of transcendental numbers must be uncountable. Later, in 1891, Cantor used his more familiar diagonal argument to prove the same result. While Cantor's result is often quoted as being purely existential and thus unusable for constructing a single transcendental number, the proofs in both the aforementioned papers give methods to construct transcendental numbers.
While Cantor used set theory to prove the plenitude of transcendental numbers, a recent development has been the use of model theory in attempts to prove an unsolved problem in transcendental number theory. The problem is to determine the transcendence degree of the field
K = ℚ(x1, …, xn, e^x1, …, e^xn)
for complex numbers x1, ..., xn that are linearly independent over the rational numbers. Stephen Schanuel conjectured that the answer is at least n, but no proof is known. In 2004, though, Boris Zilber published a paper that used model theoretic techniques to create a structure that behaves very much like the complex numbers equipped with the operations of addition, multiplication, and exponentiation. Moreover, in this abstract structure Schanuel's conjecture does indeed hold. Unfortunately it is not yet known that this structure is in fact the same as the complex numbers with the operations mentioned; there could exist some other abstract structure that behaves very similarly to the complex numbers but where Schanuel's conjecture doesn't hold. Zilber did provide several criteria that would prove the structure in question was C, but could not prove the so-called Strong Exponential Closure axiom. The simplest case of this axiom has since been proved, but a proof that it holds in full generality is required to complete the proof of the conjecture.
== Approaches ==
A typical problem in this area of mathematics is to work out whether a given number is transcendental. Cantor used a cardinality argument to show that there are only countably many algebraic numbers, and hence almost all numbers are transcendental. Transcendental numbers therefore represent the typical case; even so, it may be extremely difficult to prove that a given number is transcendental (or even simply irrational).
For this reason transcendence theory often works towards a more quantitative approach. So given a particular complex number α one can ask how close α is to being an algebraic number. For example, if one supposes that the number α is algebraic then can one show that it must have very high degree or a minimum polynomial with very large coefficients? Ultimately if it is possible to show that no finite degree or size of coefficient is sufficient then the number must be transcendental. Since a number α is transcendental if and only if P(α) ≠ 0 for every non-zero polynomial P with integer coefficients, this problem can be approached by trying to find lower bounds of the form
|P(α)| > F(A, d)
where the right hand side is some positive function depending on some measure A of the size of the coefficients of P, and its degree d, and such that these lower bounds apply to all P ≠ 0. Such a bound is called a transcendence measure.
The case of d = 1 is that of "classical" diophantine approximation, asking for lower bounds for |ax + b|.
The methods of transcendence theory and diophantine approximation have much in common: they both use the auxiliary function concept.
== Major results ==
The Gelfond–Schneider theorem was the major advance in transcendence theory in the period 1900–1950. In the 1960s the method of Alan Baker on linear forms in logarithms of algebraic numbers reanimated transcendence theory, with applications to numerous classical problems and diophantine equations.
== Mahler's classification ==
Kurt Mahler in 1932 partitioned the transcendental numbers into 3 classes, called S, T, and U. Definition of these classes draws on an extension of the idea of a Liouville number (cited above).
=== Measure of irrationality of a real number ===
One way to define a Liouville number is to consider how small a given real number x makes linear polynomials |qx − p| without making them exactly 0. Here p, q are integers with |p|, |q| bounded by a positive integer H.
Let m(x, 1, H) be the minimum non-zero absolute value these polynomials take, and define:
ω(x, 1, H) = −log m(x, 1, H) / log H,
ω(x, 1) = lim sup_{H→∞} ω(x, 1, H).
ω(x, 1) is often called the measure of irrationality of a real number x. For rational numbers x, ω(x, 1) = 0, and it is at least 1 for irrational real numbers. A Liouville number is defined to have infinite measure of irrationality. Roth's theorem says that irrational real algebraic numbers have measure of irrationality 1.
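These definitions can be explored numerically. The sketch below estimates ω(x, 1, H) by brute force, using exact rational arithmetic and, as an illustrative assumption, a Fibonacci-ratio stand-in for the golden ratio (which is badly approximable, so its measure should sit near Roth's bound of 1).

```python
from fractions import Fraction
from math import log

def omega1(x, H):
    """Estimate omega(x, 1, H) = -log m(x, 1, H) / log H, where m(x, 1, H) is
    the minimum non-zero |q*x - p| over integers |p|, |q| <= H. Illustrative
    sketch: only p near q*x are scanned, which suffices for the inputs below."""
    x = Fraction(x)
    best = None
    for q in range(H + 1):
        target = q * x
        for p in {int(target) - 1, int(target), int(target) + 1}:
            if abs(p) > H:
                continue
            val = abs(target - p)
            if val != 0 and (best is None or val < best):
                best = val
    return -log(best) / log(H)

# A rational number: omega(x, 1, H) tends to 0 as H grows.
print(round(omega1(Fraction(3, 7), 1000), 2))             # 0.28
# Fibonacci-ratio stand-in for the golden ratio: measure near 1.
print(round(omega1(Fraction(832040, 514229), 1000), 2))   # 1.04
```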
=== Measure of transcendence of a complex number ===
Next consider the values of polynomials at a complex number x, when these polynomials have integer coefficients, degree at most n, and height at most H, with n, H being positive integers.
Let m(x, n, H) be the minimum non-zero absolute value such polynomials take at x, and define:
ω(x, n, H) = −log m(x, n, H) / (n log H),
ω(x, n) = lim sup_{H→∞} ω(x, n, H).
Suppose this is infinite for some minimum positive integer n. A complex number x in this case is called a U number of degree n.
Now we can define
ω(x) = lim sup_{n→∞} ω(x, n).
ω(x) is often called the measure of transcendence of x. If the ω(x, n) are bounded, then ω(x) is finite, and x is called an S number. If the ω(x, n) are finite but unbounded, x is called a T number. x is algebraic if and only if ω(x) = 0.
Clearly the Liouville numbers are a subset of the U numbers. William LeVeque in 1953 constructed U numbers of any desired degree. The Liouville numbers and hence the U numbers are uncountable sets. They are sets of measure 0.
T numbers also comprise a set of measure 0. It took about 35 years to show their existence. Wolfgang M. Schmidt in 1968 showed that examples exist. However, almost all complex numbers are S numbers. Mahler proved that the exponential function sends all non-zero algebraic numbers to S numbers: this shows that e is an S number and gives a proof of the transcendence of π. This number π is known not to be a U number. Many other transcendental numbers remain unclassified.
Two numbers x, y are called algebraically dependent if there is a non-zero polynomial P in two indeterminates with integer coefficients such that P(x, y) = 0. There is a powerful theorem that two complex numbers that are algebraically dependent belong to the same Mahler class. This allows construction of new transcendental numbers, such as the sum of a Liouville number with e or π.
The symbol S probably stood for the name of Mahler's teacher Carl Ludwig Siegel, and T and U are just the next two letters.
=== Koksma's equivalent classification ===
Jurjen Koksma in 1939 proposed another classification based on approximation by algebraic numbers.
Consider the approximation of a complex number x by algebraic numbers of degree ≤ n and height ≤ H. Let α be an algebraic number of this finite set such that |x − α| has the minimum positive value. Define ω*(x, H, n) and ω*(x, n) by:
|x − α| = H^(−n·ω*(x, H, n) − 1),
ω*(x, n) = lim sup_{H→∞} ω*(x, H, n).
If for a smallest positive integer n, ω*(x, n) is infinite, x is called a U*-number of degree n.
If the ω*(x, n) are bounded and do not converge to 0, x is called an S*-number.
A number x is called an A*-number if the ω*(x, n) converge to 0.
If the ω*(x, n) are all finite but unbounded, x is called a T*-number.
Koksma's and Mahler's classifications are equivalent in that they divide the transcendental numbers into the same classes. The A*-numbers are the algebraic numbers.
=== LeVeque's construction ===
Let
λ = 1/3 + Σ_{k=1}^∞ 10^(−k!).
It can be shown that the nth root of λ (a Liouville number) is a U-number of degree n.
This construction can be improved to create an uncountable family of U-numbers of degree n. Let Z be the set consisting of every other power of 10 in the series above for λ. The set of all subsets of Z is uncountable. Deleting any of the subsets of Z from the series for λ creates uncountably many distinct Liouville numbers, whose nth roots are U-numbers of degree n.
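A sketch of the construction with exact rational arithmetic (the function name and the choice of deleted terms are illustrative): deleting different subsets of terms from the series yields distinct Liouville numbers, and hence the uncountable family described above.

```python
from fractions import Fraction
from math import factorial

def leveque_lambda(terms, deleted=frozenset()):
    """Partial sum of LeVeque's lambda = 1/3 + sum_k 10**(-k!), optionally
    deleting chosen terms to produce distinct Liouville numbers (sketch)."""
    s = Fraction(1, 3)
    for k in range(1, terms + 1):
        if k not in deleted:
            s += Fraction(1, 10**factorial(k))
    return s

lam  = leveque_lambda(5)
lam2 = leveque_lambda(5, deleted={2, 4})  # a different member of the family
print(lam != lam2)                        # True: distinct Liouville numbers
print(float(lam))                         # ≈ 0.4433343...
```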
=== Type ===
The supremum of the sequence {ω(x, n)} is called the type. Almost all real numbers are S numbers of type 1, which is minimal for real S numbers. Almost all complex numbers are S numbers of type 1/2, which is also minimal. The claims of almost all numbers were conjectured by Mahler and in 1965 proved by Vladimir Sprindzhuk.
== Open problems ==
While the Gelfond–Schneider theorem proved that a large class of numbers was transcendental, this class was still countable. Many well-known mathematical constants are still not known to be transcendental, and in some cases it is not even known whether they are rational or irrational.
A major problem in transcendence theory is showing that a particular set of numbers is algebraically independent rather than just showing that individual elements are transcendental. So while we know that e and π are transcendental that doesn't imply that e + π is transcendental, nor other combinations of the two (except eπ, Gelfond's constant, which is known to be transcendental). Another major problem is dealing with numbers that are not related to the exponential function. The main results in transcendence theory tend to revolve around e and the logarithm function, which means that wholly new methods tend to be required to deal with numbers that cannot be expressed in terms of these two objects in an elementary fashion.
Schanuel's conjecture would solve the first of these problems somewhat as it deals with algebraic independence and would indeed confirm that e + π is transcendental. It still revolves around the exponential function, however, and so would not necessarily deal with numbers such as Apéry's constant or the Euler–Mascheroni constant. Another extremely difficult unsolved problem is the so-called constant or identity problem.
== Notes ==
== References ==
Baker, Alan (1975). Transcendental Number Theory. paperback edition 1990. Cambridge University Press. ISBN 0-521-20461-5. Zbl 0297.10013.
Bugeaud, Yann (2012). Distribution modulo one and Diophantine approximation. Cambridge Tracts in Mathematics. Vol. 193. Cambridge University Press. ISBN 978-0-521-11169-0. Zbl 1260.11001.
Burger, Edward B.; Tubbs, Robert (2004). Making transcendence transparent. An intuitive approach to classical transcendental number theory. Springer. ISBN 978-0-387-21444-3. Zbl 1092.11031.
Gelfond, A. O. (1960). Transcendental and Algebraic Numbers. Dover. Zbl 0090.26103.
Lang, Serge (1966). Introduction to Transcendental Numbers. Addison–Wesley. Zbl 0144.04101.
LeVeque, William J. (2002) [1956]. Topics in Number Theory, Volumes I and II. Dover. ISBN 978-0-486-42539-9.
Natarajan, Saradha [in French]; Thangadurai, Ravindranathan (2020). Pillars of Transcendental Number Theory. Springer Verlag. ISBN 978-981-15-4154-4.
Sprindzhuk, Vladimir G. (1969). Mahler's Problem in Metric Number Theory (1967). AMS Translations of Mathematical Monographs. Translated from Russian by B. Volkmann. American Mathematical Society. ISBN 978-1-4704-4442-6.
Sprindzhuk, Vladimir G. (1979). Metric theory of Diophantine approximations. Scripta Series in Mathematics. Translated from Russian by Richard A. Silverman. Foreword by Donald J. Newman. Wiley. ISBN 0-470-26706-2. Zbl 0482.10047.
== Further reading ==
Alan Baker and Gisbert Wüstholz, Logarithmic Forms and Diophantine Geometry, New Mathematical Monographs 9, Cambridge University Press, 2007, ISBN 978-0-521-88268-2 | Wikipedia/Transcendental_number_theory |
Arithmetic topology is an area of mathematics that is a combination of algebraic number theory and topology. It establishes an analogy between number fields and closed, orientable 3-manifolds.
== Analogies ==
The following are some of the analogies used by mathematicians between number fields and 3-manifolds:
A number field corresponds to a closed, orientable 3-manifold
Ideals in the ring of integers correspond to links, and prime ideals correspond to knots.
The field Q of rational numbers corresponds to the 3-sphere.
Expanding on the last two examples, there is an analogy between knots and prime numbers in which one considers "links" between primes. The triple of primes (13, 61, 937) are "linked" modulo 2 (the Rédei symbol is −1) but are "pairwise unlinked" modulo 2 (the Legendre symbols are all 1). Therefore these primes have been called a "proper Borromean triple modulo 2" or "mod 2 Borromean primes".
== History ==
In the 1960s topological interpretations of class field theory were given by John Tate based on Galois cohomology, and also by Michael Artin and Jean-Louis Verdier based on Étale cohomology. Then David Mumford (and independently Yuri Manin) came up with an analogy between prime ideals and knots which was further explored by Barry Mazur. In the 1990s Reznikov and Kapranov began studying these analogies, coining the term arithmetic topology for this area of study.
== See also ==
Arithmetic geometry
Arithmetic dynamics
Topological quantum field theory
Langlands program
== Notes ==
== Further reading ==
Masanori Morishita (2011), Knots and Primes, Springer, ISBN 978-1-4471-2157-2
Masanori Morishita (2009), Analogies Between Knots And Primes, 3-Manifolds And Number Rings
Christopher Deninger (2002), A note on arithmetic topology and dynamical systems
Adam S. Sikora (2001), Analogies between group actions on 3-manifolds and number fields
Curtis T. McMullen (2003), From dynamics on surfaces to rational points on curves
Chao Li and Charmaine Sia (2012), Knots and Primes
== External links ==
Mazur’s knotty dictionary | Wikipedia/Arithmetic_topology |
In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.
== Resolution of conjectures ==
=== Proof ===
Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10^12 (1.2 trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.
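The kind of finite verification described above is easy to sketch. The helper below is illustrative (the step budget is an arbitrary safeguard, not part of the conjecture): it checks that the Collatz sequence from a given start reaches 1, and verifying every start below some bound supports, but cannot prove, the conjecture.

```python
def collatz_terminates(n, max_steps=10_000):
    """Check that the Collatz sequence starting at n reaches 1 (a sketch of
    finite verification; success for a range of n is evidence, not proof)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > max_steps:
            return False          # inconclusive within the step budget
    return True

# A brute-force search over a finite range: no counterexample found here.
assert all(collatz_terminates(n) for n in range(1, 100_000))
```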
Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.
A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.
One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. When the number of cases is very large, a brute-force proof may, as a practical matter, require a computer algorithm to check them all. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.
When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.
=== Disproof ===
Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it was subsequently found that the minimal counterexample is smaller.
=== Independent conjectures ===
Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).
In this case, if a proof uses this statement, researchers will often look for a new proof that does not require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.
== Conditional proofs ==
Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being.
These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.
== Important examples ==
=== Fermat's Last Theorem ===
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers
a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two.
This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems".
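A brute-force search over small values (illustrative only; no finite search could prove the theorem) shows the contrast between n = 2, where solutions abound, and n ≥ 3, where none exist:

```python
def fermat_counterexamples(n: int, bound: int):
    """Exhaustively search for a^n + b^n = c^n with 1 <= a <= b < c <= bound."""
    return [(a, b, c)
            for a in range(1, bound + 1)
            for b in range(a, bound + 1)
            for c in range(b + 1, bound + 1)
            if a**n + b**n == c**n]

assert fermat_counterexamples(2, 20) != []   # Pythagorean triples exist for n = 2
assert fermat_counterexamples(3, 50) == []   # none found for n = 3, consistent with the theorem
```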
=== Four color theorem ===
In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counterexample). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists, because any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain.
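The case-checking flavor of such computer-assisted arguments can be illustrated with a small backtracking search (an illustrative sketch, not Appel and Haken's program):

```python
def color_map(adjacency, colors=4):
    """Backtracking search for a proper coloring of a graph given as an
    adjacency dict {region: set of neighboring regions}."""
    regions = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True
        r = regions[i]
        for c in range(colors):
            # A color is usable only if no already-colored neighbor has it.
            if all(assignment.get(nb) != c for nb in adjacency[r]):
                assignment[r] = c
                if backtrack(i + 1):
                    return True
                del assignment[r]
        return False

    return dict(assignment) if backtrack(0) else None

# Four mutually adjacent regions (the complete graph K4) genuinely need
# four colors; the search finds a proper coloring using all of them.
k4 = {a: {b for b in "ABCD" if b != a} for a in "ABCD"}
coloring = color_map(k4)
assert coloring is not None
assert all(coloring[a] != coloring[b] for a in k4 for b in k4[a])
```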
=== Hauptvermutung ===
The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze.
This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion.
The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively.
=== Weil conjectures ===
In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields.
A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with q^k elements containing that field. The generating function has coefficients derived from the numbers N_k of points over the (essentially unique) field with q^k elements.
Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork (1960), the functional equation by Grothendieck (1965), and the analogue of the Riemann hypothesis was proved by Deligne (1974).
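As a numerical illustration (the example and variable names are ours, not Weil's): for the projective line over a field with q elements, N_k = q^k + 1, and the zeta function Z(t) = exp(∑ N_k t^k / k) is visibly the rational function 1/((1 − t)(1 − qt)). A short script checks the series identity with exact arithmetic:

```python
from fractions import Fraction

# Point counts of the projective line P^1 over F_{q^k}: N_k = q^k + 1.
q = 5
terms = 8
N = [q**k + 1 for k in range(1, terms + 1)]

# Series coefficients of Z(t) = exp(sum_{k>=1} N_k t^k / k), via the
# standard formal-exponential recursion n*z_n = sum_{m=1}^{n} N_m z_{n-m}.
z = [Fraction(1)]
for n in range(1, terms + 1):
    z.append(sum(N[m - 1] * z[n - m] for m in range(1, n + 1)) / Fraction(n))

# Rationality: 1/((1 - t)(1 - q t)) has n-th coefficient 1 + q + ... + q^n.
for n in range(terms + 1):
    assert z[n] == Fraction(q**(n + 1) - 1, q - 1)
```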
=== Poincaré conjecture ===
In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that: Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.
The Poincaré conjecture, before being proven, was one of the most important open questions in topology.
=== Riemann hypothesis ===
In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.
=== P versus NP problem ===
The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
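The asymmetry between solving and verifying can be made concrete with a small sketch (an illustrative example, not part of the formal problem statement): checking a proposed solution to an NP problem such as subset-sum takes only polynomial time, while no polynomial-time algorithm for finding one is known.

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verification of a claimed subset-sum solution:
    the certificate must be a sub-multiset of numbers summing to target."""
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

nums = [3, 34, 4, 12, 5, 2]
assert verify_subset_sum(nums, 9, [4, 5])        # a valid certificate, checked quickly
assert not verify_subset_sum(nums, 9, [3, 34])   # a wrong certificate is rejected
```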
=== Other conjectures ===
Goldbach's conjecture
The twin prime conjecture
The Collatz conjecture
The Manin conjecture
The Maldacena conjecture
The Euler conjecture, proposed by Euler in the 18th century but for which counterexamples for a number of exponents (starting with n=4) were found beginning in the mid 20th century
The Hardy–Littlewood conjectures are a pair of conjectures concerning the distribution of prime numbers, the first of which expands upon the aforementioned twin prime conjecture. Neither has been proven or disproven, but it has been proven that both cannot simultaneously be true (i.e., at least one must be false). It has not been proven which one is false, but it is widely believed that the first conjecture is true and the second one is false.
The Langlands program is a far-reaching web of these ideas of 'unifying conjectures' that link different subfields of mathematics (e.g. between number theory and representation theory of Lie groups). Some of these conjectures have since been proved.
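The Euler-conjecture counterexamples mentioned in the list above can be checked directly; the identity below is Frye's minimal fourth-power counterexample, verifiable with exact integer arithmetic:

```python
# Frye's counterexample for n = 4: three fourth powers summing to a
# fourth power, refuting Euler's sum of powers conjecture.
lhs = 95800**4 + 217519**4 + 414560**4
rhs = 422481**4
assert lhs == rhs
```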
== In other sciences ==
Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture.
== See also ==
Bold hypothesis
Futures studies
Hypotheticals
List of conjectures
Ramanujan machine
== References ==
=== Works cited ===
Deligne, Pierre (1974), "La conjecture de Weil. I", Publications Mathématiques de l'IHÉS, 43 (43): 273–307, doi:10.1007/BF02684373, ISSN 1618-1913, MR 0340258, S2CID 123139343
Dwork, Bernard (1960), "On the rationality of the zeta function of an algebraic variety", American Journal of Mathematics, 82 (3), American Journal of Mathematics, Vol. 82, No. 3: 631–648, doi:10.2307/2372974, ISSN 0002-9327, JSTOR 2372974, MR 0140494
Grothendieck, Alexander (1995) [1965], "Formule de Lefschetz et rationalité des fonctions L", Séminaire Bourbaki, vol. 9, Paris: Société Mathématique de France, pp. 41–55, MR 1608788
== External links ==
Media related to Conjectures at Wikimedia Commons
Open Problem Garden
Unsolved Problems web site | Wikipedia/Conjectured |
In algebraic geometry, divisors are a generalization of codimension-1 subvarieties of algebraic varieties. Two different generalizations are in common use, Cartier divisors and Weil divisors (named for Pierre Cartier and André Weil by David Mumford). Both are derived from the notion of divisibility in the integers and algebraic number fields.
Globally, every codimension-1 subvariety of projective space is defined by the vanishing of one homogeneous polynomial; by contrast, a codimension-r subvariety need not be definable by only r equations when r is greater than 1. (That is, not every subvariety of projective space is a complete intersection.) Locally, every codimension-1 subvariety of a smooth variety can be defined by one equation in a neighborhood of each point. Again, the analogous statement fails for higher-codimension subvarieties. As a result of this property, much of algebraic geometry studies an arbitrary variety by analysing its codimension-1 subvarieties and the corresponding line bundles.
On singular varieties, this property can also fail, and so one has to distinguish between codimension-1 subvarieties and varieties which can locally be defined by one equation. The former are Weil divisors while the latter are Cartier divisors.
Topologically, Weil divisors play the role of homology classes, while Cartier divisors represent cohomology classes. On a smooth variety (or more generally a regular scheme), a result analogous to Poincaré duality says that Weil and Cartier divisors are the same.
The name "divisor" goes back to the work of Dedekind and Weber, who showed the relevance of Dedekind domains to the study of algebraic curves. The group of divisors on a curve (the free abelian group generated by all divisors) is closely related to the group of fractional ideals for a Dedekind domain.
An algebraic cycle is a higher codimension generalization of a divisor; by definition, a Weil divisor is a cycle of codimension 1.
== Divisors on a Riemann surface ==
A Riemann surface is a 1-dimensional complex manifold, and so its codimension-1 submanifolds have dimension 0. The group of divisors on a compact Riemann surface X is the free abelian group on the points of X.
Equivalently, a divisor on a compact Riemann surface X is a finite linear combination of points of X with integer coefficients. The degree of a divisor on X is the sum of its coefficients.
For any nonzero meromorphic function f on X, one can define the order of vanishing of f at a point p in X, ordp(f). It is an integer, negative if f has a pole at p. The divisor of a nonzero meromorphic function f on the compact Riemann surface X is defined as
(f) := ∑p∈X ordp(f) p,
which is a finite sum. Divisors of the form (f) are also called principal divisors. Since (fg) = (f) + (g), the set of principal divisors is a subgroup of the group of divisors. Two divisors that differ by a principal divisor are called linearly equivalent.
On a compact Riemann surface, the degree of a principal divisor is zero; that is, the number of zeros of a meromorphic function is equal to the number of poles, counted with multiplicity. As a result, the degree is well-defined on linear equivalence classes of divisors.
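The balancing of zeros and poles can be sketched in Python (a toy illustration; the function and its factored form are made-up examples). On the Riemann sphere, the order at infinity of a rational function is determined by the degrees of numerator and denominator, which forces the total degree to vanish:

```python
# Divisor of a rational function f = num/den on the Riemann sphere,
# with num and den given in factored form {root: multiplicity}.
def divisor_of(num: dict, den: dict) -> dict:
    d = {p: num.get(p, 0) - den.get(p, 0) for p in set(num) | set(den)}
    # Near infinity, f(z) behaves like z**(deg num - deg den).
    d["inf"] = sum(den.values()) - sum(num.values())
    return d

# f(z) = z^2 (z - 1) / (z + 3)^4: zeros at 0 (double) and 1, a pole of
# order 4 at -3, and hence a simple zero at infinity.
d = divisor_of({0: 2, 1: 1}, {-3: 4})
assert d["inf"] == 1
assert sum(d.values()) == 0  # the degree of a principal divisor is zero
```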
Given a divisor D on a compact Riemann surface X, it is important to study the complex vector space of meromorphic functions on X with poles at most given by D, called H0(X, O(D)) or the space of sections of the line bundle associated to D. The degree of D says a lot about the dimension of this vector space. For example, if D has negative degree, then this vector space is zero (because a meromorphic function cannot have more zeros than poles). If D has positive degree, then the dimension of H0(X, O(mD)) grows linearly in m for m sufficiently large. The Riemann–Roch theorem is a more precise statement along these lines. On the other hand, the precise dimension of H0(X, O(D)) for divisors D of low degree is subtle, and not completely determined by the degree of D. The distinctive features of a compact Riemann surface are reflected in these dimensions.
One key divisor on a compact Riemann surface is the canonical divisor. To define it, one first defines the divisor of a nonzero meromorphic 1-form along the lines above. Since the space of meromorphic 1-forms is a 1-dimensional vector space over the field of meromorphic functions, any two nonzero meromorphic 1-forms yield linearly equivalent divisors. Any divisor in this linear equivalence class is called the canonical divisor of X, KX. The genus g of X can be read from the canonical divisor: namely, KX has degree 2g − 2. The key trichotomy among compact Riemann surfaces X is whether the canonical divisor has negative degree (so X has genus zero), zero degree (genus one), or positive degree (genus at least 2). For example, this determines whether X has a Kähler metric with positive curvature, zero curvature, or negative curvature. The canonical divisor has negative degree if and only if X is isomorphic to the Riemann sphere CP1.
== Weil divisors ==
Let X be an integral locally Noetherian scheme. A prime divisor or irreducible divisor on X is an integral closed subscheme Z of codimension 1 in X. A Weil divisor on X is a formal sum over the prime divisors Z of X,
∑Z nZ Z,
where the collection {Z : nZ ≠ 0} is locally finite. If X is quasi-compact (i.e., Noetherian), local finiteness is equivalent to {Z : nZ ≠ 0} being finite. The group of all Weil divisors is denoted Div(X). A Weil divisor D is effective if all the coefficients are non-negative. One writes D ≥ D′ if the difference D − D′ is effective.
For example, a divisor on an algebraic curve over a field is a formal sum of finitely many closed points. A divisor on Spec Z is a formal sum of prime numbers with integer coefficients and therefore corresponds to a non-zero fractional ideal in Q. A similar characterization is true for divisors on
Spec OK, where K is a number field.
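The Spec Z case admits a concrete sketch (illustrative code with a hypothetical helper name): the order of vanishing of a rational number along the prime divisor (p) is its p-adic valuation, which is additive under multiplication.

```python
def ord_p(p: int, num: int, den: int = 1) -> int:
    """Order of vanishing of the rational number num/den along the prime
    divisor (p) of Spec Z, i.e. its p-adic valuation."""
    def val(n: int) -> int:
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v
    return val(num) - val(den)

# div(12/5) on Spec Z is the formal sum 2*(2) + 1*(3) - 1*(5).
assert ord_p(2, 12, 5) == 2 and ord_p(3, 12, 5) == 1 and ord_p(5, 12, 5) == -1
# Additivity, matching ordZ(fg) = ordZ(f) + ordZ(g).
assert ord_p(2, 12 * 8) == ord_p(2, 12) + ord_p(2, 8)
```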
If Z ⊂ X is a prime divisor, then the local ring OX,Z has Krull dimension one. If f ∈ OX,Z is non-zero, then the order of vanishing of f along Z, written ordZ(f), is the length of OX,Z/(f). This length is finite, and it is additive with respect to multiplication, that is, ordZ(fg) = ordZ(f) + ordZ(g). If k(X) is the field of rational functions on X, then any non-zero f ∈ k(X) may be written as a quotient g / h, where g and h are in OX,Z, and the order of vanishing of f is defined to be ordZ(g) − ordZ(h). With this definition, the order of vanishing is a function ordZ : k(X)× → Z. If X is normal, then the local ring OX,Z is a discrete valuation ring, and the function ordZ is the corresponding valuation. For a non-zero rational function f on X, the principal Weil divisor associated to f is defined to be the Weil divisor
div f = ∑Z ordZ(f) Z.
It can be shown that this sum is locally finite and hence that it indeed defines a Weil divisor. The principal Weil divisor associated to f is also denoted (f). If f is a regular function, then its principal Weil divisor is effective, but in general this is not true. The additivity of the order of vanishing function implies that
div fg = div f + div g.
Consequently div is a homomorphism, and in particular its image is a subgroup of the group of all Weil divisors.
Let X be a normal integral Noetherian scheme. Every Weil divisor D determines a coherent sheaf
OX(D) on X. Concretely, it may be defined as a subsheaf of the sheaf of rational functions
Γ(U, OX(D)) = {f ∈ k(X) : f = 0 or div(f) + D ≥ 0 on U}.
That is, a nonzero rational function f is a section of
OX(D) over U if and only if for any prime divisor Z intersecting U, ordZ(f) ≥ −nZ,
where nZ is the coefficient of Z in D. If D is principal, so D is the divisor of a rational function g, then there is an isomorphism
O(D) → OX, f ↦ fg,
since
div(fg) is an effective divisor and so fg
is regular thanks to the normality of X. Conversely, if
O(D) is isomorphic to OX as an OX-module, then D is principal. It follows that D is locally principal if and only if O(D)
is invertible; that is, a line bundle.
If D is an effective divisor that corresponds to a subscheme of X (for example D can be a reduced divisor or a prime divisor), then the ideal sheaf of the subscheme D is equal to
O(−D).
This leads to an often used short exact sequence,
0 → OX(−D) → OX → OD → 0.
The sheaf cohomology of this sequence shows that
H1(X, OX(−D))
contains information on whether regular functions on D are the restrictions of regular functions on X.
There is also an inclusion of sheaves
0 → OX → OX(D).
This furnishes a canonical element of
Γ(X, OX(D)),
namely, the image of the global section 1. This is called the canonical section and may be denoted sD. While the canonical section is the image of a nowhere vanishing rational function, its image in
O(D) vanishes along D because the transition functions vanish along D. When D is a smooth Cartier divisor, the cokernel of the above inclusion may be identified; see the section on Cartier divisors below.
Assume that X is a normal integral separated scheme of finite type over a field. Let D be a Weil divisor. Then
O(D) is a rank one reflexive sheaf, and since O(D) is defined as a subsheaf of MX,
it is a fractional ideal sheaf (see below). Conversely, every rank one reflexive sheaf corresponds to a Weil divisor: The sheaf can be restricted to the regular locus, where it becomes free and so corresponds to a Cartier divisor (again, see below), and because the singular locus has codimension at least two, the closure of the Cartier divisor is a Weil divisor.
== Divisor class group ==
The Weil divisor class group Cl(X) is the quotient of Div(X) by the subgroup of all principal Weil divisors. Two divisors are said to be linearly equivalent if their difference is principal, so the divisor class group is the group of divisors modulo linear equivalence. For a variety X of dimension n over a field, the divisor class group is a Chow group; namely, Cl(X) is the Chow group CHn−1(X) of (n−1)-dimensional cycles.
Let Z be a closed subset of X. If Z is irreducible of codimension one, then Cl(X − Z) is isomorphic to the quotient group of Cl(X) by the class of Z. If Z has codimension at least 2 in X, then the restriction Cl(X) → Cl(X − Z) is an isomorphism. (These facts are special cases of the localization sequence for Chow groups.)
On a normal integral Noetherian scheme X, two Weil divisors D, E are linearly equivalent if and only if
O(D) and O(E) are isomorphic as OX-modules. Isomorphism classes of reflexive sheaves on X form a monoid with product given as the reflexive hull of a tensor product. Then D ↦ OX(D)
defines a monoid isomorphism from the Weil divisor class group of X to the monoid of isomorphism classes of rank-one reflexive sheaves on X.
=== Examples ===
Let k be a field, and let n be a positive integer. Since the polynomial ring k[x1, ..., xn] is a unique factorization domain, the divisor class group of affine space An over k is equal to zero. Since projective space Pn over k minus a hyperplane H is isomorphic to An, it follows that the divisor class group of Pn is generated by the class of H. From there, it is straightforward to check that Cl(Pn) is in fact isomorphic to the integers Z, generated by H. Concretely, this means that every codimension-1 subvariety of Pn is defined by the vanishing of a single homogeneous polynomial.
Let X be an algebraic curve over a field k. Every closed point p in X has the form Spec E for some finite extension field E of k, and the degree of p is defined to be the degree of E over k. Extending this by linearity gives the notion of degree for a divisor on X. If X is a projective curve over k, then the divisor of a nonzero rational function f on X has degree zero. As a result, for a projective curve X, the degree gives a homomorphism deg: Cl(X) → Z.
For the projective line P1 over a field k, the degree gives an isomorphism Cl(P1) ≅ Z. For any smooth projective curve X with a k-rational point, the degree homomorphism is surjective, and the kernel is isomorphic to the group of k-points on the Jacobian variety of X, which is an abelian variety of dimension equal to the genus of X. It follows, for example, that the divisor class group of a complex elliptic curve is an uncountable abelian group.
Generalizing the previous example: for any smooth projective variety X over a field k such that X has a k-rational point, the divisor class group Cl(X) is an extension of a finitely generated abelian group, the Néron–Severi group, by the group of k-points of a connected group scheme
Pic^0_{X/k}. For k of characteristic zero, Pic^0_{X/k}
is an abelian variety, the Picard variety of X.
For R the ring of integers of a number field, the divisor class group Cl(R) := Cl(Spec R) is also called the ideal class group of R. It is a finite abelian group. Understanding ideal class groups is a central goal of algebraic number theory.
Let X be the quadric cone of dimension 2, defined by the equation xy = z2 in affine 3-space over a field. Then the line D in X defined by x = z = 0 is not principal on X near the origin. Note that D can be defined as a set by one equation on X, namely x = 0; but the function x on X vanishes to order 2 along D, and so we only find that 2D is Cartier (as defined below) on X. In fact, the divisor class group Cl(X) is isomorphic to the cyclic group Z/2, generated by the class of D.
Let X be the quadric cone of dimension 3, defined by the equation xy = zw in affine 4-space over a field. Then the plane D in X defined by x = z = 0 cannot be defined in X by one equation near the origin, even as a set. It follows that D is not Q-Cartier on X; that is, no positive multiple of D is Cartier. In fact, the divisor class group Cl(X) is isomorphic to the integers Z, generated by the class of D.
=== The canonical divisor ===
Let X be a normal variety over a perfect field. The smooth locus U of X is an open subset whose complement has codimension at least 2. Let j: U → X be the inclusion map, then the restriction homomorphism:
j*: Cl(X) → Cl(U) = Pic(U)
is an isomorphism, since X − U has codimension at least 2 in X. For example, one can use this isomorphism to define the canonical divisor KX of X: it is the Weil divisor (up to linear equivalence) corresponding to the line bundle of differential forms of top degree on U. Equivalently, the sheaf
O(KX) on X is the direct image sheaf j_*Ω^n_U,
where n is the dimension of X.
Example: Let X = Pn be the projective n-space with the homogeneous coordinates x0, ..., xn. Let U = {x0 ≠ 0}. Then U is isomorphic to the affine n-space with the coordinates yi = xi/x0. Let
ω = (dy1/y1) ∧ ⋯ ∧ (dyn/yn).
Then ω is a rational differential form on U; thus, it is a rational section of
Ω^n_{P^n}
which has simple poles along Zi = {xi = 0}, i = 1, ..., n. Switching to a different affine chart changes only the sign of ω and so we see ω has a simple pole along Z0 as well. Thus, the divisor of ω is
div(ω) = −Z0 − ⋯ − Zn
and its divisor class is
KPn = [div(ω)] = −(n + 1)[H]
where [H] = [Zi], i = 0, ..., n. (See also the Euler sequence.)
== Cartier divisors ==
Let X be an integral Noetherian scheme. Then X has a sheaf of rational functions
MX.
All regular functions are rational functions, which leads to a short exact sequence
0 → OX× → MX× → MX×/OX× → 0.
A Cartier divisor on X is a global section of
$\mathcal{M}_X^{\times}/\mathcal{O}_X^{\times}$.
An equivalent description is that a Cartier divisor is a collection $\{(U_i, f_i)\}$, where $\{U_i\}$ is an open cover of $X$, $f_i$ is a section of $\mathcal{M}_X^{\times}$ on $U_i$, and $f_i = f_j$ on $U_i \cap U_j$ up to multiplication by a section of $\mathcal{O}_X^{\times}$.
Cartier divisors also have a sheaf-theoretic description. A fractional ideal sheaf is a sub-$\mathcal{O}_X$-module of $\mathcal{M}_X$.
A fractional ideal sheaf J is invertible if, for each x in X, there exists an open neighborhood U of x on which the restriction of J to U is equal to $\mathcal{O}_U \cdot f$, where $f \in \mathcal{M}_X^{\times}(U)$ and the product is taken in $\mathcal{M}_X$.
Each Cartier divisor defines an invertible fractional ideal sheaf using the description of the Cartier divisor as a collection $\{(U_i, f_i)\}$, and conversely, invertible fractional ideal sheaves define Cartier divisors. If the Cartier divisor is denoted D, then the corresponding fractional ideal sheaf is denoted $\mathcal{O}(D)$ or L(D).
By the exact sequence above, there is an exact sequence of sheaf cohomology groups:
$$H^0(X, \mathcal{M}_X^{\times}) \to H^0(X, \mathcal{M}_X^{\times}/\mathcal{O}_X^{\times}) \to H^1(X, \mathcal{O}_X^{\times}) = \operatorname{Pic}(X).$$
A Cartier divisor is said to be principal if it is in the image of the homomorphism
$$H^0(X, \mathcal{M}_X^{\times}) \to H^0(X, \mathcal{M}_X^{\times}/\mathcal{O}_X^{\times}),$$
that is, if it is the divisor of a rational function on X. Two Cartier divisors are linearly equivalent if their difference is principal. Every line bundle L on an integral Noetherian scheme X is the class of some Cartier divisor. As a result, the exact sequence above identifies the Picard group of line bundles on an integral Noetherian scheme X with the group of Cartier divisors modulo linear equivalence. This holds more generally for reduced Noetherian schemes, or for quasi-projective schemes over a Noetherian ring, but it can fail in general (even for proper schemes over C), which lessens the interest of Cartier divisors in full generality.
Assume D is an effective Cartier divisor. Then there is a short exact sequence
$$0 \to \mathcal{O}_X \to \mathcal{O}_X(D) \to \mathcal{O}_D(D) \to 0.$$
This sequence is derived from the short exact sequence relating the structure sheaves of X and D and the ideal sheaf of D. Because D is a Cartier divisor, $\mathcal{O}(D)$ is locally free, and hence tensoring that sequence by $\mathcal{O}(D)$ yields another short exact sequence, the one above. When D is smooth, $\mathcal{O}_D(D)$ is the normal bundle of D in X.
=== Comparison of Weil divisors and Cartier divisors ===
A Weil divisor D is said to be Cartier if and only if the sheaf $\mathcal{O}(D)$ is invertible. When this happens, $\mathcal{O}(D)$ (with its embedding in $\mathcal{M}_X$) is the line bundle associated to a Cartier divisor. More precisely, if $\mathcal{O}(D)$ is invertible, then there exists an open cover $\{U_i\}$ such that $\mathcal{O}(D)$ restricts to a trivial bundle on each open set. For each $U_i$, choose an isomorphism $\mathcal{O}_{U_i} \to \mathcal{O}(D)|_{U_i}$. The image of $1 \in \Gamma(U_i, \mathcal{O}_{U_i}) = \Gamma(U_i, \mathcal{O}_X)$ under this map is a section of $\mathcal{O}(D)$ on $U_i$. Because $\mathcal{O}(D)$ is defined to be a subsheaf of the sheaf of rational functions, the image of 1 may be identified with some rational function $f_i$. The collection $\{(U_i, f_i)\}$ is then a Cartier divisor. This is well-defined because the only choices involved were of the covering and of the isomorphism, neither of which change the Cartier divisor. This Cartier divisor may be used to produce a sheaf, which for distinction we will notate L(D). There is an isomorphism of $\mathcal{O}(D)$ with L(D) defined by working on the open cover $\{U_i\}$. The key fact to check here is that the transition functions of $\mathcal{O}(D)$ and L(D) are compatible, and this amounts to the fact that these functions all have the form $f_i/f_j$.
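The local-data description can be made concrete in a small numerical sketch. This is a hypothetical toy example, not from the text: the divisor of the point [1:0] on the projective line, with local equations on two affine charts, where the transition function of the associated line bundle is the ratio of the local equations.

```python
# Hypothetical toy example: the Cartier divisor of the point [1:0] on P^1.
# On U0 (coordinate t = x1/x0) the point is t = 0, with local equation f0 = t;
# the point lies outside U1, so f1 = 1 is a unit there.
def f0(t):
    return t        # vanishes to order 1 at the point t = 0

def f1(t):
    return 1.0      # a unit on U1

def g01(t):
    return f0(t) / f1(t)   # transition function of O(D) on the overlap (t != 0)

def g10(t):
    return f1(t) / f0(t)

# The cocycle condition g01 * g10 = 1 holds wherever both charts are defined:
samples = [0.5, -2.0, 3.7]
print(all(abs(g01(t) * g10(t) - 1.0) < 1e-12 for t in samples))  # True
```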
In the opposite direction, a Cartier divisor $\{(U_i, f_i)\}$ on an integral Noetherian scheme X determines a Weil divisor on X in a natural way, by applying $\operatorname{div}$ to the functions $f_i$ on the open sets $U_i$.
If X is normal, a Cartier divisor is determined by the associated Weil divisor, and a Weil divisor is Cartier if and only if it is locally principal.
A Noetherian scheme X is called factorial if all local rings of X are unique factorization domains. (Some authors say "locally factorial".) In particular, every regular scheme is factorial. On a factorial scheme X, every Weil divisor D is locally principal, and so
$\mathcal{O}(D)$
is always a line bundle. In general, however, a Weil divisor on a normal scheme need not be locally principal; see the examples of quadric cones above.
=== Effective Cartier divisors ===
Effective Cartier divisors are those which correspond to ideal sheaves. In fact, the theory of effective Cartier divisors can be developed without any reference to sheaves of rational functions or fractional ideal sheaves.
Let X be a scheme. An effective Cartier divisor on X is an ideal sheaf I which is invertible and such that for every point x in X, the stalk Ix is principal. It is equivalent to require that around each x, there exists an open affine subset U = Spec A such that U ∩ D = Spec A / (f), where f is a non-zero divisor in A. The sum of two effective Cartier divisors corresponds to multiplication of ideal sheaves.
There is a good theory of families of effective Cartier divisors. Let φ : X → S be a morphism. A relative effective Cartier divisor for X over S is an effective Cartier divisor D on X which is flat over S. Because of the flatness assumption, for every
$S' \to S$, there is a pullback of D to $X \times_S S'$,
and this pullback is an effective Cartier divisor. In particular, this is true for the fibers of φ.
== Kodaira's lemma ==
As a basic result about big Cartier divisors, there is a result called Kodaira's lemma:
Let X be an irreducible projective variety, let D be a big Cartier divisor on X, and let H be an arbitrary effective Cartier divisor on X. Then
$$H^0(X, \mathcal{O}_X(mD - H)) \neq 0$$
for all sufficiently large $m \in N(X, D)$.
Kodaira's lemma gives some basic results about big divisors.
== Functoriality ==
Let φ : X → Y be a morphism of integral locally Noetherian schemes. It is often—but not always—possible to use φ to transfer a divisor D from one scheme to the other. Whether this is possible depends on whether the divisor is a Weil or Cartier divisor, whether the divisor is to be moved from X to Y or vice versa, and what additional properties φ might have.
If Z is a prime Weil divisor on X, then
$\overline{\varphi(Z)}$
is a closed irreducible subscheme of Y. Depending on φ, it may or may not be a prime Weil divisor. For example, if φ is the blow up of a point in the plane and Z is the exceptional divisor, then its image is not a Weil divisor. Therefore, φ*Z is defined to be
$\overline{\varphi(Z)}$
if that subscheme is a prime divisor and is defined to be the zero divisor otherwise. Extending this by linearity will, assuming X is quasi-compact, define a homomorphism Div(X) → Div(Y) called the pushforward. (If X is not quasi-compact, then the pushforward may fail to be a locally finite sum.) This is a special case of the pushforward on Chow groups.
If Z is a Cartier divisor, then under mild hypotheses on φ, there is a pullback $\varphi^* Z$. Sheaf-theoretically, when there is a pullback map $\varphi^{-1}\mathcal{M}_Y \to \mathcal{M}_X$, then this pullback can be used to define pullback of Cartier divisors. In terms of local sections, the pullback of $\{(U_i, f_i)\}$ is defined to be $\{(\varphi^{-1}(U_i), f_i \circ \varphi)\}$. Pullback is always defined if φ is dominant, but it cannot be defined in general. For example, if X = Z and φ is the inclusion of Z into Y, then φ*Z is undefined because the corresponding local sections would be everywhere zero. (The pullback of the corresponding line bundle, however, is defined.)
If φ is flat, then pullback of Weil divisors is defined. In this case, the pullback of Z is φ*Z = φ−1(Z). The flatness of φ ensures that the inverse image of Z continues to have codimension one. This can fail for morphisms which are not flat, for example, for a small contraction.
== The first Chern class ==
For an integral Noetherian scheme X, the natural homomorphism from the group of Cartier divisors to that of Weil divisors gives a homomorphism
$$c_1 : \operatorname{Pic}(X) \to \operatorname{Cl}(X),$$
known as the first Chern class. The first Chern class is injective if X is normal, and it is an isomorphism if X is factorial (as defined above). In particular, Cartier divisors can be identified with Weil divisors on any regular scheme, and so the first Chern class is an isomorphism for X regular.
Explicitly, the first Chern class can be defined as follows. For a line bundle L on an integral Noetherian scheme X, let s be a nonzero rational section of L (that is, a section on some nonempty open subset of X), which exists by local triviality of L. Define the Weil divisor (s) on X by analogy with the divisor of a rational function. Then the first Chern class of L can be defined to be the divisor (s). Changing the rational section s changes this divisor by linear equivalence, since (fs) = (f) + (s) for a nonzero rational function f and a nonzero rational section s of L. So the element c1(L) in Cl(X) is well-defined.
For a complex variety X of dimension n, not necessarily smooth or proper over C, there is a natural homomorphism, the cycle map, from the divisor class group to Borel–Moore homology:
$$\operatorname{Cl}(X) \to H_{2n-2}^{\operatorname{BM}}(X, \mathbf{Z}).$$
The latter group is defined using the space X(C) of complex points of X, with its classical (Euclidean) topology. Likewise, the Picard group maps to integral cohomology, by the first Chern class in the topological sense:
$$\operatorname{Pic}(X) \to H^2(X, \mathbf{Z}).$$
The two homomorphisms are related by a commutative diagram, where the right vertical map is cap product with the fundamental class of X in Borel–Moore homology:
$$\begin{array}{ccc}\operatorname{Pic}(X) & \longrightarrow & H^2(X, \mathbf{Z}) \\ \downarrow & & \downarrow \\ \operatorname{Cl}(X) & \longrightarrow & H_{2n-2}^{\operatorname{BM}}(X, \mathbf{Z})\end{array}$$
For X smooth over C, both vertical maps are isomorphisms.
== Global sections of line bundles and linear systems ==
A Cartier divisor is effective if its local defining functions fi are regular (not just rational functions). In that case, the Cartier divisor can be identified with a closed subscheme of codimension 1 in X, the subscheme defined locally by fi = 0. A Cartier divisor D is linearly equivalent to an effective divisor if and only if its associated line bundle
$\mathcal{O}(D)$
has a nonzero global section s; then D is linearly equivalent to the zero locus of s.
Let X be a projective variety over a field k. Then multiplying a global section of
$\mathcal{O}(D)$
by a nonzero scalar in k does not change its zero locus. As a result, the projective space of lines in the k-vector space of global sections H0(X, O(D)) can be identified with the set of effective divisors linearly equivalent to D, called the complete linear system of D. A projective linear subspace of this projective space is called a linear system of divisors.
One reason to study the space of global sections of a line bundle is to understand the possible maps from a given variety to projective space. This is essential for the classification of algebraic varieties. Explicitly, a morphism from a variety X to projective space Pn over a field k determines a line bundle L on X, the pullback of the standard line bundle
$\mathcal{O}(1)$
on Pn. Moreover, L comes with n+1 sections whose base locus (the intersection of their zero sets) is empty. Conversely, any line bundle L with n+1 global sections whose common base locus is empty determines a morphism X → Pn. These observations lead to several notions of positivity for Cartier divisors (or line bundles), such as ample divisors and nef divisors.
For a divisor D on a projective variety X over a field k, the k-vector space H0(X, O(D)) has finite dimension. The Riemann–Roch theorem is a fundamental tool for computing the dimension of this vector space when X is a projective curve. Successive generalizations, the Hirzebruch–Riemann–Roch theorem and the Grothendieck–Riemann–Roch theorem, give some information about the dimension of H0(X, O(D)) for a projective variety X of any dimension over a field.
Because the canonical divisor is intrinsically associated to a variety, a key role in the classification of varieties is played by the maps to projective space given by KX and its positive multiples. The Kodaira dimension of X is a key birational invariant, measuring the growth of the vector spaces H0(X, mKX) (meaning H0(X, O(mKX))) as m increases. The Kodaira dimension divides all n-dimensional varieties into n+2 classes, which (very roughly) go from positive curvature to negative curvature.
== Q-divisors ==
Let X be a normal variety. A (Weil) Q-divisor is a finite formal linear combination of irreducible codimension-1 subvarieties of X with rational coefficients. (An R-divisor is defined similarly.) A Q-divisor is effective if the coefficients are nonnegative. A Q-divisor D is Q-Cartier if mD is a Cartier divisor for some positive integer m. If X is smooth, then every Q-divisor is Q-Cartier.
If $D = \sum_j a_j Z_j$ is a Q-divisor, then its round-down is the divisor
$$\lfloor D \rfloor = \sum_j \lfloor a_j \rfloor Z_j,$$
where $\lfloor a \rfloor$ is the greatest integer less than or equal to a. The sheaf $\mathcal{O}(D)$ is then defined to be $\mathcal{O}(\lfloor D \rfloor).$
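The round-down operation is elementary to compute. The following sketch uses a hypothetical representation of our own: a Q-divisor stored as a dict mapping names of prime divisors Z_j to rational coefficients a_j.

```python
from fractions import Fraction
from math import floor

# Hypothetical representation: a Q-divisor as {name of Z_j: rational coefficient a_j}.
def round_down(divisor):
    """Return the round-down ⌊D⌋ = Σ ⌊a_j⌋ Z_j, the coefficientwise floor."""
    return {Z: floor(a) for Z, a in divisor.items()}

D = {"Z1": Fraction(3, 2), "Z2": Fraction(-1, 3), "Z3": Fraction(2)}
print(round_down(D))  # {'Z1': 1, 'Z2': -1, 'Z3': 2}
```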
== The Grothendieck–Lefschetz hyperplane theorem ==
The Lefschetz hyperplane theorem implies that for a smooth complex projective variety X of dimension at least 4 and a smooth ample divisor Y in X, the restriction Pic(X) → Pic(Y) is an isomorphism. For example, if Y is a smooth complete intersection variety of dimension at least 3 in complex projective space, then the Picard group of Y is isomorphic to Z, generated by the restriction of the line bundle O(1) on projective space.
Grothendieck generalized Lefschetz's theorem in several directions, involving arbitrary base fields, singular varieties, and results on local rings rather than projective varieties. In particular, if R is a complete intersection local ring which is factorial in codimension at most 3 (for example, if the non-regular locus of R has codimension at least 4), then R is a unique factorization domain (and hence every Weil divisor on Spec(R) is Cartier). The dimension bound here is optimal, as shown by the example of the 3-dimensional quadric cone, above.
== Notes ==
== References ==
Dieudonné, Jean (1985), History of Algebraic Geometry, Wadsworth Mathematics Series, translated by Judith D. Sally, Belmont, CA: Wadsworth International Group, ISBN 0-534-03723-2, MR 0780183
Eisenbud, David; Harris, Joe (2016), 3264 and All That: A Second Course in Algebraic Geometry, C. U.P., ISBN 978-1107602724
Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32: 5–361. doi:10.1007/bf02732123. MR 0238860.
Grothendieck, Alexander; Raynaud, Michèle (2005) [1968], Laszlo, Yves (ed.), Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux (SGA 2), Documents Mathématiques, vol. 4, Paris: Société Mathématique de France, arXiv:math/0511279, Bibcode:2005math.....11279G, ISBN 978-2-85629-169-6, MR 2171939
Section II.6 of Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York, Heidelberg: Springer-Verlag, doi:10.1007/978-1-4757-3849-0, ISBN 0-387-90244-9, MR 0463157
Kleiman, Steven (2005), "The Picard scheme", Fundamental Algebraic Geometry, Math. Surveys Monogr., vol. 123, Providence, R.I.: American Mathematical Society, pp. 235–321, arXiv:math/0504020, Bibcode:2005math......4020K, MR 2223410
Kollár, János (2013), Singularities of the Minimal Model Program, Cambridge University Press, doi:10.1017/CBO9781139547895, ISBN 978-1-107-03534-8, MR 3057950
Lazarsfeld, Robert (2004), Positivity in Algebraic Geometry, vol. 1, Berlin: Springer-Verlag, doi:10.1007/978-3-642-18808-4, ISBN 3-540-22533-1, MR 2095471
== External links ==
The Stacks Project Authors, The Stacks Project
Additive number theory is the subfield of number theory concerning the study of subsets of integers and their behavior under addition. More abstractly, the field of additive number theory includes the study of abelian groups and commutative semigroups with an operation of addition. Additive number theory has close ties to combinatorial number theory and the geometry of numbers. Principal objects of study include the sumset of two subsets A and B of elements from an abelian group G,
$$A + B = \{a + b : a \in A,\ b \in B\},$$
and the h-fold sumset of A,
$$hA = \underbrace{A + \cdots + A}_{h}\,.$$
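Both constructions are straightforward to compute for finite sets of integers. The sketch below (function names are our own) forms the sumset and the h-fold sumset directly from the definitions:

```python
from itertools import product
from functools import reduce

def sumset(A, B):
    """A + B = {a + b : a in A, b in B}."""
    return {a + b for a, b in product(A, B)}

def h_fold_sumset(A, h):
    """hA = A + ... + A with h summands."""
    return reduce(sumset, [A] * h)

A = {0, 1, 3}
print(sorted(sumset(A, A)))         # [0, 1, 2, 3, 4, 6]
print(sorted(h_fold_sumset(A, 3)))  # [0, 1, 2, 3, 4, 5, 6, 7, 9]
```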
== Additive number theory ==
The field is principally devoted to consideration of direct problems over (typically) the integers, that is, determining the structure of hA from the structure of A: for example, determining which elements can be represented as a sum from hA, where A is a fixed subset. Two classical problems of this type are the Goldbach conjecture (which is the conjecture that 2ℙ contains all even numbers greater than two, where ℙ is the set of primes) and Waring's problem (which asks how large must h be to guarantee that hAk contains all positive integers, where
$A_k = \{0^k, 1^k, 2^k, 3^k, \ldots\}$
is the set of kth powers). Many of these problems are studied using tools from the Hardy–Littlewood circle method and from sieve methods. For example, Vinogradov proved that every sufficiently large odd number is the sum of three primes, and so every sufficiently large even integer is the sum of four primes. Hilbert proved that, for every integer k > 1, every non-negative integer is the sum of a bounded number of kth powers. In general, a set A of nonnegative integers is called a basis of order h if hA contains all positive integers, and it is called an asymptotic basis if hA contains all sufficiently large integers. Much current research in this area concerns properties of general asymptotic bases of finite order. For example, a set A is called a minimal asymptotic basis of order h if A is an asymptotic basis of order h but no proper subset of A is an asymptotic basis of order h. It has been proved that minimal asymptotic bases of order h exist for all h, and that there also exist asymptotic bases of order h that contain no minimal asymptotic bases of order h. Another question to be considered is how small the number of representations of n as a sum of h elements in an asymptotic basis can be. This is the content of the Erdős–Turán conjecture on additive bases.
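Direct problems of this kind are easy to spot-check numerically, even though such checks prove nothing. The sketch below verifies that every even number from 4 to 1000 lies in the sumset 2ℙ, as the Goldbach conjecture asserts:

```python
def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

# Spot-check (not a proof!) that 2P contains every even number in [4, 1000]:
P = set(primes_upto(1000))
two_P = {p + q for p in P for q in P}
print(all(n in two_P for n in range(4, 1001, 2)))  # True
```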
== See also ==
Shapley–Folkman lemma
Additive combinatorics
Multiplicative combinatorics
Multiplicative number theory
== References ==
Henry Mann (1976). Addition Theorems: The Addition Theorems of Group Theory and Number Theory (Corrected reprint of 1965 Wiley ed.). Huntington, New York: Robert E. Krieger Publishing Company. ISBN 0-88275-418-1.
Nathanson, Melvyn B. (1996). Additive Number Theory: The Classical Bases. Graduate Texts in Mathematics. Vol. 164. Springer-Verlag. ISBN 0-387-94656-X. Zbl 0859.11002.
Nathanson, Melvyn B. (1996). Additive Number Theory: Inverse Problems and the Geometry of Sumsets. Graduate Texts in Mathematics. Vol. 165. Springer-Verlag. ISBN 0-387-94655-1. Zbl 0859.11003.
Tao, Terence; Vu, Van (2006). Additive Combinatorics. Cambridge Studies in Advanced Mathematics. Vol. 105. Cambridge University Press.
== External links ==
"Additive number theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Additive Number Theory". MathWorld.
In abstract algebra, an abelian group $(G, +)$ is called finitely generated if there exist finitely many elements $x_1, \dots, x_s$ in $G$ such that every $x$ in $G$ can be written in the form
$$x = n_1 x_1 + n_2 x_2 + \cdots + n_s x_s$$
for some integers $n_1, \dots, n_s$. In this case, we say that the set $\{x_1, \dots, x_s\}$ is a generating set of $G$ or that $x_1, \dots, x_s$ generate $G$. So, finitely generated abelian groups can be thought of as a generalization of cyclic groups.
Every finite abelian group is finitely generated. The finitely generated abelian groups can be completely classified.
== Examples ==
The integers, $(\mathbb{Z}, +)$, are a finitely generated abelian group.
The integers modulo $n$, $(\mathbb{Z}/n\mathbb{Z}, +)$, are a finite (hence finitely generated) abelian group.
Any direct sum of finitely many finitely generated abelian groups is again a finitely generated abelian group.
Every lattice forms a finitely generated free abelian group.
There are no other examples (up to isomorphism). In particular, the group $(\mathbb{Q}, +)$ of rational numbers is not finitely generated: if $x_1, \ldots, x_n$ are rational numbers, pick a natural number $k$ coprime to all the denominators; then $1/k$ cannot be generated by $x_1, \ldots, x_n$. The group $(\mathbb{Q}^*, \cdot)$ of non-zero rational numbers is also not finitely generated. The groups of real numbers under addition $(\mathbb{R}, +)$ and non-zero real numbers under multiplication $(\mathbb{R}^*, \cdot)$ are also not finitely generated.
== Classification ==
The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of finite abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations.
=== Primary decomposition ===
The primary decomposition formulation states that every finitely generated abelian group G is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. That is, every finitely generated abelian group is isomorphic to a group of the form
$$\mathbb{Z}^n \oplus \mathbb{Z}/q_1\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/q_t\mathbb{Z},$$
where n ≥ 0 is the rank, and the numbers q1, ..., qt are powers of (not necessarily distinct) prime numbers. In particular, G is finite if and only if n = 0. The values of n, q1, ..., qt are (up to rearranging the indices) uniquely determined by G, that is, there is one and only one way to represent G as such a decomposition.
The proof of this statement uses the basis theorem for finite abelian groups: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of G as tG. Then G/tG is a torsion-free abelian group and thus it is free abelian. tG is a direct summand of G, which means there exists a subgroup F of G such that $G = tG \oplus F$, where $F \cong G/tG$. Then F is also free abelian. Since tG is finitely generated and each element of tG has finite order, tG is finite. By the basis theorem for finite abelian groups, tG can be written as a direct sum of primary cyclic groups.
=== Invariant factor decomposition ===
We can also write any finitely generated abelian group G as a direct sum of the form
$$\mathbb{Z}^n \oplus \mathbb{Z}/k_1\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/k_u\mathbb{Z},$$
where k1 divides k2, which divides k3 and so on up to ku. Again, the rank n and the invariant factors k1, ..., ku are uniquely determined by G (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism.
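The two decompositions can be converted into each other mechanically: group the prime powers of the primary decomposition by prime, then multiply matching exponent ranks together to obtain the divisibility chain. A sketch (helper names are our own):

```python
from collections import defaultdict

def factorize(n):
    """Prime factorization of n as a dict {p: exponent} (trial division)."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def invariant_factors(prime_powers):
    """Regroup prime powers q_1, ..., q_t into invariant factors k_1 | ... | k_u."""
    by_prime = defaultdict(list)
    for q in prime_powers:
        (p, e), = factorize(q).items()  # each q must be a single prime power
        by_prime[p].append(e)
    u = max(len(es) for es in by_prime.values())
    ks = [1] * u
    for p, es in by_prime.items():
        es.sort()
        # right-align the exponents so the largest prime power lands in k_u
        for i, e in enumerate(es, start=u - len(es)):
            ks[i] *= p ** e
    return ks

# Z/4 ⊕ Z/3 ⊕ Z/9 ⊕ Z/2  ≅  Z/6 ⊕ Z/36, and 6 divides 36:
print(invariant_factors([4, 3, 9, 2]))  # [6, 36]
```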
=== Equivalence ===
These statements are equivalent as a result of the Chinese remainder theorem, which implies that
$$\mathbb{Z}_{jk} \cong \mathbb{Z}_j \oplus \mathbb{Z}_k$$
if and only if j and k are coprime.
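The coprimality condition is easy to check by brute force: the map x ↦ (x mod j, x mod k) from Z_{jk} to Z_j ⊕ Z_k is a bijection exactly when gcd(j, k) = 1. A small sketch:

```python
# Brute-force check of the CRT isomorphism: count the distinct images of
# x ↦ (x mod j, x mod k) over all residues x in Z_{jk}.
def crt_map_is_bijective(j, k):
    images = {(x % j, x % k) for x in range(j * k)}
    return len(images) == j * k

print(crt_map_is_bijective(4, 9))  # True  (gcd(4, 9) = 1)
print(crt_map_is_bijective(4, 6))  # False (gcd(4, 6) = 2)
```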
=== History ===
The history and credit for the fundamental theorem is complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. The finitely presented case is solved by Smith normal form, and hence frequently credited to (Smith 1861), though the finitely generated case is sometimes instead credited to Poincaré in 1900; details follow.
Group theorist László Fuchs states:
As far as the fundamental theorem on finite abelian groups is concerned, it is not clear how far back in time one needs to go to trace its origin. ... it took a long time to formulate and prove the fundamental theorem in its present form ...
The fundamental theorem for finite abelian groups was proven by Leopold Kronecker in 1870, using a group-theoretic proof, though without stating it in group-theoretic terms; a modern presentation of Kronecker's proof is given in (Stillwell 2012), 5.2.2 Kronecker's Theorem, 176–177. This generalized an earlier result of Carl Friedrich Gauss from Disquisitiones Arithmeticae (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882.
The fundamental theorem for finitely presented abelian groups was proven by Henry John Stephen Smith in (Smith 1861), as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups.
The fundamental theorem for finitely generated abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). This was done in the context of computing the homology of a complex, specifically the Betti number and torsion coefficients of each dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part.
Kronecker's proof was generalized to finitely generated abelian groups by Emmy Noether in 1926.
== Corollaries ==
Stated differently the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of those being unique up to isomorphism. The finite abelian group is just the torsion subgroup of G. The rank of G is defined as the rank of the torsion-free part of G; this is just the number n in the above formulas.
A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here:
$\mathbb{Q}$ is torsion-free but not free abelian.
Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms, form an abelian category which is a Serre subcategory of the category of abelian groups.
== Non-finitely generated abelian groups ==
Note that not every abelian group of finite rank is finitely generated; the rank-1 group $\mathbb{Q}$ is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of $\mathbb{Z}_2$ is another one.
== See also ==
The composition series in the Jordan–Hölder theorem is a non-abelian generalization.
== Notes ==
== References ==
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers, the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC).
It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules,
and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that number is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is, the sum of the two numbers each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity.
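The Bézout coefficients can be computed by the extended Euclidean algorithm. The following sketch reproduces the identity 21 = 5 × 105 + (−2) × 252 from the example above:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = x*a + y*b (iterative extended Euclid)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(105, 252)
print(g, x, y)            # 21 5 -2
print(x * 105 + y * 252)  # 21
```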
The version of the Euclidean algorithm described above—which follows Euclid's original presentation—may require many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.
The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations.
The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
== Background: greatest common divisor ==
The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for GCD include greatest common factor (GCF), highest common factor (HCF), highest common divisor (HCD), and greatest common measure (GCM). The greatest common divisor is often written as gcd(a, b) or, more simply, as (a, b), although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD.
If gcd(a, b) = 1, then a and b are said to be coprime (or relatively prime). This property does not imply that a or b are themselves prime numbers. For example, 6 and 35 factor as 6 = 2 × 3 and 35 = 5 × 7, so they are not prime, but their prime factors are different, so 6 and 35 are coprime, with no common factors other than 1.
Let g = gcd(a, b). Since a and b are both multiples of g, they can be written a = mg and b = ng, and there is no larger number G > g for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c.
The greatest common divisor can be visualized as follows. Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The GCD g is the largest value of c for which this is possible. For illustration, a 24×60 rectangular area can be divided into a grid of: 1×1 squares, 2×2 squares, 3×3 squares, 4×4 squares, 6×6 squares or 12×12 squares. Therefore, 12 is the GCD of 24 and 60. A 24×60 rectangular area can be divided into a grid of 12×12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
The greatest common divisor of two numbers a and b is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both a and b. For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the GCD of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors (with 3 repeated since 3 × 3 divides both). If two numbers have no common prime factors, their GCD is 1 (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.
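The shared-prime-factor definition can be checked numerically against the factorization example above. The sketch below factors by trial division (the helper names are illustrative) and compares with the standard library's `math.gcd`:

```python
from collections import Counter
from math import gcd

def prime_factors(n):
    """Return the multiset of prime factors of n, found by trial division."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_by_factoring(a, b):
    """Product of the shared prime factors, each repeated as often
    as it divides both numbers (Counter & keeps the minimum counts)."""
    shared = prime_factors(a) & prime_factors(b)
    result = 1
    for p, k in shared.items():
        result *= p ** k
    return result

print(gcd_by_factoring(1386, 3213), gcd(1386, 3213))  # 63 63
```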
Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form ua + vb where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of ua + vb). The equivalence of this GCD definition with the other definitions is described below.
The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example,
gcd(a, b, c) = gcd(a, gcd(b, c)) = gcd(gcd(a, b), c) = gcd(gcd(a, c), b).
Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
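This pairwise reduction is a one-liner with Python's standard library; the grouping identities above mean the order of pairing does not matter:

```python
from functools import reduce
from math import gcd

def gcd_many(*numbers):
    """GCD of arbitrarily many integers by repeatedly taking pairwise GCDs."""
    return reduce(gcd, numbers)

print(gcd_many(24, 60, 36))                         # 12
print(gcd(gcd(24, 60), 36), gcd(24, gcd(60, 36)))   # 12 12, same either way
```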
=== Procedure ===
The Euclidean algorithm can be thought of as constructing a sequence of non-negative integers that begins with the two given integers r−2 = a and r−1 = b and will eventually terminate with the integer zero:
{r−2 = a, r−1 = b, r0, r1, ⋯, rn−1, rn = 0}
with rk+1 < rk. The integer rn−1 will then be the GCD, and we can state gcd(a, b) = rn−1. The algorithm indicates how to construct the intermediate remainders rk via division-with-remainder on the preceding pair (rk−2, rk−1), by finding an integer quotient qk so that:
rk−2 = qk · rk−1 + rk, with rk−1 > rk ≥ 0.
Because the sequence of non-negative integers {rk} is strictly decreasing, it eventually must terminate. In other words, since rk ≥ 0 for every k, and each rk is an integer that is strictly smaller than the preceding rk−1, there eventually cannot be a non-negative integer smaller than zero, and hence the algorithm must terminate. In fact, the algorithm will always terminate at the nth step with rn equal to zero.
To illustrate, suppose the GCD of 1071 and 462 is requested. The sequence is initially {r−2 = 1071, r−1 = 462}, and in order to find r0, we need to find integers q0 and r0 < r−1 such that
1071 = q0 · 462 + r0.
The quotient is q0 = 2, since 1071 = 2 · 462 + 147. This determines r0 = 147, and so the sequence is now {1071, 462, r0 = 147}. The next step is to continue the sequence to find r1 by finding integers q1 and r1 < r0 such that
462 = q1 · 147 + r1.
The quotient is q1 = 3, since 462 = 3 · 147 + 21. This determines r1 = 21, and so the sequence is now {1071, 462, 147, r1 = 21}. The next step is to continue the sequence to find r2 by finding integers q2 and r2 < r1 such that
147 = q2 · 21 + r2.
The quotient is q2 = 7, since 147 = 7 · 21 + 0. This determines r2 = 0, and so the sequence is completed as {1071, 462, 147, 21, r2 = 0}, as no further non-negative integer smaller than 0 can be found. The penultimate remainder, 21, is therefore the requested GCD:
gcd(1071, 462) = 21.
We can generalize slightly by dropping any ordering requirement on the initial two values a and b. If a = b, the algorithm may continue and trivially find that gcd(a, a) = a, as the sequence of remainders will be {a, a, 0}. If a < b, then we can also continue, since a = 0 · b + a, suggesting the next remainder should be a itself, and the sequence is {a, b, a, ⋯}. Normally, this would be invalid because it breaks the requirement r0 < r−1, but now we have a < b by construction, so the requirement is automatically satisfied and the Euclidean algorithm can continue as normal. Therefore, dropping any ordering between the first two integers does not affect the conclusion that the sequence must eventually terminate, because the next remainder will always satisfy r0 < b and everything continues as above. The only modifications that need to be made are that rk < rk−1 only for k ≥ 0, and that the sub-sequence of non-negative integers {rk−1} for k ≥ 0 is strictly decreasing, therefore excluding a = r−2 from both statements.
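The remainder sequence constructed in this section can be generated mechanically. A short sketch (the helper name is illustrative); note that an unordered pair such as (462, 1071) simply prepends one extra term, as discussed above:

```python
def remainder_sequence(a, b):
    """Return the full sequence [a, b, r0, r1, ..., 0] of Euclidean remainders.
    The GCD is the second-to-last entry."""
    seq = [a, b]
    while seq[-1] != 0:
        seq.append(seq[-2] % seq[-1])
    return seq

print(remainder_sequence(1071, 462))  # [1071, 462, 147, 21, 0]
print(remainder_sequence(462, 1071))  # [462, 1071, 462, 147, 21, 0]
```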
=== Proof of validity ===
The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder rN−1 is shown to divide both a and b. Since it is a common divisor, it must be less than or equal to the greatest common divisor g. In the second step, it is shown that any common divisor of a and b, including g, must divide rN−1; therefore, g must be less than or equal to rN−1. These two opposite inequalities imply rN−1 = g.
To demonstrate that rN−1 divides both a and b (the first step), rN−1 divides its predecessor rN−2
rN−2 = qN rN−1
since the final remainder rN is zero. rN−1 also divides its next predecessor rN−3
rN−3 = qN−1 rN−2 + rN−1
because it divides both terms on the right-hand side of the equation. Iterating the same argument, rN−1 divides all the preceding remainders, including a and b. None of the preceding remainders rN−2, rN−3, etc. divide a and b, since they leave a remainder. Since rN−1 is a common divisor of a and b, rN−1 ≤ g.
In the second step, any natural number c that divides both a and b (in other words, any common divisor of a and b) divides the remainders rk. By definition, a and b can be written as multiples of c: a = mc and b = nc, where m and n are natural numbers. Therefore, c divides the initial remainder r0, since r0 = a − q0b = mc − q0nc = (m − q0n)c. An analogous argument shows that c also divides the subsequent remainders r1, r2, etc. Therefore, the greatest common divisor g must divide rN−1, which implies that g ≤ rN−1. Since the first part of the argument showed the reverse (rN−1 ≤ g), it follows that g = rN−1. Thus, g is the greatest common divisor of all the succeeding pairs:
g = gcd(a, b) = gcd(b, r0) = gcd(r0, r1) = ... = gcd(rN−2, rN−1) = rN−1.
=== Worked example ===
For illustration, the Euclidean algorithm can be used to find the greatest common divisor of a = 1071 and b = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (q0 = 2), leaving a remainder of 147:
1071 = 2 × 462 + 147.
Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (q1 = 3), leaving a remainder of 21:
462 = 3 × 147 + 21.
Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (q2 = 7), leaving no remainder:
147 = 7 × 21 + 0.
Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization (1071 = 3 × 3 × 7 × 17 and 462 = 2 × 3 × 7 × 11, with shared factors 3 × 7 = 21). In tabular form, the steps are:
Step k | Equation            | Quotient and remainder
0      | 1071 = q0 462 + r0  | q0 = 2 and r0 = 147
1      | 462 = q1 147 + r1   | q1 = 3 and r1 = 21
2      | 147 = q2 21 + r2    | q2 = 7 and r2 = 0; algorithm ends
=== Visualization ===
The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an a×b rectangle with square tiles exactly, where a is the larger of the two numbers. We first attempt to tile the rectangle using b×b square tiles; however, this leaves an r0×b residual rectangle untiled, where r0 < b. We then attempt to tile the residual rectangle with r0×r0 square tiles. This leaves a second residual rectangle r1×r0, which we attempt to tile using r1×r1 square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21×21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green).
=== Euclidean division ===
At every step k, the Euclidean algorithm computes a quotient qk and remainder rk from two numbers rk−1 and rk−2
rk−2 = qk rk−1 + rk,
where the rk is non-negative and is strictly less than the absolute value of rk−1. The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique.
In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, rk−1 is subtracted from rk−2 repeatedly until the remainder rk is smaller than rk−1. After that rk and rk−1 are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply
rk = rk−2 mod rk−1.
=== Implementations ===
Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as
function gcd(a, b)
while b ≠ 0
t := b
b := a mod b
a := t
return a
At the beginning of the kth iteration, the variable b holds the latest remainder rk−1, whereas the variable a holds its predecessor, rk−2. The step b := a mod b is equivalent to the above recursion formula rk ≡ rk−2 mod rk−1. The temporary variable t holds the value of rk−1 while the next remainder rk is being calculated. At the end of the loop iteration, the variable b holds the remainder rk, whereas the variable a holds its predecessor, rk−1.
(If negative inputs are allowed, or if the mod function may return negative values, the last line must be replaced with return abs(a).)
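The pseudocode above translates almost verbatim into Python, with a tuple assignment playing the role of the temporary variable t, and abs() handling the negative-input caveat just mentioned:

```python
def gcd(a, b):
    """Division-based Euclidean algorithm, a direct transcription
    of the pseudocode above."""
    while b != 0:
        a, b = b, a % b   # the tuple assignment replaces the temporary t
    return abs(a)          # abs() covers negative inputs, as noted above

print(gcd(1071, 462), gcd(-1071, 462))  # 21 21
```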
In the subtraction-based version, which was Euclid's original version, the remainder calculation (b := a mod b) is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when a = b:
function gcd(a, b)
while a ≠ b
if a > b
a := a − b
else
b := b − a
return a
The variables a and b alternate holding the previous remainders rk−1 and rk−2. Assume that a is larger than b at the beginning of an iteration; then a equals rk−2, since rk−2 > rk−1. During the loop iteration, a is reduced by multiples of the previous remainder b until a is smaller than b. Then a is the next remainder rk. Then b is reduced by multiples of a until it is again smaller than a, giving the next remainder rk+1, and so on.
The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd(rN−1, 0) = rN−1.
function gcd(a, b)
if b = 0
return a
else
return gcd(b, a mod b)
(As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction return a must be replaced by return max(a, −a).)
For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21.
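The chain of recursive calls just described can be traced by printing each invocation (a sketch; the print statement is only for illustration):

```python
def gcd_recursive(a, b):
    print(f"gcd({a}, {b})")   # show the chain of recursive calls
    return a if b == 0 else gcd_recursive(b, a % b)

g = gcd_recursive(1071, 462)
# prints gcd(1071, 462), gcd(462, 147), gcd(147, 21), gcd(21, 0); g is 21
```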
=== Method of least absolute remainders ===
In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation
rk−2 = qk rk−1 + rk
assumed that |rk−1| > rk > 0. However, an alternative negative remainder ek can be computed:
rk−2 = (qk + 1) rk−1 + ek
if rk−1 > 0 or
rk−2 = (qk – 1) rk−1 + ek
if rk−1 < 0.
If rk is replaced by ek when |ek| < |rk|, then one gets a variant of the Euclidean algorithm such that
|rk| ≤ |rk−1| / 2
at each step.
Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for any input numbers a and b, the number of steps is minimal if and only if qk is chosen so that
|rk+1 / rk| < 1/φ ≈ 0.618,
where φ is the golden ratio.
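The least-absolute-remainder variant can be compared against the standard version on a pair of consecutive Fibonacci numbers, the slowest case for the standard algorithm. In this sketch, round() picks the nearest-integer quotient, so the new remainder satisfies |r| ≤ |rk−1|/2; the function names are illustrative:

```python
def gcd_least_abs(a, b):
    """Euclid's algorithm choosing the remainder of least absolute value."""
    steps = 0
    while b != 0:
        q = round(a / b)        # nearest-integer quotient => |r| <= |b| / 2
        a, b = b, a - q * b
        steps += 1
    return abs(a), steps

def gcd_standard(a, b):
    """Ordinary division-based version, counting its steps for comparison."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return abs(a), steps

print(gcd_least_abs(89, 55))   # (1, 5): five steps suffice
print(gcd_standard(89, 55))    # (1, 9): nine steps for the standard version
```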
== Historical development ==
The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths a and b corresponds to the greatest length g that measures a and b evenly; in other words, the lengths a and b are both integer multiples of the length g.
The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle. Claude Brezinski, following remarks by Pappus of Alexandria, credits the algorithm to Theaetetus (c. 417 – c. 369 BC).
Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently.
In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his Disquisitiones Arithmeticae (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals.
Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval.
The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm.
In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called The Game of Euclid, which has an optimal strategy. The players begin with two piles of a and b stones. The players take turns removing m multiples of the smaller pile from the larger. Thus, if the two piles consist of x and y stones, where x is larger than y, the next player can reduce the larger pile from x stones to x − my stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones.
== Mathematical applications ==
=== Bézout's identity ===
Bézout's identity states that the greatest common divisor g of two integers a and b can be represented as a linear sum of the original two numbers a and b. In other words, it is always possible to find integers s and t such that g = sa + tb.
The integers s and t can be calculated from the quotients q0, q1, etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, g can be expressed in terms of the quotient qN−1 and the two preceding remainders, rN−2 and rN−3:
g = rN−1 = rN−3 − qN−1 rN−2.
Those two remainders can be likewise expressed in terms of their quotients and preceding remainders,
rN−2 = rN−4 − qN−2 rN−3 and
rN−3 = rN−5 − qN−3 rN−4.
Substituting these formulae for rN−2 and rN−3 into the first equation yields g as a linear sum of the remainders rN−4 and rN−5. The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers a and b are reached:
r2 = r0 − q2 r1
r1 = b − q1 r0
r0 = a − q0 b.
After all the remainders r0, r1, etc. have been substituted, the final equation expresses g as a linear sum of a and b, so that g = sa + tb.
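The back-substitution just described can be carried out mechanically by recording the quotients on the way down and then reversing them. A sketch (names are illustrative): at each backward step, if g = x·rk−1 + y·rk, substituting rk = rk−2 − qk·rk−1 gives g = y·rk−2 + (x − qk·y)·rk−1.

```python
def bezout_by_back_substitution(a, b):
    """Run Euclid's algorithm recording quotients, then back-substitute
    to express g = s*a + t*b."""
    quotients = []
    r_prev, r = a, b
    while r != 0:
        q, rem = divmod(r_prev, r)
        quotients.append(q)
        r_prev, r = r, rem
    g = r_prev
    s, t = 1, 0                      # g = 1*g + 0*0 at the final pair
    for q in reversed(quotients):
        s, t = t, s - q * t          # lift to the preceding remainder pair
    return g, s, t

g, s, t = bezout_by_back_substitution(1071, 462)
print(g, s, t)   # 21 -3 7, i.e. 21 = (-3)*1071 + 7*462
```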
The Euclidean algorithm, and thus Bézout's identity, can be generalized to the context of Euclidean domains.
=== Principal ideals and related problems ===
Bézout's identity provides yet another definition of the greatest common divisor g of two numbers a and b. Consider the set of all numbers ua + vb, where u and v are any two integers. Since a and b are both divisible by g, every number in the set is divisible by g. In other words, every number of the set is an integer multiple of g. This is true for every common divisor of a and b. However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing u = s and v = t gives g. A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by g. Conversely, any multiple m of g can be obtained by choosing u = ms and v = mt, where s and t are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by m,
mg = msa + mtb.
Therefore, the set of all numbers ua + vb is equivalent to the set of multiples m of g. In other words, the set of all possible sums of integer multiples of two numbers (a and b) is equivalent to the set of multiples of gcd(a, b). The GCD is said to be the generator of the ideal of a and b. This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal).
Certain problems can be solved using this result. For example, consider two measuring cups of volume a and b. By adding/subtracting u multiples of the first cup and v multiples of the second cup, any volume ua + vb can be measured out. These volumes are all multiples of g = gcd(a, b).
=== Extended Euclidean algorithm ===
The integers s and t of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm
sk = sk−2 − qksk−1
tk = tk−2 − qktk−1
with the starting values
s−2 = 1, t−2 = 0
s−1 = 0, t−1 = 1.
Using this recursion, Bézout's integers s and t are given by s = sN and t = tN, where N + 1 is the step on which the algorithm terminates with rN+1 = 0.
The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step k − 1 of the algorithm; in other words, assume that
rj = sj a + tj b
for all j less than k. The kth step of the algorithm gives the equation
rk = rk−2 − qkrk−1.
Since the recursion formula has been assumed to be correct for rk−2 and rk−1, they may be expressed in terms of the corresponding s and t variables
rk = (sk−2 a + tk−2 b) − qk(sk−1 a + tk−1 b).
Rearranging this equation yields the recursion formula for step k, as required
rk = sk a + tk b = (sk−2 − qksk−1) a + (tk−2 − qktk−1) b.
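The two recursions for sk and tk run alongside the remainder recursion, so all three sequences can be updated in one loop; a direct transcription of the recursion above, with the invariant rk = sk·a + tk·b maintained at every step:

```python
def extended_gcd(a, b):
    """Extended Euclidean algorithm via the forward recursions
    s_k = s_{k-2} - q_k s_{k-1} and t_k = t_{k-2} - q_k t_{k-1}."""
    s_prev, s = 1, 0   # s_{-2}, s_{-1}
    t_prev, t = 0, 1   # t_{-2}, t_{-1}
    r_prev, r = a, b   # r_{-2}, r_{-1}
    while r != 0:
        q = r_prev // r
        r_prev, r = r, r_prev - q * r
        s_prev, s = s, s_prev - q * s
        t_prev, t = t, t_prev - q * t
    return r_prev, s_prev, t_prev   # g, s, t with g = s*a + t*b

print(extended_gcd(1071, 462))  # (21, -3, 7)
```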
=== Matrix method ===
The integers s and t can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm
a = q0 b + r0
b = q1 r0 + r1
⋮
rN−2 = qN rN−1 + 0
can be written as a product of 2×2 quotient matrices multiplying a two-dimensional remainder vector
{\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}q_{0}&1\\1&0\end{pmatrix}}{\begin{pmatrix}b\\r_{0}\end{pmatrix}}={\begin{pmatrix}q_{0}&1\\1&0\end{pmatrix}}{\begin{pmatrix}q_{1}&1\\1&0\end{pmatrix}}{\begin{pmatrix}r_{0}\\r_{1}\end{pmatrix}}=\cdots =\prod _{i=0}^{N}{\begin{pmatrix}q_{i}&1\\1&0\end{pmatrix}}{\begin{pmatrix}r_{N-1}\\0\end{pmatrix}}\,.}
Let M represent the product of all the quotient matrices
{\displaystyle \mathbf {M} ={\begin{pmatrix}m_{11}&m_{12}\\m_{21}&m_{22}\end{pmatrix}}=\prod _{i=0}^{N}{\begin{pmatrix}q_{i}&1\\1&0\end{pmatrix}}={\begin{pmatrix}q_{0}&1\\1&0\end{pmatrix}}{\begin{pmatrix}q_{1}&1\\1&0\end{pmatrix}}\cdots {\begin{pmatrix}q_{N}&1\\1&0\end{pmatrix}}\,.}
This simplifies the Euclidean algorithm to the form
{\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}=\mathbf {M} {\begin{pmatrix}r_{N-1}\\0\end{pmatrix}}=\mathbf {M} {\begin{pmatrix}g\\0\end{pmatrix}}\,.}
To express g as a linear sum of a and b, both sides of this equation can be multiplied by the inverse of the matrix M. The determinant of M equals (−1)N+1, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M
{\displaystyle {\begin{pmatrix}g\\0\end{pmatrix}}=\mathbf {M} ^{-1}{\begin{pmatrix}a\\b\end{pmatrix}}=(-1)^{N+1}{\begin{pmatrix}m_{22}&-m_{12}\\-m_{21}&m_{11}\end{pmatrix}}{\begin{pmatrix}a\\b\end{pmatrix}}\,.}
Since the top equation gives
g = (−1)N+1 ( m22 a − m12 b),
the two integers of Bézout's identity are s = (−1)N+1m22 and t = (−1)Nm12. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
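The matrix formulation can be checked numerically by accumulating the product of quotient matrices and applying the inverse formula; a sketch with illustrative names, tracking the determinant (−1)^{N+1} as a running sign:

```python
def bezout_via_matrices(a, b):
    """Accumulate M = prod [[q_k, 1], [1, 0]]; then, as in the text,
    s = (-1)^{N+1} m22 and t = (-1)^N m12."""
    m11, m12, m21, m22 = 1, 0, 0, 1    # start from the identity matrix
    det = 1                             # running determinant of M
    while b != 0:
        q, r = divmod(a, b)
        m11, m12 = m11 * q + m12, m11   # right-multiply by [[q, 1], [1, 0]]
        m21, m22 = m21 * q + m22, m21
        det = -det                      # each quotient matrix has det -1
        a, b = b, r
    g = a
    s, t = det * m22, -det * m12
    return g, s, t

print(bezout_via_matrices(1071, 462))  # (21, -3, 7)
```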
=== Euclid's lemma and unique factorization ===
Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number L can be written as a product of two factors u and v, that is, L = uv. If another number w also divides L but is coprime with u, then w must divide v, by the following argument: If the greatest common divisor of u and w is 1, then integers s and t can be found such that
1 = su + tw
by Bézout's identity. Multiplying both sides by v gives the relation:
v = suv + twv = sL + twv
Since w divides both terms on the right-hand side, it must also divide the left-hand side, v. This result is known as Euclid's lemma. Specifically, if a prime number divides L, then it must divide at least one factor of L. Conversely, if a number w is coprime to each of a series of numbers a1, a2, ..., an, then w is also coprime to their product, a1 × a2 × ... × an.
Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of L into m and n prime factors, respectively
L = p1p2...pm = q1q2...qn .
Since each prime p divides L by assumption, it must also divide one of the q factors; since each q is prime as well, it must be that p = q. Iteratively dividing by the p factors shows that each p has an equal counterpart q; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.
=== Linear Diophantine equations ===
Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical linear Diophantine equation seeks integers x and y such that
ax + by = c
where a, b and c are given integers. This can be written as an equation for x in modular arithmetic:
ax ≡ c mod b.
Let g be the greatest common divisor of a and b. Both terms in ax + by are divisible by g; therefore, c must also be divisible by g, or the equation has no solutions. Conversely, if g divides c, let s and t be the integers of Bézout's identity
sa + tb = g,
which can be found by the extended Euclidean algorithm. Multiplying both sides by c/g provides one solution to the Diophantine equation, x1 = s (c/g) and y1 = t (c/g).
In general, a linear Diophantine equation has no solutions, or an infinite number of solutions. To find the latter, consider two solutions, (x1, y1) and (x2, y2), where
ax1 + by1 = c = ax2 + by2
or equivalently
a(x1 − x2) = b(y2 − y1).
Dividing both sides by g gives (a/g)(x1 − x2) = (b/g)(y2 − y1); since a/g and b/g are coprime, b/g must divide x1 − x2. Therefore, the smallest difference between two x solutions is b/g, whereas the smallest difference between two y solutions is a/g. Thus, the solutions may be expressed as
x = x1 − bu/g
y = y1 + au/g.
By allowing u to vary over all possible integers, an infinite family of solutions can be generated from a single solution (x1, y1). If the solutions are required to be positive integers (x > 0, y > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system).
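A concrete sketch in Python (with illustrative function names, not taken from the source) produces a particular solution with the extended Euclidean algorithm and then generates members of the family x = x1 − bu/g, y = y1 + au/g:

```python
def extended_gcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def diophantine_solutions(a, b, c, n_solutions=3):
    """Illustrative helper: return a few integer solutions of a*x + b*y == c."""
    g, s, t = extended_gcd(a, b)
    if c % g != 0:
        return []  # g does not divide c: no integer solutions
    x1, y1 = s * (c // g), t * (c // g)  # particular solution
    # Full family: x = x1 - (b/g)*u, y = y1 + (a/g)*u for integer u.
    return [(x1 - (b // g) * u, y1 + (a // g) * u) for u in range(n_solutions)]

for x, y in diophantine_solutions(6, 15, 9):
    assert 6 * x + 15 * y == 9
```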
=== Multiplicative inverses and the RSA algorithm ===
A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic. In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, 5 × 7 = 35 ≡ 9 (mod 13). Such finite fields can be defined for any prime p; using more sophisticated definitions, they can also be defined for any prime power pm. Finite fields are often called Galois fields, and are abbreviated as GF(p) or GF(pm).
In such a field with m numbers, every nonzero element a has a unique modular multiplicative inverse, a−1 such that aa−1 = a−1a ≡ 1 mod m. This inverse can be found by solving the congruence equation ax ≡ 1 mod m, or the equivalent linear Diophantine equation
ax + my = 1.
This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message. Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields.
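A minimal sketch of this inverse computation in Python (not the code of any particular RSA implementation; the function name is illustrative). It runs the iterative extended Euclidean algorithm, tracking only the coefficient of a, since the coefficient of m is not needed for the inverse:

```python
def mod_inverse(a, m):
    """Multiplicative inverse of a modulo m via the extended Euclidean
    algorithm; raises ValueError if gcd(a, m) != 1 (no inverse exists)."""
    old_r, r = a, m
    old_s, s = 1, 0  # invariant: old_s * a == old_r (mod m)
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

assert (mod_inverse(7, 13) * 7) % 13 == 1
```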
=== Chinese remainder theorem ===
Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer x. Instead of representing an integer by its digits, it may be represented by its remainders xi modulo a set of N coprime numbers mi:
{\displaystyle {\begin{aligned}x_{1}&\equiv x{\pmod {m_{1}}}\\x_{2}&\equiv x{\pmod {m_{2}}}\\&\,\,\,\vdots \\x_{N}&\equiv x{\pmod {m_{N}}}\,.\end{aligned}}}
The goal is to determine x from its N remainders xi. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus M that is the product of all the individual moduli mi, and define Mi as
{\displaystyle M_{i}={\frac {M}{m_{i}}}.}
Thus, each Mi is the product of all the moduli except mi. The solution depends on finding N new numbers hi such that
{\displaystyle M_{i}h_{i}\equiv 1{\pmod {m_{i}}}\,.}
With these numbers hi, any integer x can be reconstructed from its remainders xi by the equation
{\displaystyle x\equiv (x_{1}M_{1}h_{1}+x_{2}M_{2}h_{2}+\cdots +x_{N}M_{N}h_{N}){\pmod {M}}\,.}
Since these numbers hi are the multiplicative inverses of the Mi, they may be found using Euclid's algorithm as described in the previous subsection.
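The reconstruction formula can be sketched in a few lines of Python (the function name is illustrative; `pow(x, -1, m)` computes a modular inverse and requires Python 3.8+):

```python
from math import prod

def crt(remainders, moduli):
    """Reconstruct x (mod M) from remainders x_i modulo pairwise-coprime m_i,
    following the M_i * h_i construction described above."""
    M = prod(moduli)
    x = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i                  # product of all moduli except m_i
        h_i = pow(M_i, -1, m_i)         # inverse of M_i modulo m_i
        x += x_i * M_i * h_i
    return x % M

# x = 23 has remainders 2, 3, 2 modulo 3, 5, 7
assert crt([2, 3, 2], [3, 5, 7]) == 23
```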
=== Stern–Brocot tree ===
The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree.
The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number a/b can be found by computing gcd(a,b) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b. This fact can be used to prove that each positive rational number appears exactly once in this tree.
For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice:
{\displaystyle {\begin{aligned}&\gcd(3,4)&\leftarrow \\={}&\gcd(3,1)&\rightarrow \\={}&\gcd(2,1)&\rightarrow \\={}&\gcd(1,1).\end{aligned}}}
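The subtraction form of the algorithm yields the path directly. A short Python sketch (function name illustrative), using 'L' and 'R' for steps to left and right children:

```python
def stern_brocot_path(a, b):
    """Path of 'L'/'R' steps from the root 1/1 to a/b in the Stern-Brocot
    tree, read off the subtraction form of Euclid's algorithm."""
    path = []
    while a != b:
        if a > b:
            a -= b            # replacing the first number: right child
            path.append('R')
        else:
            b -= a            # replacing the second number: left child
            path.append('L')
    return ''.join(path)

assert stern_brocot_path(3, 4) == 'LRR'   # left once, then right twice
```

Note that 6/8 produces the same path 'LRR' as 3/4, illustrating that the path does not depend on whether the fraction is in lowest terms.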
The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.
=== Continued fractions ===
The Euclidean algorithm has a close relationship with continued fractions. The sequence of equations can be written in the form
{\displaystyle {\begin{aligned}{\frac {a}{b}}&=q_{0}+{\frac {r_{0}}{b}}\\{\frac {b}{r_{0}}}&=q_{1}+{\frac {r_{1}}{r_{0}}}\\{\frac {r_{0}}{r_{1}}}&=q_{2}+{\frac {r_{2}}{r_{1}}}\\&\,\,\,\vdots \\{\frac {r_{k-2}}{r_{k-1}}}&=q_{k}+{\frac {r_{k}}{r_{k-1}}}\\&\,\,\,\vdots \\{\frac {r_{N-2}}{r_{N-1}}}&=q_{N}\,.\end{aligned}}}
The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form
{\displaystyle {\frac {a}{b}}=q_{0}+{\cfrac {1}{q_{1}+{\cfrac {r_{1}}{r_{0}}}}}\,.}
The third equation may be used to substitute the denominator term r1/r0, yielding
{\displaystyle {\frac {a}{b}}=q_{0}+{\cfrac {1}{q_{1}+{\cfrac {1}{q_{2}+{\cfrac {r_{2}}{r_{1}}}}}}}\,.}
The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction
{\displaystyle {\frac {a}{b}}=q_{0}+{\cfrac {1}{q_{1}+{\cfrac {1}{q_{2}+{\cfrac {1}{\ddots +{\cfrac {1}{q_{N}}}}}}}}}=[q_{0};q_{1},q_{2},\ldots ,q_{N}]\,.}
In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written
{\displaystyle {\frac {1071}{462}}=2+{\cfrac {1}{3+{\cfrac {1}{7}}}}=[2;3,7]}
as can be confirmed by calculation.
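That calculation can be sketched in Python: the quotients of the continued fraction are exactly the quotients produced by Euclid's algorithm, and folding them back up recovers the original fraction (function names are illustrative):

```python
from fractions import Fraction

def continued_fraction(a, b):
    """Quotients [q0, q1, ..., qN] of a/b, read off Euclid's algorithm."""
    quotients = []
    while b != 0:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    return quotients

def evaluate(cf):
    """Fold a continued fraction back into a single Fraction (for checking)."""
    value = Fraction(cf[-1])
    for q in reversed(cf[:-1]):
        value = q + 1 / value
    return value

assert continued_fraction(1071, 462) == [2, 3, 7]
assert evaluate([2, 3, 7]) == Fraction(1071, 462)
```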
=== Factorization algorithms ===
Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm.
== Algorithmic efficiency ==
The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. Later, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log2 v + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number h of base-10 digits of the smaller number b.
In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also O(h). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as O(h2). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also O(h2). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD.
=== Number of steps ===
The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b). If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then
T(a, b) = T(m, n)
as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs.
The recursive nature of the Euclidean algorithm gives another equation
T(a, b) = 1 + T(b, r0) = 2 + T(r0, r1) = … = N + T(rN−2, rN−1) = N + 1
where T(x, 0) = 0 by assumption.
==== Worst-case ====
If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively. More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By the induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2,
which is the desired inequality.
This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers.
This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of digits (base 10) of the smaller number. For if the algorithm requires N steps, then b is greater than or equal to FN+1, which in turn is greater than or equal to φN−1, where φ is the golden ratio. Since b ≥ φN−1, then N − 1 ≤ logφ b. Since log10 φ > 1/5, (N − 1)/5 < log10 φ · logφ b = log10 b. Thus, N ≤ 5 log10 b; in other words, the Euclidean algorithm always needs O(h) divisions, where h is the number of digits in the smaller number b.
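Both the Fibonacci worst case and the five-digits bound can be checked numerically. A small Python sketch (illustrative names) counts division steps and verifies them for small N:

```python
def euclid_steps(a, b):
    """Number of division steps T(a, b) taken by Euclid's algorithm."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Build Fibonacci numbers F_0 = 0, F_1 = 1, F_2 = 1, ...
fib = [0, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

for n in range(1, 15):
    # Consecutive Fibonacci numbers attain the worst case:
    # gcd(F_{n+2}, F_{n+1}) takes exactly n steps.
    assert euclid_steps(fib[n + 2], fib[n + 1]) == n
    # Lame's bound: n <= 5 * (number of base-10 digits of b).
    assert n <= 5 * len(str(fib[n + 1]))
```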
==== Average ====
The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time T(a) required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1
{\displaystyle T(a)={\frac {1}{a}}\sum _{0\leq b<a}T(a,b).}
However, since T(a, b) fluctuates dramatically with the GCD of the two numbers, the averaged function T(a) is likewise "noisy".
To reduce this noise, a second average τ(a) is taken over all numbers coprime with a
{\displaystyle \tau (a)={\frac {1}{\varphi (a)}}\sum _{\begin{smallmatrix}0\leq b<a\\\gcd(a,b)=1\end{smallmatrix}}T(a,b).}
There are φ(a) coprime integers less than a, where φ is Euler's totient function. This tau average grows smoothly with a
{\displaystyle \tau (a)={\frac {12}{\pi ^{2}}}\ln 2\ln a+C+O(a^{-1/6+\varepsilon })}
with the residual error being of order a−1/6+ε, where ε is an arbitrarily small positive number. The constant C in this formula is called Porter's constant and equals
{\displaystyle C=-{\frac {1}{2}}+{\frac {6\ln 2}{\pi ^{2}}}\left(4\gamma -{\frac {24}{\pi ^{2}}}\zeta '(2)+3\ln 2-2\right)\approx 1.467}
where γ is the Euler–Mascheroni constant and ζ′ is the derivative of the Riemann zeta function. The leading coefficient (12/π2) ln 2 was determined by two independent methods.
Since the first average can be calculated from the tau average by summing over the divisors d of a
{\displaystyle T(a)={\frac {1}{a}}\sum _{d\mid a}\varphi (d)\tau (d)}
it can be approximated by the formula
{\displaystyle T(a)\approx C+{\frac {12}{\pi ^{2}}}\ln 2\,{\biggl (}{\ln a}-\sum _{d\mid a}{\frac {\Lambda (d)}{d}}{\biggr )}}
where Λ(d) is the von Mangoldt function.
A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n
{\displaystyle Y(n)={\frac {1}{n^{2}}}\sum _{a=1}^{n}\sum _{b=1}^{n}T(a,b)={\frac {1}{n}}\sum _{a=1}^{n}T(a).}
Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n)
{\displaystyle Y(n)\approx {\frac {12}{\pi ^{2}}}\ln 2\ln n+0.06.}
=== Computational expense per step ===
In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1
rk−2 = qk rk−1 + rk.
The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk
rk = rk−2 − qk rk−1.
The computational expense of dividing h-bit numbers scales as O(h(ℓ + 1)), where ℓ is the length of the quotient.
For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to q subtractions, where q is the quotient. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately ln |u/(u − 1)| where u = (q + 1)2. For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm.
Combining the estimated number of steps with the estimated computational expense per step shows that the running time of Euclid's algorithm grows quadratically (O(h2)) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1. Since the number of steps N grows linearly with h, the running time is bounded by
{\displaystyle O{\Big (}\sum _{i<N}h_{i}(h_{i}-h_{i+1}+2){\Big )}\subseteq O{\Big (}h\sum _{i<N}(h_{i}-h_{i+1}+2){\Big )}\subseteq O(h(h_{0}+2N))\subseteq O(h^{2}).}
=== Alternative methods ===
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined.
One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency.
The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. Although it also scales as O(h2), it is generally faster than the Euclidean algorithm on real computers. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.
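A minimal Python sketch of the binary algorithm (illustrative; real implementations work on machine words), using shifts and subtraction in place of division:

```python
def binary_gcd(a, b):
    """Binary GCD of nonnegative integers a and b."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the power of two common to both numbers.
    shift = 0
    while (a | b) & 1 == 0:
        a >>= 1
        b >>= 1
        shift += 1
    # Make a odd; gcd is unchanged by removing factors of 2 from one side.
    while a & 1 == 0:
        a >>= 1
    while b != 0:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a       # keep a <= b
        b -= a                # difference of two odd numbers is even
    return a << shift

assert binary_gcd(1071, 462) == 21
```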
A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as O(h (log h)2 log log h).
== Generalizations ==
Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory.
=== Rational and real numbers ===
Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his Elements. The goal of the algorithm is to identify a real number g such that two given real numbers, a and b, are integer multiples of it: a = mg and b = ng, where m and n are integers. This identification is equivalent to finding an integer relation among the real numbers a and b; that is, it determines integers s and t such that sa + tb = 0. If such an equation is possible, a and b are called commensurable lengths, otherwise they are incommensurable lengths.
The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders rk are real numbers, although the quotients qk are integers as before. Second, the algorithm is not guaranteed to end in a finite number N of steps. If it does, the fraction a/b is a rational number, i.e., the ratio of two integers
{\displaystyle {\frac {a}{b}}={\frac {mg}{ng}}={\frac {m}{n}},}
and can be written as a finite continued fraction [q0; q1, q2, ..., qN]. If the algorithm does not stop, the fraction a/b is an irrational number and can be described by an infinite continued fraction [q0; q1, q2, …]. Examples of infinite continued fractions are the golden ratio φ = [1; 1, 1, ...] and the square root of two, √2 = [1; 2, 2, ...]. The algorithm is unlikely to stop, since almost all ratios a/b of two real numbers are irrational.
An infinite continued fraction may be truncated at a step k to yield [q0; q1, q2, ..., qk], an approximation to a/b that improves as k is increased. The approximation is described by convergents mk/nk; the numerators and denominators are coprime and obey the recurrence relation
{\displaystyle {\begin{aligned}m_{k}&=q_{k}m_{k-1}+m_{k-2}\\n_{k}&=q_{k}n_{k-1}+n_{k-2},\end{aligned}}}
where m−1 = n−2 = 1 and m−2 = n−1 = 0 are the initial values of the recursion. The convergent mk/nk is the best rational number approximation to a/b with denominator nk:
{\displaystyle \left|{\frac {a}{b}}-{\frac {m_{k}}{n_{k}}}\right|<{\frac {1}{n_{k}^{2}}}.}
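The recurrence can be sketched in Python (illustrative names), seeded with the initial values m−1 = n−2 = 1 and m−2 = n−1 = 0 given above:

```python
def convergents(quotients):
    """Successive convergents (m_k, n_k) built from continued-fraction
    quotients by the recurrence m_k = q_k*m_{k-1} + m_{k-2}, likewise n_k."""
    m_prev, m = 0, 1   # m_{-2}, m_{-1}
    n_prev, n = 1, 0   # n_{-2}, n_{-1}
    out = []
    for q in quotients:
        m_prev, m = m, q * m + m_prev
        n_prev, n = n, q * n + n_prev
        out.append((m, n))
    return out

# sqrt(2) = [1; 2, 2, 2, ...] gives the approximations 1/1, 3/2, 7/5, 17/12
assert convergents([1, 2, 2, 2]) == [(1, 1), (3, 2), (7, 5), (17, 12)]
```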
=== Polynomials ===
Polynomials in a single variable x can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial g(x) of two polynomials a(x) and b(x) is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. At each step k, a quotient polynomial qk(x) and a remainder polynomial rk(x) are identified to satisfy the recursive equation
{\displaystyle r_{k-2}(x)=q_{k}(x)r_{k-1}(x)+r_{k}(x),}
where r−2(x) = a(x) and r−1(x) = b(x). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg[rk(x)] < deg[rk−1(x)]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, a(x) and b(x).
For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials
{\displaystyle {\begin{aligned}a(x)&=x^{4}-4x^{3}+4x^{2}-3x+14=(x^{2}-5x+7)(x^{2}+x+2)\qquad {\text{and}}\\b(x)&=x^{4}+8x^{3}+12x^{2}+17x+6=(x^{2}+7x+3)(x^{2}+x+2).\end{aligned}}}
Dividing a(x) by b(x) yields a remainder r0(x) = x3 + (2/3)x2 + (5/3)x − (2/3). In the next step, b(x) is divided by r0(x) yielding a remainder r1(x) = x2 + x + 2. Finally, dividing r0(x) by r1(x) yields a zero remainder, indicating that r1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization.
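This example can be reproduced with a short Python sketch of polynomial division and the polynomial Euclidean algorithm (illustrative names; polynomials are lists of coefficients, highest degree first, with exact Fraction arithmetic). Intermediate remainders may differ from those above by a constant factor, which does not affect the monic GCD:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Quotient and remainder of polynomial division."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]
        quot.append(coeff)
        # Subtract coeff * den * x^(deg num - deg den), aligned at the top.
        pad = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - coeff * b for a, b in zip(num, pad)]
        num = num[1:]  # leading term cancels
    while num and num[0] == 0:
        num = num[1:]  # strip leading zeros of the remainder
    return quot, num

def poly_gcd(a, b):
    """Last nonzero remainder of the polynomial Euclidean algorithm,
    normalized to be monic."""
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / Fraction(a[0]) for c in a]

a = [1, -4, 4, -3, 14]   # x^4 - 4x^3 + 4x^2 - 3x + 14
b = [1, 8, 12, 17, 6]    # x^4 + 8x^3 + 12x^2 + 17x + 6
assert poly_gcd(a, b) == [1, 1, 2]   # x^2 + x + 2
```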
Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined.
The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory.
Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF(p) described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials.
=== Gaussian integers ===
The Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments.
The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, we set r−2 = α and r−1 = β, and the task at each step k is to identify a quotient qk and a remainder rk such that
{\displaystyle r_{k}=r_{k-2}-q_{k}r_{k-1},}
where every remainder is strictly smaller than its predecessor: |rk| < |rk−1|. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients qk are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function f(u + vi) = u2 + v2 is defined, which converts every Gaussian integer u + vi into an ordinary integer. After each step k of the Euclidean algorithm, the norm of the remainder f(rk) is smaller than the norm of the preceding remainder, f(rk−1). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is gcd(α, β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±i.
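A Python sketch of this procedure (illustrative; Gaussian integers are represented as pairs (u, v) for u + vi, and the result is determined only up to the units ±1 and ±i):

```python
def gaussian_gcd(alpha, beta):
    """Euclid's algorithm for Gaussian integers, given as (u, v) pairs.
    Quotients round the real and imaginary parts of the exact ratio."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1])
    def mul(a, b):   # (a0 + a1 i)(b0 + b1 i)
        return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])
    def norm(a):     # f(u + vi) = u^2 + v^2
        return a[0] ** 2 + a[1] ** 2
    while norm(beta) > 0:
        # Exact ratio alpha/beta = alpha * conj(beta) / norm(beta).
        num = mul(alpha, (beta[0], -beta[1]))
        n = norm(beta)
        q = (round(num[0] / n), round(num[1] / n))
        alpha, beta = beta, sub(alpha, mul(q, beta))
    return alpha

# (1 + 3i) = (1 + i)(2 + i) and (3 + 3i) = (1 + i)(3), so the gcd is a
# unit multiple of 1 + i, which has norm 2.
g = gaussian_gcd((1, 3), (3, 3))
assert g[0] ** 2 + g[1] ** 2 == 2
```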
Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined.
=== Euclidean domains ===
A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring R and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity.
The generalized Euclidean algorithm requires a Euclidean function, i.e., a mapping f from R into the set of nonnegative integers such that, for any two nonzero elements a and b in R, there exist q and r in R such that a = qb + r and f(r) < f(b). Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. The basic principle is that each step of the algorithm reduces f inexorably; hence, if f can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member.
The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFD's are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain.
The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form x + ωy, where x and y are integers, and ω = e2iπ/n is an nth root of 1, that is, ωn = 1. Although this approach succeeds for some values of n (such as n = 3, the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals.
==== Unique factorization of quadratic integers ====
The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number ω. Thus, they have the form u + vω, where u and v are integers and ω has one of two forms, depending on a parameter D. If D does not equal a multiple of four plus one, then
{\displaystyle \omega ={\sqrt {D}}.}
If, however, D does equal a multiple of four plus one, then
{\displaystyle \omega ={\frac {1+{\sqrt {D}}}{2}}.}
If the function f corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where D is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases D = −1 and D = −3 yield the Gaussian integers and Eisenstein integers, respectively.
If f is allowed to be any Euclidean function, then the list of possible values of D for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with D = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with D > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds.
=== Noncommutative rings ===
The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = dξ and β = dη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written
{\displaystyle \rho _{0}=\alpha -\psi _{0}\beta =(\xi -\psi _{0}\eta )\delta ,}
where ψ0 represents the quotient and ρ0 the remainder. Here the quotient and remainder are chosen so that (if nonzero) the remainder has N(ρ0) < N(β) for a "Euclidean function" N defined analogously to the Euclidean functions of Euclidean domains in the non-commutative case. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ0. The analogous equation for the left divisors would be
{\displaystyle \rho _{0}=\alpha -\beta \psi _{0}=\delta (\xi -\eta \psi _{0}).}
With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ0 (formally, its Euclidean function or "norm") must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ0, so that the algorithm is guaranteed to terminate.
Many results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that
{\displaystyle \Gamma _{\text{right}}=\sigma \alpha +\tau \beta .}
The analogous identity for the left GCD is nearly the same:
{\displaystyle \Gamma _{\text{left}}=\alpha \sigma +\beta \tau .}
Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way.
== See also ==
Euclidean rhythm, a method for using the Euclidean algorithm to generate musical rhythms
== Notes ==
== References ==
== Bibliography ==
Bueso, José; Gómez-Torrecillas, José; Verschoren, Alain (2003). Algorithmic Methods in Non-Commutative Algebra: Applications to Quantum Groups. Mathematical Modelling: Theory and Applications. Vol. 17. Kluwer Academic Publishers, Dordrecht. doi:10.1007/978-94-017-0285-0. ISBN 1-4020-1402-3. MR 2006329.
Cohen, H. (1993). A Course in Computational Algebraic Number Theory. New York: Springer-Verlag. ISBN 0-387-55640-0.
Cohn, H. (1980). Advanced Number Theory. New York: Dover. ISBN 0-486-64023-X.
Cox, D.; Little, J.; O'Shea, D. (1997). Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra (2nd ed.). Springer-Verlag. ISBN 0-387-94680-2.
Crandall, R.; Pomerance, C. (2001). Prime Numbers: A Computational Perspective (1st ed.). New York: Springer-Verlag. ISBN 0-387-94777-9.
Lejeune Dirichlet, P. G. (1894). Dedekind, Richard (ed.). Vorlesungen über Zahlentheorie (Lectures on Number Theory) (in German). Braunschweig: Vieweg. LCCN 03005859. OCLC 490186017. See also Vorlesungen über Zahlentheorie.
Knuth, D. E. (1997). The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.). Addison–Wesley. ISBN 0-201-89684-2.
LeVeque, W. J. (1996) [1977]. Fundamentals of Number Theory. New York: Dover. ISBN 0-486-68906-9.
Mollin, R. A. (2008). Fundamental Number Theory with Applications (2nd ed.). Boca Raton: Chapman & Hall/CRC. ISBN 978-1-4200-6659-3.
Ore, O. (1948). Number Theory and Its History. New York: McGraw–Hill.
Rosen, K. H. (2000). Elementary Number Theory and its Applications (4th ed.). Reading, MA: Addison–Wesley. ISBN 0-201-87073-8.
Schroeder, M. (2005). Number Theory in Science and Communication (4th ed.). Springer-Verlag. ISBN 0-387-15800-6.
Stark, H. (1978). An Introduction to Number Theory. MIT Press. ISBN 0-262-69060-8.
Stillwell, J. (1997). Numbers and Geometry. New York: Springer-Verlag. ISBN 0-387-98289-2.
Stillwell, J. (2003). Elements of Number Theory. New York: Springer-Verlag. ISBN 0-387-95587-9.
Tattersall, J. J. (2005). Elementary Number Theory in Nine Chapters. Cambridge: Cambridge University Press. ISBN 978-0-521-85014-8.
== External links ==
Demonstrations of Euclid's algorithm
Weisstein, Eric W. "Euclidean Algorithm". MathWorld.
Euclid's Algorithm at cut-the-knot
Euclid's algorithm at PlanetMath.
The Euclidean Algorithm at MathPages
Euclid's Game at cut-the-knot
Music and Euclid's algorithm | Wikipedia/Euclidean_algorithm |
In algebra, a unit or invertible element of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that
{\displaystyle vu=uv=1,}
where 1 is the multiplicative identity; the element v is unique for this property and is called the multiplicative inverse of u. The set of units of R forms a group R× under multiplication, called the group of units or unit group of R. Other notations for the unit group are R∗, U(R), and E(R) (from the German term Einheit).
Less commonly, the term unit is sometimes used to refer to the element 1 of the ring, in expressions like ring with a unit or unit ring, and also unit matrix. Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of a rng.
== Examples ==
The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unity in a ring R is a unit: if rn = 1, then rn−1 is a multiplicative inverse of r.
In a nonzero ring, the element 0 is not a unit, so R× is not closed under addition.
A nonzero ring R in which every nonzero element is a unit (that is, R× = R ∖ {0}) is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers R is R ∖ {0}.
=== Integer ring ===
In the ring of integers Z, the only units are 1 and −1.
In the ring Z/nZ of integers modulo n, the units are the congruence classes (mod n) represented by integers coprime to n. They constitute the multiplicative group of integers modulo n.
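As a quick illustration (a hedged sketch, not part of the article text), the units of Z/12Z can be listed directly from the coprimality criterion:

```python
from math import gcd

def units_mod(n):
    """Representatives of the congruence classes mod n that are units,
    i.e. the integers in 1..n-1 coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

U = units_mod(12)
print(U)  # [1, 5, 7, 11] -- the multiplicative group (Z/12Z)x
# Every unit really has a multiplicative inverse mod 12:
assert all(any(a * b % 12 == 1 for b in U) for a in U)
```

For n = 12 each unit happens to be its own inverse, so (Z/12Z)× is isomorphic to the Klein four-group rather than a cyclic group.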
=== Ring of integers of a number field ===
In the ring Z[√3] obtained by adjoining the quadratic integer √3 to Z, one has (2 + √3)(2 − √3) = 1, so 2 + √3 is a unit, and so are its powers, so Z[√3] has infinitely many units.
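A short sketch (the helper names are made up) confirming that the powers of 2 + √3 stay units — their norm u² − 3v² remains 1:

```python
def mul_sqrt3(a, b):
    """Multiply u1 + v1*sqrt(3) by u2 + v2*sqrt(3) in Z[sqrt(3)], exactly."""
    (u1, v1), (u2, v2) = a, b
    return (u1*u2 + 3*v1*v2, u1*v2 + v1*u2)

def norm3(x):
    u, v = x
    return u*u - 3*v*v  # N(u + v*sqrt(3))

x = (2, 1)           # the unit 2 + sqrt(3)
p = (1, 0)
for _ in range(5):   # all its powers are units: the norm stays 1
    p = mul_sqrt3(p, x)
    assert norm3(p) == 1
print(p)  # (362, 209), i.e. (2 + sqrt(3))^5 = 362 + 209*sqrt(3)
```

The powers grow without bound while keeping norm 1, which is exactly the "infinitely many units" claim in the text.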
More generally, for the ring of integers R in a number field F, Dirichlet's unit theorem states that R× is isomorphic to the group
{\displaystyle \mathbf {Z} ^{n}\times \mu _{R}}
where μR is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit group, is
{\displaystyle n=r_{1}+r_{2}-1,}
where r1 and r2 are the number of real embeddings and the number of pairs of complex embeddings of F, respectively.
This recovers the Z[√3] example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since r1 = 2 and r2 = 0.
=== Polynomials and power series ===
For a commutative ring R, the units of the polynomial ring R[x] are the polynomials
{\displaystyle p(x)=a_{0}+a_{1}x+\dots +a_{n}x^{n}}
such that a0 is a unit in R and the remaining coefficients a1, …, an are nilpotent, i.e., satisfy aiN = 0 for some N.
In particular, if R is a domain (or more generally reduced), then the units of R[x] are the units of R.
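The criterion can be checked in the smallest interesting case, R = Z/4Z, where the coefficient 2 is nilpotent (2² = 0 mod 4). This sketch multiplies polynomials as coefficient lists mod n:

```python
def poly_mul_mod(p, q, n):
    """Multiply polynomials (coefficient lists, lowest degree first) over Z/nZ."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % n
    return out

# Over Z/4Z, 1 + 2x is a unit -- in fact its own inverse:
print(poly_mul_mod([1, 2], [1, 2], 4))  # [1, 0, 0], i.e. (1 + 2x)^2 = 1
```

Over a domain such as Z this could never happen: a degree-1 polynomial times a degree-1 polynomial has degree 2, so only constants can be units.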
The units of the power series ring R[[x]] are the power series
are the power series
{\displaystyle p(x)=\sum _{i=0}^{\infty }a_{i}x^{i}}
such that a0 is a unit in R.
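Inverting a power series with invertible constant term requires nothing more than the recursion obtained by comparing coefficients in p(x)·p(x)⁻¹ = 1, namely b₀ = a₀⁻¹ and bₘ = −a₀⁻¹ Σₖ aₖ bₘ₋ₖ. A sketch with exact rational arithmetic:

```python
from fractions import Fraction

def series_inverse(a, terms):
    """First `terms` coefficients of 1/p(x), where p = sum a_i x^i and
    a[0] is invertible. Comparing coefficients of p * (1/p) = 1 gives
    b_0 = 1/a_0 and b_m = -(1/a_0) * sum_{k>=1} a_k * b_{m-k}."""
    b = [Fraction(1, 1) / a[0]]
    for m in range(1, terms):
        s = sum(a[k] * b[m - k] for k in range(1, min(m, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# 1/(1 - x) = 1 + x + x^2 + ...
assert series_inverse([Fraction(1), Fraction(-1)], 5) == [Fraction(1)] * 5
```

Only a₀ is ever divided by, which is precisely why invertibility of a₀ alone decides whether the series is a unit.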
=== Matrix rings ===
The unit group of the ring Mn(R) of n × n matrices over a ring R is the group GLn(R) of invertible matrices. For a commutative ring R, an element A of Mn(R) is invertible if and only if the determinant of A is invertible in R. In that case, A−1 can be given explicitly in terms of the adjugate matrix.
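For the 2 × 2 case the adjugate formula is short enough to spell out. This sketch assumes R = Z, where the invertible determinants are exactly ±1:

```python
def inv_2x2_adjugate(m):
    """Inverse of a 2x2 integer matrix via A^-1 = adj(A) / det(A).
    Over Z this stays integral exactly when det(A) is a unit (+1 or -1)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    assert det in (1, -1), "det must be a unit in Z"
    adj = [[d, -b], [-c, a]]                      # the adjugate matrix
    return [[x // det for x in row] for row in adj]

A = [[2, 1], [1, 1]]        # det = 1, so A is a unit in M2(Z) = GL2(Z)
print(inv_2x2_adjugate(A))  # [[1, -1], [-1, 2]]
```

Multiplying back, [[2,1],[1,1]]·[[1,−1],[−1,2]] gives the identity, as required.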
=== In general ===
For elements x and y in a ring R, if 1 − xy is invertible, then 1 − yx is invertible with inverse 1 + y(1 − xy)−1x; this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:
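The identity can be spot-checked in the ring of 3 × 3 real matrices. A numerical sketch, assuming NumPy is available (the matrices are random, so 1 − xy is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 3))
y = rng.normal(size=(3, 3))

I = np.eye(3)
inv_1_xy = np.linalg.inv(I - x @ y)
candidate = I + y @ inv_1_xy @ x       # the claimed inverse of 1 - yx

# Check it is a two-sided inverse of 1 - yx:
assert np.allclose((I - y @ x) @ candidate, I)
assert np.allclose(candidate @ (I - y @ x), I)
```

Unlike the power-series calculation, this check involves no convergence questions; the algebraic proof expands (1 − yx)(1 + y(1 − xy)⁻¹x) directly.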
{\displaystyle (1-yx)^{-1}=\sum _{n\geq 0}(yx)^{n}=1+y\left(\sum _{n\geq 0}(xy)^{n}\right)x=1+y(1-xy)^{-1}x.}
See Hua's identity for similar results.
== Group of units ==
A commutative ring is a local ring if R ∖ R× is a maximal ideal.
As it turns out, if R ∖ R× is an ideal, then it is necessarily a maximal ideal and R is local since a maximal ideal is disjoint from R×.
If R is a finite field, then R× is a cyclic group of order |R| − 1.
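Cyclicity can be verified by brute force for a small field such as F₁₃; the helper below (an illustrative sketch) simply checks whether the powers of g exhaust all p − 1 units:

```python
def is_generator(g, p):
    """Check whether g generates the cyclic group (Z/pZ)x of order p - 1."""
    seen, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1

p = 13
gens = [g for g in range(2, p) if is_generator(g, p)]
print(gens)  # [2, 6, 7, 11] -- each generates all 12 units mod 13
```

There are φ(p − 1) = φ(12) = 4 generators, consistent with the general count of generators of a cyclic group.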
Every ring homomorphism f : R → S induces a group homomorphism R× → S×, since f maps units to units. In fact, the formation of the unit group defines a functor from the category of rings to the category of groups. This functor has a left adjoint which is the integral group ring construction.
The group scheme GL1 is isomorphic to the multiplicative group scheme Gm over any base, so for any commutative ring R, the groups GL1(R) and Gm(R) are canonically isomorphic to U(R). Note that the functor Gm (that is, R ↦ U(R)) is representable in the sense:
{\displaystyle \mathbb {G} _{m}(R)\simeq \operatorname {Hom} (\mathbb {Z} [t,t^{-1}],R)}
for commutative rings R (this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of ring homomorphisms Z[t, t−1] → R and the set of unit elements of R (in contrast, Z[t] represents the additive group Ga, i.e., the forgetful functor from the category of commutative rings to the category of abelian groups).
== Associatedness ==
Suppose that R is commutative. Elements r and s of R are called associate if there exists a unit u in R such that r = us; then write r ~ s. In any ring, pairs of additive inverse elements x and −x are associate, since any ring includes the unit −1. For example, 6 and −6 are associate in Z. In general, ~ is an equivalence relation on R.
Associatedness can also be described in terms of the action of R× on R via multiplication: Two elements of R are associate if they are in the same R×-orbit.
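The orbit description is easy to compute for R = Z/12Z, a hedged sketch (note this R is not a domain, so the orbit sizes vary):

```python
from math import gcd

n = 12
units = [u for u in range(n) if gcd(u, n) == 1]   # Rx = {1, 5, 7, 11}

# Associate classes = orbits of Rx acting on R by multiplication.
orbits = {frozenset(u * r % n for u in units) for r in range(n)}
print(sorted(sorted(o) for o in orbits))
# [[0], [1, 5, 7, 11], [2, 10], [3, 9], [4, 8], [6]]
```

The orbits of 2, 3, 4 and 6 are smaller than |R×| = 4 because those elements have nontrivial stabilizers, something that cannot happen for nonzero elements of an integral domain.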
In an integral domain, the set of associates of a given nonzero element has the same cardinality as R×.
The equivalence relation ~ can be viewed as any one of Green's semigroup relations specialized to the multiplicative semigroup of a commutative ring R.
== See also ==
S-units
Localization of a ring and a module
== Notes ==
== Citations ==
== Sources == | Wikipedia/Unit_(ring_theory) |
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing.
Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City.
== History ==
Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second-largest academic publisher with 65 staff in 1872. In 1964, Springer expanded its business internationally, opening an office in New York City. Offices in Tokyo, Paris, Milan, Hong Kong, and Delhi soon followed.
In 1999, the academic publishing company BertelsmannSpringer was formed after the media and entertainment company Bertelsmann bought a majority stake in Springer-Verlag. In 2003, the British investment groups Cinven and Candover bought BertelsmannSpringer from Bertelsmann. They merged the company in 2004 with the Dutch publisher Kluwer Academic Publishers (successor of D. Reidel, Dr. W. Junk, Plenum Publishers, most of Chapman & Hall, and Baltzer Science Publishers) which they bought from Wolters Kluwer in 2002, to form Springer Science+Business Media.
In 2006, Springer acquired Humana Press.
Springer acquired the open-access publisher BioMed Central in October 2008 for an undisclosed amount.
In 2009, Cinven and Candover sold Springer to two private equity firms, EQT AB and Government of Singapore Investment Corporation, confirmed in February 2010 after the competition authorities in the US and in Europe approved the transfer.
In 2011, Springer acquired Pharma Marketing and Publishing Services (MPS) from Wolters Kluwer.
In 2013, the London-based private equity firm BC Partners acquired a majority stake in Springer from EQT and GIC for $4.4 billion.
In January 2015, Holtzbrinck Publishing Group / Nature Publishing Group and Springer Science+Business Media announced a merger. In May 2015 they concluded the transaction and formed a new joint venture company, Springer Nature, with Holtzbrinck holding a majority 53% share and BC Partners retaining a 47% interest in the company.
== Products ==
In 1996, Springer launched electronic book and journal content on its SpringerLink site.
SpringerImages was launched in 2008. In 2009, SpringerMaterials, a platform for accessing the Landolt-Börnstein database of research and information on materials and their properties, was launched.
AuthorMapper is a free online tool for visualizing scientific research that enables document discovery based on author locations and geographic maps, helping users explore patterns in scientific research, identify literature trends, discover collaborative relationships, and locate experts in several scientific/medical fields.
Springer Protocols contained a collection of laboratory protocols, recipes that provide step-by-step instructions for conducting experiments, which in 2018 was made available in SpringerLink instead.
Book publications include major reference works, textbooks, monographs and book series; more than 168,000 titles are available as e-books in 24 subject collections.
=== Open access ===
Springer is a member of the Open Access Scholarly Publishers Association. For some of its journals, Springer does not require its authors to transfer their copyrights, and allows them to decide whether their articles are published under an open-access license or under the traditional restricted-access model. While open-access publishing typically requires the author to pay a fee for copyright retention, this fee is sometimes covered by a third party. For example, a national institution in Poland allows authors to publish in open-access journals without incurring any personal cost, the fees being covered by public funds.
== Controversies ==
In 1938, Springer-Verlag was pressed to apply Nazi principles on the journal Zentralblatt MATH. Tullio Levi-Civita, who was Jewish, was forced out from the editorial board, and Otto Neugebauer resigned in protest along with most of the rest of the board.
In 2014, it was revealed that 16 papers in conference proceedings published by Springer had been computer-generated using SCIgen. Springer subsequently retracted all papers from these proceedings. IEEE had removed more than 100 fake papers from its conference proceedings.
In 2015, Springer retracted 64 papers from 10 of its journals it had published after a fraudulent peer review process was uncovered.
=== Manipulation of bibliometrics ===
According to Goodhart's law and concerned academics like the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics like the journal impact factor, which is often used as a proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics.
Seven Springer Nature journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total.
== Selected imprints ==
== Selected publications ==
Cellular Oncology
Encyclopaedia of Mathematics
Ergebnisse der Mathematik und ihrer Grenzgebiete (book series)
Graduate Texts in Mathematics (book series)
Grothendieck's Séminaire de géométrie algébrique
The International Journal of Advanced Manufacturing Technology
Lecture Notes in Computer Science
Undergraduate Texts in Mathematics (book series)
Zentralblatt MATH
MRS Bulletin
== See also ==
Category:Springer Science+Business Media academic journals
List of publishers
Media concentration
== References ==
== External links ==
Official website
Mary H. Munroe (2004). "Springer Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on 2014-10-20 – via Northern Illinois University. | Wikipedia/Springer_Science_and_Business_Media |
In mathematics, class field theory (CFT) is the fundamental branch of algebraic number theory whose goal is to describe all the abelian Galois extensions of local and global fields using objects associated to the ground field.
Hilbert is credited as one of the pioneers of the notion of a class field. However, the notion was already familiar to Kronecker, and it was actually Weber who coined the term before Hilbert's fundamental papers came out. The relevant ideas were developed over a period of several decades, giving rise to a set of conjectures by Hilbert that were subsequently proved by Takagi and Artin (with the help of Chebotarev's theorem).
One of the major results is: given a number field F, and writing K for the maximal abelian unramified extension of F, the Galois group of K over F is canonically isomorphic to the ideal class group of F. This statement was generalized to the so called Artin reciprocity law; in the idelic language, writing CF for the idele class group of F, and taking L to be any finite abelian extension of F, this law gives a canonical isomorphism
{\displaystyle \theta _{L/F}:C_{F}/{N_{L/F}(C_{L})}\to \operatorname {Gal} (L/F),}
where NL/F denotes the idelic norm map from L to F. This isomorphism is named the reciprocity map.
The existence theorem states that the reciprocity map can be used to give a bijection between the set of abelian extensions of F and the set of closed subgroups of finite index of CF.
A standard method for developing global class field theory since the 1930s was to construct local class field theory, which describes abelian extensions of local fields, and then use it to construct global class field theory. This was first done by Emil Artin and Tate using the theory of group cohomology, and in particular by developing the notion of class formations. Later, Neukirch found a proof of the main statements of global class field theory without using cohomological ideas. His method was explicit and algorithmic.
Inside class field theory one can distinguish special class field theory and general class field theory.
Explicit class field theory provides an explicit construction of maximal abelian extensions of a number field in various situations. This portion of the theory consists of the Kronecker–Weber theorem, which can be used to construct the abelian extensions of Q, and the theory of complex multiplication, which constructs abelian extensions of CM-fields.
There are three main generalizations of class field theory: higher class field theory, the Langlands program (or 'Langlands correspondences'), and anabelian geometry.
== Formulation in contemporary language ==
In modern mathematical language, class field theory (CFT) can be formulated as follows. Consider the maximal abelian extension A of a local or global field K. It is of infinite degree over K; the Galois group G of A over K is an infinite profinite group, so a compact topological group, and it is abelian. The central aims of class field theory are: to describe G in terms of certain appropriate topological objects associated to K, to describe finite abelian extensions of K in terms of open subgroups of finite index in the topological object associated to K. In particular, one wishes to establish a one-to-one correspondence between finite abelian extensions of K and their norm groups in this topological object for K. This topological object is the multiplicative group in the case of local fields with finite residue field and the idele class group in the case of global fields. The finite abelian extension corresponding to an open subgroup of finite index is called the class field for that subgroup, which gave the name to the theory.
The fundamental result of general class field theory states that the group G is naturally isomorphic to the profinite completion of CK, the multiplicative group of a local field or the idele class group of the global field, with respect to the natural topology on CK related to the specific structure of the field K. Equivalently, for any finite Galois extension L of K, there is an isomorphism (the Artin reciprocity map)
{\displaystyle \operatorname {Gal} (L/K)^{\operatorname {ab} }\to C_{K}/N_{L/K}(C_{L})}
of the abelianization of the Galois group of the extension with the quotient of the idele class group of K by the image of the norm of the idele class group of L.
For some small fields, such as the field of rational numbers Q or its quadratic imaginary extensions, there is a more detailed, very explicit but too specific theory which provides more information. For example, the abelianized absolute Galois group G of Q
is (naturally isomorphic to) an infinite product of the group of units of the p-adic integers taken over all prime numbers p, and the corresponding maximal abelian extension of the rationals is the field generated by all roots of unity. This is known as the Kronecker–Weber theorem, originally conjectured by Leopold Kronecker. In this case the reciprocity isomorphism of class field theory (or Artin reciprocity map) also admits an explicit description due to the Kronecker–Weber theorem. However, principal constructions of such more detailed theories for small algebraic number fields are not extendable to the general case of algebraic number fields, and different conceptual principles are in use in the general class field theory.
The standard method to construct the reciprocity homomorphism is to first construct the local reciprocity isomorphism from the multiplicative group of the completion of a global field to the Galois group of its maximal abelian extension (this is done inside local class field theory) and then prove that the product of all such local reciprocity maps when defined on the idele group of the global field is trivial on the image of the multiplicative group of the global field. The latter property is called the global reciprocity law and is a far reaching generalization of the Gauss quadratic reciprocity law.
One of the methods to construct the reciprocity homomorphism uses class formation which derives class field theory from axioms of class field theory. This derivation is purely topological group theoretical, while to establish the axioms one has to use the ring structure of the ground field.
There are methods which use cohomology groups, in particular the Brauer group, and there are methods which do not use cohomology groups and are very explicit and fruitful for applications.
== History ==
The origins of class field theory lie in the quadratic reciprocity law proved by Gauss. The generalization took place as a long-term historical project, involving quadratic forms and their 'genus theory', work of Ernst Kummer and Leopold Kronecker/Kurt Hensel on ideals and completions, the theory of cyclotomic and Kummer extensions.
The first two class field theories were very explicit cyclotomic and complex multiplication class field theories. They used additional structures: in the case of the field of rational numbers they use roots of unity; in the case of imaginary quadratic extensions of the field of rational numbers they use elliptic curves with complex multiplication and their points of finite order. Much later, the theory of Shimura provided another very explicit class field theory for a class of algebraic number fields. In positive characteristic p, Kawada and Satake used Witt duality to get a very easy description of the p-part of the reciprocity homomorphism.
However, these very explicit theories could not be extended to more general number fields. General class field theory used different concepts and constructions which work over every global field.
The famous problems of David Hilbert stimulated further development, which led to the reciprocity laws and proofs by Teiji Takagi, Philipp Furtwängler, Emil Artin, Helmut Hasse and many others. The crucial Takagi existence theorem was known by 1920 and all the main results by about 1930. One of the last classical conjectures to be proved was the principalisation property. The first proofs of class field theory used substantial analytic methods. The 1930s and subsequent decades saw the increasing use of infinite extensions and Wolfgang Krull's theory of their Galois groups. This combined with Pontryagin duality to give a clearer, if more abstract, formulation of the central result, the Artin reciprocity law. An important step was the introduction of ideles by Claude Chevalley in the 1930s to replace ideal classes, essentially clarifying and simplifying the description of abelian extensions of global fields. Most of the central results were proved by 1940.
Later the results were reformulated in terms of group cohomology, which became a standard way to learn class field theory for several generations of number theorists. One drawback of the cohomological method is its relative inexplicitness. As the result of local contributions by Bernard Dwork, John Tate, Michiel Hazewinkel and a local and global reinterpretation by Jürgen Neukirch and also in relation to the work on explicit reciprocity formulas by many mathematicians, a very explicit and cohomology-free presentation of class field theory was established in the 1990s. (See, for example, Class Field Theory by Neukirch.)
== Applications ==
Class field theory is used to prove Artin-Verdier duality. Very explicit class field theory is used in many subareas of algebraic number theory such as Iwasawa theory and Galois modules theory.
Most main achievements toward the Langlands correspondence for number fields, the BSD conjecture for number fields, and Iwasawa theory for number fields use very explicit but narrow class field theory methods or their generalizations. The open question is therefore to use generalizations of general class field theory in these three directions.
== Generalizations of class field theory ==
There are three main generalizations, each of great interest. They are: the Langlands program, anabelian geometry, and higher class field theory.
Often, the Langlands correspondence is viewed as a nonabelian class field theory. If or when it is fully established, it would contain a certain theory of nonabelian Galois extensions of global fields. However, the Langlands correspondence does not include as much arithmetical information about finite Galois extensions as class field theory does in the abelian case. It also does not include an analog of the existence theorem in class field theory: the concept of class fields is absent in the Langlands correspondence. There are several other nonabelian theories, local and global, which provide alternatives to the Langlands correspondence point of view.
Another generalization of class field theory is anabelian geometry, which studies algorithms to restore the original object (e.g. a number field or a hyperbolic curve over it) from the knowledge of its full absolute Galois group or algebraic fundamental group.
Another natural generalization is higher class field theory, divided into higher local class field theory and higher global class field theory. It describes abelian extensions of higher local fields and higher global fields. The latter arise as function fields of schemes of finite type over the integers and their appropriate localizations and completions. The theory uses algebraic K-theory, and the appropriate Milnor K-groups generalize the K1 used in one-dimensional class field theory.
== See also ==
Frobenioid
== Citations ==
== References == | Wikipedia/Class_field_theory |
In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros. For functions from the real numbers to real numbers or from the complex numbers to the complex numbers, these are expressed either as floating-point numbers without error bounds or as floating-point values together with error bounds. The latter, approximations with error bounds, are equivalent to small isolating intervals for real roots or disks for complex roots.
Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) – g(x). Thus root-finding algorithms can be used to solve any equation of continuous functions. However, most root-finding algorithms do not guarantee that they will find all roots of a function, and if such an algorithm does not find any root, that does not necessarily mean that no root exists.
Most numerical root-finding methods are iterative methods, producing a sequence of numbers that ideally converges towards a root as a limit. They require one or more initial guesses of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points and for converging rapidly to these fixed points.
The behavior of general root-finding algorithms is studied in numerical analysis. However, for polynomials specifically, the study of root-finding algorithms belongs to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency and applicability of an algorithm may depend sensitively on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed and for locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root within each interval (or disk).
== Bracketing methods ==
Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, then a root is considered found. These generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at the end points of the interval. However, in the case of polynomials there are other methods such as Descartes' rule of signs, Budan's theorem and Sturm's theorem for bounding or determining the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which find all real roots with a guaranteed accuracy.
=== Bisection method ===
The simplest root-finding algorithm is the bisection method. Let f be a continuous function for which one knows an interval [a, b] such that f(a) and f(b) have opposite signs (a bracket). Let c = (a + b)/2 be the middle of the interval (the midpoint, or the point that bisects the interval). Then either f(a) and f(c), or f(c) and f(b), have opposite signs, and the size of the bracket has been halved. Although the bisection method is robust, it gains exactly one bit of accuracy with each iteration. Therefore, the number of function evaluations required for finding an ε-approximate root is
{\displaystyle \log _{2}{\frac {b-a}{\varepsilon }}}. Other methods, under appropriate conditions, can gain accuracy faster.
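The halving loop described above can be sketched in a few lines of Python; the function, bracket, and tolerance names here are illustrative, not from the article:

```python
def bisect(f, a, b, eps=1e-12):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > eps:
        c = (a + b) / 2          # midpoint: each pass halves the bracket
        fc = f(c)
        if fa * fc <= 0:         # sign change in [a, c]
            b, fb = c, fc
        else:                    # sign change in [c, b]
            a, fa = c, fc
    return (a + b) / 2

# The root of x^2 - 2 in [1, 2] is sqrt(2).
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each pass costs one function evaluation and shrinks the bracket by a factor of two, matching the log2((b − a)/ε) evaluation count above.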
=== False position (regula falsi) ===
The false position method, also called the regula falsi method, is similar to the bisection method, but instead of using the midpoint of the interval it uses the x-intercept of the line through the points (a, f(a)) and (b, f(b)), that is
{\displaystyle c={\frac {af(b)-bf(a)}{f(b)-f(a)}}.}
False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method. However, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for f(c). Typically, this may occur if the derivative of f is large in the neighborhood of the root.
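A minimal sketch of regula falsi (illustrative names, not from the article) keeps the bracket while replacing one endpoint with the secant-line x-intercept given above:

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: keep a bracket [a, b], replacing one end with the
    x-intercept of the line through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant x-intercept
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:    # root in [a, c]: move b
            b, fb = c, fc
        else:              # root in [c, b]: move a
            a, fa = c, fc
    return c

# Root of x^3 - x - 2 on [1, 2].
root = false_position(lambda x: x ** 3 - x - 2, 1.0, 2.0)
```

Note that, unlike the secant method, one endpoint of the bracket can be retained for many iterations, which is what degrades convergence in the naive form mentioned above.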
== Interpolation ==
Many root-finding processes work by interpolation. This consists in using the last computed approximate values of the root for approximating the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.
Interpolating two values yields a line: a polynomial of degree one. This is the basis of the secant method. Regula falsi is also an interpolation method that interpolates two points at a time but it differs from the secant method by using two points that are not necessarily the last two computed points. Three values define a parabolic curve: a quadratic function. This is the basis of Muller's method.
== Iterative methods ==
Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root for getting a new approximation. The iteration stops when a fixed point of the auxiliary function is reached to the desired precision, i.e., when a new computed value is sufficiently close to the preceding ones.
=== Newton's method (and similar derivative-based methods) ===
Newton's method assumes the function f to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method; its order of convergence is usually quadratic whereas the bisection method's is linear. Newton's method is also important because it readily generalizes to higher-dimensional problems. Householder's methods are a class of Newton-like methods with higher orders of convergence. The first one after Newton's method is Halley's method with cubic order of convergence.
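As a concrete sketch (illustrative names, not from the article), the Newton iteration x_{n+1} = x_n − f(x_n)/f′(x_n) can be written as:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge; try a closer starting point")

# sqrt(2) as the positive root of f(x) = x^2 - 2, with f'(x) = 2x.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles at each step once the iterate is close to the root.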
=== Secant method ===
Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the golden ratio, approximately 1.62). A generalization of the secant method in higher dimensions is Broyden's method.
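Replacing f′ with the finite difference through the last two iterates gives the following sketch (illustrative names, not from the article):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's iteration with f' replaced by the
    finite difference through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                # flat secant: cannot proceed
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Only one new function evaluation is needed per step, which is why the secant method can beat Newton's method in wall-clock time when the derivative is expensive.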
=== Steffensen's method ===
If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton's method but does not require a derivative.
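One common form of Steffensen's method uses the divided difference g(x) = (f(x + f(x)) − f(x))/f(x) in place of the derivative; the sketch below (illustrative names, not from the article) follows that form:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's method: Newton-like iteration with the derivative
    replaced by g(x) = (f(x + f(x)) - f(x)) / f(x); no f' needed."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        g = (f(x + fx) - fx) / fx   # first-order estimate of f'(x)
        if g == 0:
            return x                # cannot improve further
        step = fx / g
        x -= step
        if abs(step) < tol:
            return x
    return x

root = steffensen(lambda x: x * x - 2, 1.5)
```

Like Newton's method the convergence is quadratic, but each step costs two function evaluations instead of one function and one derivative evaluation.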
=== Fixed point iteration method ===
We can use fixed-point iteration to find the root of a function. Given a function {\displaystyle f(x)} which we have set to zero to find the root ({\displaystyle f(x)=0}), we rewrite the equation in terms of {\displaystyle x} so that {\displaystyle f(x)=0} becomes {\displaystyle x=g(x)} (note that there are often many possible functions {\displaystyle g(x)} for each function {\displaystyle f(x)=0}). Next, we relabel the two sides of the equation as {\displaystyle x_{n+1}=g(x_{n})} so that we can perform the iteration. Finally, we pick a starting value {\displaystyle x_{1}} and perform the iteration until it converges towards a root of the function. If the iteration converges, it will converge to a root. The iteration will only converge if {\displaystyle |g'({\text{root}})|<1}.
As an example of converting {\displaystyle f(x)=0} to {\displaystyle x=g(x)}, given the function {\displaystyle f(x)=x^{2}+x-1}, we can rewrite it as any of the following iterations: {\displaystyle x_{n+1}=(1/x_{n})-1}, {\displaystyle x_{n+1}=1/(x_{n}+1)}, {\displaystyle x_{n+1}=1-x_{n}^{2}}, {\displaystyle x_{n+1}=x_{n}^{2}+2x_{n}-1}, or {\displaystyle x_{n+1}=\pm {\sqrt {1-x_{n}}}}.
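As a sketch (illustrative names, not from the article), iterating the rewriting x = 1/(x + 1) of f(x) = x² + x − 1 converges, since |g′(x)| = 1/(x + 1)² < 1 near the positive root:

```python
def fixed_point(g, x, tol=1e-12, max_iter=500):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x = 1/(x + 1) converges to the positive root of x^2 + x - 1,
# namely (sqrt(5) - 1)/2 ≈ 0.618.
root = fixed_point(lambda x: 1 / (x + 1), x=1.0)
```

By contrast, the rewriting x_{n+1} = x_n² + 2x_n − 1 has |g′| > 1 at the same root, so that choice of g diverges from the same starting point.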
=== Inverse interpolation ===
The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
== Combinations of methods ==
=== Brent's method ===
Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
=== Ridders' method ===
Ridders' method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence while guaranteeing that at most twice as many iterations as the bisection method are needed.
== Roots of polynomials ==
== Finding roots in higher dimensions ==
The bisection method has been generalized to higher dimensions; these methods are called generalized bisection methods. At each iteration, the domain is partitioned into two parts, and the algorithm decides, based on a small number of function evaluations, which of these two parts must contain a root. In one dimension, the criterion for the decision is that the function has opposite signs at the two endpoints. The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily and guarantees the existence of a root.
The Poincaré–Miranda theorem gives a criterion for the existence of a root in a rectangle, but it is hard to verify because it requires evaluating the function on the entire boundary of the rectangle.
Another criterion is given by a theorem of Kronecker. It says that, if the topological degree of a function f on a rectangle is non-zero, then the rectangle must contain at least one root of f. This criterion is the basis for several root-finding methods, such as those of Stenger and Kearfott. However, computing the topological degree can be time-consuming.
A third criterion is based on a characteristic polyhedron. This criterion is used by a method called Characteristic Bisection. It does not require computing the topological degree; it only requires computing the signs of function values. The number of required evaluations is at least {\displaystyle \log _{2}(D/\epsilon )}, where D is the length of the longest edge of the characteristic polyhedron. Note that Vrahatis and Iordanidis prove a lower bound on the number of evaluations, and not an upper bound.
A fourth method uses an intermediate value theorem on simplices. Again, no upper bound on the number of queries is given.
== See also ==
== References ==
== Further reading ==
Victor Yakovlevich Pan: "Solving a Polynomial Equation: Some History and Recent Progress", SIAM Review, Vol.39, No.2, pp.187-220 (June, 1997).
John Michael McNamee: Numerical Methods for Roots of Polynomials - Part I, Elsevier, ISBN 978-0-444-52729-5 (2007).
John Michael McNamee and Victor Yakovlevich Pan: Numerical Methods for Roots of Polynomials - Part II, Elsevier, ISBN 978-0-444-52730-1 (2013).
In mathematics, Sendov's conjecture, sometimes also called Ilieff's conjecture, concerns the relationship between the locations of roots and critical points of a polynomial function of a complex variable. It is named after Blagovest Sendov.
The conjecture states that for a polynomial
{\displaystyle f(z)=(z-r_{1})\cdots (z-r_{n}),\qquad (n\geq 2)}
with all roots r1, ..., rn inside the closed unit disk |z| ≤ 1, each of the n roots is at a distance no more than 1 from at least one critical point.
The Gauss–Lucas theorem says that all of the critical points lie within the convex hull of the roots. It follows that the critical points must be within the unit disk, since the roots are.
The conjecture has been proven for n < 9 by Brown and Xiang, and for sufficiently large n by Tao.
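The extremal case f(z) = z^n − 1 illustrates the statement and can be checked numerically; the script below (an illustration, not from the article) uses n = 7:

```python
import cmath

# For f(z) = z^n - 1, the roots are the n-th roots of unity (all on the
# unit circle) and the only critical point is z = 0, since
# f'(z) = n z^(n-1) vanishes only there. Every root is therefore at
# distance exactly 1 from a critical point -- the extremal case of the
# conjectured bound.
n = 7
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
critical_points = [0.0]
distances = [min(abs(r - c) for c in critical_points) for r in roots]
```

Any polynomial with a root on the unit circle and all critical points at the origin shows that the constant 1 in the conjecture cannot be improved.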
== History ==
The conjecture was first proposed by Blagovest Sendov in 1959; he described the conjecture to his colleague Nikola Obreshkov. In 1967 the conjecture was misattributed to Ljubomir Iliev by Walter Hayman. In 1969 Meir and Sharma proved the conjecture for polynomials with n < 6. In 1991 Brown proved the conjecture for n < 7. Borcea extended the proof to n < 8 in 1996. Brown and Xiang proved the conjecture for n < 9 in 1999. Terence Tao proved the conjecture for sufficiently large n in 2020.
== References ==
G. Schmeisser, "The Conjectures of Sendov and Smale," Approximation Theory: A Volume Dedicated to Blagovest Sendov (B. Bojanov, ed.), Sofia: DARBA, 2002, pp. 353–369.
== External links ==
Sendov's Conjecture by Bruce Torrence with contributions from Paul Abbott at The Wolfram Demonstrations Project
In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives (differentiability class) it has over its domain.
A function of class {\displaystyle C^{k}} is a function of smoothness at least k; that is, a function of class {\displaystyle C^{k}} is a function that has a kth derivative that is continuous in its domain.
A function of class {\displaystyle C^{\infty }} or {\displaystyle C^{\infty }}-function (pronounced C-infinity function) is an infinitely differentiable function, that is, a function that has derivatives of all orders (this implies that all these derivatives are continuous).
Generally, the term smooth function refers to a {\displaystyle C^{\infty }}-function. However, it may also mean "sufficiently differentiable" for the problem under consideration.
== Differentiability classes ==
Differentiability class is a classification of functions according to the properties of their derivatives. It is a measure of the highest order of derivative that exists and is continuous for a function.
Consider an open set {\displaystyle U} on the real line and a function {\displaystyle f} defined on {\displaystyle U} with real values. Let k be a non-negative integer. The function {\displaystyle f} is said to be of differentiability class {\displaystyle C^{k}} if the derivatives {\displaystyle f',f'',\dots ,f^{(k)}} exist and are continuous on {\displaystyle U.} If {\displaystyle f} is {\displaystyle k}-differentiable on {\displaystyle U,} then it is at least in the class {\displaystyle C^{k-1}} since {\displaystyle f',f'',\dots ,f^{(k-1)}} are continuous on {\displaystyle U.} The function {\displaystyle f} is said to be infinitely differentiable, smooth, or of class {\displaystyle C^{\infty },} if it has derivatives of all orders on {\displaystyle U.} (So all these derivatives are continuous functions over {\displaystyle U.}) The function {\displaystyle f} is said to be of class {\displaystyle C^{\omega },} or analytic, if {\displaystyle f} is smooth (i.e., {\displaystyle f} is in the class {\displaystyle C^{\infty }}) and its Taylor series expansion around any point in its domain converges to the function in some neighborhood of the point. There exist functions that are smooth but not analytic; {\displaystyle C^{\omega }} is thus strictly contained in {\displaystyle C^{\infty }.} Bump functions are examples of functions that are smooth but not analytic.
To put it differently, the class {\displaystyle C^{0}} consists of all continuous functions. The class {\displaystyle C^{1}} consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a {\displaystyle C^{1}} function is exactly a function whose derivative exists and is of class {\displaystyle C^{0}.} In general, the classes {\displaystyle C^{k}} can be defined recursively by declaring {\displaystyle C^{0}} to be the set of all continuous functions, and declaring {\displaystyle C^{k}} for any positive integer {\displaystyle k} to be the set of all differentiable functions whose derivative is in {\displaystyle C^{k-1}.} In particular, {\displaystyle C^{k}} is contained in {\displaystyle C^{k-1}} for every {\displaystyle k>0,} and there are examples to show that this containment is strict ({\displaystyle C^{k}\subsetneq C^{k-1}}). The class {\displaystyle C^{\infty }} of infinitely differentiable functions is the intersection of the classes {\displaystyle C^{k}} as {\displaystyle k} varies over the non-negative integers.
=== Examples ===
==== Example: continuous (C0) but not differentiable ====
The function
{\displaystyle f(x)={\begin{cases}x&{\text{if }}x\geq 0,\\0&{\text{if }}x<0\end{cases}}} is continuous, but not differentiable at x = 0, so it is of class C0 but not of class C1.
==== Example: finitely-times differentiable (Ck) ====
For each even integer k, the function
{\displaystyle f(x)=|x|^{k+1}} is continuous and k times differentiable at all x. At x = 0, however, {\displaystyle f} is not (k + 1) times differentiable, so {\displaystyle f} is of class Ck, but not of class Cj for any j > k.
==== Example: differentiable but not continuously differentiable (not C1) ====
The function
{\displaystyle g(x)={\begin{cases}x^{2}\sin {\left({\tfrac {1}{x}}\right)}&{\text{if }}x\neq 0,\\0&{\text{if }}x=0\end{cases}}} is differentiable, with derivative {\displaystyle g'(x)={\begin{cases}-{\mathord {\cos \left({\tfrac {1}{x}}\right)}}+2x\sin \left({\tfrac {1}{x}}\right)&{\text{if }}x\neq 0,\\0&{\text{if }}x=0.\end{cases}}} Because {\displaystyle \cos(1/x)} oscillates as x → 0, {\displaystyle g'(x)} is not continuous at zero. Therefore, {\displaystyle g(x)} is differentiable but not of class C1.
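The failure of continuity of g′ can be seen numerically; this short check (illustrative, not part of the article) confirms that the difference quotient of g at 0 tends to 0 while g′ keeps oscillating between values near −1 and 1 arbitrarily close to 0:

```python
import math

def g(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

def g_prime(x):
    # derivative away from 0: 2x sin(1/x) - cos(1/x)
    return 2 * x * math.sin(1 / x) - math.cos(1 / x) if x != 0 else 0.0

# g is differentiable at 0: the difference quotient g(h)/h = h sin(1/h)
# is bounded by |h| and tends to 0 ...
quotients = [g(h) / h for h in (1e-3, 1e-5, 1e-7)]

# ... but g' has no limit at 0: at x = 1/(k*pi) it is essentially
# -cos(k*pi) = ±1, alternating with k.
samples = [g_prime(1 / (k * math.pi)) for k in range(1000, 1006)]
```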
==== Example: differentiable but not Lipschitz continuous ====
The function
{\displaystyle h(x)={\begin{cases}x^{4/3}\sin {\left({\tfrac {1}{x}}\right)}&{\text{if }}x\neq 0,\\0&{\text{if }}x=0\end{cases}}} is differentiable but its derivative is unbounded on a compact set. Therefore, {\displaystyle h} is an example of a function that is differentiable but not locally Lipschitz continuous.
==== Example: analytic (Cω) ====
The exponential function {\displaystyle e^{x}} is analytic, and hence falls into the class Cω (where ω is the smallest transfinite ordinal). The trigonometric functions are also analytic wherever they are defined, because they are linear combinations of the complex exponential functions {\displaystyle e^{ix}} and {\displaystyle e^{-ix}}.
==== Example: smooth (C∞) but not analytic (Cω) ====
The bump function
{\displaystyle f(x)={\begin{cases}e^{-{\frac {1}{1-x^{2}}}}&{\text{ if }}|x|<1,\\0&{\text{ otherwise }}\end{cases}}} is smooth, so of class C∞, but it is not analytic at x = ±1, and hence is not of class Cω. The function f is an example of a smooth function with compact support.
=== Multivariate differentiability classes ===
A function {\displaystyle f:U\subseteq \mathbb {R} ^{n}\to \mathbb {R} } defined on an open set {\displaystyle U} of {\displaystyle \mathbb {R} ^{n}} is said to be of class {\displaystyle C^{k}} on {\displaystyle U}, for a positive integer {\displaystyle k}, if all partial derivatives {\displaystyle {\frac {\partial ^{\alpha }f}{\partial x_{1}^{\alpha _{1}}\,\partial x_{2}^{\alpha _{2}}\,\cdots \,\partial x_{n}^{\alpha _{n}}}}(y_{1},y_{2},\ldots ,y_{n})} exist and are continuous, for all non-negative integers {\displaystyle \alpha _{1},\alpha _{2},\ldots ,\alpha _{n}} such that {\displaystyle \alpha =\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}\leq k}, and for every {\displaystyle (y_{1},y_{2},\ldots ,y_{n})\in U}. Equivalently, {\displaystyle f} is of class {\displaystyle C^{k}} on {\displaystyle U} if the {\displaystyle k}-th order Fréchet derivative of {\displaystyle f} exists and is continuous at every point of {\displaystyle U}. The function {\displaystyle f} is said to be of class {\displaystyle C} or {\displaystyle C^{0}} if it is continuous on {\displaystyle U}. Functions of class {\displaystyle C^{1}} are also said to be continuously differentiable.
A function {\displaystyle f:U\subset \mathbb {R} ^{n}\to \mathbb {R} ^{m}}, defined on an open set {\displaystyle U} of {\displaystyle \mathbb {R} ^{n}}, is said to be of class {\displaystyle C^{k}} on {\displaystyle U}, for a positive integer {\displaystyle k}, if all of its components {\displaystyle f_{i}(x_{1},x_{2},\ldots ,x_{n})=(\pi _{i}\circ f)(x_{1},x_{2},\ldots ,x_{n})=\pi _{i}(f(x_{1},x_{2},\ldots ,x_{n})){\text{ for }}i=1,2,3,\ldots ,m} are of class {\displaystyle C^{k}}, where {\displaystyle \pi _{i}} are the natural projections {\displaystyle \pi _{i}:\mathbb {R} ^{m}\to \mathbb {R} } defined by {\displaystyle \pi _{i}(x_{1},x_{2},\ldots ,x_{m})=x_{i}}. It is said to be of class {\displaystyle C} or {\displaystyle C^{0}} if it is continuous, or equivalently, if all components {\displaystyle f_{i}} are continuous on {\displaystyle U}.
=== The space of Ck functions ===
Let {\displaystyle D} be an open subset of the real line. The set of all {\displaystyle C^{k}} real-valued functions defined on {\displaystyle D} is a Fréchet vector space, with the countable family of seminorms {\displaystyle p_{K,m}=\sup _{x\in K}\left|f^{(m)}(x)\right|} where {\displaystyle K} varies over an increasing sequence of compact sets whose union is {\displaystyle D}, and {\displaystyle m=0,1,\dots ,k}.
The set of {\displaystyle C^{\infty }} functions over {\displaystyle D} also forms a Fréchet space. One uses the same seminorms as above, except that {\displaystyle m} is allowed to range over all non-negative integer values.
The above spaces occur naturally in applications where functions having derivatives of certain orders are necessary; however, particularly in the study of partial differential equations, it can sometimes be more fruitful to work instead with the Sobolev spaces.
== Continuity ==
The terms parametric continuity (Ck) and geometric continuity (Gn) were introduced by Brian Barsky, to show that the smoothness of a curve could be measured by removing restrictions on the speed with which the parameter traces out the curve.
=== Parametric continuity ===
Parametric continuity (Ck) is a concept applied to parametric curves, which describes the smoothness of the parameter's value with distance along the curve. A (parametric) curve {\displaystyle s:[0,1]\to \mathbb {R} ^{n}} is said to be of class Ck if {\displaystyle \textstyle {\frac {d^{k}s}{dt^{k}}}} exists and is continuous on {\displaystyle [0,1]}, where derivatives at the end-points 0 and 1 are taken to be one-sided derivatives (from the right at 0 and from the left at 1).
As a practical application of this concept, a curve describing the motion of an object with a parameter of time must have C1 continuity, and its first derivative must be differentiable, for the object to have finite acceleration. For smoother motion, such as that of a camera's path while making a film, higher orders of parametric continuity are required.
==== Order of parametric continuity ====
The various orders of parametric continuity can be described as follows:
{\displaystyle C^{0}}: zeroth derivative is continuous (curves are continuous)
{\displaystyle C^{1}}: zeroth and first derivatives are continuous
{\displaystyle C^{2}}: zeroth, first, and second derivatives are continuous
{\displaystyle C^{n}}: zeroth through {\displaystyle n}-th derivatives are continuous
=== Geometric continuity ===
A curve or surface can be described as having {\displaystyle G^{n}} continuity, with {\displaystyle n} being the increasing measure of smoothness. Consider the segments either side of a point on a curve:
{\displaystyle G^{0}}: The curves touch at the join point.
{\displaystyle G^{1}}: The curves also share a common tangent direction at the join point.
{\displaystyle G^{2}}: The curves also share a common center of curvature at the join point.
In general, {\displaystyle G^{n}} continuity exists if the curves can be reparameterized to have {\displaystyle C^{n}} (parametric) continuity. A reparametrization of the curve is geometrically identical to the original; only the parameter is affected.
Equivalently, two vector functions {\displaystyle f(t)} and {\displaystyle g(t)} such that {\displaystyle f(1)=g(0)} have {\displaystyle G^{n}} continuity at the point where they meet if they satisfy equations known as Beta-constraints. For example, the Beta-constraints for {\displaystyle G^{4}} continuity are:
{\displaystyle {\begin{aligned}g^{(1)}(0)&=\beta _{1}f^{(1)}(1)\\g^{(2)}(0)&=\beta _{1}^{2}f^{(2)}(1)+\beta _{2}f^{(1)}(1)\\g^{(3)}(0)&=\beta _{1}^{3}f^{(3)}(1)+3\beta _{1}\beta _{2}f^{(2)}(1)+\beta _{3}f^{(1)}(1)\\g^{(4)}(0)&=\beta _{1}^{4}f^{(4)}(1)+6\beta _{1}^{2}\beta _{2}f^{(3)}(1)+(4\beta _{1}\beta _{3}+3\beta _{2}^{2})f^{(2)}(1)+\beta _{4}f^{(1)}(1)\\\end{aligned}}}
where {\displaystyle \beta _{2}}, {\displaystyle \beta _{3}}, and {\displaystyle \beta _{4}} are arbitrary, but {\displaystyle \beta _{1}} is constrained to be positive.
In the case {\displaystyle n=1}, this reduces to {\displaystyle f'(1)\neq 0} and {\displaystyle f'(1)=kg'(0)} for a scalar {\displaystyle k>0} (i.e., the direction, but not necessarily the magnitude, of the two vectors is equal).
While it may be obvious that a curve would require {\displaystyle G^{1}} continuity to appear smooth, good aesthetics, such as those aspired to in architecture and sports car design, require higher levels of geometric continuity. For example, reflections in a car body will not appear smooth unless the body has {\displaystyle G^{2}} continuity.
A rounded rectangle (with ninety-degree circular arcs at the four corners) has {\displaystyle G^{1}} continuity but does not have {\displaystyle G^{2}} continuity. The same is true for a rounded cube, with octants of a sphere at its corners and quarter-cylinders along its edges. If an editable curve with {\displaystyle G^{2}} continuity is required, then cubic splines are typically chosen; these curves are frequently used in industrial design.
== Other concepts ==
=== Relation to analyticity ===
While all analytic functions are "smooth" (i.e. have all derivatives continuous) on the set on which they are analytic, examples such as bump functions (mentioned above) show that the converse is not true for functions on the reals: there exist smooth real functions that are not analytic. Simple examples of functions that are smooth but not analytic at any point can be made by means of Fourier series; another example is the Fabius function. Although it might seem that such functions are the exception rather than the rule, it turns out that the analytic functions are scattered very thinly among the smooth ones; more rigorously, the analytic functions form a meagre subset of the smooth functions. Furthermore, for every open subset A of the real line, there exist smooth functions that are analytic on A and nowhere else.
It is useful to compare the situation to that of the ubiquity of transcendental numbers on the real line. Both on the real line and the set of smooth functions, the examples we come up with at first thought (algebraic/rational numbers and analytic functions) are far better behaved than the majority of cases: the transcendental numbers and nowhere analytic functions have full measure (their complements are meagre).
The situation thus described is in marked contrast to complex differentiable functions. If a complex function is differentiable just once on an open set, it is both infinitely differentiable and analytic on that set.
=== Smooth partitions of unity ===
Smooth functions with given closed support are used in the construction of smooth partitions of unity (see partition of unity and topology glossary); these are essential in the study of smooth manifolds, for example to show that Riemannian metrics can be defined globally starting from their local existence. A simple case is that of a bump function on the real line, that is, a smooth function f that takes the value 0 outside an interval [a, b] and such that {\displaystyle f(x)>0\quad {\text{ for }}\quad a<x<b.} Given a number of overlapping intervals on the line, bump functions can be constructed on each of them, and on the semi-infinite intervals {\displaystyle (-\infty ,c]} and {\displaystyle [d,+\infty )} to cover the whole line, such that the sum of the functions is always 1.
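A minimal sketch of the construction (with hypothetical names, using the standard building block e^{−1/x}): two smooth functions covering the line whose sum is identically 1:

```python
import math

def phi(x):
    """Smooth and zero for x <= 0: the standard building block e^{-1/x}."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def step(x):
    """Smooth transition from 0 (for x <= 0) to 1 (for x >= 1).
    The denominator is never zero: for any x, either x > 0 or 1 - x > 0."""
    return phi(x) / (phi(x) + phi(1.0 - x))

# u is 1 on (-inf, 0] and vanishes on [1, +inf); v is the complement.
# Together they form a smooth partition of unity subordinate to the
# cover by (-inf, 1) and (0, +inf).
u = lambda x: 1.0 - step(x)
v = lambda x: step(x)
checks = [u(x) + v(x) for x in (-2.0, 0.0, 0.3, 0.7, 1.0, 5.0)]
```

The same quotient trick, applied interval by interval, produces the partition of unity over any locally finite cover of the line described above.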
From what has just been said, partitions of unity do not apply to holomorphic functions; their different behavior relative to existence and analytic continuation is one of the roots of sheaf theory. In contrast, sheaves of smooth functions tend not to carry much topological information.
=== Smooth functions on and between manifolds ===
Given a smooth manifold {\displaystyle M} of dimension {\displaystyle m,} and an atlas {\displaystyle {\mathfrak {U}}=\{(U_{\alpha },\phi _{\alpha })\}_{\alpha },} a map {\displaystyle f:M\to \mathbb {R} } is smooth on {\displaystyle M} if for all {\displaystyle p\in M} there exists a chart {\displaystyle (U,\phi )\in {\mathfrak {U}},} such that {\displaystyle p\in U,} and {\displaystyle f\circ \phi ^{-1}:\phi (U)\to \mathbb {R} } is a smooth function from a neighborhood of {\displaystyle \phi (p)} in {\displaystyle \mathbb {R} ^{m}} to {\displaystyle \mathbb {R} } (all partial derivatives up to a given order are continuous). Smoothness can be checked with respect to any chart of the atlas that contains {\displaystyle p,} since the smoothness requirements on the transition functions between charts ensure that if {\displaystyle f} is smooth near {\displaystyle p} in one chart, it will be smooth near {\displaystyle p} in any other chart.
If {\displaystyle F:M\to N} is a map from {\displaystyle M} to an {\displaystyle n}-dimensional manifold {\displaystyle N}, then {\displaystyle F} is smooth if, for every {\displaystyle p\in M,} there is a chart {\displaystyle (U,\phi )} containing {\displaystyle p,} and a chart {\displaystyle (V,\psi )} containing {\displaystyle F(p)} such that {\displaystyle F(U)\subset V,} and {\displaystyle \psi \circ F\circ \phi ^{-1}:\phi (U)\to \psi (V)} is a smooth function from {\displaystyle \mathbb {R} ^{n}.}
Smooth maps between manifolds induce linear maps between tangent spaces: for {\displaystyle F:M\to N}, at each point the pushforward (or differential) maps tangent vectors at {\displaystyle p} to tangent vectors at {\displaystyle F(p)}: {\displaystyle F_{*,p}:T_{p}M\to T_{F(p)}N,} and on the level of the tangent bundle, the pushforward is a vector bundle homomorphism: {\displaystyle F_{*}:TM\to TN.} The dual to the pushforward is the pullback, which "pulls" covectors on {\displaystyle N} back to covectors on {\displaystyle M,} and {\displaystyle k}-forms to {\displaystyle k}-forms: {\displaystyle F^{*}:\Omega ^{k}(N)\to \Omega ^{k}(M).} In this way smooth functions between manifolds can transport local data, like vector fields and differential forms, from one manifold to another, or down to Euclidean space where computations like integration are well understood.
Preimages and pushforwards along smooth functions are, in general, not manifolds without additional assumptions. Preimages of regular points (that is, if the differential does not vanish on the preimage) are manifolds; this is the preimage theorem. Similarly, pushforwards along embeddings are manifolds.
=== Smooth functions between subsets of manifolds ===
There is a corresponding notion of smooth map for arbitrary subsets of manifolds. If f : X → Y is a function whose domain and range are subsets of manifolds X ⊆ M and Y ⊆ N respectively, then f is said to be smooth if for all x ∈ X there is an open set U ⊆ M with x ∈ U and a smooth function F : U → N such that F(p) = f(p) for all p ∈ U ∩ X.
== See also ==
Discontinuity – Mathematical analysis of discontinuous points
Hadamard's lemma
Non-analytic smooth function – Mathematical functions which are smooth but not analytic
Quasi-analytic function
Singularity (mathematics) – Point where a function, a curve or another mathematical object does not behave regularly
Sinuosity – Ratio of arc length and straight-line distance between two points on a wave-like function
Smooth scheme – Type of scheme
Smooth number – Integer having only small prime factors (number theory)
Smoothing – Fitting an approximating function to data
Spline – Mathematical function defined piecewise by polynomials
Sobolev mapping
== References == | Wikipedia/Smooth_function |
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range.
== Example: Helix ==
A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as
r(t) = f(t)i + g(t)j + h(t)k
where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. The same function can also be written in a different notation:
r(t) = ⟨f(t), g(t), h(t)⟩
The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function.
The vector shown in the graph to the right is the evaluation of the function
⟨2 cos t, 4 sin t, t⟩
near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π.
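The helix example above can be checked numerically. The following sketch (using NumPy, with the tolerance choices our own) evaluates r(t) = ⟨2 cos t, 4 sin t, t⟩ at t = 19.5 and confirms that every point of the curve lies on the elliptical cylinder (x/2)² + (y/4)² = 1 while the z-coordinate grows with t:

```python
import numpy as np

def r(t):
    # r(t) = <2 cos t, 4 sin t, t>, an elliptical helix
    return np.array([2 * np.cos(t), 4 * np.sin(t), t])

v = r(19.5)  # the evaluation shown in the figure, between 6*pi and 6.5*pi

# Every point of the helix lies on the cylinder (x/2)^2 + (y/4)^2 = 1,
# and the third coordinate simply records the parameter t.
assert abs((v[0] / 2) ** 2 + (v[1] / 4) ** 2 - 1) < 1e-12
assert 6 * np.pi < v[2] < 6.5 * np.pi
```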
In two dimensions, we can analogously write vector-valued functions as
r(t) = f(t)i + g(t)j
or
r(t) = ⟨f(t), g(t)⟩
== Linear case ==
In the linear case the function can be expressed in terms of matrices:
y = Ax,
where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form
y = Ax + b,
where in addition b is an n × 1 vector of parameters.
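The affine case is immediate to compute with matrices. The sketch below (a minimal NumPy illustration; the particular A, b, and x are invented for the example) evaluates y = Ax + b with an n × k matrix A, a k × 1 input x, and an n × 1 translation b:

```python
import numpy as np

n, k = 3, 2  # output and input dimensions
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])      # n x k matrix of parameters
b = np.array([1.0, -1.0, 0.5])  # n x 1 vector of parameters

def f(x):
    # Affine vector-valued function y = A x + b
    return A @ x + b

x = np.array([2.0, 1.0])
y = f(x)
assert y.shape == (n,)
assert np.allclose(y, [3.0, 4.0, 3.5])
```

Setting b to the zero vector recovers the purely linear case y = Ax.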
The linear case arises often, for example in multiple regression, where for instance the n × 1 vector ŷ of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector β̂ (k < n) of estimated values of model parameters: ŷ = Xβ̂,
in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers.
== Parametric representation of a surface ==
A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface:
(x, y, z) = (f(s, t), g(s, t), h(s, t)) ≡ F(s, t).
Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation
(x₁, x₂, …, xₙ) = (f₁(s, t), f₂(s, t), …, fₙ(s, t)) ≡ F(s, t).
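As a concrete parametric surface, the unit sphere (a standard example, not one from the text) can be written with two parameters s and t determining the three Cartesian coordinates. The sketch below evaluates such a vector-valued F(s, t) and verifies that every parameter pair lands on the surface x² + y² + z² = 1:

```python
import numpy as np

def F(s, t):
    # Parametric representation of the unit sphere in R^3:
    # (x, y, z) = (cos s sin t, sin s sin t, cos t)
    return np.array([np.cos(s) * np.sin(t),
                     np.sin(s) * np.sin(t),
                     np.cos(t)])

# Each parameter pair (s, t) determines one Cartesian point on the surface.
for s in np.linspace(0, 2 * np.pi, 7):
    for t in np.linspace(0.1, np.pi - 0.1, 5):
        p = F(s, t)
        assert abs(np.dot(p, p) - 1.0) < 1e-12
```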
== Derivative of a three-dimensional vector function ==
Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if
r(t) = f(t)i + g(t)j + h(t)k
is a vector-valued function, then
dr/dt = f′(t)i + g′(t)j + h′(t)k.
The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle
v(t) = dr/dt.
Likewise, the derivative of the velocity is the acceleration
dv/dt = a(t).
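Componentwise differentiation is easy to check numerically. The sketch below (a NumPy illustration with a helix of our own choosing) approximates dr/dt and dv/dt by central differences and compares them with the derivatives obtained by differentiating each coordinate function by hand:

```python
import numpy as np

def r(t):
    # Position r(t) = cos(t) i + sin(t) j + t k (a circular helix)
    return np.array([np.cos(t), np.sin(t), t])

def derivative(f, t, h=1e-6):
    # Central-difference approximation of the componentwise derivative
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.3
v = derivative(r, t)                                    # velocity dr/dt
a = derivative(lambda u: derivative(r, u), t, h=1e-4)   # acceleration dv/dt

# Differentiating component by component gives (-sin t, cos t, 1)
# and (-cos t, -sin t, 0).
assert np.allclose(v, [-np.sin(t), np.cos(t), 1.0], atol=1e-8)
assert np.allclose(a, [-np.cos(t), -np.sin(t), 0.0], atol=1e-4)
```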
=== Partial derivative ===
The partial derivative of a vector function a with respect to a scalar variable q is defined as
∂a/∂q = Σᵢ₌₁ⁿ (∂aᵢ/∂q) eᵢ
where aᵢ is the scalar component of a in the direction of eᵢ, that is, the dot product of a and eᵢ. The vectors e₁, e₂, e₃ form an orthonormal basis fixed in the reference frame in which the derivative is being taken.
=== Ordinary derivative ===
If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t,
da/dt = Σᵢ₌₁ⁿ (daᵢ/dt) eᵢ.
=== Total derivative ===
If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as
da/dt = Σᵣ₌₁ⁿ (∂a/∂qᵣ)(dqᵣ/dt) + ∂a/∂t.
Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr.
=== Reference frames ===
Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship.
=== Derivative of a vector function with nonfixed bases ===
The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is
Nda/dt = Σᵢ₌₁³ (daᵢ/dt) eᵢ + Σᵢ₌₁³ aᵢ (Ndeᵢ/dt)
where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is
Nda/dt = Eda/dt + NωE × a
where NωE is the angular velocity of the reference frame E relative to the reference frame N.
One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula
Nd(rR)/dt = Ed(rR)/dt + NωE × rR.
where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution,
NvR = EvR + NωE × rR
where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
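The rocket formula is a one-line cross-product computation. The sketch below (a NumPy illustration; the position, ground-frame velocity, and the choice of z-axis for Earth's rotation are invented for the example, with Earth's sidereal rate as the magnitude) evaluates NvR = EvR + NωE × rR:

```python
import numpy as np

# Angular velocity of the Earth-fixed frame E relative to the inertial
# frame N, taken about the z-axis (rad/s, Earth's sidereal rate).
omega_NE = np.array([0.0, 0.0, 7.292e-5])

r_R = np.array([6.771e6, 0.0, 0.0])   # rocket position (m), on the x-axis
v_E = np.array([0.0, 100.0, 50.0])    # velocity measured in frame E (m/s)

# N_v_R = E_v_R + N_omega_E x r_R
v_N = v_E + np.cross(omega_NE, r_R)

# The cross product contributes only an eastward (y) component here.
assert np.allclose(v_N, [0.0, 100.0 + 7.292e-5 * 6.771e6, 50.0])
```

The inertial-frame velocity differs from the ground-measured one by roughly 494 m/s of eastward rotation carried by the Earth at that radius.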
=== Derivative and vector multiplication ===
The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q,
∂(pa)/∂q = (∂p/∂q)a + p(∂a/∂q).
In the case of dot multiplication, for two vectors a and b that are both functions of q,
∂(a · b)/∂q = (∂a/∂q) · b + a · (∂b/∂q).
Similarly, the derivative of the cross product of two vector functions is
∂(a × b)/∂q = (∂a/∂q) × b + a × (∂b/∂q).
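Both product rules can be verified symbolically. The sketch below (using SymPy, with two concrete vector functions chosen for the example) differentiates a · b and a × b directly and compares the results with the right-hand sides of the rules above:

```python
import sympy as sp

q = sp.symbols('q')
a = sp.Matrix([q, q**2, sp.sin(q)])
b = sp.Matrix([sp.cos(q), q, 1])

da = a.diff(q)
db = b.diff(q)

# Dot-product rule: d/dq (a . b) = a' . b + a . b'
lhs_dot = sp.diff(a.dot(b), q)
rhs_dot = da.dot(b) + a.dot(db)
assert sp.simplify(lhs_dot - rhs_dot) == 0

# Cross-product rule: d/dq (a x b) = a' x b + a x b'
lhs_cross = a.cross(b).diff(q)
rhs_cross = da.cross(b) + a.cross(db)
assert sp.simplify(lhs_cross - rhs_cross) == sp.zeros(3, 1)
```

Note that the cross-product rule, unlike the scalar case, is order-sensitive: a′ × b and b × a′ differ by a sign.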
=== Derivative of an n-dimensional vector function ===
A function f of a real number t with values in the space ℝⁿ can be written as f(t) = (f₁(t), f₂(t), …, fₙ(t)). Its derivative equals
f′(t) = (f₁′(t), f₂′(t), …, fₙ′(t)).
If f is a function of several variables, say of t ∈ ℝᵐ, then the partial derivatives of the components of f form an n × m matrix called the Jacobian matrix of f.
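The Jacobian matrix is straightforward to compute symbolically. The sketch below (a SymPy illustration; the particular component functions are invented for the example) builds a function f : ℝ² → ℝ³, so that the Jacobian is the 3 × 2 matrix of partial derivatives of the components with respect to the variables:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
# f : R^2 -> R^3, so m = 2 variables and n = 3 components
f = sp.Matrix([t1**2, t1 * t2, sp.sin(t2)])

J = f.jacobian([t1, t2])   # n x m matrix of partial derivatives

assert J.shape == (3, 2)
assert J == sp.Matrix([[2 * t1, 0],
                       [t2, t1],
                       [0, sp.cos(t2)]])
```

Rows correspond to components of f and columns to the input variables, matching the n × m convention above.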
== Infinite-dimensional vector functions ==
If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function.
=== Functions with values in a Hilbert space ===
If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case:
f′(t) = lim_{h→0} [f(t + h) − f(t)]/h.
Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t ∈ ℝⁿ or even t ∈ Y, where Y is an infinite-dimensional vector space).
N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if f = (f₁, f₂, f₃, …) (i.e., f = f₁e₁ + f₂e₂ + f₃e₃ + ⋯, where e₁, e₂, e₃, … is an orthonormal basis of the space X), and f′(t) exists, then
f′(t) = (f₁′(t), f₂′(t), f₃′(t), …).
However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.
=== Other infinite-dimensional vector spaces ===
Most of the above hold for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach space settings there are no orthonormal bases.
== Vector field ==
== See also ==
Coordinate vector
Curve
Multivalued function
Parametric surface
Position vector
Parametrization
== Notes ==
== References ==
== External links ==
Vector-valued functions and their properties (from Lake Tahoe Community College)
Weisstein, Eric W. "Vector Function". MathWorld.
Everything2 article
3 Dimensional vector-valued functions (from East Tennessee State University)
"Position Vector Valued Functions" Khan Academy module | Wikipedia/Vector-valued_function |
In algebraic geometry, an affine variety or affine algebraic variety is a certain kind of algebraic variety that can be described as a subset of an affine space.
More formally, an affine algebraic set is the set of the common zeros over an algebraically closed field k of some family of polynomials in the polynomial ring k[x₁, …, xₙ].
An affine variety is an affine algebraic set which is not the union of two smaller algebraic sets; algebraically, this means that (the radical of) the ideal generated by the defining polynomials is prime. One-dimensional affine varieties are called affine algebraic curves, while two-dimensional ones are affine algebraic surfaces.
Some texts use the term variety for any algebraic set, and irreducible variety for an algebraic set whose defining ideal is prime (affine variety in the above sense).
In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field k in which the coefficients are considered, from the algebraically closed field K (containing k) over which the common zeros are considered (that is, the points of the affine algebraic set are in Kn). In this case, the variety is said defined over k, and the points of the variety that belong to kn are said k-rational or rational over k. In the common case where k is the field of real numbers, a k-rational point is called a real point. When the field k is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by xn + yn − 1 = 0 has no rational points for any integer n greater than two.
== Introduction ==
An affine algebraic set is the set of solutions in an algebraically closed field k of a system of polynomial equations with coefficients in k. More precisely, if f₁, …, fₘ are polynomials with coefficients in k, they define an affine algebraic set
are polynomials with coefficients in k, they define an affine algebraic set
V(f₁, …, fₘ) = { (a₁, …, aₙ) ∈ kⁿ | f₁(a₁, …, aₙ) = … = fₘ(a₁, …, aₙ) = 0 }.
An affine (algebraic) variety is an affine algebraic set that is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible.
If X is an affine algebraic set, and I is the ideal of all polynomials that are zero on X, then the quotient ring
R = k[x₁, …, xₙ]/I
is called the coordinate ring of X. If X is an affine variety, then I is prime, so the coordinate ring is an integral domain. The elements of the coordinate ring R are also called the regular functions or the polynomial functions on the variety. They form the ring of regular functions on the variety, or, simply, the ring of the variety; in more technical terms (see § Structure sheaf), it is the space of global sections of the structure sheaf of X.
The dimension of a variety is an integer associated to every variety, and even to every algebraic set, whose importance relies on the large number of its equivalent definitions (see Dimension of an algebraic variety).
== Examples ==
The complement of a hypersurface in an affine variety X (that is X \ { f = 0 } for some polynomial f) is affine. Its defining equations are obtained by saturating by f the defining ideal of X. The coordinate ring is thus the localization
k[X][f⁻¹]. For instance, for X = kⁿ and f ∈ k[x₁, …, xₙ], kⁿ \ { f = 0 } is isomorphic to the hypersurface V(1 − xₙ₊₁f) in kⁿ⁺¹.
In particular, k − 0 (the affine line with the origin removed) is affine, isomorphic to the curve V(1 − xy) in k² (see Algebraic group § Examples).
On the other hand, k² − 0 (the affine plane with the origin removed) is not an affine variety (compare this to Hartogs' extension theorem in complex analysis). See Spectrum of a ring § Non-affine examples.
The subvarieties of codimension one in the affine space kⁿ are exactly the hypersurfaces, that is, the varieties defined by a single polynomial.
The normalization of an irreducible affine variety is affine; the coordinate ring of the normalization is the integral closure of the coordinate ring of the variety. (Similarly, the normalization of a projective variety is a projective variety.)
== Rational points ==
For an affine variety V ⊆ Kⁿ over an algebraically closed field K, and a subfield k of K, a k-rational point of V is a point p ∈ V ∩ kⁿ; that is, a point of V whose coordinates are elements of k. The collection of k-rational points of an affine variety V is often denoted V(k).
Often, if the base field is the complex numbers C, points that are R-rational (where R is the real numbers) are called real points of the variety, and Q-rational points (Q the rational numbers) are often simply called rational points.
For instance, (1, 0) is a Q-rational and an R-rational point of the variety V = V(x² + y² − 1) ⊆ C²,
as it is in V and all its coordinates are integers. The point (√2/2, √2/2) is a real point of V that is not Q-rational, and (i, √2) is a point of V that is not R-rational. This variety is called a circle, because the set of its R-rational points is the unit circle. It has infinitely many Q-rational points, namely the points
((1 − t²)/(1 + t²), 2t/(1 + t²))
where t is a rational number.
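The parametrization above can be verified in exact rational arithmetic. The sketch below (using Python's standard `fractions` module; the sample values of t are arbitrary) checks that every rational t produces a Q-rational point on the circle x² + y² = 1:

```python
from fractions import Fraction

def rational_point(t):
    # Parametrization of the Q-rational points of x^2 + y^2 = 1:
    # ( (1 - t^2)/(1 + t^2), 2t/(1 + t^2) )
    t = Fraction(t)
    denom = 1 + t * t
    return (1 - t * t) / denom, 2 * t / denom

for t in (Fraction(0), Fraction(1, 2), Fraction(-3, 7), Fraction(5)):
    x, y = rational_point(t)
    # Exact arithmetic: the point lies on the circle, with rational coordinates.
    assert x * x + y * y == 1
```

Geometrically, t is the slope of a line through the rational point (−1, 0); each rational slope yields the second intersection of that line with the circle.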
The circle V(x² + y² − 3) ⊆ C² is an example of an algebraic curve of degree two that has no Q-rational point. This can be deduced from the fact that, modulo 4, the sum of two squares cannot be 3.
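The modular obstruction is a finite check. The sketch below enumerates all residues of a² + b² modulo 4 and confirms that 3 never occurs, which is the key step in showing x² + y² = 3 has no rational solution (after clearing denominators, a rational solution would give an integer one modulo 4):

```python
# Squares modulo 4 are 0 or 1, so a sum of two squares is 0, 1, or 2 mod 4;
# in particular x^2 + y^2 = 3 has no solution modulo 4.
residues = {(a * a + b * b) % 4 for a in range(4) for b in range(4)}
assert residues == {0, 1, 2}
assert 3 not in residues
```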
It can be proved that an algebraic curve of degree two with a Q-rational point has infinitely many other Q-rational points; each such point is the second intersection point of the curve and a line with a rational slope passing through the rational point.
The complex variety V(x² + y² + 1) ⊆ C² has no R-rational points, but has many complex points.
If V is an affine variety in C2 defined over the complex numbers C, the R-rational points of V can be drawn on a piece of paper or by graphing software. The figure on the right shows the R-rational points of
V(y² − x³ + x² + 16x) ⊆ C².
== Singular points and tangent space ==
Let V be an affine variety defined by the polynomials f₁, …, fᵣ ∈ k[x₁, …, xₙ], and let a = (a₁, …, aₙ) be a point of V.
The Jacobian matrix JV(a) of V at a is the matrix of the partial derivatives
∂fⱼ/∂xᵢ (a₁, …, aₙ).
The point a is regular if the rank of JV(a) equals the codimension of V, and singular otherwise.
If a is regular, the tangent space to V at a is the affine subspace of kⁿ defined by the linear equations
Σᵢ₌₁ⁿ ∂fⱼ/∂xᵢ (a₁, …, aₙ) (xᵢ − aᵢ) = 0,  j = 1, …, r.
If the point is singular, the affine subspace defined by these equations is also called a tangent space by some authors, while other authors say that there is no tangent space at a singular point.
A more intrinsic definition which does not use coordinates is given by Zariski tangent space.
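The regular/singular distinction can be tested by computing the rank of the Jacobian matrix. The sketch below (using SymPy; the nodal cubic is a hypothetical example chosen for illustration, not a curve from the text) examines V(y² − x³ − x²) in k², a curve of codimension 1, so a point is regular exactly when the Jacobian has rank 1 there:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical example: the nodal cubic V(y^2 - x^3 - x^2), codimension 1.
f = y**2 - x**3 - x**2
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])  # 1 x 2 Jacobian matrix

def rank_at(a, b):
    # Rank of the Jacobian evaluated at the point (a, b)
    return J.subs({x: a, y: b}).rank()

# The origin lies on the curve and the Jacobian vanishes there: singular.
assert f.subs({x: 0, y: 0}) == 0 and rank_at(0, 0) == 0
# The point (-1, 0) lies on the curve and the Jacobian has rank 1: regular.
assert f.subs({x: -1, y: 0}) == 0 and rank_at(-1, 0) == 1
```

At the regular point (−1, 0) the linear equation of the tangent space is −(x + 1) = 0, a vertical line, while at the node the curve has two crossing branches and no well-defined tangent line.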
== The Zariski topology ==
The affine algebraic sets of kⁿ form the closed sets of a topology on kⁿ, called the Zariski topology. This follows from the facts that V(0) = kⁿ, V(1) = ∅, V(S) ∪ V(T) = V(ST), and V(S) ∩ V(T) = V(S, T) (in fact, a countable intersection of affine algebraic sets is an affine algebraic set).
The Zariski topology can also be described by way of basic open sets, where Zariski-open sets are countable unions of sets of the form U_f = { p ∈ kⁿ : f(p) ≠ 0 } for f ∈ k[x₁, …, xₙ].
These basic open sets are the complements in kⁿ of the closed sets V(f) = { p ∈ kⁿ : f(p) = 0 },
zero loci of a single polynomial. If k is Noetherian (for instance, if k is a field or a principal ideal domain), then every ideal of k is finitely-generated, so every open set is a finite union of basic open sets.
If V is an affine subvariety of kn the Zariski topology on V is simply the subspace topology inherited from the Zariski topology on kn.
== Geometry–algebra correspondence ==
The geometric structure of an affine variety is linked in a deep way to the algebraic structure of its coordinate ring. Let I and J be ideals of k[V], the coordinate ring of an affine variety V. Let I(V) be the set of all polynomials in k[x₁, …, xₙ] that vanish on V, and let √I denote the radical of the ideal I, the set of polynomials f for which some power of f is in I. The reason that the base field is required to be algebraically closed is that affine varieties automatically satisfy Hilbert's Nullstellensatz: for an ideal J in k[x₁, …, xₙ], where k is an algebraically closed field, I(V(J)) = √J.
Radical ideals (ideals that are their own radical) of k[V] correspond to algebraic subsets of V. Indeed, for radical ideals I and J, I ⊆ J if and only if V(J) ⊆ V(I).
Hence V(I)=V(J) if and only if I=J. Furthermore, the function taking an affine algebraic set W and returning I(W), the set of all functions that also vanish on all points of W, is the inverse of the function assigning an algebraic set to a radical ideal, by the nullstellensatz. Hence the correspondence between affine algebraic sets and radical ideals is a bijection. The coordinate ring of an affine algebraic set is reduced (nilpotent-free), as an ideal I in a ring R is radical if and only if the quotient ring R/I is reduced.
Prime ideals of the coordinate ring correspond to affine subvarieties. An affine algebraic set V(I) can be written as the union of two other algebraic sets if and only if I = JK for proper ideals J and K not equal to I (in which case V(I) = V(J) ∪ V(K)). This is the case if and only if I is not prime. Affine subvarieties are precisely those whose coordinate ring is an integral domain. This is because an ideal is prime if and only if the quotient of the ring by the ideal is an integral domain.
Maximal ideals of k[V] correspond to points of V. If I and J are radical ideals, then V(J) ⊆ V(I) if and only if I ⊆ J.
As maximal ideals are radical, maximal ideals correspond to minimal algebraic sets (those that contain no proper algebraic subsets), which are points in V. If V is an affine variety with coordinate ring R = k[x₁, …, xₙ]/⟨f₁, …, fₘ⟩, this correspondence becomes explicit through the map (a₁, …, aₙ) ↦ ⟨x₁ − a₁, …, xₙ − aₙ⟩, where each xᵢ − aᵢ is understood as its image in the quotient algebra R.
An algebraic subset is a point if and only if the coordinate ring of the subset is a field, as the quotient of a ring by a maximal ideal is a field.
The following table summarizes this correspondence, for algebraic subsets of an affine variety and ideals of the corresponding coordinate ring:
== Products of affine varieties ==
A product of affine varieties can be defined using the isomorphism An × Am = An+m, then embedding the product in this new affine space. Let An and Am have coordinate rings k[x1,..., xn] and k[y1,..., ym] respectively, so that their product An+m has coordinate ring k[x1,..., xn, y1,..., ym]. Let V = V( f1,..., fN) be an algebraic subset of An, and W = V( g1,..., gM) an algebraic subset of Am. Then each fi is a polynomial in k[x1,..., xn], and each gj is in k[y1,..., ym]. The product of V and W is defined as the algebraic set V × W = V( f1,..., fN, g1,..., gM) in An+m. The product is irreducible if each V, W is irreducible.
The Zariski topology on An × Am is not the topological product of the Zariski topologies on the two spaces. Indeed, the product topology is generated by products of the basic open sets Uf = An − V( f ) and Tg = Am − V( g ). Hence, polynomials that are in k[x1,..., xn, y1,..., ym] but cannot be obtained as a product of a polynomial in k[x1,..., xn] with a polynomial in k[y1,..., ym] will define algebraic sets that are closed in the Zariski topology on An × Am , but not in the product topology.
== Morphisms of affine varieties ==
A morphism, or regular map, of affine varieties is a function between affine varieties that is polynomial in each coordinate: more precisely, for affine varieties V ⊆ kn and W ⊆ km, a morphism from V to W is a map φ : V → W of the form φ(a1, ..., an) = (f1(a1, ..., an), ..., fm(a1, ..., an)), where fi ∈ k[X1, ..., Xn] for each i = 1, ..., m. These are the morphisms in the category of affine varieties.
There is a one-to-one correspondence between morphisms of affine varieties over an algebraically closed field k, and homomorphisms of coordinate rings of affine varieties over k going in the opposite direction. Because of this, along with the fact that there is a one-to-one correspondence between affine varieties over k and their coordinate rings, the category of affine varieties over k is dual to the category of coordinate rings of affine varieties over k. The category of coordinate rings of affine varieties over k is precisely the category of finitely-generated, nilpotent-free algebras over k.
More precisely, for each morphism φ : V → W of affine varieties, there is a homomorphism φ# : k[W] → k[V] between the coordinate rings (going in the opposite direction), and for each such homomorphism, there is a morphism of the varieties associated to the coordinate rings. This can be shown explicitly: let V ⊆ kn and W ⊆ km be affine varieties with coordinate rings k[V] = k[X1, ..., Xn] / I and k[W] = k[Y1, ..., Ym] / J respectively. Let φ : V → W be a morphism. Indeed, a homomorphism between polynomial rings θ : k[Y1, ..., Ym] / J → k[X1, ..., Xn] / I factors uniquely through the ring k[X1, ..., Xn], and a homomorphism ψ : k[Y1, ..., Ym] / J → k[X1, ..., Xn] is determined uniquely by the images of Y1, ..., Ym. Hence, each homomorphism φ# : k[W] → k[V] corresponds uniquely to a choice of image for each Yi. Then given any morphism φ = (f1, ..., fm) from V to W, a homomorphism can be constructed φ# : k[W] → k[V] that sends Yi to
f̄ᵢ, where f̄ᵢ
is the equivalence class of fi in k[V].
Similarly, for each homomorphism of the coordinate rings, a morphism of the affine varieties can be constructed in the opposite direction. Mirroring the paragraph above, a homomorphism φ# : k[W] → k[V] sends Yi to a polynomial
fi(X1, ..., Xn) in k[V]. This corresponds to the morphism of varieties φ : V → W defined by φ(a1, ..., an) = (f1(a1, ..., an), ..., fm(a1, ..., an)).
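The correspondence φ ↦ φ# is precomposition: φ#(g) = g ∘ φ. The sketch below illustrates this in plain Python, modeling coordinate functions simply as callables rather than as elements of a quotient ring (all names here are illustrative, not from any library):

```python
# phi = (f1, ..., fm) : V -> W induces phi#(g) = g ∘ phi on coordinate functions.
def pullback(phi):
    return lambda g: (lambda *point: g(*(f(*point) for f in phi)))

# Example: phi(a, b) = (a + b, a*b), and the coordinate function g(u, v) = u^2 - 2v on W.
phi = (lambda a, b: a + b, lambda a, b: a * b)
g = lambda u, v: u * u - 2 * v
g_pulled = pullback(phi)(g)

# Evaluating the pullback at a point of V agrees with evaluating g at phi(point).
assert g_pulled(3, 4) == (3 + 4) ** 2 - 2 * (3 * 4)  # 49 - 24 = 25
```

Note that the contravariance is visible in the code: `pullback` consumes a function on the target and returns a function on the source.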
== Structure sheaf ==
Equipped with the structure sheaf described below, an affine variety is a locally ringed space.
Given an affine variety X with coordinate ring A, the sheaf of k-algebras 𝒪X is defined by letting 𝒪X(U) = Γ(U, 𝒪X) be the ring of regular functions on U.
Let D(f) = { x | f(x) ≠ 0 } for each f in A. These sets form a base for the topology of X, and so 𝒪X is determined by its values on the open sets D(f). (See also: sheaf of modules#Sheaf associated to a module.)
The key fact, which relies on Hilbert's Nullstellensatz in an essential way, is the following claim: Γ(D(f), 𝒪X) = A[f⁻¹] for every f in A.
Proof: The inclusion ⊃ is clear. For the opposite inclusion, let g be in the left-hand side and let J = { h ∈ A | hg ∈ A }, which is an ideal. If x is in D(f), then, since g is regular near x, there is some open affine neighborhood D(h) of x such that g ∈ k[D(h)] = A[h⁻¹]; that is, hᵐg is in A for some m, and thus x is not in V(J). In other words, V(J) ⊂ { x | f(x) = 0 }, and thus Hilbert's Nullstellensatz implies that f is in the radical of J; i.e., fⁿg ∈ A for some n. ∎
The claim, first of all, implies that X is a "locally ringed" space, since the stalk 𝒪X,x = lim→ A[f⁻¹] (the direct limit taken over the f with f(x) ≠ 0) is the localization of A at the maximal ideal 𝔪x = { f ∈ A | f(x) = 0 }. Secondly, the claim implies that 𝒪X is a sheaf; indeed, it says that if a function is regular (pointwise) on D(f), then it must be in the coordinate ring of D(f); that is, "regular-ness" can be patched together.
Hence, (X, 𝒪X) is a locally ringed space.
== Serre's theorem on affineness ==
A theorem of Serre gives a cohomological characterization of an affine variety; it says an algebraic variety X is affine if and only if Hⁱ(X, F) = 0 for every i > 0 and every quasi-coherent sheaf F on X. (Cf. Cartan's theorem B.) This makes the cohomological study of an affine variety essentially trivial, in sharp contrast to the projective case, in which the cohomology groups of line bundles are of central interest.
== Affine algebraic groups ==
An affine variety G over an algebraically closed field k is called an affine algebraic group if it has:
A multiplication μ: G × G → G, which is a regular morphism that follows the associativity axiom—that is, such that μ(μ(f, g), h) = μ(f, μ(g, h)) for all points f, g and h in G;
An identity element e such that μ(e, g) = μ(g, e) = g for every g in G;
An inverse morphism, a regular bijection ι: G → G such that μ(ι(g), g) = μ(g, ι(g)) = e for every g in G.
Together, these define a group structure on the variety. The above morphisms are often written using ordinary group notation: μ(f, g) can be written as f + g, f⋅g, or fg; the inverse ι(g) can be written as −g or g−1. Using the multiplicative notation, the associativity, identity and inverse laws can be rewritten as: f(gh) = (fg)h, ge = eg = g and gg−1 = g−1g = e.
The most prominent example of an affine algebraic group is GLn(k), the general linear group of degree n. This is the group of linear transformations of the vector space kn; if a basis of kn is fixed, this is equivalent to the group of n×n invertible matrices with entries in k. It can be shown that any affine algebraic group is isomorphic to a subgroup of GLn(k). For this reason, affine algebraic groups are often called linear algebraic groups.
Affine algebraic groups play an important role in the classification of finite simple groups, as the groups of Lie type are all sets of Fq-rational points of an affine algebraic group, where Fq is a finite field.
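As a small concrete illustration of the last two paragraphs, the F2-rational points of GL2 can be enumerated directly. The sketch below (plain Python; encoding a 2×2 matrix (a b; c d) as the tuple (a, b, c, d) is an arbitrary choice) checks the group axioms by brute force:

```python
from itertools import product

p = 2  # work over the field F_2

def det(m):
    return (m[0] * m[3] - m[1] * m[2]) % p

# GL_2(F_2): the invertible 2x2 matrices over F_2
G = [m for m in product(range(p), repeat=4) if det(m) != 0]
assert len(G) == 6  # |GL_2(F_2)| = (2^2 - 1)(2^2 - 2) = 6

def mul(a, b):
    # matrix multiplication mod p
    return ((a[0] * b[0] + a[1] * b[2]) % p, (a[0] * b[1] + a[1] * b[3]) % p,
            (a[2] * b[0] + a[3] * b[2]) % p, (a[2] * b[1] + a[3] * b[3]) % p)

e = (1, 0, 0, 1)  # identity matrix
assert all(mul(x, y) in G for x in G for y in G)       # closure
assert all(mul(e, x) == x == mul(x, e) for x in G)     # identity
assert all(any(mul(x, y) == e for y in G) for x in G)  # inverses
```

The same enumeration over larger primes p recovers the orders (p² − 1)(p² − p) of the groups GL2(Fp) mentioned in connection with groups of Lie type.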
== Generalizations ==
If an author requires the base field of an affine variety to be algebraically closed (as this article does), then irreducible affine algebraic sets over non-algebraically closed fields are a generalization of affine varieties. This generalization notably includes affine varieties over the real numbers.
An open subset of an affine variety is called a quasi-affine variety, so every affine variety is quasi-affine. Any quasi-affine variety is in turn a quasi-projective variety.
Affine varieties play the role of local charts for algebraic varieties; that is to say, general algebraic varieties such as projective varieties are obtained by gluing affine varieties. Linear structures that are attached to varieties are also (trivially) affine varieties; e.g., tangent spaces, fibers of algebraic vector bundles.
The construction given in § Structure sheaf allows for a generalization that is used in scheme theory, the modern approach to algebraic geometry. An affine variety is (up to an equivalence of categories) a special case of an affine scheme, a locally-ringed space that is isomorphic to the spectrum of a commutative ring. Each affine variety has an affine scheme associated to it: if V(I) is an affine variety in kn with coordinate ring R = k[x1, ..., xn] / I, then the scheme corresponding to V(I) is Spec(R), the set of prime ideals of R. The affine scheme has "classical points", which correspond with points of the variety (and hence maximal ideals of the coordinate ring of the variety), and also a point for each closed subvariety of the variety (these points correspond to prime, non-maximal ideals of the coordinate ring). This creates a well-defined notion of the "generic point" of an affine variety, by assigning to each closed subvariety a point that is dense in the subvariety. More generally, an affine scheme is an affine variety if it is reduced, irreducible, and of finite type over an algebraically closed field k.
== Notes ==
== See also ==
Representations on coordinate rings
== References ==
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
Fulton, William (1969). Algebraic Curves (PDF). Addison-Wesley. ISBN 0-201-51010-3.
Milne, James S. (2017). "Algebraic Geometry" (PDF). www.jmilne.org. Retrieved 16 July 2021.
Milne, James S. Lectures on Étale cohomology
Mumford, David (1999). The Red Book of Varieties and Schemes: Includes the Michigan Lectures (1974) on Curves and Their Jacobians. Lecture Notes in Mathematics. Vol. 1358 (2nd ed.). Springer-Verlag. doi:10.1007/b62130. ISBN 354063293X.
Reid, Miles (1988). Undergraduate Algebraic Geometry. Cambridge University Press. ISBN 0-521-35662-8.
In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x.
== Notation ==
Notations expressing that f is a functional square root of g are f = g[1/2] and f = g1/2, or rather f = g 1/2 (see Iterated Function), although this leaves the usual ambiguity with taking the function to that power in the multiplicative sense, just as f ² = f ∘ f can be misinterpreted as x ↦ f(x)².
== History ==
The functional square root of the exponential function (now known as a half-exponential function) was studied by Hellmuth Kneser in 1950, later providing the basis for extending tetration to non-integer heights in 2017.
The solutions of f(f(x)) = x over ℝ (the involutions of the real numbers) were first studied by Charles Babbage in 1815, and this equation is called Babbage's functional equation. A particular solution is f(x) = (b − x)/(1 + cx) for bc ≠ −1. Babbage noted that for any given solution f, its functional conjugate Ψ⁻¹ ∘ f ∘ Ψ by an arbitrary invertible function Ψ is also a solution. In other words, the group of all invertible functions on the real line acts on the subset consisting of solutions to Babbage's functional equation by conjugation.
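Both observations are easy to verify numerically. The sketch below checks the particular solution and one conjugate of it; the parameter values and the conjugating map Ψ(x) = x³ are arbitrary choices for illustration:

```python
import math

# Babbage's particular solution f(x) = (b - x)/(1 + c*x), with b*c != -1,
# is an involution: f(f(x)) = x wherever both sides are defined.
b, c = 2.0, 3.0

def f(x):
    return (b - x) / (1 + c * x)

for x in (0.25, 1.0, -0.1):
    assert abs(f(f(x)) - x) < 1e-12

# Conjugating by an invertible map Psi yields another involution
# g = Psi^{-1} ∘ f ∘ Psi; here Psi(x) = x^3, invertible on all of R.
def psi(x):
    return x ** 3

def psi_inv(y):
    return math.copysign(abs(y) ** (1 / 3), y)  # real cube root

def g(x):
    return psi_inv(f(psi(x)))

for x in (0.5, 1.1):
    assert abs(g(g(x)) - x) < 1e-9
```

The sample points are chosen to avoid the pole at x = −1/c.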
== Solutions ==
A systematic procedure to produce arbitrary functional n-roots (including arbitrary real, negative, and infinitesimal n) of functions
g : ℂ → ℂ relies on the solutions of Schröder's equation. Infinitely many trivial solutions exist when the domain of a root function f is allowed to be sufficiently larger than that of g.
== Examples ==
f(x) = 2x2 is a functional square root of g(x) = 8x4.
A functional square root of the nth Chebyshev polynomial, g(x) = Tn(x), is f(x) = cos(√n arccos(x)), which in general is not a polynomial.
f(x) = x/(√2 + x(1 − √2)) is a functional square root of g(x) = x/(2 − x).
The figure's curves illustrate iterates of the sine function, writing rin for sin[1/2] and qin for sin[1/4]:
sin[2](x) = sin(sin(x)) [red curve]
sin[1](x) = sin(x) = rin(rin(x)) [blue curve]
sin[1/2](x) = rin(x) = qin(qin(x)) [orange curve]; this half iterate is not unique, since −rin is also a solution of sin = rin ∘ rin
sin[1/4](x) = qin(x) [black curve above the orange curve]
sin[−1](x) = arcsin(x) [dashed curve]
Using this extension, sin[1/2](1) can be shown to be approximately equal to 0.90871.
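The closed-form examples above can be checked numerically. The sketch below (plain Python; the sample points are chosen to stay inside each map's domain) composes each candidate square root with itself and compares against g:

```python
import math

def compose(f, g):
    return lambda x: f(g(x))

f1 = lambda x: 2 * x**2                   # candidate square root of g1(x) = 8x^4
g1 = lambda x: 8 * x**4

r2 = math.sqrt(2)
f2 = lambda x: x / (r2 + x * (1 - r2))    # candidate square root of g2(x) = x/(2 - x)
g2 = lambda x: x / (2 - x)

f3 = lambda x: math.cos(math.sqrt(3) * math.acos(x))  # square root of T_3
g3 = lambda x: 4 * x**3 - 3 * x           # Chebyshev polynomial T_3(x)

for f, g in ((f1, g1), (f2, g2), (f3, g3)):
    for x in (0.1, 0.3, 0.7):
        assert abs(compose(f, f)(x) - g(x)) < 1e-9
```

For the Chebyshev case the identity f(f(x)) = T3(x) holds only where √3·arccos(x) stays in [0, π], which the chosen points satisfy.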
== See also ==
== References ==
In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality.
A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set.
An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions.
For example, the equation x + y = 2x – 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) – 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x – 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0) (that is, x = 1, y = 0), and a = 1 gives (x, y) = (2, 1).
The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y.
However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quadratic equations. However, for some problems, all variables may assume either role.
Depending on the context, solving an equation may consist of finding any solution (a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as, generally, solution methods start from a particular solution and search for a better one, repeating the process until the best solution is eventually found.
== Overview ==
One general form of an equation is
f(x1, ..., xn) = c,
where f is a function, x1, ..., xn are the unknowns, and c is a constant. Its solutions are the elements of the inverse image (fiber)
f⁻¹(c) = { (a1, ..., an) ∈ D | f(a1, ..., an) = c },
where D is the domain of the function f. The set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite (there are infinitely many solutions).
For example, an equation such as
3x + 2y = 21z,
with unknowns x, y and z, can be put in the above form by subtracting 21z from both sides of the equation, to obtain
3x + 2y − 21z = 0.
In this particular case there is not just one solution, but an infinite set of solutions, which can be written using set builder notation as
{ (x, y, z) | 3x + 2y − 21z = 0 }.
One particular solution is x = 0, y = 0, z = 0. Two other solutions are x = 3, y = 6, z = 1, and x = 8, y = 9, z = 2. There is a unique plane in three-dimensional space which passes through the three points with these coordinates, and this plane is the set of all points whose coordinates are solutions of the equation.
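These sample solutions can be verified, and further integer solutions generated, by a short computation. The parametrization below is an illustration added here, not from the text:

```python
# Check the three sample solutions of 3x + 2y = 21z.
for x, y, z in [(0, 0, 0), (3, 6, 1), (8, 9, 2)]:
    assert 3 * x + 2 * y == 21 * z

# One integer parametrization: x = 7s - 2u, y = 3u, z = s, for integers s, u.
# (3(7s - 2u) + 2(3u) = 21s, so every such triple solves the equation.)
def solution(s, u):
    return (7 * s - 2 * u, 3 * u, s)

assert solution(1, 2) == (3, 6, 1)
for s in range(-3, 4):
    for u in range(-3, 4):
        x, y, z = solution(s, u)
        assert 3 * x + 2 * y - 21 * z == 0
```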
== Solution sets ==
The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all the equations or inequalities.
If the solution set is empty, then there are no values of the unknowns that satisfy simultaneously all equations and inequalities.
For a simple example, consider the equation
x² = 2.
This equation can be viewed as a Diophantine equation, that is, an equation for which only integer solutions are sought. In this case, the solution set is the empty set, since 2 is not the square of an integer. However, if one searches for real solutions, there are two solutions, √2 and –√2; in other words, the solution set is {√2, −√2}.
When an equation contains several unknowns, and when one has several equations with more unknowns than equations, the solution set is often infinite. In this case, the solutions cannot be listed. For representing them, a parametrization is often useful, which consists of expressing the solutions in terms of some of the unknowns or auxiliary variables. This is always possible when all the equations are linear.
Such infinite solution sets can naturally be interpreted as geometric shapes such as lines, curves (see picture), planes, and more generally algebraic varieties or manifolds. In particular, algebraic geometry may be viewed as the study of solution sets of algebraic equations.
== Methods of solution ==
The methods for solving equations generally depend on the type of equation, both the kind of expressions in the equation and the kind of values that may be assumed by the unknowns. The variety in types of equations is large, and so are the corresponding methods. Only a few specific types are mentioned below.
In general, given a class of equations, there may be no known systematic method (algorithm) that is guaranteed to work. This may be due to a lack of mathematical knowledge; some problems were only solved after centuries of effort. But this also reflects that, in general, no such method can exist: some problems are known to be unsolvable by an algorithm, such as Hilbert's tenth problem, which was proved unsolvable in 1970.
For several classes of equations, algorithms have been found for solving them, some of which have been implemented and incorporated in computer algebra systems, but often require no more sophisticated technology than pencil and paper. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to lead to success.
=== Brute force, trial and error, inspired guess ===
If the solution set of an equation is restricted to a finite set (as is the case for equations in modular arithmetic, for example), or can be limited to a finite number of possibilities (as is the case with some Diophantine equations), the solution set can be found by brute force, that is, by testing each of the possible values (candidate solutions). It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods.
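For example, an equation in modular arithmetic has a finite solution set that can be searched exhaustively; the following sketch tests every residue class:

```python
# Brute-force solve x^2 ≡ 2 (mod 7) by testing all residues 0..6.
mod = 7
sols = [x for x in range(mod) if (x * x - 2) % mod == 0]
assert sols == [3, 4]  # 3^2 = 9 ≡ 2 and 4^2 = 16 ≡ 2 (mod 7)
```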
As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess.
=== Elementary algebra ===
Equations involving linear or simple rational functions of a single real-valued unknown, say x, such as
8x + 7 = 4x + 35 or (4x + 9)/(3x + 4) = 2,
can be solved using the methods of elementary algebra.
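Both sample equations reduce to a linear equation ax + b = cx + d (the second after cross-multiplying). A minimal sketch using exact rational arithmetic; the helper name solve_linear is illustrative:

```python
from fractions import Fraction

# a*x + b = c*x + d  has the solution  x = (d - b)/(a - c)  when a != c.
def solve_linear(a, b, c, d):
    return Fraction(d - b, a - c)

x1 = solve_linear(8, 7, 4, 35)   # 8x + 7 = 4x + 35
assert x1 == 7

# (4x + 9)/(3x + 4) = 2 cross-multiplies to 4x + 9 = 6x + 8 (valid when 3x + 4 != 0)
x2 = solve_linear(4, 9, 6, 8)
assert x2 == Fraction(1, 2)
assert 3 * x2 + 4 != 0           # the solution does not hit the excluded pole
```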
=== Systems of linear equations ===
Smaller systems of linear equations can be solved likewise by methods of elementary algebra. For solving larger systems, algorithms are used that are based on linear algebra. See Gaussian elimination and numerical solution of linear systems.
=== Polynomial equations ===
Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example
4x⁵ − x³ − 3 = 0 (by using the rational root theorem), and x⁶ − 5x³ + 6 = 0 (by using the substitution z = x³, which simplifies this to a quadratic equation in z).
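Both special cases can be handled mechanically. The sketch below applies the rational root theorem to the quintic and the cubing substitution to the sextic (helper names are illustrative):

```python
from fractions import Fraction

# Rational root theorem: a rational root p/q (in lowest terms) of
# 4x^5 - x^3 - 3 = 0 must have p dividing 3 and q dividing 4.
def divisors(n):
    return [d for d in range(1, abs(n) + 1) if n % d == 0]

def poly(x):
    return 4 * x**5 - x**3 - 3

candidates = {Fraction(sign * p, q)
              for p in divisors(3) for q in divisors(4) for sign in (1, -1)}
roots = sorted(r for r in candidates if poly(r) == 0)
assert roots == [1]  # the only rational root is x = 1

# x^6 - 5x^3 + 6 = 0 becomes z^2 - 5z + 6 = 0 under z = x^3, so z ∈ {2, 3}
for z in (2, 3):
    x = z ** (1 / 3)
    assert abs(x**6 - 5 * x**3 + 6) < 1e-9
```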
=== Diophantine equations ===
In Diophantine equations the solutions are required to be integers. In some cases a brute force approach can be used, as mentioned above. In some other cases, in particular if the equation is in one unknown, it is possible to solve the equation for rational-valued unknowns (see Rational root theorem), and then find solutions to the Diophantine equation by restricting the solution set to integer-valued solutions. For example, the polynomial equation
2x⁵ − 5x⁴ − x³ − 7x² + 2x + 3 = 0
has as rational solutions x = −1/2 and x = 3, and so, viewed as a Diophantine equation, it has the unique solution x = 3.
In general, however, Diophantine equations are among the most difficult equations to solve.
=== Inverse functions ===
In the simple case of a function of one variable, say, h(x), we can solve an equation of the form h(x) = c for some constant c by considering what is known as the inverse function of h.
Given a function h : A → B, the inverse function, denoted h⁻¹ : B → A, is a function such that h⁻¹(h(x)) = h(h⁻¹(x)) = x.
Now, if we apply the inverse function to both sides of h(x) = c, where c is a constant value in B, we obtain
h⁻¹(h(x)) = h⁻¹(c), that is, x = h⁻¹(c),
and we have found the solution to the equation. However, depending on the function, the inverse may be difficult to define, or may not be a function on all of the set B (only on some subset), and may have many values at some point.
If just one solution will do, instead of the full solution set, it is actually sufficient if only the functional identity
h(h⁻¹(x)) = x holds. For example, the projection π1 : ℝ² → ℝ defined by π1(x, y) = x has no post-inverse, but it has a pre-inverse π1⁻¹ defined by π1⁻¹(x) = (x, 0). Indeed, the equation π1(x, y) = c is solved by (x, y) = π1⁻¹(c) = (c, 0).
Examples of inverse functions include the nth root (inverse of xn); the logarithm (inverse of ax); the inverse trigonometric functions; and Lambert's W function (inverse of xex).
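Both situations can be checked directly; the sketch below solves h(x) = c via a known inverse (here h = exp) and illustrates a pre-inverse for the projection:

```python
import math

# Solving h(x) = c with a known inverse: h = exp, h^{-1} = log.
c = 5.0
x = math.log(c)
assert abs(math.exp(x) - c) < 1e-12

# A pre-inverse suffices for one solution: pi1(x, y) = x has pre-inverse c -> (c, 0).
def pi1(point):
    return point[0]

def pi1_pre_inverse(c):
    return (c, 0.0)

assert pi1(pi1_pre_inverse(c)) == c
```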
=== Factorization ===
If the left-hand side expression of an equation P = 0 can be factorized as P = QR, the solution set of the original equation consists of the union of the solution sets of the two equations Q = 0 and R = 0.
For example, the equation
tan x + cot x = 2
can be rewritten, using the identity tan x cot x = 1 as
(tan²x − 2 tan x + 1)/tan x = 0,
which can be factorized into
(tan x − 1)²/tan x = 0.
The solutions are thus the solutions of the equation tan x = 1, and are thus the set
x = π/4 + kπ, k = 0, ±1, ±2, ….
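The solution set can be verified numerically at a few values of k:

```python
import math

# Verify that x = pi/4 + k*pi solves tan x + cot x = 2.
for k in range(-3, 4):
    x = math.pi / 4 + k * math.pi
    t = math.tan(x)
    assert abs(t + 1 / t - 2) < 1e-9  # cot x = 1/tan x
```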
=== Numerical methods ===
With more complicated equations in real or complex numbers, simple methods to solve equations can fail. Often, root-finding algorithms like the Newton–Raphson method can be used to find a numerical solution to an equation, which, for some applications, can be entirely sufficient to solve some problem.
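A minimal sketch of the Newton–Raphson iteration follows; the tolerance and iteration cap are illustrative choices:

```python
# Newton-Raphson for f(x) = 0: iterate x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x^2 - 2 = 0, i.e. approximate sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
assert abs(root - 2 ** 0.5) < 1e-10
```

Convergence is quadratic near a simple root, but the method can fail for a poor starting point or where the derivative vanishes.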
There are also numerical methods for systems of linear equations.
=== Matrix equations ===
Equations involving matrices and vectors of real numbers can often be solved by using methods from linear algebra.
=== Differential equations ===
There is a vast body of methods for solving various kinds of differential equations, both numerically and analytically. A particular class of problem that can be considered to belong here is integration, and the analytic methods for solving this kind of problems are now called symbolic integration. Solutions of differential equations can be implicit or explicit.
== See also ==
Extraneous and missing solutions
Simultaneous equations
Equating coefficients
Solving the geodesic equations
Unification (computer science) — solving equations involving symbolic expressions
== References == | Wikipedia/Solution_(mathematics) |
An approximation is anything that is intentionally similar but not exactly equal to something else.
== Etymology and usage ==
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
== Mathematics ==
Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
Approximation usually occurs when an exact form or an exact numerical number is unknown or difficult to obtain. However, some known form may exist and may be able to represent the real form so that no significant deviation can be found. For example, 1.5 × 10⁶ means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between 1,450,000 and 1,550,000); this is in contrast to the notation 1.500 × 10⁶, which means that the true value is 1,500,000 to the nearest thousand (implying that the true value is somewhere between 1,499,500 and 1,500,500).
Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results. Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
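For instance, the decimal number 0.1 has no finite binary expansion, so binary floating point can only approximate it:

```python
from decimal import Decimal

# Decimal(0.1) shows the exact value of the stored double, which is not 0.1.
assert Decimal(0.1) != Decimal("0.1")

# The classic rounding-error example: the approximations accumulate.
assert 0.1 + 0.2 != 0.3
assert abs((0.1 + 0.2) - 0.3) < 1e-15  # but the error is tiny
```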
Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum
k/2 + k/4 + k/8 + ⋯ + k/2ⁿ
is asymptotically equal to k. No consistent notation is used throughout mathematics and some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal whereas other texts use the symbols the other way around.
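The asymptotic behavior of this sum can be observed numerically: the partial sums approach k, and the remaining error after n terms is k/2ⁿ (the choice k = 5 below is arbitrary):

```python
# Partial sums of k/2 + k/4 + ... + k/2^n approach k as n grows.
k = 5.0
for n in (5, 10, 20, 40):
    s = sum(k / 2**i for i in range(1, n + 1))
    assert abs(s - k) <= k / 2**n  # geometric series: error is exactly k/2^n
```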
=== Typography ===
The approximately equals sign, ≈, was introduced by British mathematician Alfred Greenhill in 1892, in his book Applications of Elliptic Functions.
==== LaTeX symbols ====
Typical meanings of LaTeX symbols.
≈ (\approx): approximate equality, as in π ≈ 3.14.
≉ (\not\approx): inequality, despite any approximation (1 ≉ 2).
≃ (\simeq): function asymptotic equivalence, as in f(n) ≃ 3n². Thus, π ≃ 3.14 is wrong under this definition, despite wide use.
∼ (\sim): function proportionality; the f(n) used in \simeq satisfies f(n) ∼ n².
≅ (\cong): figure congruence, as in ΔABC ≅ ΔA′B′C′.
≂ (\eqsim): equal up to a constant.
⪅ (\lessapprox) and ⪆ (\gtrapprox): either an inequality holds or approximate equality.
==== Unicode ====
Approximate equalities denoted by wavy or dotted symbols.
== Science ==
Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.
The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work. The old theory becomes an approximation to the new theory.
Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes.
Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other. An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.
The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions.
The most common versions of philosophy of science accept that empirical measurements are always approximations — they do not perfectly represent what is being measured.
== Law ==
Within the European Union (EU), "approximation" refers to a process through which EU legislation is implemented and incorporated within Member States' national laws, despite variations in the existing legal framework in each country. Approximation is required as part of the pre-accession process for new member states, and as a continuing process when required by an EU Directive. Approximation is a key word generally employed within the title of a directive, for example the Trade Marks Directive of 16 December 2015 serves "to approximate the laws of the Member States relating to trade marks". The European Commission describes approximation of law as "a unique obligation of membership in the European Union".
== See also ==
== References ==
== External links ==
Media related to Approximation at Wikimedia Commons
In mathematics, a real-valued function is a function whose values are real numbers. In other words, it is a function that assigns a real number to each member of its domain.
Real-valued functions of a real variable (commonly called real functions) and real-valued functions of several real variables are the main object of study of calculus and, more generally, real analysis. In particular, many function spaces consist of real-valued functions.
== Algebraic structure ==
Let
F
(
X
,
R
)
{\displaystyle {\mathcal {F}}(X,{\mathbb {R} })}
be the set of all functions from a set X to real numbers
R
{\displaystyle \mathbb {R} }
. Because
R
{\displaystyle \mathbb {R} }
is a field,
F
(
X
,
R
)
{\displaystyle {\mathcal {F}}(X,{\mathbb {R} })}
may be turned into a vector space and a commutative algebra over the reals with the following operations:
f + g : x ↦ f(x) + g(x) – vector addition
0 : x ↦ 0 – additive identity
cf : x ↦ cf(x), c ∈ ℝ – scalar multiplication
fg : x ↦ f(x)g(x) – pointwise multiplication
These operations extend to partial functions from X to ℝ, with the restriction that the partial functions f + g and fg are defined only if the domains of f and g have a nonempty intersection; in this case, their domain is the intersection of the domains of f and g.
Also, since ℝ is an ordered set, there is a partial order f ≤ g ⟺ ∀x : f(x) ≤ g(x) on F(X, ℝ), which makes F(X, ℝ) a partially ordered ring.
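The pointwise operations and the pointwise partial order above can be sketched in a few lines of Python; the helper names here are illustrative, not from any library, and the order can only be checked on an explicitly given finite domain:

```python
# Pointwise operations on real-valued functions with a common domain.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)

def mul(f, g):
    return lambda x: f(x) * g(x)

def leq(f, g, X):
    # f <= g means f(x) <= g(x) for every x; here X is a finite sample
    # of the domain, so this is only a check, not a proof.
    return all(f(x) <= g(x) for x in X)

f = lambda x: x * x
g = lambda x: x * x + 1.0
X = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(add(f, g)(2.0))   # f(2) + g(2) = 4 + 5 = 9.0
print(leq(f, g, X))     # True: x^2 <= x^2 + 1 at every sample point
```

Note that each operation returns a new function, mirroring how F(X, ℝ) is closed under the vector-space and algebra operations.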
== Measurable ==
The σ-algebra of Borel sets is an important structure on real numbers. If X has its σ-algebra and a function f is such that the preimage f −1(B) of any Borel set B belongs to that σ-algebra, then f is said to be measurable. Measurable functions also form a vector space and an algebra as explained above in § Algebraic structure.
Moreover, a set (family) of real-valued functions on X can actually define a σ-algebra on X generated by all preimages of all Borel sets (or of intervals only, it is not important). This is the way how σ-algebras arise in (Kolmogorov's) probability theory, where real-valued functions on the sample space Ω are real-valued random variables.
== Continuous ==
Real numbers form a topological space and a complete metric space. Continuous real-valued functions (which implies that X is a topological space) are important in theories of topological spaces and of metric spaces. The extreme value theorem states that for any real continuous function on a compact space its global maximum and minimum exist.
The concept of metric space itself is defined with a real-valued function of two variables, the metric, which is continuous. The space of continuous functions on a compact Hausdorff space has a particular importance. Convergent sequences also can be considered as real-valued continuous functions on a special topological space.
Continuous functions also form a vector space and an algebra as explained above in § Algebraic structure, and are a subclass of measurable functions because any topological space has the σ-algebra generated by open (or closed) sets.
== Smooth ==
Real numbers are used as the codomain to define smooth functions. A domain of a real smooth function can be the real coordinate space (which yields a real multivariable function), a topological vector space, an open subset of them, or a smooth manifold.
Spaces of smooth functions also are vector spaces and algebras as explained above in § Algebraic structure and are subspaces of the space of continuous functions.
== Appearances in measure theory ==
A measure on a set is a non-negative real-valued functional on a σ-algebra of subsets. Lp spaces on sets with a measure are defined from the aforementioned real-valued measurable functions, although they are actually quotient spaces. More precisely, whereas a function satisfying an appropriate summability condition defines an element of Lp space, in the opposite direction for any f ∈ Lp(X) and x ∈ X which is not an atom, the value f(x) is undefined. Still, real-valued Lp spaces retain some of the structure described above in § Algebraic structure. Each of the Lp spaces is a vector space and has a partial order, and there exists a pointwise multiplication of "functions" which changes p, namely · : L^{1/α} × L^{1/β} → L^{1/(α+β)}, 0 ≤ α, β ≤ 1, α + β ≤ 1.
For example, pointwise product of two L2 functions belongs to L1.
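In the discrete (counting-measure) case, the L² × L² → L¹ instance is just the Cauchy–Schwarz inequality ‖fg‖₁ ≤ ‖f‖₂‖g‖₂. The sketch below checks it numerically for two finite sequences, a toy stand-in for L² functions rather than a proof:

```python
# Discrete check of L^2 x L^2 -> L^1 (the case alpha = beta = 1/2):
# the pointwise product of two square-summable sequences is summable,
# with ||fg||_1 <= ||f||_2 * ||g||_2 (Cauchy-Schwarz).
def norm_p(seq, p):
    return sum(abs(x) ** p for x in seq) ** (1.0 / p)

f = [1.0 / (n + 1) for n in range(1000)]          # in l^2
g = [1.0 / (n + 1) ** 0.75 for n in range(1000)]  # in l^2
fg = [a * b for a, b in zip(f, g)]

lhs = norm_p(fg, 1)
rhs = norm_p(f, 2) * norm_p(g, 2)
print(lhs <= rhs)  # True
```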
== Other appearances ==
Other contexts where real-valued functions and their special properties are used include monotonic functions (on ordered sets), convex functions (on vector and affine spaces), harmonic and subharmonic functions (on Riemannian manifolds), analytic functions (usually of one or more real variables), algebraic functions (on real algebraic varieties), and polynomials (of one or more real variables).
== See also ==
Real analysis
Partial differential equations, a major user of real-valued functions
Norm (mathematics)
Scalar (mathematics)
== Footnotes ==
== References ==
Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Addison–Wesley. ISBN 978-0-201-00288-1.
Gerald Folland, Real Analysis: Modern Techniques and Their Applications, Second Edition, John Wiley & Sons, Inc., 1999, ISBN 0-471-31716-0.
Rudin, Walter (1976). Principles of Mathematical Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-054235-8.
== External links ==
Weisstein, Eric W. "Real Function". MathWorld.
In mathematics, a field F is algebraically closed if every non-constant polynomial in F[x] (the univariate polynomial ring with coefficients in F) has a root in F. In other words, a field is algebraically closed if the fundamental theorem of algebra holds for it.
Every field K is contained in an algebraically closed field C, and the roots in C of the polynomials with coefficients in K form an algebraically closed field called an algebraic closure of K. Given two algebraic closures of K, there are isomorphisms between them that fix the elements of K.
Algebraically closed fields appear in the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields
== Examples ==
As an example, the field of real numbers is not algebraically closed, because the polynomial equation x² + 1 = 0 has no solution in real numbers, even though all its coefficients (1 and 0) are real. The same argument proves that no subfield of the real field is algebraically closed; in particular, the field of rational numbers is not algebraically closed. By contrast, the fundamental theorem of algebra states that the field of complex numbers is algebraically closed. Another example of an algebraically closed field is the field of (complex) algebraic numbers.
No finite field F is algebraically closed, because if a1, a2, ..., an are the elements of F, then the polynomial (x − a1)(x − a2) ⋯ (x − an) + 1
has no zero in F. However, the union of all finite fields of a fixed characteristic p (p prime) is an algebraically closed field, which is, in fact, the algebraic closure of the field 𝔽_p with p elements.
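The finite-field argument can be checked directly: the sketch below builds the polynomial (x − a₁)⋯(x − aₙ) + 1 over 𝔽₅ and confirms it takes the value 1 (hence has no root) at every element:

```python
p = 5  # work in the finite field F_5 = {0, 1, 2, 3, 4}

def poly_at(x):
    # (x - a_1)(x - a_2)...(x - a_n) + 1 evaluated mod p,
    # where a_1, ..., a_n run over all elements of F_5
    prod = 1
    for a in range(p):
        prod = (prod * (x - a)) % p
    return (prod + 1) % p

# Every element of F_5 is a root of the product part, so adding 1
# makes the value 1 at every point: the polynomial has no root in F_5.
print([poly_at(x) for x in range(p)])  # [1, 1, 1, 1, 1]
```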
The field ℂ(x) of rational functions with complex coefficients is not algebraically closed; for example, the polynomial y² − x has roots ±√x, which are not elements of ℂ(x).
== Equivalent properties ==
Given a field F, the assertion "F is algebraically closed" is equivalent to other assertions:
=== The only irreducible polynomials are those of degree one ===
The field F is algebraically closed if and only if the only irreducible polynomials in the polynomial ring F[x] are those of degree one.
The assertion "the polynomials of degree one are irreducible" is trivially true for any field. If F is algebraically closed and p(x) is an irreducible polynomial of F[x], then it has some root a and therefore p(x) is a multiple of x − a. Since p(x) is irreducible, this means that p(x) = k(x − a), for some k ∈ F \ {0} . On the other hand, if F is not algebraically closed, then there is some non-constant polynomial p(x) in F[x] without roots in F. Let q(x) be some irreducible factor of p(x). Since p(x) has no roots in F, q(x) also has no roots in F. Therefore, q(x) has degree greater than one, since every first degree polynomial has one root in F.
=== Every polynomial is a product of first degree polynomials ===
The field F is algebraically closed if and only if every polynomial p(x) of degree n ≥ 1, with coefficients in F, splits into linear factors. In other words, there are elements k, x1, x2, ..., xn of the field F such that p(x) = k(x − x1)(x − x2) ⋯ (x − xn).
If F has this property, then clearly every non-constant polynomial in F[x] has some root in F; in other words, F is algebraically closed. On the other hand, that the property stated here holds for F if F is algebraically closed follows from the previous property together with the fact that, for any field K, any polynomial in K[x] can be written as a product of irreducible polynomials.
=== Polynomials of prime degree have roots ===
If every polynomial over F of prime degree has a root in F, then every non-constant polynomial has a root in F. It follows that a field is algebraically closed if and only if every polynomial over F of prime degree has a root in F.
=== The field has no proper algebraic extension ===
The field F is algebraically closed if and only if it has no proper algebraic extension.
If F has no proper algebraic extension, let p(x) be some irreducible polynomial in F[x]. Then the quotient of F[x] modulo the ideal generated by p(x) is an algebraic extension of F whose degree is equal to the degree of p(x). Since it is not a proper extension, its degree is 1 and therefore the degree of p(x) is 1.
On the other hand, if F has some proper algebraic extension K, then the minimal polynomial of an element in K \ F is irreducible and its degree is greater than 1.
=== The field has no proper finite extension ===
The field F is algebraically closed if and only if it has no proper finite extension because if, within the previous proof, the term "algebraic extension" is replaced by the term "finite extension", then the proof is still valid. (Finite extensions are necessarily algebraic.)
=== Every endomorphism of Fn has some eigenvector ===
The field F is algebraically closed if and only if, for each natural number n, every linear map from Fn into itself has some eigenvector.
An endomorphism of Fn has an eigenvector if and only if its characteristic polynomial has some root. Therefore, when F is algebraically closed, every endomorphism of Fn has some eigenvector. On the other hand, if every endomorphism of Fn has an eigenvector, let p(x) be an element of F[x]. Dividing by its leading coefficient, we get another polynomial q(x) which has roots if and only if p(x) has roots. But if q(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_0, then q(x) is the characteristic polynomial of the n×n companion matrix
{\displaystyle {\begin{pmatrix}0&0&\cdots &0&-a_{0}\\1&0&\cdots &0&-a_{1}\\0&1&\cdots &0&-a_{2}\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &1&-a_{n-1}\end{pmatrix}}.}
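A quick consistency check (not a proof) is that the companion matrix C of q satisfies q(C) = 0, since by Cayley–Hamilton a matrix satisfies its own characteristic polynomial. The sketch below verifies this for q(x) = x³ + 2x² + 3x + 4 with exact integer arithmetic:

```python
# Companion matrix of q(x) = x^3 + a2 x^2 + a1 x + a0, with a0=4, a1=3, a2=2.
a = [4, 3, 2]
n = 3
C = [[0, 0, -a[0]],
     [1, 0, -a[1]],
     [0, 1, -a[2]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def scalar(c):
    # c times the identity matrix
    return [[c if i == j else 0 for j in range(n)] for i in range(n)]

C2 = matmul(C, C)
C3 = matmul(C2, C)
# q(C) = C^3 + 2 C^2 + 3 C + 4 I should be the zero matrix
qC = matadd(matadd(C3, matmul(scalar(2), C2)),
            matadd(matmul(scalar(3), C), scalar(4)))
print(qC)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```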
=== Decomposition of rational expressions ===
The field F is algebraically closed if and only if every rational function in one variable x, with coefficients in F, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)^n, where n is a natural number, and a and b are elements of F.
If F is algebraically closed then, since the irreducible polynomials in F[x] are all of degree 1, the property stated above holds by the theorem on partial fraction decomposition.
On the other hand, suppose that the property stated above holds for the field F. Let p(x) be an irreducible element in F[x]. Then the rational function 1/p can be written as the sum of a polynomial function q with rational functions of the form a/(x − b)^n. Therefore, the rational expression
1/p(x) − q(x) = (1 − p(x)q(x))/p(x)
can be written as a quotient of two polynomials in which the denominator is a product of first degree polynomials. Since p(x) is irreducible, it must divide this product and, therefore, it must also be a first degree polynomial.
=== Relatively prime polynomials and roots ===
For any field F, if two polynomials p(x), q(x) ∈ F[x] are relatively prime then they do not have a common root, for if a ∈ F was a common root, then p(x) and q(x) would both be multiples of x − a and therefore they would not be relatively prime. The fields for which the reverse implication holds (that is, the fields such that whenever two polynomials have no common root then they are relatively prime) are precisely the algebraically closed fields.
If the field F is algebraically closed, let p(x) and q(x) be two polynomials which are not relatively prime and let r(x) be their greatest common divisor. Then, since r(x) is not constant, it will have some root a, which will be then a common root of p(x) and q(x).
If F is not algebraically closed, let p(x) be a polynomial whose degree is at least 1 without roots in F. Then p(x) and p(x) are not relatively prime (a polynomial of degree at least 1 is not coprime to itself), but they have no common roots, since p(x) has none.
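The forward direction can be illustrated with a Euclidean gcd over the rationals; in the sketch below (the helper functions are our own, not from a library) the gcd of two non-coprime polynomials is non-constant, and its root 1 is a common root of both:

```python
from fractions import Fraction

# Polynomials as coefficient lists, lowest degree first, over Q.
def poly_mod(a, b):
    # remainder of a divided by b (b nonzero)
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b) and any(a):
        while a and a[-1] == 0:
            a.pop()
        if len(a) < len(b):
            break
        factor = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= factor * c
        while a and a[-1] == 0:
            a.pop()
    return a

def poly_gcd(a, b):
    # Euclidean algorithm; result normalized to a monic polynomial
    while b:
        a, b = b, poly_mod(a, b)
    return [c / a[-1] for c in a]

p = [2, -3, 1]   # x^2 - 3x + 2 = (x - 1)(x - 2)
q = [-1, 0, 1]   # x^2 - 1     = (x - 1)(x + 1)
g = poly_gcd(p, q)
print(g)  # [Fraction(-1, 1), Fraction(1, 1)], i.e. x - 1: 1 is a common root
```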
== Other properties ==
If F is an algebraically closed field and n is a natural number, then F contains all nth roots of unity, because these are (by definition) the n (not necessarily distinct) zeroes of the polynomial x^n − 1. A field extension that is contained in an extension generated by the roots of unity is a cyclotomic extension, and the extension of a field generated by all roots of unity is sometimes called its cyclotomic closure. Thus algebraically closed fields are cyclotomically closed. The converse is not true. Even assuming that every polynomial of the form x^n − a splits into linear factors is not enough to assure that the field is algebraically closed.
If a proposition which can be expressed in the language of first-order logic is true for an algebraically closed field, then it is true for every algebraically closed field with the same characteristic. Furthermore, if such a proposition is valid for an algebraically closed field with characteristic 0, then not only is it valid for all other algebraically closed fields with characteristic 0, but there is some natural number N such that the proposition is valid for every algebraically closed field with characteristic p when p > N.
Every field F has some extension which is algebraically closed. Such an extension is called an algebraically closed extension. Among all such extensions there is one and only one (up to isomorphism, but not unique isomorphism) which is an algebraic extension of F; it is called the algebraic closure of F.
The theory of algebraically closed fields has quantifier elimination.
== Notes ==
== References ==
In mathematics, an alternating algebra is a Z-graded algebra for which xy = (−1)deg(x)deg(y)yx for all nonzero homogeneous elements x and y (i.e. it is an anticommutative algebra) and has the further property that x2 = 0 (nilpotence) for every homogeneous element x of odd degree.
== Examples ==
The differential forms on a differentiable manifold form an alternating algebra.
The exterior algebra is an alternating algebra.
The cohomology ring of a topological space is an alternating algebra.
== Properties ==
The algebra formed as the direct sum of the homogeneous subspaces of even degree of an anticommutative algebra A is a subalgebra contained in the centre of A, and is thus commutative.
An anticommutative algebra A over a (commutative) base ring R in which 2 is not a zero divisor is alternating.
== See also ==
Alternating multilinear map
Exterior algebra
Graded-symmetric algebra
Supercommutative algebra
== References ==
In mathematics, in particular abstract algebra, a graded ring is a ring such that the underlying additive group is a direct sum of abelian groups R_i such that R_i R_j ⊆ R_{i+j}. The index set is usually the set of nonnegative integers or the set of integers, but can be any monoid. The direct sum decomposition is usually referred to as gradation or grading.
A graded module is defined similarly (see below for the precise definition). It generalizes graded vector spaces. A graded module that is also a graded ring is called a graded algebra. A graded ring could also be viewed as a graded ℤ-algebra.
The associativity is not important (in fact not used at all) in the definition of a graded ring; hence, the notion applies to non-associative algebras as well; e.g., one can consider a graded Lie algebra.
== First properties ==
Generally, the index set of a graded ring is assumed to be the set of nonnegative integers, unless otherwise explicitly specified. This is the case in this article.
A graded ring is a ring that is decomposed into a direct sum R = ⨁_{n=0}^{∞} R_n = R_0 ⊕ R_1 ⊕ R_2 ⊕ ⋯ of additive groups, such that R_m R_n ⊆ R_{m+n} for all nonnegative integers m and n.
A nonzero element of R_n is said to be homogeneous of degree n. By definition of a direct sum, every nonzero element a of R can be uniquely written as a sum a = a_0 + a_1 + ⋯ + a_n where each a_i is either 0 or homogeneous of degree i. The nonzero a_i are the homogeneous components of a.
Some basic properties are:
R_0 is a subring of R; in particular, the multiplicative identity 1 is a homogeneous element of degree zero.
For any n, R_n is a two-sided R_0-module, and the direct sum decomposition is a direct sum of R_0-modules.
R is an associative R_0-algebra.
An ideal I ⊆ R is homogeneous if, for every a ∈ I, the homogeneous components of a also belong to I. (Equivalently, if it is a graded submodule of R; see § Graded module.) The intersection of a homogeneous ideal I with R_n is an R_0-submodule of R_n called the homogeneous part of degree n of I. A homogeneous ideal is the direct sum of its homogeneous parts.
If I is a two-sided homogeneous ideal in R, then R/I is also a graded ring, decomposed as R/I = ⨁_{n=0}^{∞} R_n/I_n, where I_n is the homogeneous part of degree n of I.
== Basic examples ==
Any (non-graded) ring R can be given a gradation by letting R_0 = R and R_i = 0 for i ≠ 0. This is called the trivial gradation on R.
The polynomial ring R = k[t_1, …, t_n] is graded by degree: it is a direct sum of R_i consisting of homogeneous polynomials of degree i.
Let S be the set of all nonzero homogeneous elements in a graded integral domain R. Then the localization of R with respect to S is a ℤ-graded ring.
If I is an ideal in a commutative ring R, then ⨁_{n=0}^{∞} I^n/I^{n+1} is a graded ring called the associated graded ring of R along I; geometrically, it is the coordinate ring of the normal cone along the subvariety defined by I.
Let X be a topological space, H^i(X; R) the ith cohomology group with coefficients in a ring R. Then H*(X; R), the cohomology ring of X with coefficients in R, is a graded ring whose underlying group is ⨁_{i=0}^{∞} H^i(X; R), with the multiplicative structure given by the cup product.
== Graded module ==
The corresponding idea in module theory is that of a graded module, namely a left module M over a graded ring R such that M = ⨁_{i∈ℕ} M_i and R_i M_j ⊆ M_{i+j} for every i and j.
Examples:
A graded vector space is an example of a graded module over a field (with the field having trivial grading).
A graded ring is a graded module over itself. An ideal in a graded ring is homogeneous if and only if it is a graded submodule. The annihilator of a graded module is a homogeneous ideal.
Given an ideal I in a commutative ring R and an R-module M, the direct sum ⨁_{n=0}^{∞} I^n M/I^{n+1} M is a graded module over the associated graded ring ⨁_{n=0}^{∞} I^n/I^{n+1}.
A morphism f : N → M of graded modules, called a graded morphism or graded homomorphism, is a homomorphism of the underlying modules that respects grading; i.e., f(N_i) ⊆ M_i. A graded submodule is a submodule that is a graded module in its own right and such that the set-theoretic inclusion is a morphism of graded modules. Explicitly, a graded module N is a graded submodule of M if and only if it is a submodule of M and satisfies N_i = N ∩ M_i. The kernel and the image of a morphism of graded modules are graded submodules.
Remark: To give a graded morphism from a graded ring to another graded ring with the image lying in the center is the same as to give the structure of a graded algebra to the latter ring.
Given a graded module M, the ℓ-twist of M is a graded module defined by M(ℓ)_n = M_{n+ℓ} (cf. Serre's twisting sheaf in algebraic geometry).
Let M and N be graded modules. If f : M → N is a morphism of modules, then f is said to have degree d if f(M_n) ⊆ N_{n+d}. An exterior derivative of differential forms in differential geometry is an example of such a morphism having degree 1.
== Invariants of graded modules ==
Given a graded module M over a commutative graded ring R, one can associate the formal power series P(M, t) ∈ ℤ[[t]] given by P(M, t) = ∑ ℓ(M_n) t^n (assuming the lengths ℓ(M_n) are finite). It is called the Hilbert–Poincaré series of M.
A graded module is said to be finitely generated if the underlying module is finitely generated. The generators may be taken to be homogeneous (by replacing the generators by their homogeneous parts.)
Suppose R is a polynomial ring k[x_0, …, x_n], k a field, and M a finitely generated graded module over it. Then the function n ↦ dim_k M_n is called the Hilbert function of M. The function coincides, for large n, with an integer-valued polynomial called the Hilbert polynomial of M.
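For M = R itself, dim_k of the degree-d part is the number of monomials of total degree d, a binomial coefficient; the sketch below cross-checks the closed form against brute-force enumeration (function names are ours):

```python
from math import comb
from itertools import product

def hilbert_function_poly_ring(num_vars, d):
    # dim_k of the degree-d part of k[x_1, ..., x_{num_vars}]:
    # the number of exponent tuples summing to d, C(d + num_vars - 1, num_vars - 1)
    return comb(d + num_vars - 1, num_vars - 1)

def count_monomials(num_vars, d):
    # brute force: enumerate exponent tuples summing to d
    return sum(1 for e in product(range(d + 1), repeat=num_vars)
               if sum(e) == d)

for d in range(6):
    assert hilbert_function_poly_ring(3, d) == count_monomials(3, d)
print([hilbert_function_poly_ring(3, d) for d in range(6)])  # [1, 3, 6, 10, 15, 21]
```

Here the Hilbert function is already a polynomial in d for all d, consistent with the statement that it agrees with the Hilbert polynomial for large n.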
== Graded algebra ==
An associative algebra A over a ring R is a graded algebra if it is graded as a ring.
In the usual case where the ring R is not graded (in particular if R is a field), it is given the trivial grading (every element of R is of degree 0). Thus R ⊆ A_0 and the graded pieces A_i are R-modules.
In the case where the ring R is also a graded ring, then one requires that R_i A_j ⊆ A_{i+j}. In other words, we require A to be a graded left module over R.
Examples of graded algebras are common in mathematics:
Polynomial rings. The homogeneous elements of degree n are exactly the homogeneous polynomials of degree n.
The tensor algebra T^•V of a vector space V. The homogeneous elements of degree n are the tensors of order n, T^n V.
The exterior algebra ⋀^•V and the symmetric algebra S^•V are also graded algebras.
The cohomology ring H^• in any cohomology theory is also graded, being the direct sum of the cohomology groups H^n.
Graded algebras are much used in commutative algebra and algebraic geometry, homological algebra, and algebraic topology. One example is the close relationship between homogeneous polynomials and projective varieties (cf. Homogeneous coordinate ring.)
== G-graded rings and algebras ==
The above definitions have been generalized to rings graded using any monoid G as an index set. A G-graded ring R is a ring with a direct sum decomposition R = ⨁_{i∈G} R_i such that R_i R_j ⊆ R_{i·j}. Elements of R that lie inside R_i for some i ∈ G are said to be homogeneous of grade i.
The previously defined notion of "graded ring" now becomes the same thing as an ℕ-graded ring, where ℕ is the monoid of natural numbers under addition. The definitions for graded modules and algebras can also be extended this way, replacing the indexing set ℕ with any monoid G.
Remarks:
If we do not require that the ring have an identity element, semigroups may replace monoids.
Examples:
A group naturally grades the corresponding group ring; similarly, monoid rings are graded by the corresponding monoid.
An (associative) superalgebra is another term for a ℤ_2-graded algebra. Examples include Clifford algebras. Here the homogeneous elements are either of degree 0 (even) or 1 (odd).
=== Anticommutativity ===
Some graded rings (or algebras) are endowed with an anticommutative structure. This notion requires a homomorphism of the monoid of the gradation into the additive monoid of ℤ/2ℤ, the field with two elements. Specifically, a signed monoid consists of a pair (Γ, ε) where Γ is a monoid and ε : Γ → ℤ/2ℤ is a homomorphism of additive monoids. An anticommutative Γ-graded ring is a ring A graded with respect to Γ such that xy = (−1)^{ε(deg x)ε(deg y)} yx for all homogeneous elements x and y.
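For the exterior algebra, where ε(deg x) is the degree mod 2, the sign rule can be checked on basis wedge monomials. The sketch below (a minimal representation of our own) sorts a wedge product into canonical order while tracking the sign of the permutation:

```python
def wedge(mono1, mono2):
    """Wedge two basis monomials, each a tuple of strictly increasing
    basis indices; returns (sign, sorted tuple), or (0, ()) if an
    index repeats (since e_i ^ e_i = 0)."""
    combined = list(mono1 + mono2)
    if len(set(combined)) != len(combined):
        return (0, ())
    sign = 1
    # bubble sort, flipping the sign once per transposition
    for i in range(len(combined)):
        for j in range(len(combined) - 1 - i):
            if combined[j] > combined[j + 1]:
                combined[j], combined[j + 1] = combined[j + 1], combined[j]
                sign = -sign
    return (sign, tuple(combined))

x = (1, 3)   # e1 ^ e3, degree 2
y = (2,)     # e2, degree 1
s_xy, m_xy = wedge(x, y)
s_yx, m_yx = wedge(y, x)
# xy = (-1)^(deg x * deg y) yx: here (-1)^(2*1) = +1
print(m_xy == m_yx and s_xy == ((-1) ** (len(x) * len(y))) * s_yx)  # True
```

Running the same check with two degree-1 monomials exhibits the sign flip xy = −yx characteristic of odd elements.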
=== Examples ===
An exterior algebra is an example of an anticommutative algebra, graded with respect to the structure (ℤ, ε) where ε : ℤ → ℤ/2ℤ is the quotient map.
A supercommutative algebra (sometimes called a skew-commutative associative ring) is the same thing as an anticommutative (ℤ, ε)-graded algebra, where ε is the identity map of the additive structure of ℤ/2ℤ.
== Graded monoid ==
Intuitively, a graded monoid is the subset of a graded ring ⨁_{n∈ℕ₀} R_n generated by the R_n's, without using the additive part. That is, the set of elements of the graded monoid is ⋃_{n∈ℕ₀} R_n.
Formally, a graded monoid is a monoid (M, ·) with a gradation function φ : M → ℕ₀ such that φ(m·m′) = φ(m) + φ(m′). Note that the gradation of 1_M is necessarily 0. Some authors further require that φ(m) ≠ 0 when m is not the identity.
Assuming the gradations of non-identity elements are non-zero, the number of elements of gradation n is at most g^n, where g is the cardinality of a generating set G of the monoid. Therefore the number of elements of gradation n or less is at most n + 1 (for g = 1) or (g^{n+1} − 1)/(g − 1) otherwise. Indeed, each such element is the product of at most n elements of G, and only (g^{n+1} − 1)/(g − 1) such products exist. Similarly, the identity element cannot be written as the product of two non-identity elements. That is, there is no unit divisor in such a graded monoid.
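For the free monoid of words over an alphabet of size g, graded by word length, the bound (g^{n+1} − 1)/(g − 1) is attained exactly; the sketch below verifies this by enumeration:

```python
from itertools import product

def count_words_up_to(alphabet, n):
    # number of words of length <= n over the alphabet, including the
    # empty word (the identity of the free monoid)
    return sum(len(alphabet) ** k for k in range(n + 1))

g, n = 3, 4
expected = (g ** (n + 1) - 1) // (g - 1)
brute = sum(1 for k in range(n + 1)
            for _ in product("abc", repeat=k))
print(count_words_up_to("abc", n), brute, expected)  # 121 121 121
```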
=== Power series indexed by a graded monoid ===
These notions allow us to extend the notion of power series ring. Instead of the indexing family being ℕ, the indexing family could be any graded monoid, assuming that the number of elements of degree n is finite, for each integer n.
More formally, let (K, +_K, ×_K) be an arbitrary semiring and (R, ·, φ) a graded monoid. Then K⟨⟨R⟩⟩ denotes the semiring of power series with coefficients in K indexed by R. Its elements are functions from R to K. The sum of two elements s, s′ ∈ K⟨⟨R⟩⟩ is defined pointwise: it is the function sending m ∈ R to s(m) +_K s′(m). The product is the function sending m ∈ R to the infinite sum ∑_{p·q=m} s(p) ×_K s′(q). This sum is well defined (i.e., finite) because, for each m, there are only a finite number of pairs (p, q) such that pq = m.
=== Free monoid ===
In formal language theory, given an alphabet A, the free monoid of words over A can be considered as a graded monoid, where the gradation of a word is its length.
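Combining the two ideas above, power series indexed by the free monoid multiply by convolution over all factorizations of each word; each word of length n has only n + 1 factorizations, so the product is well defined. A sketch, restricted to a finite set of words (representation and names are ours):

```python
def series_product(s1, s2, words):
    """Convolution product of two power series indexed by the free
    monoid of words: (s1*s2)(m) = sum over pq == m of s1(p)*s2(q).
    Series are dicts word -> coefficient; a missing word means 0."""
    out = {}
    for m in words:
        out[m] = sum(s1.get(m[:i], 0) * s2.get(m[i:], 0)
                     for i in range(len(m) + 1))
    return out

# s = 1 + a and t = 1 + b, with integer coefficients
s = {"": 1, "a": 1}
t = {"": 1, "b": 1}
words = ["", "a", "b", "ab", "ba"]
# (1 + a)(1 + b) = 1 + a + b + ab; note "ba" gets coefficient 0,
# since multiplication of words is noncommutative
print(series_product(s, t, words))  # {'': 1, 'a': 1, 'b': 1, 'ab': 1, 'ba': 0}
```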
== See also ==
Associated graded ring
Differential graded algebra
Filtered algebra, a generalization
Graded (mathematics)
Graded category
Graded vector space
Tensor algebra
Differential graded module
== Notes ==
=== Citations ===
=== References ===
In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Georg Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory (Nakayama 1939), (Nakayama 1941). Jean Dieudonné used this to characterize Frobenius algebras (Dieudonné 1958). Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory.
== Definition ==
A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form σ : A × A → k that satisfies the following equation: σ(a·b, c) = σ(a, b·c). This bilinear form is called the Frobenius form of the algebra.
Equivalently, one may equip A with a linear functional λ : A → k such that the kernel of λ contains no nonzero left ideal of A.
A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies λ(a·b) = λ(b·a).
There is also a different, mostly unrelated notion of the symmetric algebra of a vector space.
== Nakayama automorphism ==
For a Frobenius algebra A with σ as above, the automorphism ν of A such that σ(a, b) = σ(ν(b), a) is the Nakayama automorphism associated to A and σ.
== Examples ==
Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b)=tr(a·b) where tr denotes the trace.
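The trace form can be checked on small matrices. The sketch below (plain Python, helper names of our choosing) verifies the associativity identity σ(a·b, c) = σ(a, b·c) and the symmetry tr(ab) = tr(ba) on sample 2×2 matrices:

```python
# 2x2 matrices as nested lists; sigma(a, b) = tr(a b) is the Frobenius form.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(a):
    return a[0][0] + a[1][1]

def sigma(a, b):
    return tr(matmul(a, b))

a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 0]]
c = [[2, 0], [1, 1]]
assert sigma(matmul(a, b), c) == sigma(a, matmul(b, c))  # sigma(ab, c) = sigma(a, bc)
assert sigma(a, b) == sigma(b, a)                        # tr(ab) = tr(ba): symmetric
```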
Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra.
Every group ring k[G] of a finite group G over a field k is a symmetric Frobenius algebra, with Frobenius form σ(a,b) given by the coefficient of the identity element in a·b.
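The group-ring form can be checked directly for a small group. The sketch below models k[G] for the cyclic group G = Z/3 as coefficient dictionaries (an illustration; the encoding and names are ours):

```python
from itertools import product

n = 3  # G = Z/3, written additively; identity element is 0

def mul(a, b):
    """Product in k[G]: convolution of coefficient dictionaries."""
    out = {g: 0 for g in range(n)}
    for (g, cg), (h, ch) in product(a.items(), b.items()):
        out[(g + h) % n] += cg * ch
    return out

def sigma(a, b):
    """Frobenius form: the coefficient of the identity element in a*b."""
    return mul(a, b)[0]

a = {0: 1, 1: 2, 2: 0}
b = {0: 0, 1: 1, 2: 3}
c = {0: 2, 1: 0, 2: 1}
assert sigma(mul(a, b), c) == sigma(a, mul(b, c))  # associativity of the form
assert sigma(a, b) == sigma(b, a)                  # symmetric, as claimed
```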
For a field k, the four-dimensional k-algebra k[x,y]/(x^2, y^2) is a Frobenius algebra. This follows from the characterization of commutative local Frobenius rings below, since this ring is a local ring with its maximal ideal generated by x and y, and unique minimal ideal generated by xy.
For a field k, the three-dimensional k-algebra A = k[x,y]/(x, y)^2 is not a Frobenius algebra. The A-homomorphism from xA into A induced by x ↦ y cannot be extended to an A-homomorphism from A into A, showing that the ring is not self-injective, and thus not Frobenius.
Any finite-dimensional Hopf algebra is a Frobenius algebra, by a 1969 theorem of Larson and Sweedler on Hopf modules and integrals.
== Properties ==
The direct product and tensor product of Frobenius algebras are Frobenius algebras.
A finite-dimensional commutative local algebra over a field is Frobenius if and only if the right regular module is injective, if and only if the algebra has a unique minimal ideal.
Commutative, local Frobenius algebras are precisely the zero-dimensional local Gorenstein rings containing their residue field and finite-dimensional over it.
Frobenius algebras are quasi-Frobenius rings, and in particular, they are left and right Artinian and left and right self-injective.
For a field k, a finite-dimensional, unital, associative algebra is Frobenius if and only if the injective right A-module Homk(A,k) is isomorphic to the right regular representation of A.
For an infinite field k, a finite-dimensional, unital, associative k-algebra is a Frobenius algebra if it has only finitely many minimal right ideals.
If F is a finite-dimensional extension field of k, then a finite-dimensional F-algebra is naturally a finite-dimensional k-algebra via restriction of scalars, and is a Frobenius F-algebra if and only if it is a Frobenius k-algebra. In other words, the Frobenius property does not depend on the field, as long as the algebra remains a finite-dimensional algebra.
Similarly, if F is a finite-dimensional extension field of k, then every k-algebra A gives rise naturally to an F algebra, F ⊗k A, and A is a Frobenius k-algebra if and only if F ⊗k A is a Frobenius F-algebra.
Amongst those finite-dimensional, unital, associative algebras whose right regular representation is injective, the Frobenius algebras A are precisely those whose simple modules M have the same dimension as their A-duals, HomA(M,A). Amongst these algebras, the A-duals of simple modules are always simple.
A finite-dimensional bi-Frobenius algebra or strict double Frobenius algebra is a k-vector-space A with two multiplication structures as unital Frobenius algebras (A, •, 1) and (A, {\displaystyle \star }, {\displaystyle \iota }): there must be multiplicative homomorphisms {\displaystyle \phi } and {\displaystyle \varepsilon } of A into k with {\displaystyle \phi (a\cdot b)} and {\displaystyle \varepsilon (a\star b)} non-degenerate, and a k-isomorphism S of A onto itself which is an anti-automorphism for both structures, such that
{\displaystyle \phi (a\cdot b)=\varepsilon (S(a)\star b).}
This is the case precisely when A is a finite-dimensional Hopf algebra over k and S is its antipode. The group algebra of a finite group gives an example.
== Category-theoretical definition ==
In category theory, the notion of Frobenius object is an abstract definition of a Frobenius algebra in a category. A Frobenius object {\displaystyle (A,\mu ,\eta ,\delta ,\varepsilon )} in a monoidal category {\displaystyle (C,\otimes ,I)} consists of an object A of C together with four morphisms
{\displaystyle \mu :A\otimes A\to A,\qquad \eta :I\to A,\qquad \delta :A\to A\otimes A\qquad \mathrm {and} \qquad \varepsilon :A\to I}
such that
{\displaystyle (A,\mu ,\eta )\,} is a monoid object in C,
{\displaystyle (A,\delta ,\varepsilon )} is a comonoid object in C, and
the diagrams expressing the compatibility of the monoid and comonoid structures commute (for simplicity the diagrams are given in the case where the monoidal category C is strict); they are known as the Frobenius conditions.
More compactly, a Frobenius algebra in C is a so-called Frobenius monoidal functor A:1 → C, where 1 is the category consisting of one object and one arrow.
A Frobenius algebra is called isometric or special if {\displaystyle \mu \circ \delta =\mathrm {Id} _{A}}.
== Applications ==
Frobenius algebras originally were studied as part of an investigation into the representation theory of finite groups, and have contributed to the study of number theory, algebraic geometry, and combinatorics. They have been used to study Hopf algebras, coding theory, and cohomology rings of compact oriented manifolds.
=== Topological quantum field theories ===
Recently, it has been seen that they play an important role in the algebraic treatment and axiomatic foundation of topological quantum field theory. A commutative Frobenius algebra determines uniquely (up to isomorphism) a (1+1)-dimensional TQFT. More precisely, the category of commutative Frobenius {\displaystyle K}-algebras is equivalent to the category of symmetric strong monoidal functors from {\displaystyle 2}-{\displaystyle {\textbf {Cob}}} (the category of 2-dimensional cobordisms between 1-dimensional manifolds) to {\displaystyle {\textbf {Vect}}_{K}} (the category of vector spaces over {\displaystyle K}).
The correspondence between TQFTs and Frobenius algebras is given as follows:
1-dimensional manifolds are disjoint unions of circles: a TQFT associates a vector space with a circle, and the tensor product of vector spaces with a disjoint union of circles,
a TQFT associates (functorially) to each cobordism between manifolds a map between vector spaces,
the map associated with a pair of pants (a cobordism between 1 circle and 2 circles) gives a product map {\displaystyle V\otimes V\to V} or a coproduct map {\displaystyle V\to V\otimes V}, depending on how the boundary components are grouped, which is commutative or cocommutative, and
the map associated with a disk gives a counit (trace) or unit (scalars), depending on grouping of boundary.
This relation between Frobenius algebras and (1+1)-dimensional TQFTs can be used to explain Khovanov's categorification of the Jones polynomial.
== Generalizations ==
=== Frobenius extensions ===
Let B be a subring sharing the identity element of a unital associative ring A. This is also known as ring extension A | B. Such a ring extension is called Frobenius if
There is a linear mapping E: A → B satisfying the bimodule condition E(bac) = bE(a)c for all b,c ∈ B and a ∈ A.
There are elements in A denoted {\displaystyle \{x_{i}\}_{i=1}^{n}} and {\displaystyle \{y_{i}\}_{i=1}^{n}} such that for all a ∈ A we have:
{\displaystyle \sum _{i=1}^{n}E(ax_{i})y_{i}=a=\sum _{i=1}^{n}x_{i}E(y_{i}a)}
The map E is sometimes referred to as a Frobenius homomorphism and the elements {\displaystyle x_{i},y_{i}} as dual bases. (As an exercise it is possible to give an equivalent definition of Frobenius extension as a Frobenius algebra-coalgebra object in the category of B-B-bimodules, where the equations just given become the counit equations for the counit E.)
For example, a Frobenius algebra A over a commutative ring K, with associative nondegenerate bilinear form (−,−) and projective K-bases {\displaystyle x_{i},y_{i}}, is a Frobenius extension A | K with E(a) = (a,1). Other examples of Frobenius extensions are pairs of group algebras associated to a subgroup of finite index, Hopf subalgebras of a semisimple Hopf algebra, Galois extensions and certain von Neumann algebra subfactors of finite index. Another source of examples of Frobenius extensions (and twisted versions) are certain subalgebra pairs of Frobenius algebras, where the subalgebra is stabilized by the symmetrizing automorphism of the overalgebra.
The details of the group ring example are the following application of elementary notions in group theory. Let G be a group and H a subgroup of finite index n in G; let g1, ..., gn be left coset representatives, so that G is a disjoint union of the cosets g1H, ..., gnH. Over any commutative base ring k define the group algebras A = k[G] and B = k[H], so B is a subalgebra of A. Define a Frobenius homomorphism E: A → B by letting E(h) = h for all h in H, and E(g) = 0 for g not in H: extend this linearly from the basis group elements to all of A, so one obtains the B-B-bimodule projection
{\displaystyle E\left(\sum _{g\in G}n_{g}g\right)=\sum _{h\in H}n_{h}h\ \ \ {\text{ for }}n_{g}\in k}
(The orthonormality condition {\displaystyle E(g_{i}^{-1}g_{j})=\delta _{ij}1} follows.) The dual base is given by {\displaystyle x_{i}=g_{i},y_{i}=g_{i}^{-1}}, since
{\displaystyle \sum _{i=1}^{n}g_{i}E\left(g_{i}^{-1}\sum _{g\in G}n_{g}g\right)=\sum _{i}\sum _{h\in H}n_{g_{i}h}g_{i}h=\sum _{g\in G}n_{g}g}
The other dual base equation may be derived from the observation that G is also a disjoint union of the right cosets {\displaystyle Hg_{1}^{-1},\ldots ,Hg_{n}^{-1}}.
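The dual-basis equation can be verified on a small instance, say G = Z/4 with the index-2 subgroup H = {0, 2} and coset representatives 0 and 1. The sketch below (encoding and names are ours) checks ∑ xi E(yi a) = a for a sample element a:

```python
from itertools import product

nG = 4                      # G = Z/4, written additively
H = {0, 2}                  # index-2 subgroup
reps = [0, 1]               # left coset representatives g_1, g_2

def mul(a, b):
    """Product in k[G]: convolution of coefficient dictionaries."""
    out = {g: 0 for g in range(nG)}
    for (g, cg), (h, ch) in product(a.items(), b.items()):
        out[(g + h) % nG] += cg * ch
    return out

def E(a):
    """Frobenius homomorphism k[G] -> k[H]: keep only the H-components."""
    return {g: (c if g in H else 0) for g, c in a.items()}

def basis(g):
    """The basis element g of k[G] as a coefficient dictionary."""
    return {h: (1 if h == g else 0) for h in range(nG)}

a = {0: 1, 1: 2, 2: 3, 3: 4}
# dual-basis equation: sum_i x_i * E(y_i * a) == a, with x_i = g_i, y_i = g_i^{-1}
total = {g: 0 for g in range(nG)}
for g in reps:
    term = mul(basis(g), E(mul(basis((-g) % nG), a)))
    for k in total:
        total[k] += term[k]
assert total == a
```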
Also Hopf-Galois extensions are Frobenius extensions by a theorem of Kreimer and Takeuchi from 1989. A simple example of this is a finite group G acting by automorphisms on an algebra A with subalgebra of invariants:
{\displaystyle B=\{x\in A\mid \forall g\in G,g(x)=x\}.}
By DeMeyer's criterion A is G-Galois over B if there are elements {\displaystyle \{a_{i}\}_{i=1}^{n},\{b_{i}\}_{i=1}^{n}} in A satisfying:
{\displaystyle \forall g\in G:\ \ \sum _{i=1}^{n}a_{i}g(b_{i})=\delta _{g,1_{G}}1_{A}}
whence also
{\displaystyle \forall g\in G:\ \ \sum _{i=1}^{n}g(a_{i})b_{i}=\delta _{g,1_{G}}1_{A}.}
Then A is a Frobenius extension of B with E: A → B defined by
{\displaystyle E(a)=\sum _{g\in G}g(a)}
which satisfies
{\displaystyle \forall x\in A:\ \ \sum _{i=1}^{n}E(xa_{i})b_{i}=x=\sum _{i=1}^{n}a_{i}E(b_{i}x).}
(Furthermore, this is an example of a separable algebra extension, since {\textstyle e=\sum _{i=1}^{n}a_{i}\otimes _{B}b_{i}} is a separability element satisfying ea = ae for all a in A as well as {\textstyle \sum _{i=1}^{n}a_{i}b_{i}=1}. It is also an example of a depth two subring (B in A), since {\displaystyle a\otimes _{B}1=\sum _{g\in G}t_{g}g(a)} where {\displaystyle t_{g}=\sum _{i=1}^{n}a_{i}\otimes _{B}g(b_{i})} for each g in G and a in A.)
Frobenius extensions have a well-developed theory of induced representations investigated in papers by Kasch and Pareigis, Nakayama and Tsuzuku in the 1950s and 1960s. For example, for each B-module M, the induced module A ⊗B M (if M is a left module) and co-induced module HomB(A, M) are naturally isomorphic as A-modules (as an exercise one defines the isomorphism given E and dual bases). The endomorphism ring theorem of Kasch from 1960 states that if A | B is a Frobenius extension, then so is A → End(AB), where the mapping is given by a ↦ λa and λa(x) = ax for each a, x ∈ A. Endomorphism ring theorems and converses were investigated later by Mueller, Morita, Onodera and others.
=== Frobenius adjunctions ===
As already hinted at in the previous paragraph, Frobenius extensions have an equivalent categorical formulation.
Namely, given a ring extension {\displaystyle S\subset R}, the induction functor
{\displaystyle R\otimes _{S}-\colon {\text{Mod}}(S)\to {\text{Mod}}(R)}
from the category of, say, left S-modules to the category of left R-modules has both a left and a right adjoint, called co-restriction and restriction, respectively.
The ring extension is then called Frobenius if and only if the left and the right adjoint are naturally isomorphic.
This leads to the obvious abstraction to ordinary category theory:
An adjunction {\displaystyle F\dashv G} is called a Frobenius adjunction if also {\displaystyle G\dashv F}.
A functor F is a Frobenius functor if it is part of a Frobenius adjunction, i.e. if it has isomorphic left and right adjoints.
== See also ==
== References ==
== External links ==
Street, Ross (2004). "Frobenius algebras and monoidal categories" (PDF). Annual Meeting Aust. Math. Soc. CiteSeerX 10.1.1.180.7082. | Wikipedia/Symmetric_Frobenius_algebra |
In linear algebra, a multilinear map is a function of several variables that is linear separately in each variable. More precisely, a multilinear map is a function
{\displaystyle f\colon V_{1}\times \cdots \times V_{n}\to W{\text{,}}}
where {\displaystyle V_{1},\ldots ,V_{n}} ({\displaystyle n\in \mathbb {Z} _{\geq 0}}) and {\displaystyle W} are vector spaces (or modules over a commutative ring), with the following property: for each {\displaystyle i}, if all of the variables but {\displaystyle v_{i}} are held constant, then {\displaystyle f(v_{1},\ldots ,v_{i},\ldots ,v_{n})} is a linear function of {\displaystyle v_{i}}. One way to visualize this is to imagine two orthogonal vectors; if one of these vectors is scaled by a factor of 2 while the other remains unchanged, the cross product likewise scales by a factor of two. If both are scaled by a factor of 2, the cross product scales by a factor of {\displaystyle 2^{2}}.
A multilinear map of one variable is a linear map, and of two variables is a bilinear map. More generally, for any nonnegative integer {\displaystyle k}, a multilinear map of k variables is called a k-linear map. If the codomain of a multilinear map is the field of scalars, it is called a multilinear form. Multilinear maps and multilinear forms are fundamental objects of study in multilinear algebra.
If all variables belong to the same space, one can consider symmetric, antisymmetric and alternating k-linear maps. The latter two coincide if the underlying ring (or field) has a characteristic different from two, else the former two coincide.
== Examples ==
Any bilinear map is a multilinear map. For example, any inner product on an {\displaystyle \mathbb {R} }-vector space is a multilinear map, as is the cross product of vectors in {\displaystyle \mathbb {R} ^{3}}.
The determinant of a square matrix is a multilinear function of the columns (or rows); it is also an alternating function of the columns (or rows).
If {\displaystyle F\colon \mathbb {R} ^{m}\to \mathbb {R} ^{n}} is a C^k function, then the {\displaystyle k}th derivative of {\displaystyle F} at each point {\displaystyle p} in its domain can be viewed as a symmetric {\displaystyle k}-linear function {\displaystyle D^{k}\!F\colon \mathbb {R} ^{m}\times \cdots \times \mathbb {R} ^{m}\to \mathbb {R} ^{n}}.
== Coordinate representation ==
Let
{\displaystyle f\colon V_{1}\times \cdots \times V_{n}\to W{\text{,}}}
be a multilinear map between finite-dimensional vector spaces, where {\displaystyle V_{i}} has dimension {\displaystyle d_{i}}, and {\displaystyle W} has dimension {\displaystyle d}. If we choose a basis {\displaystyle \{{\textbf {e}}_{i1},\ldots ,{\textbf {e}}_{id_{i}}\}} for each {\displaystyle V_{i}} and a basis {\displaystyle \{{\textbf {b}}_{1},\ldots ,{\textbf {b}}_{d}\}} for {\displaystyle W} (using bold for vectors), then we can define a collection of scalars {\displaystyle A_{j_{1}\cdots j_{n}}^{k}} by
{\displaystyle f({\textbf {e}}_{1j_{1}},\ldots ,{\textbf {e}}_{nj_{n}})=A_{j_{1}\cdots j_{n}}^{1}\,{\textbf {b}}_{1}+\cdots +A_{j_{1}\cdots j_{n}}^{d}\,{\textbf {b}}_{d}.}
Then the scalars {\displaystyle \{A_{j_{1}\cdots j_{n}}^{k}\mid 1\leq j_{i}\leq d_{i},1\leq k\leq d\}} completely determine the multilinear function {\displaystyle f}. In particular, if {\displaystyle {\textbf {v}}_{i}=\sum _{j=1}^{d_{i}}v_{ij}{\textbf {e}}_{ij}} for {\displaystyle 1\leq i\leq n}, then
{\displaystyle f({\textbf {v}}_{1},\ldots ,{\textbf {v}}_{n})=\sum _{j_{1}=1}^{d_{1}}\cdots \sum _{j_{n}=1}^{d_{n}}\sum _{k=1}^{d}A_{j_{1}\cdots j_{n}}^{k}v_{1j_{1}}\cdots v_{nj_{n}}{\textbf {b}}_{k}.}
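This coordinate formula can be turned into a small evaluator. The sketch below (names of our choosing, indices 0-based) stores the scalars A as a dictionary keyed by the index tuple (j1, ..., jn) and evaluates f on arbitrary vectors:

```python
from itertools import product

def eval_multilinear(A, vectors):
    """Evaluate f(v_1, ..., v_n) in coordinates: A maps an index tuple
    (j_1, ..., j_n) to the list of d coefficients A^k_{j_1...j_n}."""
    d = len(next(iter(A.values())))          # dimension of W
    result = [0] * d
    for js in product(*(range(len(v)) for v in vectors)):
        coeff = 1
        for v, j in zip(vectors, js):
            coeff *= v[j]                    # the product v_{1 j_1} ... v_{n j_n}
        for k in range(d):
            result[k] += A[js][k] * coeff
    return result

# the dot product on R^2 as a bilinear map into W = R (so d = 1)
A = {(0, 0): [1], (0, 1): [0], (1, 0): [0], (1, 1): [1]}
assert eval_multilinear(A, [[1, 2], [3, 4]]) == [1 * 3 + 2 * 4]
```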
== Example ==
Consider a trilinear function
{\displaystyle g\colon R^{2}\times R^{2}\times R^{2}\to R,}
where Vi = R^2, di = 2 for i = 1, 2, 3, and W = R, d = 1.
A basis for each Vi is {\displaystyle \{{\textbf {e}}_{i1},\ldots ,{\textbf {e}}_{id_{i}}\}=\{{\textbf {e}}_{1},{\textbf {e}}_{2}\}=\{(1,0),(0,1)\}.}
Let
{\displaystyle g({\textbf {e}}_{1i},{\textbf {e}}_{2j},{\textbf {e}}_{3k})=f({\textbf {e}}_{i},{\textbf {e}}_{j},{\textbf {e}}_{k})=A_{ijk},}
where {\displaystyle i,j,k\in \{1,2\}}. In other words, the constant {\displaystyle A_{ijk}} is a function value at one of the eight possible triples of basis vectors (since there are two choices for each of the three {\displaystyle V_{i}}), namely:
{\displaystyle \{{\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{1}\},\{{\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{2}\},\{{\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{1}\},\{{\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{2}\},\{{\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{1}\},\{{\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{2}\},\{{\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{1}\},\{{\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{2}\}.}
Each vector {\displaystyle {\textbf {v}}_{i}\in V_{i}=R^{2}} can be expressed as a linear combination of the basis vectors
{\displaystyle {\textbf {v}}_{i}=\sum _{j=1}^{2}v_{ij}{\textbf {e}}_{ij}=v_{i1}\times {\textbf {e}}_{1}+v_{i2}\times {\textbf {e}}_{2}=v_{i1}\times (1,0)+v_{i2}\times (0,1).}
The function value at an arbitrary collection of three vectors {\displaystyle {\textbf {v}}_{i}\in R^{2}} can be expressed as
{\displaystyle g({\textbf {v}}_{1},{\textbf {v}}_{2},{\textbf {v}}_{3})=\sum _{i=1}^{2}\sum _{j=1}^{2}\sum _{k=1}^{2}A_{ijk}v_{1i}v_{2j}v_{3k},}
or in expanded form as
{\displaystyle {\begin{aligned}g((a,b),(c,d)&,(e,f))=ace\times g({\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{1})+acf\times g({\textbf {e}}_{1},{\textbf {e}}_{1},{\textbf {e}}_{2})\\&+ade\times g({\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{1})+adf\times g({\textbf {e}}_{1},{\textbf {e}}_{2},{\textbf {e}}_{2})+bce\times g({\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{1})+bcf\times g({\textbf {e}}_{2},{\textbf {e}}_{1},{\textbf {e}}_{2})\\&+bde\times g({\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{1})+bdf\times g({\textbf {e}}_{2},{\textbf {e}}_{2},{\textbf {e}}_{2}).\end{aligned}}}
== Relation to tensor products ==
There is a natural one-to-one correspondence between multilinear maps
{\displaystyle f\colon V_{1}\times \cdots \times V_{n}\to W{\text{,}}}
and linear maps
{\displaystyle F\colon V_{1}\otimes \cdots \otimes V_{n}\to W{\text{,}}}
where {\displaystyle V_{1}\otimes \cdots \otimes V_{n}} denotes the tensor product of {\displaystyle V_{1},\ldots ,V_{n}}. The relation between the functions {\displaystyle f} and {\displaystyle F} is given by the formula
{\displaystyle f(v_{1},\ldots ,v_{n})=F(v_{1}\otimes \cdots \otimes v_{n}).}
== Multilinear functions on n×n matrices ==
One can consider a multilinear function on an n×n matrix over a commutative ring K with identity as a function of the rows (or equivalently the columns) of the matrix. Let A be such a matrix and ai, 1 ≤ i ≤ n, be the rows of A. Then the multilinear function D can be written as
{\displaystyle D(A)=D(a_{1},\ldots ,a_{n}),}
satisfying
{\displaystyle D(a_{1},\ldots ,ca_{i}+a_{i}',\ldots ,a_{n})=cD(a_{1},\ldots ,a_{i},\ldots ,a_{n})+D(a_{1},\ldots ,a_{i}',\ldots ,a_{n}).}
If we let {\displaystyle {\hat {e}}_{j}} represent the jth row of the identity matrix, we can express each row ai as the sum
{\displaystyle a_{i}=\sum _{j=1}^{n}A(i,j){\hat {e}}_{j}.}
Using the multilinearity of D we rewrite D(A) as
{\displaystyle D(A)=D\left(\sum _{j=1}^{n}A(1,j){\hat {e}}_{j},a_{2},\ldots ,a_{n}\right)=\sum _{j=1}^{n}A(1,j)D({\hat {e}}_{j},a_{2},\ldots ,a_{n}).}
Continuing this substitution for each ai we get, for 1 ≤ i ≤ n,
{\displaystyle D(A)=\sum _{1\leq k_{1}\leq n}\ldots \sum _{1\leq k_{i}\leq n}\ldots \sum _{1\leq k_{n}\leq n}A(1,k_{1})A(2,k_{2})\dots A(n,k_{n})D({\hat {e}}_{k_{1}},\dots ,{\hat {e}}_{k_{n}}).}
Therefore, D(A) is uniquely determined by how D operates on {\displaystyle {\hat {e}}_{k_{1}},\dots ,{\hat {e}}_{k_{n}}}.
== Example ==
In the case of 2×2 matrices, we get
{\displaystyle D(A)=A_{1,1}A_{2,1}D({\hat {e}}_{1},{\hat {e}}_{1})+A_{1,1}A_{2,2}D({\hat {e}}_{1},{\hat {e}}_{2})+A_{1,2}A_{2,1}D({\hat {e}}_{2},{\hat {e}}_{1})+A_{1,2}A_{2,2}D({\hat {e}}_{2},{\hat {e}}_{2}),}
where {\displaystyle {\hat {e}}_{1}=[1,0]} and {\displaystyle {\hat {e}}_{2}=[0,1]}. If we restrict {\displaystyle D} to be an alternating function, then {\displaystyle D({\hat {e}}_{1},{\hat {e}}_{1})=D({\hat {e}}_{2},{\hat {e}}_{2})=0} and {\displaystyle D({\hat {e}}_{2},{\hat {e}}_{1})=-D({\hat {e}}_{1},{\hat {e}}_{2})=-D(I)}. Letting {\displaystyle D(I)=1}, we get the determinant function on 2×2 matrices:
{\displaystyle D(A)=A_{1,1}A_{2,2}-A_{1,2}A_{2,1}.}
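This can be checked computationally. The sketch below defines D by the usual signed-permutation expansion, which is multilinear and alternating in the rows with D(I) = 1, and recovers the 2×2 determinant formula above (function names are ours):

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation via its inversion count."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def D(rows):
    """The unique alternating multilinear function of the rows with D(I) = 1."""
    n = len(rows)
    return sum(sign(p) * prod(rows[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2], [3, 4]]
assert D(A) == 1 * 4 - 2 * 3                 # the 2x2 determinant formula
assert D([[2, 4], [3, 4]]) == 2 * D(A)       # multilinear in the first row
assert D([[1, 2], [1, 2]]) == 0              # alternating: equal rows give zero
```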
== Properties ==
A multilinear map has a value of zero whenever one of its arguments is zero.
== See also ==
Algebraic form
Multilinear form
Homogeneous polynomial
Homogeneous function
Tensors
== References == | Wikipedia/Multilinear_function |
In mathematics, the universal enveloping algebra of a Lie algebra is the unital associative algebra whose representations correspond precisely to the representations of that Lie algebra.
Universal enveloping algebras are used in the representation theory of Lie groups and Lie algebras. For example, Verma modules can be constructed as quotients of the universal enveloping algebra. In addition, the enveloping algebra gives a precise definition for the Casimir operators. Because Casimir operators commute with all elements of a Lie algebra, they can be used to classify representations. The precise definition also allows the importation of Casimir operators into other areas of mathematics, specifically, those that have a differential algebra. They also play a central role in some recent developments in mathematics. In particular, their dual provides a commutative example of the objects studied in non-commutative geometry, the quantum groups. This dual can be shown, by the Gelfand–Naimark theorem, to contain the C* algebra of the corresponding Lie group. This relationship generalizes to the idea of Tannaka–Krein duality between compact topological groups and their representations.
From an analytic viewpoint, the universal enveloping algebra of the Lie algebra of a Lie group may be identified with the algebra of left-invariant differential operators on the group.
== Informal construction ==
The idea of the universal enveloping algebra is to embed a Lie algebra {\displaystyle {\mathfrak {g}}} into an associative algebra {\displaystyle {\mathcal {A}}} with identity in such a way that the abstract bracket operation in {\displaystyle {\mathfrak {g}}} corresponds to the commutator {\displaystyle xy-yx} in {\displaystyle {\mathcal {A}}} and the algebra {\displaystyle {\mathcal {A}}} is generated by the elements of {\displaystyle {\mathfrak {g}}}. There may be many ways to make such an embedding, but there is a unique "largest" such {\displaystyle {\mathcal {A}}}, called the universal enveloping algebra of {\displaystyle {\mathfrak {g}}}.
=== Generators and relations ===
Let {\displaystyle {\mathfrak {g}}} be a Lie algebra, assumed finite-dimensional for simplicity, with basis {\displaystyle X_{1},\ldots ,X_{n}}. Let {\displaystyle c_{ijk}} be the structure constants for this basis, so that
{\displaystyle [X_{i},X_{j}]=\sum _{k=1}^{n}c_{ijk}X_{k}.}
Then the universal enveloping algebra is the associative algebra (with identity) generated by elements {\displaystyle x_{1},\ldots ,x_{n}} subject to the relations
{\displaystyle x_{i}x_{j}-x_{j}x_{i}=\sum _{k=1}^{n}c_{ijk}x_{k}}
and no other relations. Below we will make this "generators and relations" construction more precise by constructing the universal enveloping algebra as a quotient of the tensor algebra over {\displaystyle {\mathfrak {g}}}.
Consider, for example, the Lie algebra sl(2,C), spanned by the matrices
{\displaystyle E={\begin{pmatrix}0&1\\0&0\end{pmatrix}}\qquad F={\begin{pmatrix}0&0\\1&0\end{pmatrix}}\qquad H={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}~,}
which satisfy the commutation relations {\displaystyle [H,E]=2E}, {\displaystyle [H,F]=-2F}, and {\displaystyle [E,F]=H}. The universal enveloping algebra of sl(2,C) is then the algebra generated by three elements {\displaystyle e,f,h} subject to the relations
{\displaystyle he-eh=2e,\quad hf-fh=-2f,\quad ef-fe=h,}
and no other relations. We emphasize that the universal enveloping algebra is not the same as (or contained in) the algebra of {\displaystyle 2\times 2} matrices. For example, the {\displaystyle 2\times 2} matrix {\displaystyle E} satisfies {\displaystyle E^{2}=0}, as is easily verified. But in the universal enveloping algebra, the element {\displaystyle e} does not satisfy {\displaystyle e^{2}=0}, because we do not impose this relation in the construction of the enveloping algebra. Indeed, it follows from the Poincaré–Birkhoff–Witt theorem (discussed below) that the elements {\displaystyle 1,e,e^{2},e^{3},\ldots } are all linearly independent in the universal enveloping algebra.
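The commutation relations for E, F, H are easy to verify numerically. The following sketch checks them, along with the matrix relation E² = 0 that is deliberately not imposed on e in the enveloping algebra:

```python
# Verify the sl(2, C) commutation relations for the 2x2 matrices E, F, H.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def scale(c, a):
    return [[c * x for x in row] for row in a]

E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
H = [[1, 0], [0, -1]]

assert comm(H, E) == scale(2, E)    # [H, E] = 2E
assert comm(H, F) == scale(-2, F)   # [H, F] = -2F
assert comm(E, F) == H              # [E, F] = H
# In the matrix algebra E^2 = 0 -- a relation NOT imposed in U(sl2):
assert matmul(E, E) == [[0, 0], [0, 0]]
```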
=== Finding a basis ===
In general, elements of the universal enveloping algebra are linear combinations of products of the generators in all possible orders. Using the defining relations of the universal enveloping algebra, we can always re-order those products in a particular order, say with all the factors of {\displaystyle x_{1}} first, then factors of {\displaystyle x_{2}}, etc. For example, whenever we have a term that contains {\displaystyle x_{2}x_{1}} (in the "wrong" order), we can use the relations to rewrite this as {\displaystyle x_{1}x_{2}} plus a linear combination of the {\displaystyle x_{j}}'s. Doing this repeatedly eventually converts any element into a linear combination of terms in ascending order. Thus, elements of the form
{\displaystyle x_{1}^{k_{1}}x_{2}^{k_{2}}\cdots x_{n}^{k_{n}}}
with the {\displaystyle k_{j}}'s being non-negative integers, span the enveloping algebra. (We allow {\displaystyle k_{j}=0}, meaning that we allow terms in which no factors of {\displaystyle x_{j}} occur.) The Poincaré–Birkhoff–Witt theorem, discussed below, asserts that these elements are linearly independent and thus form a basis for the universal enveloping algebra. In particular, the universal enveloping algebra is always infinite-dimensional.
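The re-ordering procedure can be sketched concretely for sl(2,C). In the code below (our own encoding: a monomial is a tuple of generator names over e, f, h, ordered e < f < h, and an element is a dict from monomials to integer coefficients), a descending adjacent pair xy is repeatedly replaced by yx + [x, y]:

```python
from collections import defaultdict

ORDER = {"e": 0, "f": 1, "h": 2}

# Commutators [x, y] = xy - yx for x > y in the chosen order, as elements:
# [f, e] = -h (since [e, f] = h),  [h, e] = 2e,  [h, f] = -2f
COMMUTATOR = {
    ("f", "e"): {("h",): -1},
    ("h", "e"): {("e",): 2},
    ("h", "f"): {("f",): -2},
}

def normal_order(word, coeff=1):
    """Rewrite a word in e, f, h as a combination of ascending monomials."""
    result = defaultdict(int)
    for i in range(len(word) - 1):
        x, y = word[i], word[i + 1]
        if ORDER[x] > ORDER[y]:
            # replace the descending pair: xy = yx + [x, y]
            swapped = word[:i] + (y, x) + word[i + 2:]
            for mono, c in normal_order(swapped, coeff).items():
                result[mono] += c
            for mono, c in COMMUTATOR[(x, y)].items():
                rest = word[:i] + mono + word[i + 2:]
                for m2, c2 in normal_order(rest, coeff * c).items():
                    result[m2] += c2
            return dict(result)
    result[word] += coeff          # already in ascending order
    return dict(result)

assert normal_order(("f", "e")) == {("e", "f"): 1, ("h",): -1}   # fe = ef - h
assert normal_order(("h", "e")) == {("e", "h"): 1, ("e",): 2}    # he = eh + 2e
```

This is only the spanning half of the Poincaré–Birkhoff–Witt statement; linear independence of the ordered monomials is the substantive part of the theorem.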
The Poincaré–Birkhoff–Witt theorem implies, in particular, that the elements
x
1
,
…
,
x
n
{\displaystyle x_{1},\ldots ,x_{n}}
themselves are linearly independent. It is therefore common—if potentially confusing—to identify the
x
j
{\displaystyle x_{j}}
's with the generators
X
j
{\displaystyle X_{j}}
of the original Lie algebra. That is to say, we identify the original Lie algebra as the subspace of its universal enveloping algebra spanned by the generators. Although
g
{\displaystyle {\mathfrak {g}}}
may be an algebra of
n
×
n
{\displaystyle n\times n}
matrices, the universal enveloping of
g
{\displaystyle {\mathfrak {g}}}
does not consist of (finite-dimensional) matrices. In particular, there is no finite-dimensional algebra that contains the universal enveloping of
g
{\displaystyle {\mathfrak {g}}}
; the universal enveloping algebra is always infinite dimensional. Thus, in the case of sl(2,C), if we identify our Lie algebra as a subspace of its universal enveloping algebra, we must not interpret
E
{\displaystyle E}
,
F
{\displaystyle F}
and
H
{\displaystyle H}
as
2
×
2
{\displaystyle 2\times 2}
matrices, but rather as symbols with no further properties (other than the commutation relations).
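The re-ordering procedure just described is mechanical enough to sketch in code. The following is a minimal illustration (not from the article) for $\mathrm{sl}(2,\mathbb{C})$ with basis $E, F, H$ and relations $[H,E]=2E$, $[H,F]=-2F$, $[E,F]=H$; elements of the enveloping algebra are represented as dictionaries mapping words in the generators to coefficients.

```python
# A minimal sketch (not from the article) of PBW normal ordering in
# U(sl(2,C)).  Elements are dicts mapping words (tuples of generator
# names) to scalar coefficients.
ORDER = {"E": 0, "F": 1, "H": 2}     # chosen total order: E before F before H
BRACKET = {                          # [x, y] as a dict {word: coefficient}
    ("H", "E"): {("E",): 2},
    ("H", "F"): {("F",): -2},
    ("E", "F"): {("H",): 1},
}

def bracket(x, y):
    """Return [x, y] as a linear combination of generators."""
    if (x, y) in BRACKET:
        return BRACKET[(x, y)]
    if (y, x) in BRACKET:            # antisymmetry: [x, y] = -[y, x]
        return {w: -c for w, c in BRACKET[(y, x)].items()}
    return {}

def add(p, q):
    """Sum of two elements, dropping zero coefficients."""
    out = dict(p)
    for w, c in q.items():
        out[w] = out.get(w, 0) + c
        if out[w] == 0:
            del out[w]
    return out

def normal_order(p):
    """Rewrite using x y = y x + [x, y] until every word is ascending."""
    while True:
        changed = False
        for word, coeff in list(p.items()):
            for i in range(len(word) - 1):
                x, y = word[i], word[i + 1]
                if ORDER[x] > ORDER[y]:          # out-of-order adjacent pair
                    swapped = word[:i] + (y, x) + word[i + 2:]
                    p = add(p, {word: -coeff, swapped: coeff})
                    p = add(p, {word[:i] + w + word[i + 2:]: c * coeff
                                for w, c in bracket(x, y).items()})
                    changed = True
                    break
            if changed:
                break
        if not changed:
            return p

print(normal_order({("F", "E"): 1}))   # → {('E', 'F'): 1, ('H',): -1}
```

The output says $FE = EF - H$ in the ordered basis, exactly the kind of rewriting whose consistency the PBW theorem guarantees.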
=== Formalities ===
The formal construction of the universal enveloping algebra takes the above ideas and wraps them in notation and terminology that make them more convenient to work with. The most important difference is that the free associative algebra used above is narrowed to the tensor algebra, so that the product of symbols is understood to be the tensor product. The commutation relations are imposed by passing to the quotient of the tensor algebra by the smallest two-sided ideal containing elements of the form
$$x_i x_j - x_j x_i - \textstyle\sum_k c_{ijk} x_k.$$
The universal enveloping algebra is the "largest" unital associative algebra generated by elements of $\mathfrak{g}$ with a Lie bracket compatible with the original Lie algebra.
== Formal definition ==
Recall that every Lie algebra $\mathfrak{g}$ is in particular a vector space. Thus, one is free to construct the tensor algebra $T(\mathfrak{g})$ from it. The tensor algebra is a free algebra: it simply contains all possible tensor products of all possible vectors in $\mathfrak{g}$, without any restrictions whatsoever on those products.
That is, one constructs the space
$$T({\mathfrak {g}})=K\,\oplus \,{\mathfrak {g}}\,\oplus \,({\mathfrak {g}}\otimes {\mathfrak {g}})\,\oplus \,({\mathfrak {g}}\otimes {\mathfrak {g}}\otimes {\mathfrak {g}})\,\oplus \,\cdots $$
where $\otimes$ is the tensor product and $\oplus$ is the direct sum of vector spaces. Here, $K$ is the field over which the Lie algebra is defined. Throughout the remainder of this article, the tensor product is always shown explicitly. Many authors omit it, since, with practice, its location can usually be inferred from context; here, a very explicit approach is adopted, to minimize any possible confusion about the meanings of expressions.
The first step in the construction is to "lift" the Lie bracket from the Lie algebra $\mathfrak{g}$ (where it is defined) to the tensor algebra $T(\mathfrak{g})$ (where it is not), so that one can coherently work with the Lie bracket of two tensors. The lifting is done recursively. Let us define
$$[a\otimes b,c]=a\otimes [b,c]+[a,c]\otimes b$$
and
$$[a,b\otimes c]=[a,b]\otimes c+b\otimes [a,c].$$
It is straightforward to verify that the above definition is bilinear and skew-symmetric; one can also show that it obeys the Jacobi identity. The final result is a Lie bracket that is consistently defined on all of $T(\mathfrak{g})$; one says that it has been "lifted" to all of $T(\mathfrak{g})$ in the conventional sense of a "lift" from a base space (here, the Lie algebra) to a covering space (here, the tensor algebra).
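As a concrete check of the recursion (a worked example, not part of the original construction), take $\mathfrak{g}=\mathrm{sl}(2,\mathbb{C})$ with $[H,E]=2E$, $[H,F]=-2F$, $[E,F]=H$. Then the first rule gives

```latex
[E \otimes F,\, H] = E \otimes [F, H] + [E, H] \otimes F
                   = 2\, E \otimes F - 2\, E \otimes F
                   = 0,
```

so the degree-two tensor $E \otimes F$ commutes with $H$ under the lifted bracket, consistent with it having "weight zero".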
The result of this lifting is that $T(\mathfrak{g})$ becomes a Poisson algebra: a unital associative algebra with a Lie bracket that is compatible (by construction) with the original Lie algebra bracket. It is not the smallest such algebra, however; it contains far more elements than needed. One can get something smaller by projecting back down. The universal enveloping algebra $U(\mathfrak{g})$ of $\mathfrak{g}$ is defined as the quotient space
$$U({\mathfrak {g}})=T({\mathfrak {g}})/\sim $$
where the equivalence relation $\sim$ is given by
$$a\otimes b-b\otimes a\,\sim \,[a,b]$$
for all $a,b\in T(\mathfrak{g})$. That is, the Lie bracket defines the equivalence relation used to perform the quotienting. The result is still a unital associative algebra, and one can still take the Lie bracket of any two members. Computing the result is straightforward, if one keeps in mind that each element of $U(\mathfrak{g})$ can be understood as a coset: one just takes the bracket as usual, and searches for the coset that contains the result. It is the smallest such algebra; one cannot find anything smaller that still obeys the axioms of an associative algebra.
The universal enveloping algebra is what remains of the tensor algebra after modding out the Poisson algebra structure. (This is a non-trivial statement: the tensor algebra has a rather complicated structure; it is, among other things, a Hopf algebra. The Poisson algebra is likewise rather complicated, with many peculiar properties. It is compatible with the tensor algebra, and so the modding out can be performed. The Hopf algebra structure is preserved, which leads to many novel applications, e.g. in string theory. For the purposes of the formal definition, however, none of this particularly matters.)
The construction can be performed in a slightly different (but equivalent) way. Consider the two-sided ideal $I$ generated by elements of the form
$$a\otimes b-b\otimes a-[a,b]$$
for $a,b\in\mathfrak{g}$. (Note that these generators are elements of $\mathfrak{g}\oplus(\mathfrak{g}\otimes\mathfrak{g})\subset T(\mathfrak{g})$.) A general member of the ideal $I$ is a linear combination of elements of the form
$$\cdots \otimes c\otimes d\otimes (a\otimes b-b\otimes a-[a,b])\otimes f\otimes g\otimes \cdots $$
where all lower-case letters are elements of $\mathfrak{g}$. Since $I$ is an ideal, we can quotient by it. The universal enveloping algebra can then be defined as
$$U({\mathfrak {g}})=T({\mathfrak {g}})/I.$$
=== Superalgebras and other generalizations ===
The above construction focuses on Lie algebras and on the Lie bracket with its skew symmetry. To some degree, these properties are incidental to the construction. Consider instead some (arbitrary) algebra (not a Lie algebra) on a vector space, that is, a vector space $V$ endowed with a multiplication $m:V\times V\to V$ taking elements $a\times b\mapsto m(a,b)$. If the multiplication is bilinear, then the same construction and definitions can go through. One starts by lifting $m$ up to $T(V)$, so that the lifted $m$ obeys all of the same properties as the base $m$ – symmetry or antisymmetry or whatever. The lifting is done exactly as before, starting with
$${\begin{aligned}m:V\otimes V&\to V\\a\otimes b&\mapsto m(a,b).\end{aligned}}$$
This is consistent precisely because the tensor product is bilinear and the multiplication is bilinear. The rest of the lift is performed so as to preserve multiplication as a homomorphism. By definition, one writes
$$m(a\otimes b,c)=a\otimes m(b,c)+m(a,c)\otimes b$$
and also that
$$m(a,b\otimes c)=m(a,b)\otimes c+b\otimes m(a,c).$$
This extension is consistent by appeal to a lemma on free objects: since the tensor algebra is a free algebra, any homomorphism on its generating set can be extended to the entire algebra. Everything else proceeds as described above: upon completion, one has a unital associative algebra; one can take a quotient in either of the two ways described above.
The above is exactly how the universal enveloping algebra for Lie superalgebras is constructed. One need only keep careful track of the sign when permuting elements. In this case, the (anti-)commutator of the superalgebra lifts to an (anti-)commuting Poisson bracket.
Another possibility is to use something other than the tensor algebra as the covering algebra. One such possibility is to use the exterior algebra; that is, to replace every occurrence of the tensor product by the exterior product. If the base algebra is a Lie algebra, then the result is the Gerstenhaber algebra; it is the exterior algebra of the corresponding Lie group. As before, it has a grading naturally coming from the grading on the exterior algebra. (The Gerstenhaber algebra should not be confused with the Poisson superalgebra; both invoke anticommutation, but in different ways.)
The construction has also been generalized for Malcev algebras, Bol algebras and left alternative algebras.
== Universal property ==
The universal enveloping algebra, or rather the universal enveloping algebra together with the canonical map $h\colon\mathfrak{g}\to U(\mathfrak{g})$, possesses a universal property. Suppose we have any Lie algebra map $\varphi\colon\mathfrak{g}\to A$ to a unital associative algebra $A$ (with Lie bracket in $A$ given by the commutator). More explicitly, this means that we assume
$$\varphi ([X,Y])=\varphi (X)\varphi (Y)-\varphi (Y)\varphi (X)$$
for all $X,Y\in\mathfrak{g}$. Then there exists a unique unital algebra homomorphism $\widehat{\varphi}\colon U(\mathfrak{g})\to A$ such that
$$\varphi ={\widehat {\varphi }}\circ h$$
where $h\colon\mathfrak{g}\to U(\mathfrak{g})$ is the canonical map. (The map $h$ is obtained by embedding $\mathfrak{g}$ into its tensor algebra and then composing with the quotient map to the universal enveloping algebra. This map is an embedding, by the Poincaré–Birkhoff–Witt theorem.)
To put it differently, if $\varphi\colon\mathfrak{g}\to A$ is a linear map into a unital algebra $A$ satisfying
$$\varphi ([X,Y])=\varphi (X)\varphi (Y)-\varphi (Y)\varphi (X),$$
then $\varphi$ extends to an algebra homomorphism $\widehat{\varphi}\colon U(\mathfrak{g})\to A$. Since $U(\mathfrak{g})$ is generated by elements of $\mathfrak{g}$, the map $\widehat{\varphi}$ must be uniquely determined by the requirement that
$${\widehat {\varphi }}(X_{i_{1}}\cdots X_{i_{N}})=\varphi (X_{i_{1}})\cdots \varphi (X_{i_{N}}),\qquad X_{i_{j}}\in {\mathfrak {g}}.$$
The point is that, because there are no other relations in the universal enveloping algebra besides those coming from the commutation relations of $\mathfrak{g}$, the map $\widehat{\varphi}$ is well defined, independent of how one writes a given element $x\in U(\mathfrak{g})$ as a linear combination of products of Lie algebra elements.
The universal property of the enveloping algebra immediately implies that every representation of $\mathfrak{g}$ acting on a vector space $V$ extends uniquely to a representation of $U(\mathfrak{g})$. (Take $A=\mathrm{End}(V)$.) This observation is important because it allows (as discussed below) the Casimir elements to act on $V$. These operators (from the center of $U(\mathfrak{g})$) act as scalars and provide important information about the representations. The quadratic Casimir element is of particular importance in this regard.
=== Other algebras ===
Although the canonical construction, given above, can be applied to other algebras, the result, in general, does not have the universal property. Thus, for example, when the construction is applied to Jordan algebras, the resulting enveloping algebra contains the special Jordan algebras, but not the exceptional ones: that is, it does not envelop the Albert algebras. Likewise, the Poincaré–Birkhoff–Witt theorem, below, constructs a basis for an enveloping algebra; it just will not be universal. Similar remarks hold for Lie superalgebras.
== Poincaré–Birkhoff–Witt theorem ==
The Poincaré–Birkhoff–Witt theorem gives a precise description of $U(\mathfrak{g})$. This can be done in either of two ways: by reference to an explicit vector basis on the Lie algebra, or in a coordinate-free fashion.
=== Using basis elements ===
One way is to suppose that the Lie algebra can be given a totally ordered basis, that is, that it is the free vector space on a totally ordered set. Recall that a free vector space is defined as the space of all finitely supported functions from a set $X$ to the field $K$ (finitely supported means that only finitely many values are non-zero); it can be given a basis $e_a : X \to K$ such that $e_a(b) = \delta_{ab}$ is the indicator function for $a, b \in X$. Let $h : \mathfrak{g} \to T(\mathfrak{g})$ be the injection into the tensor algebra; this is used to give the tensor algebra a basis as well. This is done by lifting: given an arbitrary sequence of the $e_a$, one defines the extension of $h$ to be
$$h(e_{a}\otimes e_{b}\otimes \cdots \otimes e_{c})=h(e_{a})\otimes h(e_{b})\otimes \cdots \otimes h(e_{c}).$$
The Poincaré–Birkhoff–Witt theorem then states that one can obtain a basis for $U(\mathfrak{g})$ from the above by enforcing the total order of $X$ on the algebra. That is, $U(\mathfrak{g})$ has a basis
$$e_{a}\otimes e_{b}\otimes \cdots \otimes e_{c}$$
where $a \leq b \leq \cdots \leq c$, the ordering being the total order on the set $X$. The proof of the theorem involves noting that, if one starts with out-of-order basis elements, these can always be swapped by using the commutator (together with the structure constants). The hard part of the proof is establishing that the final result is unique and independent of the order in which the swaps were performed.
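For example (a standard illustration, not stated in the text above): taking $\mathrm{sl}(2,\mathbb{C})$ with the ordered basis $E < F < H$, the theorem yields the PBW basis of ordered monomials

```latex
\{\, E^{a} F^{b} H^{c} \;:\; a, b, c \in \mathbb{Z}_{\geq 0} \,\},
```

so an arbitrary product such as $F E$ is first rewritten, via $FE = EF - H$, as a combination of these basis monomials.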
This basis should be easily recognized as the basis of a symmetric algebra. That is, the underlying vector spaces of $U(\mathfrak{g})$ and the symmetric algebra are isomorphic, and it is the PBW theorem that shows this. See, however, the section on the algebra of symbols, below, for a more precise statement of the nature of the isomorphism.
It is useful, perhaps, to split the process into two steps. In the first step, one constructs the free Lie algebra: this is what one gets if one mods out by all commutators, without specifying their values. The second step is to apply the specific commutation relations from $\mathfrak{g}$. The first step is universal and does not depend on the specific $\mathfrak{g}$. It can also be precisely defined: the basis elements are given by Hall words, a special case of which are the Lyndon words; these are explicitly constructed to behave appropriately as commutators.
=== Coordinate-free ===
One can also state the theorem in a coordinate-free fashion, avoiding the use of total orders and basis elements. This is convenient when there are difficulties in defining the basis vectors, as there can be for infinite-dimensional Lie algebras. It also gives a more natural form that is more easily extended to other kinds of algebras. This is accomplished by constructing a filtration $U_m\mathfrak{g}$ whose limit is the universal enveloping algebra $U(\mathfrak{g})$.
First, some notation is needed for an ascending sequence of subspaces of the tensor algebra. Let
$$T_{m}{\mathfrak {g}}=K\oplus {\mathfrak {g}}\oplus T^{2}{\mathfrak {g}}\oplus \cdots \oplus T^{m}{\mathfrak {g}}$$
where
$$T^{m}{\mathfrak {g}}={\mathfrak {g}}^{\otimes m}={\mathfrak {g}}\otimes \cdots \otimes {\mathfrak {g}}$$
is the $m$-fold tensor product of $\mathfrak{g}$. The $T_m\mathfrak{g}$ form a filtration:
$$K\subset T_{1}{\mathfrak {g}}\subset T_{2}{\mathfrak {g}}\subset \cdots \subset T_{m}{\mathfrak {g}}\subset \cdots $$
More precisely, this is a filtered algebra, since the filtration is compatible with the multiplication. Note that the limit of this filtration is the tensor algebra $T(\mathfrak{g})$.
It was already established, above, that quotienting by the ideal is a natural transformation that takes one from $T(\mathfrak{g})$ to $U(\mathfrak{g})$. This also works naturally on the subspaces, and so one obtains a filtration $U_m\mathfrak{g}$ whose limit is the universal enveloping algebra $U(\mathfrak{g})$.
Next, define the space
$$G_{m}{\mathfrak {g}}=U_{m}{\mathfrak {g}}/U_{m-1}{\mathfrak {g}}.$$
This is the space $U_m\mathfrak{g}$ modulo all of the subspaces $U_n\mathfrak{g}$ of strictly smaller filtration degree. Note that $G_m\mathfrak{g}$ is not at all the same as the leading term $U^m\mathfrak{g}$ of the filtration, as one might naively surmise; it is not constructed through a set-subtraction mechanism associated with the filtration.
Quotienting $U_m\mathfrak{g}$ by $U_{m-1}\mathfrak{g}$ has the effect of setting all Lie commutators defined in $U_m\mathfrak{g}$ to zero. One can see this by observing that the commutator of a pair of elements whose product lies in $U_m\mathfrak{g}$ actually gives an element in $U_{m-1}\mathfrak{g}$. This is perhaps not immediately obvious: to get this result, one must repeatedly apply the commutation relations and turn the crank. The essence of the Poincaré–Birkhoff–Witt theorem is that it is always possible to do this, and that the result is unique.
Since commutators of elements whose product lies in $U_m\mathfrak{g}$ lie in $U_{m-1}\mathfrak{g}$, the quotienting that defines $G_m\mathfrak{g}$ has the effect of setting all commutators to zero. What PBW states is that the commutator of elements in $G_m\mathfrak{g}$ is necessarily zero. What is left are the elements that are not expressible as commutators.
In this way, one is led immediately to the symmetric algebra, the algebra in which all commutators vanish. It can be defined as a filtration $S_m\mathfrak{g}$ of symmetric tensor products $\operatorname{Sym}^m\mathfrak{g}$; its limit is the symmetric algebra $S(\mathfrak{g})$. It is constructed by appeal to the same notion of naturality as before: one starts with the same tensor algebra and just uses a different ideal, the ideal that makes all elements commute:
$$S({\mathfrak {g}})=T({\mathfrak {g}})/(a\otimes b-b\otimes a).$$
Thus, one can view the Poincaré–Birkhoff–Witt theorem as stating that $G(\mathfrak{g})$ is isomorphic to the symmetric algebra $S(\mathfrak{g})$, both as a vector space and as a commutative algebra.
The $G_m\mathfrak{g}$ also form a filtered algebra; its limit is $G(\mathfrak{g})$. This is the associated graded algebra of the filtration.
The construction above, due to its use of quotienting, implies that $G(\mathfrak{g})$ is isomorphic to $U(\mathfrak{g})$ as a vector space. In more general settings, with loosened conditions, one finds that $S(\mathfrak{g})\to G(\mathfrak{g})$ is a projection, and one then gets PBW-type theorems for the associated graded algebra of a filtered algebra. To emphasize this, the notation $\operatorname{gr}U(\mathfrak{g})$ is sometimes used for $G(\mathfrak{g})$, serving as a reminder that it is the associated graded algebra.
=== Other algebras ===
The theorem, applied to Jordan algebras, yields the exterior algebra rather than the symmetric algebra. In essence, the construction zeros out the anti-commutators. The resulting algebra is an enveloping algebra, but it is not universal. As mentioned above, it fails to envelop the exceptional Jordan algebras.
== Left-invariant differential operators ==
Suppose $G$ is a real Lie group with Lie algebra $\mathfrak{g}$. Following the modern approach, we may identify $\mathfrak{g}$ with the space of left-invariant vector fields (i.e., first-order left-invariant differential operators). Specifically, if we initially think of $\mathfrak{g}$ as the tangent space to $G$ at the identity, then each vector in $\mathfrak{g}$ has a unique left-invariant extension, and we identify the vector in the tangent space with the associated left-invariant vector field. Now, the commutator (as differential operators) of two left-invariant vector fields is again a vector field and again left-invariant. We can then define the bracket operation on $\mathfrak{g}$ as the commutator of the associated left-invariant vector fields. This definition agrees with any other standard definition of the bracket structure on the Lie algebra of a Lie group.
We may then consider left-invariant differential operators of arbitrary order. Every such operator $A$ can be expressed (non-uniquely) as a linear combination of products of left-invariant vector fields. The collection of all left-invariant differential operators on $G$ forms an algebra, denoted $D(G)$. It can be shown that $D(G)$ is isomorphic to the universal enveloping algebra $U(\mathfrak{g})$.
In the case that $\mathfrak{g}$ arises as the Lie algebra of a real Lie group, one can use left-invariant differential operators to give an analytic proof of the Poincaré–Birkhoff–Witt theorem. Specifically, the algebra $D(G)$ of left-invariant differential operators is generated by elements (the left-invariant vector fields) that satisfy the commutation relations of $\mathfrak{g}$. Thus, by the universal property of the enveloping algebra, $D(G)$ is a quotient of $U(\mathfrak{g})$. Hence, if the PBW basis elements are linearly independent in $D(G)$ (which one can establish analytically), they must certainly be linearly independent in $U(\mathfrak{g})$. (And, at this point, the isomorphism of $D(G)$ with $U(\mathfrak{g})$ is apparent.)
== Algebra of symbols ==
The underlying vector space of $S(\mathfrak{g})$ may be given a new algebra structure so that $U(\mathfrak{g})$ and $S(\mathfrak{g})$ are isomorphic as associative algebras. This leads to the concept of the algebra of symbols $\star(\mathfrak{g})$: the space of symmetric polynomials, endowed with a product, the $\star$-product, that places the algebraic structure of the Lie algebra onto what is otherwise a standard associative algebra. That is, what the PBW theorem obscures (the commutation relations), the algebra of symbols restores to the spotlight.
The algebra is obtained by taking elements of $S(\mathfrak{g})$ and replacing each generator $e_i$ by an indeterminate, commuting variable $t_i$, to obtain the space of symmetric polynomials $K[t_i]$ over the field $K$. Indeed, the correspondence is trivial: one simply substitutes the symbol $t_i$ for $e_i$. The resulting polynomial is called the symbol of the corresponding element of $S(\mathfrak{g})$. The inverse map is
$$w:\star ({\mathfrak {g}})\to U({\mathfrak {g}})$$
which replaces each symbol $t_i$ by $e_i$. The algebraic structure is obtained by requiring that the product $\star$ act as an isomorphism, that is, so that
$$w(p\star q)=w(p)\otimes w(q)$$
for polynomials $p,q\in\star(\mathfrak{g})$.
The primary issue with this construction is that $w(p)\otimes w(q)$ is not trivially, as written, a member of $U(\mathfrak{g})$ in the properly ordered basis: one must first perform a tedious reshuffling of the basis elements (applying the structure constants as needed) to bring it into that form. An explicit expression for this product can be given: this is the Berezin formula. It follows essentially from the Baker–Campbell–Hausdorff formula for the product of two elements of a Lie group.
A closed-form expression is given by
$$p(t)\star q(t)=\left.\exp \left(t_{i}m^{i}\left({\frac {\partial }{\partial u}},{\frac {\partial }{\partial v}}\right)\right)p(u)q(v)\right\vert _{u=v=t}$$
where
$$m(A,B)=\log \left(e^{A}e^{B}\right)-A-B$$
and $m^i$ is just $m$ expressed in the chosen basis.
The universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit); here, the $\star$-product is called the Moyal product.
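The Heisenberg/Weyl case can be made concrete with a tiny symbolic sketch (an illustration, not from the article): realize the generators $p$ and $q$ as operators on polynomials, with the central element acting as the unit, i.e. the identity operator.

```python
# The Weyl algebra as U of the Heisenberg algebra: p acts as d/dt and
# q as multiplication by t, so that [p, q] = 1 (the center acts as 1).
import sympy as sp

t = sp.symbols("t")

def p(f): return sp.diff(f, t)   # momentum generator: differentiation
def q(f): return t * f           # position generator: multiplication by t

f = t**3 + 2*t                   # a sample polynomial
# The Heisenberg relation [p, q] = 1, i.e. (pq - qp)f = f:
assert sp.expand(p(q(f)) - q(p(f)) - f) == 0
```

Words in $p$ and $q$ modulo this relation are exactly the elements of the Weyl algebra, the enveloping algebra named above.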
== Representation theory ==
The universal enveloping algebra preserves the representation theory: the representations of $\mathfrak{g}$ correspond in a one-to-one manner to the modules over $U(\mathfrak{g})$. In more abstract terms, the abelian category of all representations of $\mathfrak{g}$ is isomorphic to the abelian category of all left modules over $U(\mathfrak{g})$.
The representation theory of semisimple Lie algebras rests on the observation that there is an isomorphism, known as the Kronecker product:
$$U({\mathfrak {g}}_{1}\oplus {\mathfrak {g}}_{2})\cong U({\mathfrak {g}}_{1})\otimes U({\mathfrak {g}}_{2})$$
for Lie algebras $\mathfrak{g}_1, \mathfrak{g}_2$. The isomorphism follows from a lifting of the embedding
$$i({\mathfrak {g}}_{1}\oplus {\mathfrak {g}}_{2})=i_{1}({\mathfrak {g}}_{1})\otimes 1\oplus 1\otimes i_{2}({\mathfrak {g}}_{2})$$
where $i:\mathfrak{g}\to U(\mathfrak{g})$ is the canonical embedding (with subscripts, respectively, for algebras one and two). It is straightforward to verify that this embedding lifts, given the prescription above. See, however, the discussion of the bialgebra structure in the article on tensor algebras for a review of some of the finer points of doing so: in particular, the shuffle product employed there corresponds to the Wigner–Racah coefficients, i.e. the 6j and 9j symbols, etc.
Also important is that the universal enveloping algebra of a free Lie algebra is isomorphic to the free associative algebra.
Construction of representations typically proceeds by building the Verma modules of the highest weights.
In a typical context where $\mathfrak{g}$ is acting by infinitesimal transformations, the elements of $U(\mathfrak{g})$ act like differential operators of all orders. (See, for example, the realization of the universal enveloping algebra as left-invariant differential operators on the associated group, as discussed above.)
== Casimir operators ==
The center $Z(U(\mathfrak{g}))$ of $U(\mathfrak{g})$ can be identified with the centralizer of $\mathfrak{g}$ in $U(\mathfrak{g})$: any element of $Z(U(\mathfrak{g}))$ must commute with all of $U(\mathfrak{g})$, and in particular with the canonical embedding of $\mathfrak{g}$ into $U(\mathfrak{g})$. Because of this, the center is directly useful for classifying representations of $\mathfrak{g}$. For a finite-dimensional semisimple Lie algebra, the Casimir operators form a distinguished basis of the center $Z(U(\mathfrak{g}))$. These may be constructed as follows.
The center $Z(U(\mathfrak{g}))$ consists of linear combinations of all elements
$$z=v\otimes w\otimes \cdots \otimes u\in U({\mathfrak {g}})$$
that commute with all elements $x\in\mathfrak{g}$, that is, for which
$$[z,x]=\operatorname {ad} _{x}(z)=0.$$
That is, central elements are in the kernel of $\operatorname{ad}_{\mathfrak{g}}$, so a technique is needed for computing that kernel. What we have is the action of the adjoint representation on $\mathfrak{g}$; we need it on $U(\mathfrak{g})$.
The easiest route is to note that each $\operatorname{ad}_x$ is a derivation, and that the space of derivations can be lifted to $T(\mathfrak{g})$ and thus to $U(\mathfrak{g})$; this implies that both of these are differential algebras.
By definition,
δ
:
g
→
g
{\displaystyle \delta :{\mathfrak {g}}\to {\mathfrak {g}}}
is a derivation on
g
{\displaystyle {\mathfrak {g}}}
if it obeys Leibniz's law:
δ
(
[
v
,
w
]
)
=
[
δ
(
v
)
,
w
]
+
[
v
,
δ
(
w
)
]
{\displaystyle \delta ([v,w])=[\delta (v),w]+[v,\delta (w)]}
(When
g
{\displaystyle {\mathfrak {g}}}
is the space of left invariant vector fields on a group
G
{\displaystyle G}
, the Lie bracket is that of vector fields.) The lifting is performed by defining
$\delta (v\otimes w\otimes \cdots \otimes u)=\delta (v)\otimes w\otimes \cdots \otimes u+v\otimes \delta (w)\otimes \cdots \otimes u+\cdots +v\otimes w\otimes \cdots \otimes \delta (u).$
Since $\operatorname {ad} _{x}$ is a derivation for any $x\in {\mathfrak {g}}$, the above defines $\operatorname {ad} _{x}$ acting on $T({\mathfrak {g}})$ and $U({\mathfrak {g}})$.
From the PBW theorem, it is clear that all central elements are linear combinations of symmetric homogeneous polynomials in the basis elements $e_{a}$ of the Lie algebra. The Casimir invariants are the irreducible homogeneous polynomials of a given, fixed degree. That is, given a basis $e_{a}$, a Casimir operator of order $m$ has the form
$C_{(m)}=\kappa ^{ab\cdots c}\,e_{a}\otimes e_{b}\otimes \cdots \otimes e_{c}$
where there are $m$ terms in the tensor product, and $\kappa ^{ab\cdots c}$ is a completely symmetric tensor of order $m$ belonging to the adjoint representation. That is, $\kappa ^{ab\cdots c}$ should be thought of as an element of $\left(\operatorname {ad} _{\mathfrak {g}}\right)^{\otimes m}$.
Recall that the adjoint representation is given directly by the structure constants, and so an explicit indexed form of the above equations can be given in terms of the Lie algebra basis; this is originally a theorem of Israel Gel'fand. That is, from $[x,C_{(m)}]=0$, it follows that
$f_{ij}^{\;\;k}\kappa ^{jl\cdots m}+f_{ij}^{\;\;l}\kappa ^{kj\cdots m}+\cdots +f_{ij}^{\;\;m}\kappa ^{kl\cdots j}=0$
where the structure constants are $[e_{i},e_{j}]=f_{ij}^{\;\;k}e_{k}$.
As an example, the quadratic Casimir operator is $C_{(2)}=\kappa ^{ij}e_{i}\otimes e_{j}$ where $\kappa ^{ij}$ is the inverse matrix of the Killing form $\kappa _{ij}$. That the Casimir operator $C_{(2)}$ belongs to the center $Z(U({\mathfrak {g}}))$ follows from the fact that the Killing form is invariant under the adjoint action.
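As a concrete numerical sketch (assuming the $\mathfrak{so}(3)$ structure constants $[e_i,e_j]=\varepsilon_{ijk}e_k$ as the example algebra), one can compute the Killing form directly from the structure constants and invert it to obtain the coefficients $\kappa^{ij}$ of the quadratic Casimir; note the overall normalization differs from the $\kappa^{ij}=\delta^{ij}$ convention used in the SO(3) example below:

```python
import numpy as np

# Structure constants of so(3): [e_i, e_j] = eps_{ij}^k e_k, f[i, j, k] = eps_{ijk}
f = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[i, j, k], f[i, k, j] = 1.0, -1.0

# Killing form kappa_{ij} = tr(ad_{e_i} ad_{e_j}), contracted from structure constants
kappa = np.einsum('ikl,jlk->ij', f, f)
assert np.allclose(kappa, -2.0 * np.eye(3))

# kappa^{ij}: the inverse Killing form, i.e. the coefficients of C_(2)
kappa_inv = np.linalg.inv(kappa)
assert np.allclose(kappa_inv, -0.5 * np.eye(3))
```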
The center of the universal enveloping algebra of a simple Lie algebra is given in detail by the Harish-Chandra isomorphism.
=== Rank ===
The number of algebraically independent Casimir operators of a finite-dimensional semisimple Lie algebra is equal to the rank of that algebra, i.e. is equal to the rank of the Cartan–Weyl basis. This may be seen as follows. For a d-dimensional vector space V, recall that the determinant is the completely antisymmetric tensor on
$V^{\otimes d}$. Given a matrix $M$, one may write the characteristic polynomial of $M$ as
$\det(tI-M)=\sum _{n=0}^{d}p_{n}t^{n}.$
For a d-dimensional Lie algebra, that is, an algebra whose adjoint representation is d-dimensional, the linear map $\operatorname {ad} :{\mathfrak {g}}\to \operatorname {End} ({\mathfrak {g}})$ shows that $\operatorname {ad} _{x}$ is a d-dimensional endomorphism, and so one has the characteristic equation
$\det(tI-\operatorname {ad} _{x})=\sum _{n=0}^{d}p_{n}(x)t^{n}$
for elements $x\in {\mathfrak {g}}$.
The non-zero roots of this characteristic polynomial (that are roots for all x) form the root system of the algebra. In general, there are only r such roots; this is the rank of the algebra. It follows that the highest value of n for which $p_{n}(x)$ is non-vanishing is r.
The $p_{n}(x)$ are homogeneous polynomials of degree d − n. This can be seen in several ways: given a constant $k\in K$, ad is linear, so that $\operatorname {ad} _{kx}=k\,\operatorname {ad} _{x}$.
Substituting this into the characteristic polynomial above, one obtains $p_{n}(kx)=k^{d-n}p_{n}(x)$.
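This homogeneity is easy to verify numerically; the sketch below (assuming the adjoint matrices $(\operatorname{ad}_{e_i})_{ab}=-\varepsilon_{iab}$ of $\mathfrak{so}(3)$ as a concrete example) checks that the coefficients of $\det(tI-\operatorname{ad}_x)$ scale as $k^{d-n}$:

```python
import numpy as np

# Adjoint action of so(3): ad_v = v_1 L_1 + v_2 L_2 + v_3 L_3, (L_i)_{ab} = -eps_{iab}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
ad = lambda v: -np.einsum('i,iab->ab', v, eps)

d = 3
v = np.array([1.0, 2.0, 3.0])
k = 2.5

p = np.poly(ad(v))       # coefficients of det(tI - ad_v), highest power of t first
pk = np.poly(ad(k * v))  # same for the rescaled element kv

# Homogeneity p_n(kv) = k^(d-n) p_n(v); index d-n holds the coefficient of t^n
for n in range(d + 1):
    assert np.isclose(pk[d - n], k ** (d - n) * p[d - n])
```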
By linearity, if one expands in the basis $x=\sum _{i=1}^{d}x_{i}e_{i}$, then the polynomial has the form
$p_{n}(x)=x_{a}x_{b}\cdots x_{c}\,\kappa ^{ab\cdots c}$
that is, $\kappa $ is a tensor of rank $m=d-n$. By linearity and the commutativity of addition, i.e. that $\operatorname {ad} _{x+y}=\operatorname {ad} _{y+x}$, one concludes that this tensor must be completely symmetric. This tensor is exactly the Casimir invariant of order m.
The center $Z({\mathfrak {g}})$ corresponds to those elements $z\in Z({\mathfrak {g}})$ for which $\operatorname {ad} _{x}(z)=0$ for all x; by the above, these clearly correspond to the roots of the characteristic equation. One concludes that the roots form a space of rank r and that the Casimir invariants span this space. That is, the Casimir invariants generate the center $Z(U({\mathfrak {g}}))$.
=== Example: Rotation group SO(3) ===
The rotation group SO(3) is of rank one, and thus has one Casimir operator. It is three-dimensional, and thus the Casimir operator must have order (3 − 1) = 2, i.e. be quadratic. Of course, this is the Lie algebra of $A_{1}$.
As an elementary exercise, one can compute this directly. Changing notation to $e_{i}=L_{i}$, with $L_{i}$ belonging to the adjoint representation, a general algebra element is $xL_{1}+yL_{2}+zL_{3}$
and direct computation gives
$\det \left(xL_{1}+yL_{2}+zL_{3}-tI\right)=-t^{3}-(x^{2}+y^{2}+z^{2})t.$
The quadratic term can be read off as $\kappa ^{ij}=\delta ^{ij}$, and so the squared angular momentum operator for the rotation group is that Casimir operator. That is,
$C_{(2)}=L^{2}=e_{1}\otimes e_{1}+e_{2}\otimes e_{2}+e_{3}\otimes e_{3}$
and explicit computation shows that $[L^{2},e_{k}]=0$ after making use of the structure constants $[e_{i},e_{j}]=\varepsilon _{ij}^{\;\;k}e_{k}$.
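This computation can be reproduced numerically. The sketch below builds the adjoint matrices $(L_i)_{ab}=-\varepsilon_{iab}$, verifies the structure constants, and checks that $L^2$ commutes with every generator (a check in the adjoint representation, not in $U(\mathfrak{g})$ itself):

```python
import numpy as np

# Levi-Civita symbol and the adjoint matrices (L_i)_{ab} = -eps_{iab} of so(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
L = [-eps[i] for i in range(3)]

# Verify the structure constants [L_i, L_j] = eps_{ij}^k L_k
for i in range(3):
    for j in range(3):
        comm = L[i] @ L[j] - L[j] @ L[i]
        assert np.allclose(comm, sum(eps[i, j, k] * L[k] for k in range(3)))

# Quadratic Casimir in this representation, with kappa^{ij} = delta^{ij}
L2 = sum(Li @ Li for Li in L)

# L^2 commutes with every generator; here it equals -2I (real antisymmetric convention)
for Lk in L:
    assert np.allclose(L2 @ Lk - Lk @ L2, 0.0)
assert np.allclose(L2, -2.0 * np.eye(3))
```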
=== Example: Pseudo-differential operators ===
A key observation during the construction of $U({\mathfrak {g}})$ above was that it is a differential algebra, by dint of the fact that any derivation on the Lie algebra can be lifted to $U({\mathfrak {g}})$. Thus, one is led to a ring of pseudo-differential operators, from which one can construct Casimir invariants.
If the Lie algebra ${\mathfrak {g}}$ acts on a space of linear operators, such as in Fredholm theory, then one can construct Casimir invariants on the corresponding space of operators. The quadratic Casimir operator corresponds to an elliptic operator.
If the Lie algebra acts on a differentiable manifold, then each Casimir operator corresponds to a higher-order differential on the cotangent manifold, the second-order differential being the most common and most important.
If the action of the algebra is isometric, as would be the case for Riemannian or pseudo-Riemannian manifolds endowed with a metric and the symmetry groups SO(N) and SO(p, q), respectively, one can then contract upper and lower indices (with the metric tensor) to obtain more interesting structures. For the quadratic Casimir invariant, this is the Laplacian. Quartic Casimir operators allow one to square the stress–energy tensor, giving rise to the Yang–Mills action. The Coleman–Mandula theorem restricts the form that these can take, when one considers ordinary Lie algebras. However, the Lie superalgebras are able to evade the premises of the Coleman–Mandula theorem, and can be used to mix together space and internal symmetries.
== Examples in particular cases ==
If ${\mathfrak {g}}={\mathfrak {sl}}_{2}$, then it has a basis of matrices
$H={\begin{pmatrix}-1&0\\0&1\end{pmatrix}},\quad E={\begin{pmatrix}0&1\\0&0\end{pmatrix}},\quad F={\begin{pmatrix}0&0\\1&0\end{pmatrix}}$
which satisfy the following identities under the standard bracket: $[H,E]=-2E$, $[H,F]=2F$, and $[E,F]=-H$.
This shows us that the universal enveloping algebra has the presentation
$U({\mathfrak {sl}}_{2})={\frac {\mathbb {C} \langle x,y,z\rangle }{(xy-yx+2y,\;xz-zx-2z,\;yz-zy+x)}}$
as a non-commutative ring.
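The bracket identities above are easy to verify with the given matrices; a minimal numerical check:

```python
import numpy as np

# The basis of sl(2) given above (note the sign convention H = diag(-1, 1))
H = np.array([[-1, 0], [0, 1]])
E = np.array([[0, 1], [0, 0]])
F = np.array([[0, 0], [1, 0]])

def bracket(a, b):  # commutator bracket on matrices
    return a @ b - b @ a

assert np.array_equal(bracket(H, E), -2 * E)
assert np.array_equal(bracket(H, F), 2 * F)
assert np.array_equal(bracket(E, F), -H)
```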
If ${\mathfrak {g}}$ is abelian (that is, the bracket is always 0), then $U({\mathfrak {g}})$ is commutative; and if a basis of the vector space ${\mathfrak {g}}$ has been chosen, then $U({\mathfrak {g}})$ can be identified with the polynomial algebra over K, with one variable per basis element.
If ${\mathfrak {g}}$ is the Lie algebra corresponding to the Lie group G, then $U({\mathfrak {g}})$ can be identified with the algebra of left-invariant differential operators (of all orders) on G, with ${\mathfrak {g}}$ lying inside it as the left-invariant vector fields, i.e. the first-order differential operators.
To relate the above two cases: if ${\mathfrak {g}}$ is a vector space V regarded as an abelian Lie algebra, the left-invariant differential operators are the constant-coefficient operators, which indeed form a polynomial algebra in the first-order partial derivatives.
The center $Z({\mathfrak {g}})$ consists of the left- and right-invariant differential operators; when G is not commutative, this is often not generated by first-order operators (see, for example, the Casimir operator of a semisimple Lie algebra).
Another characterization in Lie group theory is of $U({\mathfrak {g}})$ as the convolution algebra of distributions supported only at the identity element e of G.
The algebra of differential operators in n variables with polynomial coefficients may be obtained starting with the Lie algebra of the Heisenberg group. See Weyl algebra for this; one must take a quotient, so that the central elements of the Lie algebra act as prescribed scalars.
The universal enveloping algebra of a finite-dimensional Lie algebra is a filtered quadratic algebra.
== Hopf algebras and quantum groups ==
The construction of the group algebra for a given group is in many ways analogous to constructing the universal enveloping algebra for a given Lie algebra. Both constructions are universal and translate representation theory into module theory. Furthermore, both group algebras and universal enveloping algebras carry natural comultiplications that turn them into Hopf algebras. This is made precise in the article on the tensor algebra: the tensor algebra has a Hopf algebra structure on it, and because the Lie bracket is consistent with (obeys the consistency conditions for) that Hopf structure, it is inherited by the universal enveloping algebra.
Given a Lie group G, one can construct the vector space C(G) of continuous complex-valued functions on G, and turn it into a C*-algebra. This algebra has a natural Hopf algebra structure: given two functions
$\varphi ,\psi \in C(G)$, one defines multiplication as $(\nabla (\varphi ,\psi ))(x)=\varphi (x)\psi (x)$, comultiplication as $(\Delta (\varphi ))(x\otimes y)=\varphi (xy)$, the counit as $\varepsilon (\varphi )=\varphi (e)$, and the antipode as $(S(\varphi ))(x)=\varphi (x^{-1})$.
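As a toy illustration of these formulas (using the finite cyclic group Z/3 as a stand-in for a compact group G, so that functions are just arrays; the variable names here are illustrative, not standard API), one can check coassociativity of the comultiplication directly:

```python
import numpy as np
from itertools import product

# Toy model: functions on the cyclic group Z/3 (identity e = 0, law: addition mod 3)
n = 3
mul = lambda a, b: (a + b) % n
inv = lambda a: (-a) % n

rng = np.random.default_rng(1)
phi = rng.standard_normal(n)  # a "function" on the group, stored as an array

# Comultiplication (Delta phi)(x ⊗ y) = phi(xy), stored as an n x n array
Delta = np.array([[phi[mul(x, y)] for y in range(n)] for x in range(n)])

# Counit epsilon(phi) = phi(e) and antipode (S phi)(x) = phi(x^{-1})
counit = phi[0]
S_phi = np.array([phi[inv(x)] for x in range(n)])

# Coassociativity: phi((xy)z) = phi(x(yz)) for all group elements
for x, y, z in product(range(n), repeat=3):
    assert Delta[mul(x, y), z] == Delta[x, mul(y, z)]

# The antipode is involutive here: S(S phi) = phi
assert np.allclose(np.array([S_phi[inv(x)] for x in range(n)]), phi)
```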
Now, the Gelfand–Naimark theorem essentially states that every commutative Hopf algebra is isomorphic to the Hopf algebra of continuous functions on some compact topological group G—the theory of compact topological groups and the theory of commutative Hopf algebras are the same. For Lie groups, this implies that C(G) is isomorphically dual to
$U({\mathfrak {g}})$; more precisely, it is isomorphic to a subspace of the dual space $U^{*}({\mathfrak {g}})$.
These ideas can then be extended to the non-commutative case. One starts by defining the quasi-triangular Hopf algebras, and then performing what is called a quantum deformation to obtain the quantum universal enveloping algebra, or quantum group, for short.
== See also ==
Milnor–Moore theorem
Harish-Chandra homomorphism
== References ==
Dixmier, Jacques (1996) [1974], Enveloping algebras, Graduate Studies in Mathematics, vol. 11, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0560-2, MR 0498740
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
Helgason, Sigurdur (2001), Differential geometry, Lie groups, and symmetric spaces, Graduate Studies in Mathematics, vol. 34, Providence, R.I.: American Mathematical Society, doi:10.1090/gsm/034, ISBN 978-0-8218-2848-9, MR 1834454, S2CID 120016227
Musson, Ian M. (2012), Lie Superalgebras and Enveloping Algebras, Graduate Studies in Mathematics, vol. 131, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-6867-6, Zbl 1255.17001
Shlomo Sternberg (2004), Lie algebras, Harvard University.
Universal enveloping algebra at the nLab
In mathematics, a Lie algebra (pronounced LEE) is a vector space
${\mathfrak {g}}$ together with an operation called the Lie bracket, an alternating bilinear map ${\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}$, that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors $x$ and $y$ is denoted $[x,y]$. A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, $[x,y]=xy-yx$.
Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra.
In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space
${\mathfrak {g}}$ to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give ${\mathfrak {g}}$
the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces.
In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics.
An elementary example (not directly coming from an associative algebra) is the 3-dimensional space ${\mathfrak {g}}=\mathbb {R} ^{3}$ with Lie bracket defined by the cross product $[x,y]=x\times y.$
This is skew-symmetric since $x\times y=-y\times x$, and instead of associativity it satisfies the Jacobi identity:
$x\times (y\times z)+y\times (z\times x)+z\times (x\times y)=0.$
This is the Lie algebra of the Lie group of rotations of space, and each vector $v\in \mathbb {R} ^{3}$ may be pictured as an infinitesimal rotation around the axis $v$, with angular speed equal to the magnitude of $v$. The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property $[x,x]=x\times x=0$.
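These identities are easy to confirm numerically for random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))  # three random vectors in R^3

bracket = np.cross  # the Lie bracket [a, b] = a x b

# Alternating property and anticommutativity
assert np.allclose(bracket(x, x), 0.0)
assert np.allclose(bracket(x, y), -bracket(y, x))

# Jacobi identity: x×(y×z) + y×(z×x) + z×(x×y) = 0
jac = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
assert np.allclose(jac, 0.0)
```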
== History ==
Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used.
== Definition of a Lie algebra ==
A Lie algebra is a vector space ${\mathfrak {g}}$ over a field $F$ together with a binary operation $[\,\cdot \,,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}$ called the Lie bracket, satisfying the following axioms:
Bilinearity, $[ax+by,z]=a[x,z]+b[y,z]$ and $[z,ax+by]=a[z,x]+b[z,y]$ for all scalars $a,b$ in $F$ and all elements $x,y,z$ in ${\mathfrak {g}}$.
The alternating property, $[x,x]=0$ for all $x$ in ${\mathfrak {g}}$.
The Jacobi identity, $[x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0$ for all $x,y,z$ in ${\mathfrak {g}}$.
Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation.
Using bilinearity to expand the Lie bracket $[x+y,x+y]$ and using the alternating property shows that $[x,y]+[y,x]=0$ for all $x,y$ in ${\mathfrak {g}}$. Thus bilinearity and the alternating property together imply
Anticommutativity, $[x,y]=-[y,x]$, for all $x,y$ in ${\mathfrak {g}}$. If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies $[x,x]=-[x,x].$
It is customary to denote a Lie algebra by a lower-case fraktur letter such as ${\mathfrak {g}},{\mathfrak {h}},{\mathfrak {b}},{\mathfrak {n}}$. If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is ${\mathfrak {su}}(n)$.
=== Generators and dimension ===
The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra
${\mathfrak {g}}$ means a subset of ${\mathfrak {g}}$ such that any Lie subalgebra (as defined below) that contains S must be all of ${\mathfrak {g}}$. Equivalently, ${\mathfrak {g}}$ is spanned (as a vector space) by all iterated brackets of elements of S.
== Basic examples ==
=== Abelian Lie algebras ===
A Lie algebra is called abelian if its Lie bracket is identically zero. Any vector space $V$ endowed with the identically zero Lie bracket becomes a Lie algebra. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket.
=== The Lie algebra of matrices ===
On an associative algebra $A$ over a field $F$ with multiplication written as $xy$, a Lie bracket may be defined by the commutator $[x,y]=xy-yx$. With this bracket, $A$ is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on $A$.)
The endomorphism ring of an $F$-vector space $V$ with the above Lie bracket is denoted ${\mathfrak {gl}}(V)$.
For a field F and a positive integer n, the space of n × n matrices over F, denoted ${\mathfrak {gl}}(n,F)$ or ${\mathfrak {gl}}_{n}(F)$, is a Lie algebra with bracket given by the commutator of matrices: $[X,Y]=XY-YX$. This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra.
When F is the real numbers, ${\mathfrak {gl}}(n,\mathbb {R} )$ is the Lie algebra of the general linear group $\mathrm {GL} (n,\mathbb {R} )$, the group of invertible n × n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, ${\mathfrak {gl}}(n,\mathbb {C} )$ is the Lie algebra of the complex Lie group $\mathrm {GL} (n,\mathbb {C} )$. The Lie bracket on ${\mathfrak {gl}}(n,\mathbb {R} )$ describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, ${\mathfrak {gl}}(n,F)$ can be viewed as the Lie algebra of the algebraic group $\mathrm {GL} (n)$ over F.
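For matrices, the Jacobi identity follows mechanically from associativity of matrix multiplication; a quick numerical confirmation in $\mathfrak{gl}(4,\mathbb{R})$:

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.standard_normal((3, 4, 4))  # three random elements of gl(4, R)

def bracket(a, b):  # commutator bracket on matrices
    return a @ b - b @ a

# Jacobi identity, a consequence of associativity of matrix multiplication
jac = bracket(X, bracket(Y, Z)) + bracket(Z, bracket(X, Y)) + bracket(Y, bracket(Z, X))
assert np.allclose(jac, 0.0)
```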
== Definitions ==
=== Subalgebras, ideals and homomorphisms ===
The Lie bracket is not required to be associative, meaning that $[[x,y],z]$ need not be equal to $[x,[y,z]]$. Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace ${\mathfrak {h}}\subseteq {\mathfrak {g}}$ which is closed under the Lie bracket. An ideal ${\mathfrak {i}}\subseteq {\mathfrak {g}}$ is a linear subspace that satisfies the stronger condition: $[{\mathfrak {g}},{\mathfrak {i}}]\subseteq {\mathfrak {i}}.$
In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals.
A Lie algebra homomorphism is a linear map compatible with the respective Lie brackets: $\phi \colon {\mathfrak {g}}\to {\mathfrak {h}}$ with $\phi ([x,y])=[\phi (x),\phi (y)]$ for all $x,y\in {\mathfrak {g}}$. An isomorphism of Lie algebras is a bijective homomorphism.
As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms. Given a Lie algebra ${\mathfrak {g}}$ and an ideal ${\mathfrak {i}}$ in it, the quotient Lie algebra ${\mathfrak {g}}/{\mathfrak {i}}$ is defined, with a surjective homomorphism ${\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}$ of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism $\phi \colon {\mathfrak {g}}\to {\mathfrak {h}}$ of Lie algebras, the image of $\phi $ is a Lie subalgebra of ${\mathfrak {h}}$ that is isomorphic to ${\mathfrak {g}}/\ker(\phi )$.
For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements $x,y\in {\mathfrak {g}}$ are said to commute if their bracket vanishes: $[x,y]=0$.
The centralizer subalgebra of a subset $S\subset {\mathfrak {g}}$ is the set of elements commuting with $S$: that is, ${\mathfrak {z}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]=0{\text{ for all }}s\in S\}$. The centralizer of ${\mathfrak {g}}$ itself is the center ${\mathfrak {z}}({\mathfrak {g}})$. Similarly, for a subspace S, the normalizer subalgebra of $S$ is ${\mathfrak {n}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]\in S{\text{ for all }}s\in S\}$. If $S$ is a Lie subalgebra, ${\mathfrak {n}}_{\mathfrak {g}}(S)$ is the largest subalgebra such that $S$ is an ideal of ${\mathfrak {n}}_{\mathfrak {g}}(S)$.
==== Example ====
The subspace ${\mathfrak {t}}_{n}$ of diagonal matrices in ${\mathfrak {gl}}(n,F)$ is an abelian Lie subalgebra. (It is a Cartan subalgebra of ${\mathfrak {gl}}(n)$, analogous to a maximal torus in the theory of compact Lie groups.) Here ${\mathfrak {t}}_{n}$ is not an ideal in ${\mathfrak {gl}}(n)$ for $n\geq 2$. For example, when $n=2$, this follows from the calculation:
$\left[{\begin{pmatrix}a&b\\c&d\end{pmatrix}},{\begin{pmatrix}x&0\\0&y\end{pmatrix}}\right]={\begin{pmatrix}ax&by\\cx&dy\end{pmatrix}}-{\begin{pmatrix}ax&bx\\cy&dy\end{pmatrix}}={\begin{pmatrix}0&b(y-x)\\c(x-y)&0\end{pmatrix}}$
(which is not always in ${\mathfrak {t}}_{2}$).
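A quick numerical version of this calculation (random $A$ in $\mathfrak{gl}(2)$, random diagonal $D$) confirms that the bracket generically leaves $\mathfrak{t}_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))      # arbitrary element of gl(2, R)
D = np.diag(rng.standard_normal(2))  # element of t_2 (diagonal matrices)

B = A @ D - D @ A  # the bracket [A, D] from the calculation above

# The result is purely off-diagonal, with entries b(y-x) and c(x-y),
# and generically nonzero: t_2 is not an ideal in gl(2).
assert np.allclose(np.diag(B), 0.0)
assert not np.allclose(B, 0.0)
```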
Every one-dimensional linear subspace of a Lie algebra ${\mathfrak {g}}$ is an abelian Lie subalgebra, but it need not be an ideal.
=== Product and semidirect product ===
For two Lie algebras ${\mathfrak {g}}$ and ${\mathfrak {g}}'$, the product Lie algebra is the vector space ${\mathfrak {g}}\times {\mathfrak {g}}'$ consisting of all ordered pairs $(x,x'),\,x\in {\mathfrak {g}},\ x'\in {\mathfrak {g}}'$, with Lie bracket $[(x,x'),(y,y')]=([x,y],[x',y']).$
This is the product in the category of Lie algebras. Note that the copies of ${\mathfrak {g}}$ and ${\mathfrak {g}}'$ in ${\mathfrak {g}}\times {\mathfrak {g}}'$ commute with each other: $[(x,0),(0,x')]=0.$
Let ${\mathfrak {g}}$ be a Lie algebra and ${\mathfrak {i}}$ an ideal of ${\mathfrak {g}}$. If the canonical map ${\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}$ splits (i.e., admits a section ${\mathfrak {g}}/{\mathfrak {i}}\to {\mathfrak {g}}$ as a homomorphism of Lie algebras), then ${\mathfrak {g}}$ is said to be a semidirect product of ${\mathfrak {i}}$ and ${\mathfrak {g}}/{\mathfrak {i}}$: ${\mathfrak {g}}={\mathfrak {g}}/{\mathfrak {i}}\ltimes {\mathfrak {i}}$. See also semidirect sum of Lie algebras.
=== Derivations ===
For an algebra A over a field F, a derivation of A over F is a linear map $D\colon A\to A$ that satisfies the Leibniz rule $D(xy)=D(x)y+xD(y)$ for all $x,y\in A$. (The definition makes sense for a possibly non-associative algebra.) Given two derivations $D_{1}$ and $D_{2}$, their commutator $[D_{1},D_{2}]:=D_{1}D_{2}-D_{2}D_{1}$ is again a derivation. This operation makes the space ${\text{Der}}_{F}(A)$ of all derivations of A over F into a Lie algebra.
Informally speaking, the space of derivations of A is the Lie algebra of the automorphism group of A. (This is literally true when the automorphism group is a Lie group, for example when F is the real numbers and A has finite dimension as a vector space.) For this reason, spaces of derivations are a natural way to construct Lie algebras: they are the "infinitesimal automorphisms" of A. Indeed, writing out the condition that $(1+\epsilon D)(xy)\equiv (1+\epsilon D)(x)\cdot (1+\epsilon D)(y){\pmod {\epsilon ^{2}}}$ (where 1 denotes the identity map on A) gives exactly the definition of D being a derivation.
Example: the Lie algebra of vector fields. Let A be the ring $C^{\infty }(X)$ of smooth functions on a smooth manifold X. Then a derivation of A over $\mathbb {R} $ is equivalent to a vector field on X. (A vector field v gives a derivation of the space of smooth functions by differentiating functions in the direction of v.) This makes the space ${\text{Vect}}(X)$ of vector fields into a Lie algebra (see Lie bracket of vector fields). Informally speaking, ${\text{Vect}}(X)$ is the Lie algebra of the diffeomorphism group of X. So the Lie bracket of vector fields describes the non-commutativity of the diffeomorphism group. An action of a Lie group G on a manifold X determines a homomorphism of Lie algebras ${\mathfrak {g}}\to {\text{Vect}}(X)$. (An example is illustrated below.)
A Lie algebra can be viewed as a non-associative algebra, and so each Lie algebra ${\mathfrak {g}}$ over a field F determines its Lie algebra of derivations, ${\text{Der}}_{F}({\mathfrak {g}})$. That is, a derivation of ${\mathfrak {g}}$ is a linear map $D\colon {\mathfrak {g}}\to {\mathfrak {g}}$ such that $D([x,y])=[D(x),y]+[x,D(y)]$.
The inner derivation associated to any $x\in \mathfrak{g}$ is the adjoint mapping $\mathrm{ad}_x$ defined by $\mathrm{ad}_x(y):=[x,y]$. (This is a derivation as a consequence of the Jacobi identity.) That gives a homomorphism of Lie algebras, $\operatorname{ad}\colon \mathfrak{g}\to \text{Der}_F(\mathfrak{g})$. The image $\text{Inn}_F(\mathfrak{g})$ is an ideal in $\text{Der}_F(\mathfrak{g})$, and the Lie algebra of outer derivations is defined as the quotient Lie algebra, $\text{Out}_F(\mathfrak{g})=\text{Der}_F(\mathfrak{g})/\text{Inn}_F(\mathfrak{g})$. (This is exactly analogous to the outer automorphism group of a group.) For a semisimple Lie algebra (defined below) over a field of characteristic zero, every derivation is inner. This is related to the theorem that the outer automorphism group of a semisimple Lie group is finite.
In contrast, an abelian Lie algebra has many outer derivations. Namely, for a vector space $V$ with Lie bracket zero, the Lie algebra $\text{Out}_F(V)$ can be identified with $\mathfrak{gl}(V)$.
== Examples ==
=== Matrix Lie algebras ===
A matrix group is a Lie group consisting of invertible matrices, $G\subset \mathrm{GL}(n,\mathbb{R})$, where the group operation of G is matrix multiplication. The corresponding Lie algebra $\mathfrak{g}$ is the space of matrices which are tangent vectors to G inside the linear space $M_n(\mathbb{R})$: this consists of derivatives of smooth curves in G at the identity matrix $I$:
$\mathfrak{g}=\{X=c'(0)\in M_n(\mathbb{R}):\ \text{smooth } c:\mathbb{R}\to G,\ c(0)=I\}.$
The Lie bracket of $\mathfrak{g}$ is given by the commutator of matrices, $[X,Y]=XY-YX$. Given a Lie algebra $\mathfrak{g}\subset \mathfrak{gl}(n,\mathbb{R})$, one can recover the Lie group as the subgroup generated by the matrix exponential of elements of $\mathfrak{g}$. (To be precise, this gives the identity component of G, if G is not connected.) Here the exponential mapping $\exp\colon M_n(\mathbb{R})\to M_n(\mathbb{R})$ is defined by $\exp(X)=I+X+\tfrac{1}{2!}X^{2}+\tfrac{1}{3!}X^{3}+\cdots$, which converges for every matrix $X$.
The same comments apply to complex Lie subgroups of $\mathrm{GL}(n,\mathbb{C})$ and the complex matrix exponential, $\exp\colon M_n(\mathbb{C})\to M_n(\mathbb{C})$ (defined by the same formula).
Here are some matrix Lie groups and their Lie algebras.
For a positive integer n, the special linear group $\mathrm{SL}(n,\mathbb{R})$ consists of all real n × n matrices with determinant 1. This is the group of linear maps from $\mathbb{R}^n$ to itself that preserve volume and orientation. More abstractly, $\mathrm{SL}(n,\mathbb{R})$ is the commutator subgroup of the general linear group $\mathrm{GL}(n,\mathbb{R})$. Its Lie algebra $\mathfrak{sl}(n,\mathbb{R})$ consists of all real n × n matrices with trace 0. Similarly, one can define the analogous complex Lie group $\mathrm{SL}(n,\mathbb{C})$ and its Lie algebra $\mathfrak{sl}(n,\mathbb{C})$.
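The identity $\det(\exp X)=e^{\operatorname{tr} X}$ makes the correspondence concrete: exponentiating a traceless matrix lands in $\mathrm{SL}(n,\mathbb{R})$. A minimal sketch in Python with NumPy; the truncated-series `expm` helper here is our own illustration, adequate only for small matrices:

```python
import numpy as np

def expm(X, terms=30):
    """Matrix exponential by truncated power series (illustration only)."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

# A traceless matrix, i.e. an element of sl(2, R)
X = np.array([[1.0, 2.0],
              [3.0, -1.0]])

# det(exp X) = exp(tr X) = exp(0) = 1, so exp(X) lies in SL(2, R)
g = expm(X)
print(np.linalg.det(g))
```

For serious use, `scipy.linalg.expm` implements a scaling-and-squaring algorithm that is far more robust than a raw power series.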
The orthogonal group $\mathrm{O}(n)$ plays a basic role in geometry: it is the group of linear maps from $\mathbb{R}^n$ to itself that preserve the length of vectors. For example, rotations and reflections belong to $\mathrm{O}(n)$. Equivalently, this is the group of n × n orthogonal matrices, meaning that $A^{\mathrm{T}}=A^{-1}$, where $A^{\mathrm{T}}$ denotes the transpose of a matrix. The orthogonal group has two connected components; the identity component is called the special orthogonal group $\mathrm{SO}(n)$, consisting of the orthogonal matrices with determinant 1. Both groups have the same Lie algebra $\mathfrak{so}(n)$, the subspace of skew-symmetric matrices in $\mathfrak{gl}(n,\mathbb{R})$ ($X^{\mathrm{T}}=-X$). See also infinitesimal rotations with skew-symmetric matrices.
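Likewise, exponentiating a skew-symmetric matrix produces an orthogonal matrix of determinant 1, i.e. a rotation. A short sketch in Python with NumPy, again using a hypothetical truncated-series exponential for illustration:

```python
import numpy as np

def expm(X, terms=30):
    """Matrix exponential by truncated power series (illustration only)."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

# A skew-symmetric matrix (X^T = -X), an element of so(3):
# infinitesimal rotation about the z-axis by angle theta
theta = 0.7
X = theta * np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 0.0]])

R = expm(X)
print(np.allclose(R.T @ R, np.eye(3)))   # R is orthogonal
print(np.allclose(np.linalg.det(R), 1))  # with determinant 1
```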
The complex orthogonal group $\mathrm{O}(n,\mathbb{C})$, its identity component $\mathrm{SO}(n,\mathbb{C})$, and the Lie algebra $\mathfrak{so}(n,\mathbb{C})$ are given by the same formulas applied to n × n complex matrices. Equivalently, $\mathrm{O}(n,\mathbb{C})$ is the subgroup of $\mathrm{GL}(n,\mathbb{C})$ that preserves the standard symmetric bilinear form on $\mathbb{C}^n$.
The unitary group $\mathrm{U}(n)$ is the subgroup of $\mathrm{GL}(n,\mathbb{C})$ that preserves the length of vectors in $\mathbb{C}^n$ (with respect to the standard Hermitian inner product). Equivalently, this is the group of n × n unitary matrices (satisfying $A^{*}=A^{-1}$, where $A^{*}$ denotes the conjugate transpose of a matrix). Its Lie algebra $\mathfrak{u}(n)$ consists of the skew-hermitian matrices in $\mathfrak{gl}(n,\mathbb{C})$ ($X^{*}=-X$). This is a Lie algebra over $\mathbb{R}$, not over $\mathbb{C}$. (Indeed, i times a skew-hermitian matrix is hermitian, rather than skew-hermitian.) Likewise, the unitary group $\mathrm{U}(n)$ is a real Lie subgroup of the complex Lie group $\mathrm{GL}(n,\mathbb{C})$. For example, $\mathrm{U}(1)$ is the circle group, and its Lie algebra (from this point of view) is $i\mathbb{R}\subset \mathbb{C}=\mathfrak{gl}(1,\mathbb{C})$.
The special unitary group $\mathrm{SU}(n)$ is the subgroup of matrices with determinant 1 in $\mathrm{U}(n)$. Its Lie algebra $\mathfrak{su}(n)$ consists of the skew-hermitian matrices with trace zero.
The symplectic group $\mathrm{Sp}(2n,\mathbb{R})$ is the subgroup of $\mathrm{GL}(2n,\mathbb{R})$ that preserves the standard alternating bilinear form on $\mathbb{R}^{2n}$. Its Lie algebra is the symplectic Lie algebra $\mathfrak{sp}(2n,\mathbb{R})$.
The classical Lie algebras are those listed above, along with variants over any field.
=== Two dimensions ===
Some Lie algebras of low dimension are described here. See the classification of low-dimensional real Lie algebras for further examples.
There is a unique nonabelian Lie algebra $\mathfrak{g}$ of dimension 2 over any field F, up to isomorphism. Here $\mathfrak{g}$ has a basis $X,Y$ for which the bracket is given by $[X,Y]=Y$. (This determines the Lie bracket completely, because the axioms imply that $[X,X]=0$ and $[Y,Y]=0$.) Over the real numbers, $\mathfrak{g}$ can be viewed as the Lie algebra of the Lie group $G=\mathrm{Aff}(1,\mathbb{R})$ of affine transformations of the real line, $x\mapsto ax+b$.
The affine group G can be identified with the group of matrices
$\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}$
under matrix multiplication, with $a,b\in \mathbb{R}$, $a\neq 0$. Its Lie algebra is the Lie subalgebra $\mathfrak{g}$ of $\mathfrak{gl}(2,\mathbb{R})$ consisting of all matrices
$\begin{pmatrix} c & d \\ 0 & 0 \end{pmatrix}.$
In these terms, the basis above for $\mathfrak{g}$ is given by the matrices
$X=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad Y=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$
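The bracket relation $[X,Y]=Y$ can be verified directly from these matrices with the commutator. A quick check in Python with NumPy:

```python
import numpy as np

# Basis of the 2-dimensional nonabelian Lie algebra, realized as matrices
X = np.array([[1.0, 0.0],
              [0.0, 0.0]])
Y = np.array([[0.0, 1.0],
              [0.0, 0.0]])

def bracket(A, B):
    """Commutator Lie bracket on matrices."""
    return A @ B - B @ A

print(np.allclose(bracket(X, Y), Y))  # [X, Y] = Y
```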
For any field $F$, the 1-dimensional subspace $F\cdot Y$ is an ideal in the 2-dimensional Lie algebra $\mathfrak{g}$, by the formula $[X,Y]=Y\in F\cdot Y$. Both of the Lie algebras $F\cdot Y$ and $\mathfrak{g}/(F\cdot Y)$ are abelian (because 1-dimensional). In this sense, $\mathfrak{g}$ can be broken into abelian "pieces", meaning that it is solvable (though not nilpotent), in the terminology below.
=== Three dimensions ===
The Heisenberg algebra $\mathfrak{h}_3(F)$ over a field F is the three-dimensional Lie algebra with a basis $X,Y,Z$ such that
$[X,Y]=Z,\quad [X,Z]=0,\quad [Y,Z]=0.$
It can be viewed as the Lie algebra of 3×3 strictly upper-triangular matrices, with the commutator Lie bracket and the basis
$X=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad Y=\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad Z=\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$
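The defining relations of the Heisenberg algebra follow by computing commutators of these matrices. A quick verification in Python with NumPy:

```python
import numpy as np

X = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
Y = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
Z = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

def bracket(A, B):
    return A @ B - B @ A

print(np.allclose(bracket(X, Y), Z))                 # [X, Y] = Z
print(np.allclose(bracket(X, Z), np.zeros((3, 3))))  # [X, Z] = 0
print(np.allclose(bracket(Y, Z), np.zeros((3, 3))))  # [Y, Z] = 0
```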
Over the real numbers, $\mathfrak{h}_3(\mathbb{R})$ is the Lie algebra of the Heisenberg group $\mathrm{H}_3(\mathbb{R})$, that is, the group of matrices
$\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}$
under matrix multiplication.
For any field F, the center of $\mathfrak{h}_3(F)$ is the 1-dimensional ideal $F\cdot Z$, and the quotient $\mathfrak{h}_3(F)/(F\cdot Z)$ is abelian, isomorphic to $F^2$. In the terminology below, it follows that $\mathfrak{h}_3(F)$ is nilpotent (though not abelian).
The Lie algebra $\mathfrak{so}(3)$ of the rotation group SO(3) is the space of skew-symmetric 3 × 3 matrices over $\mathbb{R}$. A basis is given by the three matrices
$F_1=\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \quad F_2=\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad F_3=\begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$
The commutation relations among these generators are
$[F_1,F_2]=F_3,\quad [F_2,F_3]=F_1,\quad [F_3,F_1]=F_2.$
The cross product of vectors in $\mathbb{R}^3$ is given by the same formula in terms of the standard basis; so that Lie algebra is isomorphic to $\mathfrak{so}(3)$. Also, $\mathfrak{so}(3)$ is equivalent to the algebra of spin angular-momentum component operators for spin-1 particles in quantum mechanics.
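Both the commutation relations and their match with the cross product are easy to check numerically. A short sketch in Python with NumPy:

```python
import numpy as np

F1 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
F2 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
F3 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

def bracket(A, B):
    return A @ B - B @ A

# The so(3) commutation relations...
print(np.allclose(bracket(F1, F2), F3))
print(np.allclose(bracket(F2, F3), F1))
print(np.allclose(bracket(F3, F1), F2))

# ...have the same structure constants as the cross product on R^3
e1, e2, e3 = np.eye(3)
print(np.allclose(np.cross(e1, e2), e3))
```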
The Lie algebra $\mathfrak{so}(3)$ cannot be broken into pieces in the way that the previous examples can: it is simple, meaning that it is not abelian and its only ideals are 0 and all of $\mathfrak{so}(3)$.
Another simple Lie algebra of dimension 3, in this case over $\mathbb{C}$, is the space $\mathfrak{sl}(2,\mathbb{C})$ of 2 × 2 matrices of trace zero. A basis is given by the three matrices
$H=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\ E=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\ F=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$
The Lie bracket is given by
$[H,E]=2E,\quad [H,F]=-2F,\quad [E,F]=H.$
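These bracket relations can be checked directly from the matrices above. A quick verification in Python with NumPy:

```python
import numpy as np

H = np.array([[1.0, 0.0], [0.0, -1.0]])
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])

def bracket(A, B):
    return A @ B - B @ A

print(np.allclose(bracket(H, E), 2 * E))   # [H, E] = 2E
print(np.allclose(bracket(H, F), -2 * F))  # [H, F] = -2F
print(np.allclose(bracket(E, F), H))       # [E, F] = H
```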
Using these formulas, one can show that the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$ is simple, and classify its finite-dimensional representations (defined below). In the terminology of quantum mechanics, one can think of E and F as raising and lowering operators. Indeed, for any representation of $\mathfrak{sl}(2,\mathbb{C})$, the relations above imply that E maps the c-eigenspace of H (for a complex number c) into the $(c+2)$-eigenspace, while F maps the c-eigenspace into the $(c-2)$-eigenspace.
The Lie algebra $\mathfrak{sl}(2,\mathbb{C})$ is isomorphic to the complexification of $\mathfrak{so}(3)$, meaning the tensor product $\mathfrak{so}(3)\otimes_{\mathbb{R}}\mathbb{C}$. The formulas for the Lie bracket are easier to analyze in the case of $\mathfrak{sl}(2,\mathbb{C})$. As a result, it is common to analyze complex representations of the group $\mathrm{SO}(3)$ by relating them to representations of the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$.
=== Infinite dimensions ===
The Lie algebra of vector fields on a smooth manifold of positive dimension is an infinite-dimensional Lie algebra over $\mathbb{R}$.
The Kac–Moody algebras are a large class of infinite-dimensional Lie algebras, say over $\mathbb{C}$, with structure much like that of the finite-dimensional simple Lie algebras (such as $\mathfrak{sl}(n,\mathbb{C})$).
The Moyal algebra is an infinite-dimensional Lie algebra that contains all the classical Lie algebras as subalgebras.
The Virasoro algebra is important in string theory.
The functor that takes a Lie algebra over a field F to the underlying vector space has a left adjoint $V\mapsto L(V)$, called the free Lie algebra on a vector space V. It is spanned by all iterated Lie brackets of elements of V, modulo only the relations coming from the definition of a Lie algebra. The free Lie algebra $L(V)$ is infinite-dimensional for V of dimension at least 2.
== Representations ==
=== Definitions ===
Given a vector space V, let $\mathfrak{gl}(V)$ denote the Lie algebra consisting of all linear maps from V to itself, with bracket given by $[X,Y]=XY-YX$. A representation of a Lie algebra $\mathfrak{g}$ on V is a Lie algebra homomorphism
$\pi\colon \mathfrak{g}\to \mathfrak{gl}(V).$
That is, $\pi$ sends each element of $\mathfrak{g}$ to a linear map from V to itself, in such a way that the Lie bracket on $\mathfrak{g}$ corresponds to the commutator of linear maps.
A representation is said to be faithful if its kernel is zero. Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero has a faithful representation on a finite-dimensional vector space. Kenkichi Iwasawa extended this result to finite-dimensional Lie algebras over a field of any characteristic. Equivalently, every finite-dimensional Lie algebra over a field F is isomorphic to a Lie subalgebra of $\mathfrak{gl}(n,F)$ for some positive integer n.
=== Adjoint representation ===
For any Lie algebra $\mathfrak{g}$, the adjoint representation is the representation
$\operatorname{ad}\colon \mathfrak{g}\to \mathfrak{gl}(\mathfrak{g})$
given by $\operatorname{ad}(x)(y)=[x,y]$. (This is a representation of $\mathfrak{g}$ by the Jacobi identity.)
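That ad is a Lie algebra homomorphism, i.e. $\operatorname{ad}([x,y])=[\operatorname{ad}(x),\operatorname{ad}(y)]$, can be tested numerically for matrix Lie algebras. A sketch in Python with NumPy, representing $\operatorname{ad}(X)$ as a matrix acting on the row-major flattening of $\mathfrak{gl}(3,\mathbb{R})$:

```python
import numpy as np

n = 3
I = np.eye(n)

def ad(X):
    """Matrix of ad(X): Y -> [X, Y], acting on the row-major vec(Y).
    Uses vec(XY) = kron(X, I) vec(Y) and vec(YX) = kron(I, X^T) vec(Y)."""
    return np.kron(X, I) - np.kron(I, X.T)

rng = np.random.default_rng(0)
x = rng.standard_normal((n, n))
y = rng.standard_normal((n, n))

lhs = ad(x @ y - y @ x)              # ad([x, y])
rhs = ad(x) @ ad(y) - ad(y) @ ad(x)  # [ad(x), ad(y)]
print(np.allclose(lhs, rhs))         # the Jacobi identity in matrix form
```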
=== Goals of representation theory ===
One important aspect of the study of Lie algebras (especially semisimple Lie algebras, as defined below) is the study of their representations. Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra $\mathfrak{g}$. Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand all possible representations of $\mathfrak{g}$. For a semisimple Lie algebra over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The finite-dimensional irreducible representations are well understood from several points of view; see the representation theory of semisimple Lie algebras and the Weyl character formula.
=== Universal enveloping algebra ===
The functor that takes an associative algebra A over a field F to A as a Lie algebra (by $[X,Y]:=XY-YX$) has a left adjoint $\mathfrak{g}\mapsto U(\mathfrak{g})$, called the universal enveloping algebra. To construct this: given a Lie algebra $\mathfrak{g}$ over F, let
$T(\mathfrak{g})=F\oplus \mathfrak{g}\oplus (\mathfrak{g}\otimes \mathfrak{g})\oplus (\mathfrak{g}\otimes \mathfrak{g}\otimes \mathfrak{g})\oplus \cdots$
be the tensor algebra on $\mathfrak{g}$, also called the free associative algebra on the vector space $\mathfrak{g}$. Here $\otimes$ denotes the tensor product of F-vector spaces. Let I be the two-sided ideal in $T(\mathfrak{g})$ generated by the elements $XY-YX-[X,Y]$ for $X,Y\in \mathfrak{g}$; then the universal enveloping algebra is the quotient ring $U(\mathfrak{g})=T(\mathfrak{g})/I$. It satisfies the Poincaré–Birkhoff–Witt theorem: if $e_1,\ldots,e_n$ is a basis for $\mathfrak{g}$ as an F-vector space, then a basis for $U(\mathfrak{g})$ is given by all ordered products $e_1^{i_1}\cdots e_n^{i_n}$ with $i_1,\ldots,i_n$ natural numbers. In particular, the map $\mathfrak{g}\to U(\mathfrak{g})$ is injective.
Representations of $\mathfrak{g}$ are equivalent to modules over the universal enveloping algebra. The fact that $\mathfrak{g}\to U(\mathfrak{g})$ is injective implies that every Lie algebra (possibly of infinite dimension) has a faithful representation (of infinite dimension), namely its representation on $U(\mathfrak{g})$. This also shows that every Lie algebra is contained in the Lie algebra associated to some associative algebra.
=== Representation theory in physics ===
The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem; specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example is the angular momentum operators, whose commutation relations are those of the Lie algebra $\mathfrak{so}(3)$ of the rotation group $\mathrm{SO}(3)$. Typically, the space of states is far from being irreducible under the pertinent operators, but one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the hydrogen atom, for example, quantum mechanics textbooks classify (more or less explicitly) the finite-dimensional irreducible representations of the Lie algebra $\mathfrak{so}(3)$.
== Structure theory and classification ==
Lie algebras can be classified to some extent. This is a powerful approach to the classification of Lie groups.
=== Abelian, nilpotent, and solvable ===
Analogously to abelian, nilpotent, and solvable groups, one can define abelian, nilpotent, and solvable Lie algebras.
A Lie algebra $\mathfrak{g}$ is abelian if the Lie bracket vanishes; that is, [x,y] = 0 for all x and y in $\mathfrak{g}$. In particular, the Lie algebra of an abelian Lie group (such as the group $\mathbb{R}^n$ under addition or the torus group $\mathbb{T}^n$) is abelian. Every finite-dimensional abelian Lie algebra over a field $F$ is isomorphic to $F^n$ for some $n\geq 0$, meaning an n-dimensional vector space with Lie bracket zero.
A more general class of Lie algebras is defined by the vanishing of all commutators of given length. First, the commutator subalgebra (or derived subalgebra) of a Lie algebra $\mathfrak{g}$ is $[\mathfrak{g},\mathfrak{g}]$, meaning the linear subspace spanned by all brackets $[x,y]$ with $x,y\in \mathfrak{g}$. The commutator subalgebra is an ideal in $\mathfrak{g}$, in fact the smallest ideal such that the quotient Lie algebra is abelian. It is analogous to the commutator subgroup of a group.
A Lie algebra $\mathfrak{g}$ is nilpotent if the lower central series
$\mathfrak{g}\supseteq [\mathfrak{g},\mathfrak{g}]\supseteq [[\mathfrak{g},\mathfrak{g}],\mathfrak{g}]\supseteq [[[\mathfrak{g},\mathfrak{g}],\mathfrak{g}],\mathfrak{g}]\supseteq \cdots$
becomes zero after finitely many steps. Equivalently, $\mathfrak{g}$ is nilpotent if there is a finite sequence of ideals in $\mathfrak{g}$,
$0=\mathfrak{a}_0\subseteq \mathfrak{a}_1\subseteq \cdots \subseteq \mathfrak{a}_r=\mathfrak{g},$
such that $\mathfrak{a}_j/\mathfrak{a}_{j-1}$ is central in $\mathfrak{g}/\mathfrak{a}_{j-1}$ for each j. By Engel's theorem, a Lie algebra over any field is nilpotent if and only if for every u in $\mathfrak{g}$ the adjoint endomorphism
$\operatorname{ad}(u)\colon \mathfrak{g}\to \mathfrak{g},\quad \operatorname{ad}(u)v=[u,v]$
is nilpotent.
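Engel's theorem is easy to illustrate for strictly upper-triangular matrices, which satisfy $u^3=0$ in the 3 × 3 case: every adjoint endomorphism $\operatorname{ad}(u)$ is then nilpotent. A sketch in Python with NumPy, encoding $\operatorname{ad}(u)$ as a matrix on vectorized 3 × 3 matrices:

```python
import numpy as np

I = np.eye(3)

def ad(X):
    """Matrix of ad(X): Y -> [X, Y] on the row-major vec(Y)."""
    return np.kron(X, I) - np.kron(I, X.T)

# A strictly upper-triangular matrix, so u @ u @ u = 0
u = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

A = ad(u)
# ad(u)^5 applied to Y expands into terms u^a Y u^b with a + b = 5;
# each such term contains a factor u^3 = 0, so ad(u) is nilpotent.
print(np.allclose(np.linalg.matrix_power(A, 5), np.zeros((9, 9))))
```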
More generally, a Lie algebra $\mathfrak{g}$ is said to be solvable if the derived series
$\mathfrak{g}\supseteq [\mathfrak{g},\mathfrak{g}]\supseteq [[\mathfrak{g},\mathfrak{g}],[\mathfrak{g},\mathfrak{g}]]\supseteq [[[\mathfrak{g},\mathfrak{g}],[\mathfrak{g},\mathfrak{g}]],[[\mathfrak{g},\mathfrak{g}],[\mathfrak{g},\mathfrak{g}]]]\supseteq \cdots$
becomes zero after finitely many steps. Equivalently, $\mathfrak{g}$ is solvable if there is a finite sequence of Lie subalgebras,
$0=\mathfrak{m}_0\subseteq \mathfrak{m}_1\subseteq \cdots \subseteq \mathfrak{m}_r=\mathfrak{g},$
such that $\mathfrak{m}_{j-1}$ is an ideal in $\mathfrak{m}_j$ with $\mathfrak{m}_j/\mathfrak{m}_{j-1}$ abelian for each j.
Every finite-dimensional Lie algebra over a field has a unique maximal solvable ideal, called its radical. Under the Lie correspondence, nilpotent (respectively, solvable) Lie groups correspond to nilpotent (respectively, solvable) Lie algebras over $\mathbb{R}$.
For example, for a positive integer n and a field F of characteristic zero, the radical of $\mathfrak{gl}(n,F)$ is its center, the 1-dimensional subspace spanned by the identity matrix. An example of a solvable Lie algebra is the space $\mathfrak{b}_n$ of upper-triangular matrices in $\mathfrak{gl}(n)$; this is not nilpotent when $n\geq 2$. An example of a nilpotent Lie algebra is the space $\mathfrak{u}_n$ of strictly upper-triangular matrices in $\mathfrak{gl}(n)$; this is not abelian when $n\geq 3$.
=== Simple and semisimple ===
A Lie algebra $\mathfrak{g}$ is called simple if it is not abelian and the only ideals in $\mathfrak{g}$ are 0 and $\mathfrak{g}$. (In particular, a one-dimensional, necessarily abelian, Lie algebra $\mathfrak{g}$ is by definition not simple, even though its only ideals are 0 and $\mathfrak{g}$.) A finite-dimensional Lie algebra $\mathfrak{g}$ is called semisimple if the only solvable ideal in $\mathfrak{g}$ is 0. In characteristic zero, a Lie algebra $\mathfrak{g}$ is semisimple if and only if it is isomorphic to a product of simple Lie algebras, $\mathfrak{g}\cong \mathfrak{g}_1\times \cdots \times \mathfrak{g}_r$.
For example, the Lie algebra $\mathfrak{sl}(n,F)$ is simple for every $n\geq 2$ and every field F of characteristic zero (or just of characteristic not dividing n). The Lie algebra $\mathfrak{su}(n)$ over $\mathbb{R}$ is simple for every $n\geq 2$. The Lie algebra $\mathfrak{so}(n)$ over $\mathbb{R}$ is simple if $n=3$ or $n\geq 5$. (There are "exceptional isomorphisms" $\mathfrak{so}(3)\cong \mathfrak{su}(2)$ and $\mathfrak{so}(4)\cong \mathfrak{su}(2)\times \mathfrak{su}(2)$.)
The concept of semisimplicity for Lie algebras is closely related with the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, every finite-dimensional representation of a semisimple Lie algebra is semisimple (that is, a direct sum of irreducible representations).
A finite-dimensional Lie algebra over a field of characteristic zero is called reductive if its adjoint representation is semisimple. Every reductive Lie algebra is isomorphic to the product of an abelian Lie algebra and a semisimple Lie algebra.
For example, $\mathfrak{gl}(n,F)$ is reductive for F of characteristic zero: for $n\geq 2$, it is isomorphic to the product
$\mathfrak{gl}(n,F)\cong F\times \mathfrak{sl}(n,F),$
where F denotes the center of $\mathfrak{gl}(n,F)$, the 1-dimensional subspace spanned by the identity matrix. Since the special linear Lie algebra $\mathfrak{sl}(n,F)$ is simple, $\mathfrak{gl}(n,F)$ contains few ideals: only 0, the center F, $\mathfrak{sl}(n,F)$, and all of $\mathfrak{gl}(n,F)$.
=== Cartan's criterion ===
Cartan's criterion (due to Élie Cartan) gives conditions for a finite-dimensional Lie algebra over a field of characteristic zero to be solvable or semisimple. It is expressed in terms of the Killing form, the symmetric bilinear form on $\mathfrak{g}$ defined by
$K(u,v)=\operatorname{tr}(\operatorname{ad}(u)\operatorname{ad}(v)),$
where tr denotes the trace of a linear operator. Namely: a Lie algebra $\mathfrak{g}$ is semisimple if and only if the Killing form is nondegenerate. A Lie algebra $\mathfrak{g}$ is solvable if and only if $K(\mathfrak{g},[\mathfrak{g},\mathfrak{g}])=0.$
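Cartan's criterion can be tried out on $\mathfrak{sl}(2,\mathbb{R})$ with its basis H, E, F. A sketch in Python with NumPy: we compute $\operatorname{ad}$ on all of $\mathfrak{gl}(2,\mathbb{R})$, which gives the same traces because ad kills the 1-dimensional center, and then form the Gram matrix of the Killing form:

```python
import numpy as np

I = np.eye(2)

def ad(X):
    """Matrix of ad(X): Y -> [X, Y] on row-major vec(Y), over all of gl(2).
    The center F·I contributes 0 to each trace, so the Gram matrix below
    agrees with the Killing form of sl(2)."""
    return np.kron(X, I) - np.kron(I, X.T)

H = np.array([[1.0, 0.0], [0.0, -1.0]])
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])

basis = [H, E, F]
K = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
print(K)                 # [[8, 0, 0], [0, 0, 4], [0, 4, 0]]
print(np.linalg.det(K))  # ≈ -128: nondegenerate, so sl(2) is semisimple
```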
=== Classification ===
The Levi decomposition asserts that every finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its solvable radical and a semisimple Lie algebra. Moreover, a semisimple Lie algebra in characteristic zero is a product of simple Lie algebras, as mentioned above. This focuses attention on the problem of classifying the simple Lie algebras.
The simple Lie algebras of finite dimension over an algebraically closed field F of characteristic zero were classified by Killing and Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. Here the simple Lie algebra of type An is $\mathfrak{sl}(n+1,F)$, Bn is $\mathfrak{so}(2n+1,F)$, Cn is $\mathfrak{sp}(2n,F)$, and Dn is $\mathfrak{so}(2n,F)$. The other five are known as the exceptional Lie algebras.
The classification of finite-dimensional simple Lie algebras over $\mathbb{R}$ is more complicated, but it was also solved by Cartan (see simple Lie group for an equivalent classification). One can analyze a Lie algebra $\mathfrak{g}$ over $\mathbb{R}$ by considering its complexification $\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}$.
In the years leading up to 2004, the finite-dimensional simple Lie algebras over an algebraically closed field of characteristic $p>3$ were classified by Richard Earl Block, Robert Lee Wilson, Alexander Premet, and Helmut Strade. (See restricted Lie algebra#Classification of simple Lie algebras.) It turns out that there are many more simple Lie algebras in positive characteristic than in characteristic zero.
== Relation to Lie groups ==
Although Lie algebras can be studied in their own right, historically they arose as a means to study Lie groups.
The relationship between Lie groups and Lie algebras can be summarized as follows. Each Lie group determines a Lie algebra over {\displaystyle \mathbb {R} } (concretely, the tangent space at the identity). Conversely, for every finite-dimensional Lie algebra {\displaystyle {\mathfrak {g}}}, there is a connected Lie group {\displaystyle G} with Lie algebra {\displaystyle {\mathfrak {g}}}. This is Lie's third theorem; see the Baker–Campbell–Hausdorff formula. This Lie group is not determined uniquely; however, any two Lie groups with the same Lie algebra are locally isomorphic, and more strongly, they have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitary group SU(2) have isomorphic Lie algebras, but SU(2) is a simply connected double cover of SO(3).
For simply connected Lie groups, there is a complete correspondence: taking the Lie algebra gives an equivalence of categories from simply connected Lie groups to Lie algebras of finite dimension over {\displaystyle \mathbb {R} }.
The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of Lie groups and the representation theory of Lie groups. For finite-dimensional representations, there is an equivalence of categories between representations of a real Lie algebra and representations of the corresponding simply connected Lie group. This simplifies the representation theory of Lie groups: it is often easier to classify the representations of a Lie algebra, using linear algebra.
Every connected Lie group is isomorphic to its universal cover modulo a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the discrete subgroups of the center, once the Lie algebra is known. For example, the real semisimple Lie algebras were classified by Cartan, and so the classification of semisimple Lie groups is well understood.
For infinite-dimensional Lie algebras, Lie theory works less well. The exponential map need not be a local homeomorphism (for example, in the diffeomorphism group of the circle, there are diffeomorphisms arbitrarily close to the identity that are not in the image of the exponential map). Moreover, in terms of the existing notions of infinite-dimensional Lie groups, some infinite-dimensional Lie algebras do not come from any group.
Lie theory also does not work so neatly for infinite-dimensional representations of a finite-dimensional group. Even for the additive group {\displaystyle G=\mathbb {R} }, an infinite-dimensional representation of {\displaystyle G} can usually not be differentiated to produce a representation of its Lie algebra on the same space, or vice versa. The theory of Harish-Chandra modules is a more subtle relation between infinite-dimensional representations for groups and Lie algebras.
== Real form and complexification ==
Given a complex Lie algebra {\displaystyle {\mathfrak {g}}}, a real Lie algebra {\displaystyle {\mathfrak {g}}_{0}} is said to be a real form of {\displaystyle {\mathfrak {g}}} if the complexification {\displaystyle {\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} } is isomorphic to {\displaystyle {\mathfrak {g}}}. A real form need not be unique; for example, {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} has two real forms up to isomorphism, {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} and {\displaystyle {\mathfrak {su}}(2)}.
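The fact that both real forms complexify to the same algebra can be checked numerically: the complex span of a real basis of either {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} or {\displaystyle {\mathfrak {su}}(2)} is the full 3-dimensional space of traceless complex 2×2 matrices. A minimal sketch with NumPy (the basis choices below are standard ones assumed here, not taken from the article):

```python
import numpy as np

# Real basis of sl(2,R): traceless real matrices h, e, f.
sl2R = [np.array([[1, 0], [0, -1]]),
        np.array([[0, 1], [0, 0]]),
        np.array([[0, 0], [1, 0]])]

# Real basis of su(2): anti-Hermitian traceless matrices i*sigma_k.
su2 = [1j * np.array([[0, 1], [1, 0]]),
       1j * np.array([[0, -1j], [1j, 0]]),
       1j * np.array([[1, 0], [0, -1]])]

for basis in (sl2R, su2):
    M = np.array([b.flatten() for b in basis])
    # Complex rank 3 = dim_C sl(2,C): the complexification fills out sl(2,C).
    print(np.linalg.matrix_rank(M))  # 3
```

Over the reals the two bases span non-isomorphic Lie algebras, but over the complex numbers each spans the same three-dimensional space of traceless matrices.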
Given a semisimple complex Lie algebra {\displaystyle {\mathfrak {g}}}, a split form of it is a real form that splits; i.e., it has a Cartan subalgebra which acts via an adjoint representation with real eigenvalues. A split form exists and is unique (up to isomorphism). A compact form is a real form that is the Lie algebra of a compact Lie group. A compact form exists and is also unique up to isomorphism.
== Lie algebra with additional structures ==
A Lie algebra may be equipped with additional structures that are compatible with the Lie bracket. For example, a graded Lie algebra is a Lie algebra (or more generally a Lie superalgebra) with a compatible grading. A differential graded Lie algebra also comes with a differential, making the underlying vector space a chain complex.
For example, the homotopy groups of a simply connected topological space form a graded Lie algebra, using the Whitehead product. In a related construction, Daniel Quillen used differential graded Lie algebras over the rational numbers {\displaystyle \mathbb {Q} } to describe rational homotopy theory in algebraic terms.
== Lie ring ==
The definition of a Lie algebra over a field extends to define a Lie algebra over any commutative ring R. Namely, a Lie algebra {\displaystyle {\mathfrak {g}}} over R is an R-module with an alternating R-bilinear map {\displaystyle [\ ,\ ]\colon {\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} that satisfies the Jacobi identity. A Lie algebra over the ring {\displaystyle \mathbb {Z} } of integers is sometimes called a Lie ring. (This is not directly related to the notion of a Lie group.)
Lie rings are used in the study of finite p-groups (for a prime number p) through the Lazard correspondence. The lower central factors of a finite p-group are finite abelian p-groups. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives; see the example below.
p-adic Lie groups are related to Lie algebras over the field {\displaystyle \mathbb {Q} _{p}} of p-adic numbers as well as over the ring {\displaystyle \mathbb {Z} _{p}} of p-adic integers. Part of Claude Chevalley's construction of the finite groups of Lie type involves showing that a simple Lie algebra over the complex numbers comes from a Lie algebra over the integers, and then (with more care) a group scheme over the integers.
=== Examples ===
Here is a construction of Lie rings arising from the study of abstract groups. For elements {\displaystyle x,y} of a group, define the commutator {\displaystyle [x,y]=x^{-1}y^{-1}xy}. Let
{\displaystyle G=G_{1}\supseteq G_{2}\supseteq G_{3}\supseteq \cdots \supseteq G_{n}\supseteq \cdots }
be a filtration of a group {\displaystyle G}, that is, a chain of subgroups such that {\displaystyle [G_{i},G_{j}]} is contained in {\displaystyle G_{i+j}} for all {\displaystyle i,j}. (For the Lazard correspondence, one takes the filtration to be the lower central series of G.) Then
{\displaystyle L=\bigoplus _{i\geq 1}G_{i}/G_{i+1}}
is a Lie ring, with addition given by the group multiplication (which is abelian on each quotient group {\displaystyle G_{i}/G_{i+1}}), and with Lie bracket
{\displaystyle G_{i}/G_{i+1}\times G_{j}/G_{j+1}\to G_{i+j}/G_{i+j+1}}
given by commutators in the group:
{\displaystyle [xG_{i+1},yG_{j+1}]:=[x,y]G_{i+j+1}.}
For example, the Lie ring associated to the lower central series on the dihedral group of order 8 is the Heisenberg Lie algebra of dimension 3 over the field {\displaystyle \mathbb {Z} /2\mathbb {Z} }.
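The dihedral example can be verified by direct computation. A minimal sketch in plain Python, encoding an element of the dihedral group of order 8 as a pair (rotation mod 4, flip bit) — this encoding is our own assumption, not from the article:

```python
from itertools import product

# Dihedral group of order 8: elements (k, s), k = rotation mod 4, s = flip bit.
def mul(a, b):
    (k1, s1), (k2, s2) = a, b
    return ((k1 + (k2 if s1 == 0 else -k2)) % 4, s1 ^ s2)

def inv(a):
    k, s = a
    return ((-k) % 4, 0) if s == 0 else (k, 1)

def comm(x, y):  # group commutator [x, y] = x^-1 y^-1 x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

G = [(k, s) for k in range(4) for s in range(2)]

def generated(elems):
    # Subgroup generated by a set of elements (closure under product and inverse).
    S = {(0, 0)} | set(elems)
    changed = True
    while changed:
        new = {mul(a, b) for a, b in product(S, S)} | {inv(a) for a in S}
        changed = not new <= S
        S |= new
    return S

# Lower central series: G1 = G, G_{i+1} = <[G, G_i]>.
G2 = generated([comm(x, y) for x in G for y in G])
G3 = generated([comm(x, y) for x in G for y in G2])

r, s = (1, 0), (0, 1)
print(sorted(G2))   # [(0, 0), (2, 0)] -- the commutator subgroup <r^2>
print(sorted(G3))   # [(0, 0)] -- trivial
print(comm(r, s))   # (2, 0): the bracket of the two degree-1 generators is r^2
```

The quotients give a degree-1 piece of dimension 2 and a degree-2 piece of dimension 1 over {\displaystyle \mathbb {Z} /2\mathbb {Z} }, with the bracket of the two degree-1 generators spanning the centre, which is exactly the 3-dimensional Heisenberg Lie algebra.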
== Definition using category-theoretic notation ==
The definition of a Lie algebra can be reformulated more abstractly in the language of category theory. Namely, one can define a Lie algebra in terms of linear maps—that is, morphisms in the category of vector spaces—without considering individual elements. (In this section, the field over which the algebra is defined is assumed to be of characteristic different from 2.)
For the category-theoretic definition of Lie algebras, two braiding isomorphisms are needed. If A is a vector space, the interchange isomorphism {\displaystyle \tau :A\otimes A\to A\otimes A} is defined by
{\displaystyle \tau (x\otimes y)=y\otimes x.}
The cyclic-permutation braiding {\displaystyle \sigma :A\otimes A\otimes A\to A\otimes A\otimes A} is defined as
{\displaystyle \sigma =(\mathrm {id} \otimes \tau )\circ (\tau \otimes \mathrm {id} ),}
where {\displaystyle \mathrm {id} } is the identity morphism. Equivalently, {\displaystyle \sigma } is defined by
{\displaystyle \sigma (x\otimes y\otimes z)=y\otimes z\otimes x.}
With this notation, a Lie algebra can be defined as an object {\displaystyle A} in the category of vector spaces together with a morphism {\displaystyle [\cdot ,\cdot ]\colon A\otimes A\rightarrow A} that satisfies the two morphism equalities
{\displaystyle [\cdot ,\cdot ]\circ (\mathrm {id} +\tau )=0,}
and
{\displaystyle [\cdot ,\cdot ]\circ ([\cdot ,\cdot ]\otimes \mathrm {id} )\circ (\mathrm {id} +\sigma +\sigma ^{2})=0.}
== Generalization ==
Several generalizations of a Lie algebra have been proposed, many from physics. Among them are graded Lie algebras, Lie superalgebras, and Lie n-algebras.
== See also ==
== Remarks ==
== References ==
== Sources ==
Bourbaki, Nicolas (1989). Lie Groups and Lie Algebras: Chapters 1-3. Springer. ISBN 978-3-540-64242-8. MR 1728312.
Erdmann, Karin; Wildon, Mark (2006). Introduction to Lie Algebras. Springer. ISBN 1-84628-040-0. MR 2218355.
Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
Hall, Brian C. (2015). Lie groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. doi:10.1007/978-3-319-13467-3. ISBN 978-3319134666. ISSN 0072-5285. MR 3331229.
Humphreys, James E. (1978). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90053-7. MR 0499562.
Jacobson, Nathan (1979) [1962]. Lie Algebras. Dover. ISBN 978-0-486-63832-4. MR 0559927.
Khukhro, E. I. (1998), p-Automorphisms of Finite p-Groups, Cambridge University Press, doi:10.1017/CBO9780511526008, ISBN 0-521-59717-X, MR 1615819
Knapp, Anthony W. (2001) [1986], Representation Theory of Semisimple Groups: an Overview Based on Examples, Princeton University Press, ISBN 0-691-09089-0, MR 1880691
Milnor, John (2010) [1986], "Remarks on infinite-dimensional Lie groups", Collected Papers of John Milnor, vol. 5, American Mathematical Soc., pp. 91–141, ISBN 978-0-8218-4876-0, MR 0830252
O'Connor, J.J; Robertson, E.F. (2000). "Marius Sophus Lie". MacTutor History of Mathematics Archive.
O'Connor, J.J; Robertson, E.F. (2005). "Wilhelm Karl Joseph Killing". MacTutor History of Mathematics Archive.
Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031
Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups (2nd ed.). Springer. ISBN 978-3-540-55008-2. MR 2179691.
Varadarajan, Veeravalli S. (1984) [1974]. Lie Groups, Lie Algebras, and Their Representations. Springer. ISBN 978-0-387-90969-1. MR 0746308.
Wigner, Eugene (1959). Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. Translated by J. J. Griffin. Academic Press. ISBN 978-0127505503. MR 0106711.
== External links ==
Kac, Victor G.; et al. Course notes for MIT 18.745: Introduction to Lie Algebras. Archived from the original on 2010-04-20.
"Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
McKenzie, Douglas (2015). "An Elementary Introduction to Lie Algebras for Physicists". | Wikipedia/Abelian_Lie_algebra |
In mathematics, a filtered algebra is a generalization of the notion of a graded algebra. Examples appear in many branches of mathematics, especially in homological algebra and representation theory.
A filtered algebra over the field {\displaystyle k} is an algebra {\displaystyle (A,\cdot )} over {\displaystyle k} that has an increasing sequence
{\displaystyle \{0\}\subseteq F_{0}\subseteq F_{1}\subseteq \cdots \subseteq F_{i}\subseteq \cdots \subseteq A}
of subspaces of {\displaystyle A} such that
{\displaystyle A=\bigcup _{i\in \mathbb {N} }F_{i}}
and that is compatible with the multiplication in the following sense:
{\displaystyle \forall m,n\in \mathbb {N} ,\quad F_{m}\cdot F_{n}\subseteq F_{n+m}.}
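The prototypical instance of the compatibility condition is the polynomial algebra filtered by degree, where {\displaystyle F_{n}} consists of the polynomials of degree at most n and the condition reduces to deg(pq) ≤ deg p + deg q. A minimal sketch in plain Python (coefficient-list encoding is our own choice, not from the article):

```python
# Polynomials as coefficient lists [a0, a1, ...]; F_n = polynomials of degree <= n.
def degree(poly):
    d = -1  # degree of the zero polynomial taken as -1
    for i, c in enumerate(poly):
        if c != 0:
            d = i
    return d

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# F_m . F_n is contained in F_{m+n}: deg(p*q) <= deg(p) + deg(q).
p = [1, 0, 3]   # 1 + 3x^2, an element of F_2
q = [2, 5]      # 2 + 5x,   an element of F_1
print(degree(mul(p, q)))  # 3, so the product lies in F_3 = F_{2+1}
```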
== Associated graded algebra ==
In general, there is the following construction that produces a graded algebra out of a filtered algebra.
If {\displaystyle A} is a filtered algebra, then the associated graded algebra {\displaystyle {\mathcal {G}}(A)} is defined as follows. As a vector space,
{\displaystyle {\mathcal {G}}(A)=\bigoplus _{n\in \mathbb {N} }G_{n},}
where {\displaystyle G_{0}=F_{0}} and {\displaystyle G_{n}=F_{n}/F_{n-1}} for {\displaystyle n>0}, and the multiplication is induced on these quotients from the multiplication of {\displaystyle A} on representatives.
The multiplication is well-defined and endows {\displaystyle {\mathcal {G}}(A)} with the structure of a graded algebra, with gradation {\displaystyle \{G_{n}\}_{n\in \mathbb {N} }.}
Furthermore, if {\displaystyle A} is associative, then so is {\displaystyle {\mathcal {G}}(A)}. Also, if {\displaystyle A} is unital, such that the unit lies in {\displaystyle F_{0}}, then {\displaystyle {\mathcal {G}}(A)} will be unital as well.
As algebras, {\displaystyle A} and {\displaystyle {\mathcal {G}}(A)} are distinct (with the exception of the trivial case that {\displaystyle A} is graded), but as vector spaces they are isomorphic. (One can prove by induction that {\displaystyle \bigoplus _{i=0}^{n}G_{i}} is isomorphic to {\displaystyle F_{n}} as vector spaces.)
== Examples ==
Any graded algebra graded by {\displaystyle \mathbb {N} }, for example {\textstyle A=\bigoplus _{n\in \mathbb {N} }A_{n}}, has a filtration given by {\textstyle F_{n}=\bigoplus _{i=0}^{n}A_{i}}.
An example of a filtered algebra is the Clifford algebra {\displaystyle \operatorname {Cliff} (V,q)} of a vector space {\displaystyle V} endowed with a quadratic form {\displaystyle q}. The associated graded algebra is {\displaystyle \bigwedge V}, the exterior algebra of {\displaystyle V}.
The symmetric algebra on the dual of an affine space is a filtered algebra of polynomials; on a vector space, one instead obtains a graded algebra.
The universal enveloping algebra of a Lie algebra {\displaystyle {\mathfrak {g}}} is also naturally filtered. The PBW theorem states that the associated graded algebra is simply {\displaystyle \mathrm {Sym} ({\mathfrak {g}})}.
Scalar differential operators on a manifold {\displaystyle M} form a filtered algebra, where the filtration is given by the degree of differential operators. The associated graded algebra is the commutative algebra of smooth functions on the cotangent bundle {\displaystyle T^{*}M} which are polynomial along the fibers of the projection {\displaystyle \pi \colon T^{*}M\rightarrow M}.
The group algebra of a group with a length function is a filtered algebra.
== See also ==
Filtration (mathematics)
Length function
== References ==
Abe, Eiichi (1980). Hopf Algebras. Cambridge: Cambridge University Press. ISBN 0-521-22240-0.
This article incorporates material from Filtered algebra on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Filtered_algebra |
In abstract algebra, the Weyl algebras are abstracted from the ring of differential operators with polynomial coefficients. They are named after Hermann Weyl, who introduced them to study the Heisenberg uncertainty principle in quantum mechanics.
In the simplest case, these are differential operators. Let {\displaystyle F} be a field, and let {\displaystyle F[x]} be the ring of polynomials in one variable with coefficients in {\displaystyle F}. Then the corresponding Weyl algebra consists of differential operators of the form
{\displaystyle f_{m}(x)\partial _{x}^{m}+f_{m-1}(x)\partial _{x}^{m-1}+\cdots +f_{1}(x)\partial _{x}+f_{0}(x).}
This is the first Weyl algebra {\displaystyle A_{1}}. The n-th Weyl algebra {\displaystyle A_{n}} is constructed similarly.
Alternatively, {\displaystyle A_{1}} can be constructed as the quotient of the free algebra on two generators, q and p, by the ideal generated by {\displaystyle ([p,q]-1)}. Similarly, {\displaystyle A_{n}} is obtained by quotienting the free algebra on 2n generators by the ideal generated by
{\displaystyle ([p_{i},q_{j}]-\delta _{i,j}),\quad \forall i,j=1,\dots ,n}
where {\displaystyle \delta _{i,j}} is the Kronecker delta.
More generally, let {\displaystyle (R,\Delta )} be a partial differential ring with commuting derivatives {\displaystyle \Delta =\lbrace \partial _{1},\ldots ,\partial _{m}\rbrace }. The Weyl algebra associated to {\displaystyle (R,\Delta )} is the noncommutative ring {\displaystyle R[\partial _{1},\ldots ,\partial _{m}]} satisfying the relations {\displaystyle \partial _{i}r=r\partial _{i}+\partial _{i}(r)} for all {\displaystyle r\in R}. The previous case is the special case where {\displaystyle R=F[x_{1},\ldots ,x_{n}]} and {\displaystyle \Delta =\lbrace \partial _{x_{1}},\ldots ,\partial _{x_{n}}\rbrace }, where {\displaystyle F} is a field.
This article discusses only the case of {\displaystyle A_{n}} with underlying field {\displaystyle F} of characteristic zero, unless otherwise stated.
The Weyl algebra is an example of a simple ring that is not a matrix ring over a division ring. It is also a noncommutative example of a domain, and an example of an Ore extension.
== Motivation ==
The Weyl algebra arises naturally in the context of quantum mechanics and the process of canonical quantization. Consider a classical phase space with canonical coordinates {\displaystyle (q_{1},p_{1},\dots ,q_{n},p_{n})}. These coordinates satisfy the Poisson bracket relations:
{\displaystyle \{q_{i},q_{j}\}=0,\quad \{p_{i},p_{j}\}=0,\quad \{q_{i},p_{j}\}=\delta _{ij}.}
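These relations follow directly from the coordinate formula for the Poisson bracket, {f, g} = Σ_i (∂f/∂q_i ∂g/∂p_i − ∂f/∂p_i ∂g/∂q_i). A minimal symbolic check with SymPy for n = 2 (the variable names are our own choice):

```python
import sympy as sp

q = sp.symbols('q1 q2')
p = sp.symbols('p1 p2')

def poisson(f, g):
    # {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)
    return sum(sp.diff(f, qi) * sp.diff(g, pi) - sp.diff(f, pi) * sp.diff(g, qi)
               for qi, pi in zip(q, p))

for i in range(2):
    for j in range(2):
        assert poisson(q[i], q[j]) == 0
        assert poisson(p[i], p[j]) == 0
        assert poisson(q[i], p[j]) == (1 if i == j else 0)
print("canonical Poisson relations verified")
```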
In canonical quantization, one seeks to construct a Hilbert space of states and represent the classical observables (functions on phase space) as self-adjoint operators on this space. The canonical commutation relations are imposed:
{\displaystyle [{\hat {q}}_{i},{\hat {q}}_{j}]=0,\quad [{\hat {p}}_{i},{\hat {p}}_{j}]=0,\quad [{\hat {q}}_{i},{\hat {p}}_{j}]=i\hbar \delta _{ij},}
where {\displaystyle [\cdot ,\cdot ]} denotes the commutator. Here, {\displaystyle {\hat {q}}_{i}} and {\displaystyle {\hat {p}}_{i}} are the operators corresponding to {\displaystyle q_{i}} and {\displaystyle p_{i}} respectively. Erwin Schrödinger proposed in 1926 the following identification:
{\displaystyle {\hat {q}}_{j}} with multiplication by {\displaystyle x_{j}};
{\displaystyle {\hat {p}}_{j}} with {\displaystyle -i\hbar \partial _{x_{j}}}.
With this identification, the canonical commutation relations hold.
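That the identification satisfies [q̂, p̂] = iħ is an application of the product rule. A minimal SymPy sketch in one variable (the generic test function f is our own choice):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

q_hat = lambda g: x * g                          # multiplication by x
p_hat = lambda g: -sp.I * hbar * sp.diff(g, x)   # -i hbar d/dx

# [q, p] f = q(p f) - p(q f); the product rule leaves i*hbar*f.
commutator = q_hat(p_hat(f)) - p_hat(q_hat(f))
print(sp.simplify(commutator))  # I*hbar*f(x)
```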
== Constructions ==
The Weyl algebras have different constructions, with different levels of abstraction.
=== Representation ===
The Weyl algebra {\displaystyle A_{n}} can be concretely constructed as a representation.
In the differential operator representation, similar to Schrödinger's canonical quantization, let {\displaystyle q_{j}} be represented by multiplication on the left by {\displaystyle x_{j}}, and let {\displaystyle p_{j}} be represented by differentiation on the left by {\displaystyle \partial _{x_{j}}}.
In the matrix representation, similar to matrix mechanics, {\displaystyle A_{1}} is represented by
{\displaystyle P={\begin{bmatrix}0&1&0&0&\cdots \\0&0&2&0&\cdots \\0&0&0&3&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{bmatrix}},\quad Q={\begin{bmatrix}0&0&0&0&\ldots \\1&0&0&0&\cdots \\0&1&0&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{bmatrix}}}
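These infinite matrices satisfy [P, Q] = 1. On a finite N×N truncation this cannot hold exactly (the trace of a commutator is zero), but the relation is visible in the upper-left (N−1)×(N−1) block. A NumPy sketch (the truncation size is our own choice):

```python
import numpy as np

N = 6
P = np.diag(np.arange(1, N), k=1)    # superdiagonal 1, 2, ..., N-1
Q = np.diag(np.ones(N - 1), k=-1)    # subdiagonal of ones

C = P @ Q - Q @ P
# Identity except the bottom-right entry, an artifact of truncation:
print(np.allclose(C[:N-1, :N-1], np.eye(N-1)))  # True
print(C[N-1, N-1])                              # -(N-1) = -5.0
```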
=== Generator ===
{\displaystyle A_{n}} can be constructed as a quotient of a free algebra in terms of generators and relations.
One construction starts with an abstract vector space V (of dimension 2n) equipped with a symplectic form ω. Define the Weyl algebra W(V) to be
{\displaystyle W(V):=T(V)/(\!(v\otimes u-u\otimes v-\omega (v,u),{\text{ for }}v,u\in V)\!),}
where T(V) is the tensor algebra on V, and the notation {\displaystyle (\!()\!)} means "the ideal generated by".
In other words, W(V) is the algebra generated by V subject only to the relation vu − uv = ω(v, u). Then W(V) is isomorphic to An via the choice of a Darboux basis for ω.
{\displaystyle A_{n}} is also a quotient of the universal enveloping algebra of the Heisenberg algebra, the Lie algebra of the Heisenberg group, by setting the central element of the Heisenberg algebra (namely [q, p]) equal to the unit of the universal enveloping algebra (called 1 above).
=== Quantization ===
The algebra W(V) is a quantization of the symmetric algebra Sym(V). If V is over a field of characteristic zero, then W(V) is naturally isomorphic to the underlying vector space of the symmetric algebra Sym(V) equipped with a deformed product – called the Groenewold–Moyal product (considering the symmetric algebra to be polynomial functions on V∗, where the variables span the vector space V, and replacing iħ in the Moyal product formula with 1).
The isomorphism is given by the symmetrization map from Sym(V) to W(V):
{\displaystyle a_{1}\cdots a_{n}\mapsto {\frac {1}{n!}}\sum _{\sigma \in S_{n}}a_{\sigma (1)}\otimes \cdots \otimes a_{\sigma (n)}~.}
If one prefers to have the iħ and work over the complex numbers, one could have instead defined the Weyl algebra above as generated by qi and iħ∂qi (as per quantum mechanics usage).
Thus, the Weyl algebra is a quantization of the symmetric algebra, which is essentially the same as the Moyal quantization (if for the latter one restricts to polynomial functions), but the former is in terms of generators and relations (considered to be differential operators) and the latter is in terms of a deformed multiplication.
Stated another way, let the Moyal star product be denoted {\displaystyle f\star g}; then the Weyl algebra is isomorphic to {\displaystyle (\mathbb {C} [x_{1},\dots ,x_{n}],\star )}.
In the case of exterior algebras, the analogous quantization to the Weyl one is the Clifford algebra, which is also referred to as the orthogonal Clifford algebra.
The Weyl algebra is also referred to as the symplectic Clifford algebra. Weyl algebras represent for symplectic bilinear forms the same structure that Clifford algebras represent for non-degenerate symmetric bilinear forms.
=== D-module ===
The Weyl algebra can be constructed as a D-module. Specifically, the Weyl algebra corresponding to the polynomial ring {\displaystyle R[x_{1},\ldots ,x_{n}]} with its usual partial differential structure is precisely equal to Grothendieck's ring of differential operators {\displaystyle D_{\mathbb {A} _{R}^{n}/R}}.
More generally, let {\displaystyle X} be a smooth scheme over a ring {\displaystyle R}. Locally, {\displaystyle X\to R} factors as an étale cover over some {\displaystyle \mathbb {A} _{R}^{n}} equipped with the standard projection. Because "étale" means "(flat and) possessing null cotangent sheaf", this means that every D-module over such a scheme can be thought of locally as a module over the {\displaystyle n^{\text{th}}} Weyl algebra.
Let {\displaystyle R} be a commutative algebra over a subring {\displaystyle S}. The ring of differential operators {\displaystyle D_{R/S}} (notated {\displaystyle D_{R}} when {\displaystyle S} is clear from context) is inductively defined as a graded subalgebra of {\displaystyle \operatorname {End} _{S}(R)}:
{\displaystyle D_{R}^{0}=R}
{\displaystyle D_{R}^{k}=\left\{d\in \operatorname {End} _{S}(R):[d,a]\in D_{R}^{k-1}{\text{ for all }}a\in R\right\}.}
Let {\displaystyle D_{R}} be the union of all {\displaystyle D_{R}^{k}} for {\displaystyle k\geq 0}. This is a subalgebra of {\displaystyle \operatorname {End} _{S}(R)}.
In the case {\displaystyle R=S[x_{1},\ldots ,x_{n}]}, the ring of differential operators of order {\displaystyle \leq n} presents similarly as in the special case {\displaystyle S=\mathbb {C} }, but for the added consideration of "divided power operators"; these are operators corresponding to those in the complex case which stabilize {\displaystyle \mathbb {Z} [x_{1},\ldots ,x_{n}]}, but which cannot be written as integral combinations of higher-order operators, i.e. do not inhabit {\displaystyle D_{\mathbb {A} _{\mathbb {Z} }^{n}/\mathbb {Z} }}. One such example is the operator {\displaystyle \partial _{x_{1}}^{[p]}:x_{1}^{N}\mapsto {N \choose p}x_{1}^{N-p}}.
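On monomials, the divided power operator simply multiplies by a binomial coefficient, and its key composition rule (the last relation in the presentation below) can be checked directly. A minimal sketch in plain Python, encoding a monomial c·x^N by the pair (c, N) — an encoding of our own choosing:

```python
from math import comb

# Divided power operator d^[p] on a monomial c * x^N, encoded as (c, N).
def divided_power(p, mono):
    c, N = mono
    return (c * comb(N, p), N - p)  # binom(N, p) * c * x^(N-p); comb(N, p) = 0 if p > N

# Example: d^[2] x^5 = binom(5, 2) x^3 = 10 x^3, with an integer coefficient.
print(divided_power(2, (1, 5)))   # (10, 3)

# Composition rule: d^[k] d^[m] = binom(k+m, k) d^[k+m] on any monomial.
k, m, N = 2, 3, 9
lhs = divided_power(k, divided_power(m, (1, N)))
c, e = divided_power(k + m, (1, N))
print(lhs == (comb(k + m, k) * c, e))  # True
```

The underlying identity is binom(N, m)·binom(N−m, k) = binom(k+m, k)·binom(N, k+m), which holds for all integers since both sides equal N!/(k! m! (N−k−m)!).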
Explicitly, a presentation is given by
{\displaystyle D_{S[x_{1},\dots ,x_{\ell }]/S}^{n}=S\langle x_{1},\dots ,x_{\ell },\{\partial _{x_{i}},\partial _{x_{i}}^{[2]},\dots ,\partial _{x_{i}}^{[n]}\}_{1\leq i\leq \ell }\rangle }
with the relations
{\displaystyle [x_{i},x_{j}]=[\partial _{x_{i}}^{[k]},\partial _{x_{j}}^{[m]}]=0}
{\displaystyle [\partial _{x_{i}}^{[k]},x_{j}]=\left\{{\begin{matrix}\partial _{x_{i}}^{[k-1]}&{\text{if }}i=j\\0&{\text{if }}i\neq j\end{matrix}}\right.}
{\displaystyle \partial _{x_{i}}^{[k]}\partial _{x_{i}}^{[m]}={k+m \choose k}\partial _{x_{i}}^{[k+m]}~~~~~{\text{when }}k+m\leq n}
where {\displaystyle \partial _{x_{i}}^{[0]}=1} by convention. The Weyl algebra then consists of the limit of these algebras as {\displaystyle n\to \infty }. (Ch. IV.16.II)
When {\displaystyle S} is a field of characteristic 0, {\displaystyle D_{R}^{1}} is generated, as an {\displaystyle R}-module, by 1 and the {\displaystyle S}-derivations of {\displaystyle R}. Moreover, {\displaystyle D_{R}} is generated as a ring by the {\displaystyle R}-subalgebra {\displaystyle D_{R}^{1}}. In particular, if {\displaystyle S=\mathbb {C} } and {\displaystyle R=\mathbb {C} [x_{1},\ldots ,x_{n}]}, then {\displaystyle D_{R}^{1}=R+\sum _{i}R\partial _{x_{i}}}. As mentioned, {\displaystyle A_{n}=D_{R}}.
== Properties of An ==
Many properties of {\displaystyle A_{1}} apply to {\displaystyle A_{n}} with essentially similar proofs, since the different dimensions commute.
=== General Leibniz rule ===
Repeated application of the relation [p, q] = 1, in the form of the general Leibniz rule, yields the commutators of q and p with arbitrary monomials. In particular, {\textstyle [q,q^{m}p^{n}]=-nq^{m}p^{n-1}} and {\textstyle [p,q^{m}p^{n}]=mq^{m-1}p^{n}}.
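These identities can be checked in the differential operator representation q = multiplication by x, p = ∂_x, where [p, q] = 1 holds. A SymPy sketch applying both sides to a test polynomial (the choices of m, n and the test exponent are ours):

```python
import sympy as sp

x = sp.symbols('x')

def op(mm, nn, g):
    # the operator q^mm p^nn acting as x^mm * d^nn/dx^nn
    return x**mm * sp.diff(g, x, nn)

m, n = 2, 3
f = x**7  # test polynomial

# [q, q^m p^n] f = x*(q^m p^n f) - (q^m p^n)(x f), expected to equal -n q^m p^(n-1) f.
lhs = x * op(m, n, f) - op(m, n, x * f)
assert sp.expand(lhs + n * op(m, n - 1, f)) == 0

# [p, q^m p^n] f = d/dx (q^m p^n f) - (q^m p^n)(df/dx), expected m q^(m-1) p^n f.
lhs2 = sp.diff(op(m, n, f), x) - op(m, n, sp.diff(f, x))
assert sp.expand(lhs2 - m * x**(m - 1) * sp.diff(f, x, n)) == 0
print("Leibniz commutator identities verified")
```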
=== Degree ===
This allows {\displaystyle A_{1}} to be a filtered algebra, where the degree of {\displaystyle \sum _{m,n}c_{m,n}q^{m}p^{n}} is {\displaystyle \max(m+n)} among its nonzero monomials. (It is not graded by total degree, since the relation pq − qp = 1 mixes degrees.) The degree is similarly defined for {\displaystyle A_{n}}.
The Weyl algebra is a simple domain: that is, it has no nontrivial two-sided ideals and has no zero divisors.
=== Derivation ===
Every derivation of {\textstyle A_{n}} is inner: any derivation {\textstyle D} is equal to {\textstyle [\cdot ,f]} for some {\textstyle f\in A_{n}}; any {\textstyle f\in A_{n}} yields a derivation {\textstyle [\cdot ,f]}; and if {\textstyle f,f'\in A_{n}} satisfy {\textstyle [\cdot ,f]=[\cdot ,f']}, then {\textstyle f-f'\in F}.
The proof is similar to computing the potential function for a conservative polynomial vector field on the plane.
== Representation theory ==
=== Zero characteristic ===
In the case that the ground field F has characteristic zero, the nth Weyl algebra is a simple Noetherian domain. It has global dimension n, in contrast to the ring it deforms, Sym(V), which has global dimension 2n.
It has no finite-dimensional representations. Although this follows from simplicity, it can be shown more directly by taking the trace of the defining relation under a putative finite-dimensional representation σ (where [q, p] = 1):
{\displaystyle \mathrm {tr} ([\sigma (q),\sigma (p)])=\mathrm {tr} (1)~.}
Since the trace of a commutator is zero, and the trace of the identity is the dimension of the representation, the representation must be zero dimensional.
In fact, there are stronger statements than the absence of finite-dimensional representations. To any finitely generated An-module M, there is a corresponding subvariety Char(M) of V × V∗ called the 'characteristic variety' whose size roughly corresponds to the size of M (a finite-dimensional module would have zero-dimensional characteristic variety). Then Bernstein's inequality states that for M non-zero,
{\displaystyle \dim(\operatorname {char} (M))\geq n.}
An even stronger statement is Gabber's theorem, which states that Char(M) is a co-isotropic subvariety of V × V∗ for the natural symplectic form.
=== Positive characteristic ===
The situation is considerably different in the case of a Weyl algebra over a field of characteristic p > 0.
In this case, for any element D of the Weyl algebra, the element Dp is central, and so the Weyl algebra has a very large center. In fact, it is a finitely generated module over its center; even more so, it is an Azumaya algebra over its center. As a consequence, there are many finite-dimensional representations which are all built out of simple representations of dimension p.
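The centrality of p-th powers can be seen concretely in the differential operator representation: [∂^p, x] = p·∂^{p−1} and [x^p, ∂] = −p·x^{p−1}, both of which vanish modulo p. A SymPy sketch with p = 5, checking divisibility of the coefficients (the test polynomial is an arbitrary choice of ours):

```python
import sympy as sp

x = sp.symbols('x')
p = 5
f = 3*x**9 + x**6 + 2*x  # arbitrary test polynomial

# [d^p, x] f = d^p(x f) - x d^p(f) = p * d^(p-1) f, hence 0 mod p.
comm = sp.diff(x * f, x, p) - x * sp.diff(f, x, p)
assert sp.expand(comm - p * sp.diff(f, x, p - 1)) == 0
assert all(c % p == 0 for c in sp.Poly(comm, x).all_coeffs())

# [x^p, d] f = x^p f' - (x^p f)' = -p x^(p-1) f, also 0 mod p.
comm2 = x**p * sp.diff(f, x) - sp.diff(x**p * f, x)
assert all(c % p == 0 for c in sp.Poly(comm2, x).all_coeffs())
print("D^p and x^p commute with the generators mod", p)
```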
== Generalizations ==
The ideals and automorphisms of {\displaystyle A_{1}} have been well studied. The moduli space for its right ideals is known. However, the case of {\displaystyle A_{n}} is considerably harder and is related to the Jacobian conjecture.
For more details about this quantization in the case n = 1 (and an extension using the Fourier transform to a class of integrable functions larger than the polynomial functions), see Wigner–Weyl transform.
Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras.
=== Affine varieties ===
Weyl algebras also generalize in the case of algebraic varieties. Consider the coordinate ring of an affine variety,
{\displaystyle R={\frac {\mathbb {C} [x_{1},\ldots ,x_{n}]}{I}}.}
Then a differential operator is defined as a composition of {\displaystyle \mathbb {C} }-linear derivations of {\displaystyle R}. This can be described explicitly as the quotient ring
{\displaystyle {\text{Diff}}(R)={\frac {\{D\in A_{n}\colon D(I)\subseteq I\}}{I\cdot A_{n}}}.}
== See also ==
Jacobian conjecture
Dixmier conjecture
== Notes ==
== References ==
Coutinho, S. C. (1995). A Primer of Algebraic D-Modules. Cambridge [England] ; New York, NY, USA: Cambridge University Press. doi:10.1017/cbo9780511623653. ISBN 978-0-521-55119-9.
Coutinho, S. C. (1997). "The Many Avatars of a Simple Algebra". The American Mathematical Monthly. 104 (7): 593–604. doi:10.1080/00029890.1997.11990687. ISSN 0002-9890.
Dirac, P. A. M. (1926). "On Quantum Algebra". Mathematical Proceedings of the Cambridge Philosophical Society. 23 (4): 412–418. doi:10.1017/S0305004100015231. ISSN 0305-0041.
Helmstetter, J.; Micali, A. (2008). Quadratic Mappings and Clifford Algebras. Basel ; Boston: Birkhäuser. ISBN 978-3-7643-8605-4. OCLC 175285188.
Landsman, N. P. (2007). "Between Classical and Quantum". Philosophy of Physics. Elsevier. doi:10.1016/b978-044451560-5/50008-7. ISBN 978-0-444-51560-5.
Lounesto, P.; Ablamowicz, R. (2004). Clifford Algebras. Boston: Springer Science & Business Media. ISBN 0-8176-3525-4.
Micali, A.; Boudet, R.; Helmstetter, J. (1992). Clifford Algebras and their Applications in Mathematical Physics. Dordrecht: Springer Science & Business Media. ISBN 0-7923-1623-1.
de Traubenberg, M. Rausch; Slupinski, M. J.; Tanasa, A. (2006). "Finite-dimensional Lie subalgebras of the Weyl algebra". J. Lie Theory. 16: 427–454. arXiv:math/0504224.
Traves, Will (2010). "Differential Operations on Grassmann Varieties". In Campbell, H.; Helminck, A.; Kraft, H.; Wehlau, D. (eds.). Symmetry and Spaces. Progress in Mathematics. Vol. 278. Birkhäuser. pp. 197–207. doi:10.1007/978-0-8176-4875-6_10. ISBN 978-0-8176-4875-6.
Tsit Yuen Lam (2001). A first course in noncommutative rings. Graduate Texts in Mathematics. Vol. 131 (2nd ed.). Springer. p. 6. ISBN 978-0-387-95325-0.
Berest, Yuri; Wilson, George (September 1, 2000). "Automorphisms and ideals of the Weyl algebra". Mathematische Annalen. 318 (1): 127–147. arXiv:math/0102190. doi:10.1007/s002080000115. ISSN 0025-5831.
Cannings, R.C.; Holland, M.P. (1994). "Right Ideals of Rings of Differential Operators". Journal of Algebra. 167 (1). Elsevier BV: 116–141. doi:10.1006/jabr.1994.1179. ISSN 0021-8693.
Lebruyn, L. (1995). "Moduli Spaces for Right Ideals of the Weyl Algebra". Journal of Algebra. 172 (1). Elsevier BV: 32–48. doi:10.1006/jabr.1995.1046. hdl:10067/123950151162165141. ISSN 0021-8693.
In algebra, given a commutative ring R, the graded-symmetric algebra of a graded R-module M is the quotient of the tensor algebra of M by the ideal I generated by elements of the form:
{\displaystyle xy-(-1)^{|x||y|}yx}
{\displaystyle x^{2}} when |x| is odd
for homogeneous elements x, y in M of degree |x|, |y|. By construction, a graded-symmetric algebra is graded-commutative; i.e.,
{\displaystyle xy=(-1)^{|x||y|}yx}
and is universal with respect to this property.
In spite of the name, the notion is a common generalization of a symmetric algebra and an exterior algebra: indeed, if V is a (non-graded) R-module, then the graded-symmetric algebra of V with trivial grading is the usual symmetric algebra of V. Similarly, the graded-symmetric algebra of the graded module with V in degree one and zero elsewhere is the exterior algebra of V.
== References ==
David Eisenbud, Commutative Algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol 150, Springer-Verlag, New York, 1995. ISBN 0-387-94268-8
== External links ==
"rt.representation theory - Definition of the symmetric algebra in arbitrary characteristic for graded vector spaces". MathOverflow. Retrieved 2017-04-18.
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.
Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
== Definition and motivation ==
=== Motivating examples ===
=== Definition ===
Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A , and all elements (often called scalars) a and b in K:
Right distributivity: (x + y) · z = x · z + y · z
Left distributivity: z · (x + y) = z · x + z · y
Compatibility with scalars: (ax) · (by) = (ab) (x · y).
These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.
When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
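The three axioms above can be checked mechanically on a concrete example. The following sketch (in plain Python, with vectors as integer tuples; the helper names are ours) verifies that the cross product on R3 is bilinear but not associative, and that it satisfies the Jacobi identity instead, on sample vectors.

```python
# Check that the cross product on R^3 is bilinear, non-associative,
# and satisfies the Jacobi identity, on sample integer vectors.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def add(u, v):   return tuple(a + b for a, b in zip(u, v))
def scale(c, u): return tuple(c * a for a in u)

x, y, z = (1, 2, 3), (4, 0, -1), (2, 5, 7)
a, b = 3, -2

# Right and left distributivity, and compatibility with scalars:
assert cross(add(x, y), z) == add(cross(x, z), cross(y, z))
assert cross(z, add(x, y)) == add(cross(z, x), cross(z, y))
assert cross(scale(a, x), scale(b, y)) == scale(a*b, cross(x, y))

# Not associative:
assert cross(cross(x, y), z) != cross(x, cross(y, z))

# ...but the Jacobi identity holds:
jacobi = add(add(cross(x, cross(y, z)),
                 cross(y, cross(z, x))),
             cross(z, cross(x, y)))
assert jacobi == (0, 0, 0)
```

Sample checks of course do not prove the identities; they only illustrate the axioms on concrete data.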
== Basic concepts ==
=== Algebra homomorphisms ===
Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as
{\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).}
A K-algebra isomorphism is a bijective K-algebra homomorphism.
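A classical homomorphism to test against this definition is the embedding of the complex numbers into 2×2 real matrices, a + bi ↦ [[a, −b], [b, a]]. The sketch below (function names are ours) checks multiplicativity, additivity, R-linearity, and unitality on sample values.

```python
# The R-algebra homomorphism f: C -> M_2(R), a + bi -> [[a, -b], [b, a]],
# checked on sample complex numbers.

def f(z):
    a, b = z.real, z.imag
    return ((a, -b), (b, a))

def matmul(m, n):
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matadd(m, n):
    return tuple(tuple(m[i][j] + n[i][j] for j in range(2)) for i in range(2))

def matscale(c, m):
    return tuple(tuple(c * m[i][j] for j in range(2)) for i in range(2))

x, y = 2 + 3j, -1 + 4j
c = 5.0

assert f(x * y) == matmul(f(x), f(y))          # multiplicative: f(xy) = f(x)f(y)
assert f(x + y) == matadd(f(x), f(y))          # additive
assert f(c * x) == matscale(c, f(x))           # R-linear
assert f(1 + 0j) == ((1.0, -0.0), (0.0, 1.0))  # unital: f(1) is the identity matrix
```

Since f is also injective, it is an isomorphism onto its image, exhibiting C as a subalgebra of the real matrix algebra.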
=== Subalgebras and ideals ===
A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.
x + y is in L (L is closed under addition),
cx is in L (L is closed under scalar multiplication),
z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
=== Extension of scalars ===
If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product
{\displaystyle V_{F}:=V\otimes _{K}F.}
So if A is an algebra over K, then AF = A ⊗K F is an algebra over F.
== Kinds of algebras and examples ==
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
=== Unital algebra ===
An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
=== Zero algebra ===
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
A unital zero algebra is the direct sum K ⊕ V of a field K and a K-vector space V, equipped with the unique multiplication that is zero on the vector space and makes it a unital algebra.
More precisely, every element of the algebra may be uniquely written as k + v with k ∈ K and v ∈ V, and the product is the unique bilinear operation such that vw = 0 for every v and w in V. So, if k1, k2 ∈ K and v1, v2 ∈ V, one has
{\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).}
A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space.
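The dual numbers can be sketched directly from the product formula above, with elements stored as pairs (k, v) = k + v·ε (the class below and its name are our own illustration):

```python
# The dual numbers R[eps]/(eps^2) as a unital zero algebra K ⊕ V:
# (k1 + v1)(k2 + v2) = k1 k2 + (k1 v2 + k2 v1), the vector parts multiplying to zero.
class Dual:
    def __init__(self, k, v):
        self.k, self.v = k, v          # scalar part k, vector part v

    def __add__(self, other):
        return Dual(self.k + other.k, self.v + other.v)

    def __mul__(self, other):
        # v1 * v2 contributes nothing: eps^2 = 0
        return Dual(self.k * other.k, self.k * other.v + self.v * other.k)

    def __eq__(self, other):
        return (self.k, self.v) == (other.k, other.v)

eps = Dual(0, 1)
one = Dual(1, 0)

assert eps * eps == Dual(0, 0)                  # the zero multiplication on V
assert one * eps == eps and eps * one == eps    # unital

# A familiar consequence: (a + eps)^2 = a^2 + 2a eps, e.g. a = 3:
a = Dual(3, 1)
assert a * a == Dual(9, 6)
```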
This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".
Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module V correspond exactly to the ideals of K ⊕ V that are contained in V.
For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module extends this theory to a Gröbner basis theory for submodules of a free module. With this extension, a Gröbner basis of a submodule can be computed, without any modification, by any algorithm and any software for computing Gröbner bases of ideals.
Similarly, unital zero algebras allow the Lasker–Noether theorem for modules (over a commutative ring) to be deduced straightforwardly from the original Lasker–Noether theorem for ideals.
=== Associative algebra ===
Examples of associative algebras include
the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
Incidence algebras are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
=== Non-associative algebra ===
A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map A × A → A. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited; that is, it means "not necessarily associative".
Examples detailed in the main article include:
Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras
== Algebras and rings ==
The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism
{\displaystyle \eta \colon K\to Z(A),}
where Z(A) is the center of A. Since η is a ring homomorphism, either A is the zero ring or η is injective. This definition is equivalent to the one above, with scalar multiplication K × A → A given by
{\displaystyle (k,a)\mapsto \eta (k)a.}
Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as
{\displaystyle f(ka)=kf(a)}
for all k ∈ K and a ∈ A. In other words, the following diagram commutes:
{\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}}
== Structure coefficients ==
For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A.
Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n3 structure coefficients ci,j,k, which are scalars.
These structure coefficients determine the multiplication in A via the following rule:
e
i
e
j
=
∑
k
=
1
n
c
i
,
j
,
k
e
k
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}}
where e1,...,en form a basis of A.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}={c_{i,j}}^{k}\mathbf {e} _{k}.}
Applied to vectors written in index notation, this becomes
{\displaystyle (xy)^{k}={c_{i,j}}^{k}x^{i}y^{j}.}
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
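The rule (xy)^k = c_{i,j}^k x^i y^j can be exercised on a concrete algebra. In the sketch below (our own illustration), the structure constants of the cross-product algebra on R3 in the standard basis are the Levi-Civita symbols, and the generic bilinear product built from them reproduces the usual cross product.

```python
# Multiplication recovered from structure coefficients c[i][j][k],
# illustrated with the cross-product algebra on R^3 (Levi-Civita symbols).
n = 3
c = [[[0]*n for _ in range(n)] for _ in range(n)]
for (i, j, k), sign in {(0,1,2): 1, (1,2,0): 1, (2,0,1): 1,
                        (2,1,0): -1, (0,2,1): -1, (1,0,2): -1}.items():
    c[i][j][k] = sign

def multiply(x, y):
    # generic bilinear product: (xy)^k = sum_{i,j} c[i][j][k] * x^i * y^j
    return [sum(c[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

# e1 * e2 = e3, and the product agrees with the usual cross product:
assert multiply([1, 0, 0], [0, 1, 0]) == [0, 0, 1]
assert multiply([1, 2, 3], [4, 0, -1]) == [2*(-1) - 3*0,
                                           3*4 - 1*(-1),
                                           1*0 - 2*4]
```

Any other choice of the n³ coefficients defines some bilinear product the same way, though different choices can yield isomorphic algebras.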
== Classification of low-dimensional unital associative algebras over the complex numbers ==
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element,
{\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.}
It remains to specify
{\displaystyle \textstyle aa=1}
for the first algebra,
{\displaystyle \textstyle aa=0}
for the second algebra.
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify
{\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0}
for the first algebra,
{\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0}
for the second algebra,
{\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0}
for the third algebra,
{\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b}
for the fourth algebra,
{\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0}
for the fifth algebra.
The fourth of these algebras is non-commutative, and the others are commutative.
== Generalization: algebra over a ring ==
In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).
=== Associative algebras over rings ===
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to
{\displaystyle \mathbb {H} \times \mathbb {H} }
, the direct product of two quaternion algebras. The center of that ring is
{\displaystyle \mathbb {R} \times \mathbb {R} }
, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional
{\displaystyle \mathbb {R} }
-algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism R → A defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural Z-module structure, since one can take the unique homomorphism Z → A. On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give every ring a structure that behaves like an algebra over a field.
== See also ==
Algebra over an operad
Alternative algebra
Clifford algebra
Composition algebra
Differential algebra
Free algebra
Geometric algebra
Max-plus algebra
Mutation (algebra)
Operator algebra
Zariski's lemma
== Notes ==
== References ==
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
In mathematics, the characteristic of a ring R, often denoted char(R), is defined to be the smallest positive number of copies of the ring's multiplicative identity (1) that will sum to the additive identity (0). If no such number exists, the ring is said to have characteristic zero.
That is, char(R) is the smallest positive number n such that:(p 198, Thm. 23.14)
{\displaystyle \underbrace {1+\cdots +1} _{n{\text{ summands}}}=0}
if such a number n exists, and 0 otherwise.
== Motivation ==
The special treatment of characteristic zero is motivated by the equivalent characterizations given in the next section, where characteristic zero does not need to be considered separately.
The characteristic may also be taken to be the exponent of the ring's additive group, that is, the smallest positive integer n such that:(p 198, Def. 23.12)
{\displaystyle \underbrace {a+\cdots +a} _{n{\text{ summands}}}=0}
for every element a of the ring (again, if n exists; otherwise zero). This definition applies in the more general class of rngs (see Ring (mathematics) § Multiplicative identity and the term "ring"); for (unital) rings the two definitions are equivalent due to the distributive law.
== Equivalent characterizations ==
The characteristic of a ring R is the natural number n such that nZ is the kernel of the unique ring homomorphism from Z to R.
The characteristic is the natural number n such that R contains a subring isomorphic to the factor ring Z/nZ, which is the image of the above homomorphism.
When the non-negative integers {0, 1, 2, 3, ...} are partially ordered by divisibility, then 1 is the smallest and 0 is the largest. Then the characteristic of a ring is the smallest value of n for which n ⋅ 1 = 0. If nothing "smaller" (in this ordering) than 0 will suffice, then the characteristic is 0. This is the appropriate partial ordering because of such facts as that char(A × B) is the least common multiple of char A and char B, and that no ring homomorphism f : A → B exists unless char B divides char A.
The characteristic of a ring R is n precisely when the following holds: ka = 0 for all a ∈ R if and only if k is a multiple of n.
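The fact that char(A × B) is the least common multiple of char A and char B can be checked numerically for product rings of the form Z/aZ × Z/bZ, where the characteristic is the smallest n with n·(1, 1) = (0, 0) componentwise (the helper below is our own illustration):

```python
from math import gcd

# char(A x B) = lcm(char A, char B), checked for A = Z/aZ, B = Z/bZ.
def char_of_product(a, b):
    n = 0
    while True:
        n += 1
        if (n % a, n % b) == (0, 0):   # n·(1,1) = (n mod a, n mod b)
            return n

for a, b in [(4, 6), (3, 5), (2, 8)]:
    assert char_of_product(a, b) == a * b // gcd(a, b)   # lcm(a, b)
```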
== Case of rings ==
If R and S are rings and there exists a ring homomorphism R → S, then the characteristic of S divides the characteristic of R. This can sometimes be used to exclude the possibility of certain ring homomorphisms. The only ring with characteristic 1 is the zero ring, which has only a single element 0. If a nontrivial ring R does not have any nontrivial zero divisors, then its characteristic is either 0 or prime. In particular, this applies to all fields, to all integral domains, and to all division rings. Any ring of characteristic zero is infinite.
The ring Z/nZ of integers modulo n has characteristic n. If R is a subring of S, then R and S have the same characteristic. For example, if p is prime and q(X) is an irreducible polynomial with coefficients in the field Fp with p elements, then the quotient ring Fp[X]/(q(X)) is a field of characteristic p. Another example: the field C of complex numbers contains Z, so the characteristic of C is 0.
A Z/nZ-algebra is equivalently a ring whose characteristic divides n. This is because for every ring R there is a ring homomorphism Z → R, and this map factors through Z/nZ if and only if the characteristic of R divides n. In this case, for any r in the ring, adding r to itself n times gives nr = 0.
If a commutative ring R has prime characteristic p, then we have (x + y)p = xp + yp for all elements x and y in R – the normally incorrect "freshman's dream" holds for power p.
The map x ↦ xp then defines a ring homomorphism R → R, which is called the Frobenius homomorphism. If R is an integral domain it is injective.
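The "freshman's dream" can be seen concretely for p = 5: every middle binomial coefficient C(p, k) with 0 < k < p is divisible by p, so only the x^p and y^p terms of the expansion survive mod p (a numerical illustration, not a proof):

```python
from math import comb

# (x + y)^p = x^p + y^p mod p: the middle binomial coefficients vanish mod p.
p = 5
coeffs = [comb(p, k) % p for k in range(p + 1)]
assert coeffs == [1, 0, 0, 0, 0, 1]      # only the end terms survive

# Equivalently, check the identity elementwise in Z/pZ:
assert all(pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
           for x in range(p) for y in range(p))
```

The second check also illustrates the Frobenius map x ↦ x^p being additive on Z/pZ.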
== Case of fields ==
As mentioned above, the characteristic of any field is either 0 or a prime number. A field of non-zero characteristic is called a field of finite characteristic or positive characteristic or prime characteristic. The characteristic exponent is defined similarly, except that it is equal to 1 when the characteristic is 0; otherwise it has the same value as the characteristic.
Any field F has a unique minimal subfield, also called its prime field. This subfield is isomorphic to either the rational number field Q or a finite field Fp of prime order. Two prime fields of the same characteristic are isomorphic, and this isomorphism is unique. In other words, there is essentially a unique prime field in each characteristic.
=== Fields of characteristic zero ===
The fields of characteristic zero are those that have a subfield isomorphic to the field Q of the rational numbers. The most common such fields are the subfields of the field C of the complex numbers; this includes the real numbers R and all algebraic number fields.
Other fields of characteristic zero are the p-adic fields, which are widely used in number theory.
Fields of rational fractions over the integers or over a field of characteristic zero are other common examples.
Ordered fields always have characteristic zero; they include Q and R.
=== Fields of prime characteristic ===
The finite field GF(pn) has characteristic p.
There exist infinite fields of prime characteristic. For example, the field of all rational functions over Z/pZ, the algebraic closure of Z/pZ, or the field of formal Laurent series (Z/pZ)((T)).
The size of any finite ring of prime characteristic p is a power of p. Since in that case it contains Z/pZ, it is also a vector space over that field, and from linear algebra we know that the size of a finite vector space over a finite field is a power of the size of the field. This also shows that the size of any finite vector space is a prime power.
== See also ==
Ring of mixed characteristic
== Notes ==
== References ==
== Sources ==
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by
f −1.
For a function f : X → Y, its inverse f −1 : Y → X admits an explicit description: it sends each element y ∈ Y to the unique element x ∈ X such that f(x) = y.
As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function
f −1 : R → R defined by
{\displaystyle f^{-1}(y)={\frac {y+7}{5}}.}
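The example f(x) = 5x − 7 and its inverse can be checked directly: applying one after the other returns the original input (sample points chosen so the arithmetic is exact):

```python
# f(x) = 5x - 7 and its inverse y -> (y + 7)/5: the two defining identities.
def f(x):     return 5 * x - 7
def f_inv(y): return (y + 7) / 5

assert all(f_inv(f(x)) == x for x in [-3, 0, 2.5, 10])
assert all(f(f_inv(y)) == y for y in [-7, -2, 3, 8, 13])
```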
== Definitions ==
Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that
g(f(x)) = x for all x ∈ X and f(g(y)) = y for all y ∈ Y.
If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f −1, a notation introduced by John Frederick William Herschel in 1813.
The function f is invertible if and only if it is bijective. This is because the condition
g(f(x)) = x for all x ∈ X implies that f is injective, and the condition f(g(y)) = y for all y ∈ Y implies that f is surjective.
The inverse function f −1 to f can be explicitly described as the function
{\displaystyle f^{-1}(y)=({\text{the unique element }}x\in X{\text{ such that }}f(x)=y).}
=== Inverses and composition ===
Recall that if f is an invertible function with domain X and codomain Y, then
f −1(f(x)) = x for every x ∈ X, and f(f −1(y)) = y for every y ∈ Y.
Using the composition of functions, this statement can be rewritten to the following equations between functions:
{\displaystyle f^{-1}\circ f=\operatorname {id} _{X}}
and
{\displaystyle f\circ f^{-1}=\operatorname {id} _{Y},}
where idX is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism.
Considering function composition helps to understand the notation f −1. Repeatedly composing a function f: X→X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as f n(x); so f 2(x) = f (f (x)), etc. Since f −1(f (x)) = x, composing f −1 and f n yields f n−1, "undoing" the effect of one application of f.
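The iteration notation can be made concrete with a small helper (our own sketch) applied to f(x) = 5x − 7: composing f −1 with f n indeed "undoes" one application, yielding f n−1.

```python
# f^n(x) as n-fold composition, and f^{-1} ∘ f^n = f^{n-1}, for f(x) = 5x - 7.
def f(x):     return 5 * x - 7
def f_inv(y): return (y + 7) / 5

def iterate(g, n, x):
    for _ in range(n):
        x = g(x)
    return x

x = 2
assert iterate(f, 2, x) == f(f(x))                   # f^2(x) = f(f(x))
assert f_inv(iterate(f, 3, x)) == iterate(f, 2, x)   # f^{-1}(f^3(x)) = f^2(x)
```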
=== Notation ===
While the notation f −1(x) might be misunderstood, (f(x))−1 certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. The notation
{\displaystyle f^{\langle -1\rangle }}
might be used for the inverse function to avoid ambiguity with the multiplicative inverse.
In keeping with the general notation, some English authors use expressions like sin−1(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of sin (x), which can be denoted as (sin (x))−1. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). The expressions like sin−1(x) can still be useful to distinguish the multivalued inverse from the partial inverse:
{\displaystyle \sin ^{-1}(x)=\{(-1)^{n}\arcsin(x)+\pi n:n\in \mathbb {Z} \}.}
Other inverse special functions are sometimes prefixed with "inv" when the ambiguity of the f −1 notation should be avoided.
== Examples ==
=== Squaring and square root functions ===
The function f: R → [0,∞) given by f(x) = x2 is not injective because
{\displaystyle (-x)^{2}=x^{2}}
for all
{\displaystyle x\in \mathbb {R} }
. Therefore, f is not invertible.
If the domain of the function is restricted to the nonnegative reals, that is, we take the function
{\displaystyle f\colon [0,\infty )\to [0,\infty );\ x\mapsto x^{2}}
with the same rule as before, then the function is bijective and so, invertible. The inverse function here is called the (positive) square root function and is denoted by
{\displaystyle x\mapsto {\sqrt {x}}.}
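This restriction can be sketched in Python, using the standard library's `math.sqrt` as the inverse on the restricted domain:

```python
import math

# Sketch: with the domain restricted to [0, ∞), squaring is bijective,
# and math.sqrt is its inverse on that domain.
def f(x):
    if x < 0:
        raise ValueError("domain restricted to x >= 0")
    return x ** 2

for x in [0.0, 1.5, 4.0]:
    assert math.isclose(math.sqrt(f(x)), x, abs_tol=1e-12)  # f^-1(f(x)) = x
```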
=== Standard inverse functions ===
The following table shows several standard functions and their inverses:
=== Formula for the inverse ===
Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse
{\displaystyle f^{-1}}
of an invertible function
{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }
has an explicit description as
{\displaystyle f^{-1}(y)=({\text{the unique element }}x\in \mathbb {R} {\text{ such that }}f(x)=y).}
This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function
{\displaystyle f(x)=(2x+8)^{3}}
then to determine
{\displaystyle f^{-1}(y)}
for a real number y, one must find the unique real number x such that (2x + 8)3 = y. This equation can be solved:
{\displaystyle {\begin{aligned}y&=(2x+8)^{3}\\{\sqrt[{3}]{y}}&=2x+8\\{\sqrt[{3}]{y}}-8&=2x\\{\dfrac {{\sqrt[{3}]{y}}-8}{2}}&=x.\end{aligned}}}
Thus the inverse function f −1 is given by the formula
{\displaystyle f^{-1}(y)={\frac {{\sqrt[{3}]{y}}-8}{2}}.}
Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function
{\displaystyle f(x)=x-\sin x,}
then f is a bijection, and therefore possesses an inverse function f −1. The formula for this inverse has an expression as an infinite sum:
{\displaystyle f^{-1}(y)=\sum _{n=1}^{\infty }{\frac {y^{n/3}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right).}
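In practice such an inverse is usually evaluated numerically rather than through the series. A simple bisection sketch (my own, not from the article) works because f(x) = x − sin x is strictly increasing:

```python
import math

# Numerically invert f(x) = x - sin(x) by bisection on a bracketing interval.
def f(x):
    return x - math.sin(x)

def f_inv(y, lo=-10.0, hi=10.0, tol=1e-12):
    # f is strictly increasing, so bisection on [lo, hi] converges to f^-1(y).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for x in [0.5, 2.0, -3.0]:
    assert math.isclose(f_inv(f(x)), x, abs_tol=1e-9)
```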
== Properties ==
Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations.
=== Uniqueness ===
If an inverse function exists for a given function f, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by f.
=== Symmetry ===
There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f −1 has domain Y and image X, and the inverse of f −1 is the original function f. In symbols, for functions f:X → Y and f−1:Y → X,
{\displaystyle f^{-1}\circ f=\operatorname {id} _{X}}
and
{\displaystyle f\circ f^{-1}=\operatorname {id} _{Y}.}
This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by
{\displaystyle \left(f^{-1}\right)^{-1}=f.}
The inverse of a composition of functions is given by
{\displaystyle (g\circ f)^{-1}=f^{-1}\circ g^{-1}.}
Notice that the order of g and f has been reversed; to undo f followed by g, we must first undo g, and then undo f.
For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five,
{\displaystyle (g\circ f)(x)=3x+5.}
To reverse this process, we must first subtract five, and then divide by three,
{\displaystyle (g\circ f)^{-1}(x)={\tfrac {1}{3}}(x-5).}
This is the composition
(f −1 ∘ g −1)(x).
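The worked example translates directly into Python (a sketch of the same f and g):

```python
# f(x) = 3x, g(x) = x + 5, so (g ∘ f)(x) = 3x + 5 and
# (g ∘ f)^-1 = f^-1 ∘ g^-1: subtract five first, then divide by three.
f     = lambda x: 3 * x
g     = lambda x: x + 5
f_inv = lambda x: x / 3
g_inv = lambda x: x - 5

gf     = lambda x: g(f(x))          # first f, then g
gf_inv = lambda x: f_inv(g_inv(x))  # undo g first, then f

assert gf(4) == 17
assert gf_inv(17) == 4
assert f_inv(g_inv(gf(7))) == 7
```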
=== Self-inverses ===
If X is a set, then the identity function on X is its own inverse:
{\displaystyle {\operatorname {id} _{X}}^{-1}=\operatorname {id} _{X}.}
More generally, a function f : X → X is equal to its own inverse if and only if the composition f ∘ f is equal to idX. Such a function is called an involution.
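A quick Python sketch of an involution (negation on the reals is one standard example):

```python
# f(x) = -x is an involution: applying it twice returns the original value.
def f(x):
    return -x

for x in [-2, 0, 3.5]:
    assert f(f(x)) == x  # f ∘ f = id
```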
=== Graph of the inverse ===
If f is invertible, then the graph of the function
{\displaystyle y=f^{-1}(x)}
is the same as the graph of the equation
{\displaystyle x=f(y).}
This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f −1 can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line
y = x.
=== Inverses and derivatives ===
By the inverse function theorem, a continuous function of a single variable
{\displaystyle f\colon A\to \mathbb {R} }
(where
{\displaystyle A\subseteq \mathbb {R} }
) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function
{\displaystyle f(x)=x^{3}+x}
is invertible, since the derivative
f′(x) = 3x2 + 1 is always positive.
If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x ∈ I, then the inverse f −1 is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem,
{\displaystyle \left(f^{-1}\right)^{\prime }(y)={\frac {1}{f'\left(x\right)}}.}
Using Leibniz's notation the formula above can be written as
{\displaystyle {\frac {dx}{dy}}={\frac {1}{dy/dx}}.}
This result follows from the chain rule (see the article on inverse functions and differentiation).
The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiable multivariable function f : Rn → Rn is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f −1 at f(p) is the matrix inverse of the Jacobian of f at p.
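The single-variable formula can be checked numerically for f(x) = x³ + x from the example above (a sketch: the inverse is computed by bisection and its derivative by a central difference):

```python
import math

# Verify (f^-1)'(y) = 1 / f'(x) numerically for f(x) = x**3 + x.
def f(x):
    return x ** 3 + x

def f_prime(x):
    return 3 * x ** 2 + 1

def f_inv(y, lo=-10.0, hi=10.0):
    # Bisection works because f is strictly increasing.
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < y else (lo, mid)
    return (lo + hi) / 2

x = 1.5
y = f(x)
h = 1e-6
deriv = (f_inv(y + h) - f_inv(y - h)) / (2 * h)  # central difference
assert math.isclose(deriv, 1 / f_prime(x), rel_tol=1e-5)
```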
== Real-world examples ==
Let f be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit,
{\displaystyle F=f(C)={\tfrac {9}{5}}C+32;}
then its inverse function converts degrees Fahrenheit to degrees Celsius,
{\displaystyle C=f^{-1}(F)={\tfrac {5}{9}}(F-32),}
since
{\displaystyle {\begin{aligned}f^{-1}(f(C))={}&f^{-1}\left({\tfrac {9}{5}}C+32\right)={\tfrac {5}{9}}\left(({\tfrac {9}{5}}C+32)-32\right)=C,\\&{\text{for every value of }}C,{\text{ and }}\\[6pt]f\left(f^{-1}(F)\right)={}&f\left({\tfrac {5}{9}}(F-32)\right)={\tfrac {9}{5}}\left({\tfrac {5}{9}}(F-32)\right)+32=F,\\&{\text{for every value of }}F.\end{aligned}}}
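The temperature pair is easy to sketch and round-trip in Python:

```python
import math

# f converts °C to °F; f^-1 converts back. Composing them in either
# order returns the input.
def c_to_f(c):
    return 9 / 5 * c + 32

def f_to_c(f):
    return 5 / 9 * (f - 32)

assert c_to_f(100) == 212
assert f_to_c(32) == 0
for c in [-40, 0, 37, 100]:
    assert math.isclose(f_to_c(c_to_f(c)), c, abs_tol=1e-9)
```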
Suppose f assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if two children in the family were born in the same year (for instance, twins or triplets), then the output cannot be determined when the input is the common birth year. Likewise, if a year is given in which no child was born, then no child can be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example,
{\displaystyle {\begin{aligned}f({\text{Allan}})&=2005,\quad &f({\text{Brad}})&=2007,\quad &f({\text{Cary}})&=2001\\f^{-1}(2005)&={\text{Allan}},\quad &f^{-1}(2007)&={\text{Brad}},\quad &f^{-1}(2001)&={\text{Cary}}\end{aligned}}}
Let R be the function that leads to an x percentage rise of some quantity, and F be the function producing an x percentage fall. Applied to $100 with x = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other.
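A Python sketch of the rise/fall example makes the failure concrete:

```python
import math

# A 10% rise followed by a 10% fall of $100 yields $99, not $100,
# so R and F are not inverses of each other.
def rise(amount, x=0.10):
    return amount * (1 + x)

def fall(amount, x=0.10):
    return amount * (1 - x)

result = fall(rise(100))  # 100 -> 110 -> 99
assert math.isclose(result, 99.0)
assert not math.isclose(result, 100.0)
```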
The formula to calculate the pH of a solution is pH = −log10[H+]. In many cases we need to find the concentration of acid from a pH measurement. The inverse function [H+] = 10−pH is used.
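The pH pair can be sketched with the standard library's `math.log10`:

```python
import math

# pH = -log10([H+]) and its inverse [H+] = 10**(-pH).
def ph_from_conc(h):
    return -math.log10(h)

def conc_from_ph(ph):
    return 10 ** (-ph)

assert math.isclose(ph_from_conc(1e-7), 7.0)  # pure water, roughly
for ph in [1.0, 4.5, 7.0]:
    assert math.isclose(ph_from_conc(conc_from_ph(ph)), ph)
```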
== Generalizations ==
=== Partial inverses ===
Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function
{\displaystyle f(x)=x^{2}}
is not one-to-one, since x2 = (−x)2. However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case
{\displaystyle f^{-1}(y)={\sqrt {y}}.}
(If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.)
=== Full inverses ===
Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function:
{\displaystyle f^{-1}(y)=\pm {\sqrt {y}}.}
Sometimes, this multivalued inverse is called the full inverse of f, and the portions (such as √x and −√x) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at y is called the principal value of f −1(y).
For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture).
=== Trigonometric inverses ===
The above considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since
{\displaystyle \sin(x+2\pi )=\sin(x)}
for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval
[−π/2, π/2], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −π/2 and π/2. The following table describes the principal branch of each inverse trigonometric function:
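The principal-branch behaviour is visible in Python's `math.asin`, which returns values only in [−π/2, π/2]:

```python
import math

# math.asin is the principal branch of the inverse sine, so it only
# undoes sin on [-π/2, π/2].
assert math.isclose(math.asin(math.sin(0.5)), 0.5)        # inside the branch
x = 2.5                                                    # outside [-π/2, π/2]
assert not math.isclose(math.asin(math.sin(x)), x)
assert math.isclose(math.asin(math.sin(x)), math.pi - x)   # sin(π - x) = sin(x)
```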
=== Left and right inverses ===
Function composition on the left and on the right need not coincide. In general, the conditions
"There exists g such that g(f(x))=x" and
"There exists g such that f(g(x))=x"
imply different properties of f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x2 for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1.
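The squaring/square-root example can be sketched directly:

```python
import math

# g(x) = sqrt(x) is a right inverse of the squaring map f
# (f ∘ g = id on [0, ∞)) but not a left inverse on all of R.
def f(x):
    return x ** 2

def g(x):
    return math.sqrt(x)

assert math.isclose(f(g(9.0)), 9.0)  # right inverse: f(g(x)) = x for x >= 0
assert g(f(-1.0)) == 1.0             # not a left inverse: g(f(-1)) = 1 ≠ -1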
==== Left inverses ====
If f: X → Y, a left inverse for f (or retraction of f ) is a function g: Y → X such that composing f with g from the left gives the identity function
{\displaystyle g\circ f=\operatorname {id} _{X}.}
That is, the function g satisfies the rule
If f(x)=y, then g(y)=x.
The function g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image.
A function f with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows:
If g is the left inverse of f, and f(x) = f(y), then g(f(x)) = g(f(y)) = x = y.
If nonempty f: X → Y is injective, construct a left inverse g: Y → X as follows: for all y ∈ Y, if y is in the image of f, then there exists x ∈ X such that f(x) = y. Let g(y) = x; this definition is unique because f is injective. Otherwise, let g(y) be an arbitrary element of X. For all x ∈ X, f(x) is in the image of f. By construction, g(f(x)) = x, which is the condition for a left inverse.
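The construction can be sketched for a finite injective function represented as a dict (the sets and values are illustrative):

```python
# Build a left inverse g for an injective f: X -> Y.
X = ["a", "b", "c"]
Y = [1, 2, 3, 4]
f = {"a": 2, "b": 4, "c": 1}  # injective: no value repeats

# On the image of f, g inverts f; elsewhere g is arbitrary.
image = {y: x for x, y in f.items()}
g = {y: image.get(y, X[0]) for y in Y}  # X[0] is the arbitrary fallback

for x in X:
    assert g[f[x]] == x  # g ∘ f = id_X
```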
In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion {0,1} → R of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set {0,1}.
==== Right inverses ====
A right inverse for f (or section of f ) is a function h: Y → X such that
{\displaystyle f\circ h=\operatorname {id} _{Y}.}
That is, the function h satisfies the rule
If h(y) = x, then f(x) = y.
Thus, h(y) may be any of the elements of X that map to y under f.
A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice).
If h is the right inverse of f, then f is surjective: for all y ∈ Y, there is x = h(y) such that f(x) = f(h(y)) = y.
If f is surjective, then f has a right inverse h, which can be constructed as follows: for all y ∈ Y, there is at least one x ∈ X such that f(x) = y (because f is surjective), so we choose one such x to be the value of h(y).
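For a finite surjective function the choice can be made explicitly (a sketch with illustrative sets):

```python
# Build a right inverse h for a surjective f: X -> Y by picking,
# for each y, one preimage to be h(y).
X = [0, 1, 2, 3]
Y = ["even", "odd"]
f = {0: "even", 1: "odd", 2: "even", 3: "odd"}  # surjective onto Y

h = {}
for x in X:          # the last preimage seen wins the choice
    h[f[x]] = x

for y in Y:
    assert f[h[y]] == y  # f ∘ h = id_Y
```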
==== Two-sided inverses ====
An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse.
If g is a left inverse and h a right inverse of f, then for all y ∈ Y, g(y) = g(f(h(y))) = h(y).
A function has a two-sided inverse if and only if it is bijective.
A bijective function f is injective, so it has a left inverse (if f is the empty function, f : ∅ → ∅ is its own left inverse). f is surjective, so it has a right inverse. By the above, the left and right inverse are the same.
If f has a two-sided inverse g, then g is a left inverse and right inverse of f, so f is injective and surjective.
=== Preimages ===
If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y ∈ Y is defined to be the set of all elements of X that map to y:
{\displaystyle f^{-1}(y)=\left\{x\in X:f(x)=y\right\}.}
The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f.
The notion can be generalized to subsets of the range. Specifically, if S is any subset of Y, the preimage of S, denoted by
{\displaystyle f^{-1}(S)}
, is the set of all elements of X that map to S:
{\displaystyle f^{-1}(S)=\left\{x\in X:f(x)\in S\right\}.}
For example, take the function f: R → R; x ↦ x2. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g.
{\displaystyle f^{-1}(\left\{1,4,9,16\right\})=\left\{-4,-3,-2,-1,1,2,3,4\right\}.}
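The example preimage computation can be sketched for a finite slice of the integers:

```python
# Preimage of a set S under f(x) = x**2, restricted to a finite domain.
def preimage(f, domain, S):
    return {x for x in domain if f(x) in S}

domain = range(-5, 6)
f = lambda x: x ** 2
assert preimage(f, domain, {1, 4, 9, 16}) == {-4, -3, -2, -1, 1, 2, 3, 4}
assert preimage(f, domain, {2}) == set()  # no integer squares to 2
```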
The original notion and its generalization are related by the identity
{\displaystyle f^{-1}(y)=f^{-1}(\{y\}).}
The preimage of a single element y ∈ Y – a singleton set {y} – is sometimes called the fiber of y. When Y is the set of real numbers, it is common to refer to f −1({y}) as a level set.
== See also ==
Lagrange inversion theorem, which gives the Taylor series expansion of the inverse function of an analytic function
Integral of inverse functions
Inverse Fourier transform
Reversible computing
== Notes ==
== References ==
== Bibliography ==
Briggs, William; Cochran, Lyle (2011). Calculus / Early Transcendentals Single Variable. Addison-Wesley. ISBN 978-0-321-66414-3.
Devlin, Keith J. (2004). Sets, Functions, and Logic / An Introduction to Abstract Mathematics (3 ed.). Chapman & Hall / CRC Mathematics. ISBN 978-1-58488-449-1.
Fletcher, Peter; Patty, C. Wayne (1988). Foundations of Higher Mathematics. PWS-Kent. ISBN 0-87150-164-3.
Lay, Steven R. (2006). Analysis / With an Introduction to Proof (4 ed.). Pearson / Prentice Hall. ISBN 978-0-13-148101-5.
Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2006). A Transition to Advanced Mathematics (6 ed.). Thompson Brooks/Cole. ISBN 978-0-534-39900-9.
Thomas Jr., George Brinton (1972). Calculus and Analytic Geometry Part 1: Functions of One Variable and Analytic Geometry (Alternate ed.). Addison-Wesley.
Wolf, Robert S. (1998). Proof, Logic, and Conjecture / The Mathematician's Toolbox. W. H. Freeman and Co. ISBN 978-0-7167-3050-7.
== Further reading ==
Amazigo, John C.; Rubenfeld, Lester A. (1980). "Implicit Functions; Jacobians; Inverse Functions". Advanced Calculus and its Applications to the Engineering and Physical Sciences. New York: Wiley. pp. 103–120. ISBN 0-471-04934-4.
Binmore, Ken G. (1983). "Inverse Functions". Calculus. New York: Cambridge University Press. pp. 161–197. ISBN 0-521-28952-1.
Spivak, Michael (1994). Calculus (3 ed.). Publish or Perish. ISBN 0-914098-89-6.
Stewart, James (2002). Calculus (5 ed.). Brooks Cole. ISBN 978-0-534-39339-7.
== External links ==
"Inverse function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]